
4.17.1 Sound Plugins

The sound system consists of three components:

Sound Driver

System-dependent sound drivers. A sound driver passes a single audio stream to the sound hardware (it does not mix several streams) and is used only by the software sound renderer; the other renderers do not need it. There are currently drivers for MacOS/9, OSS (GNU/Linux) and WaveOut (Windows).

Note: You don't need to load the sound driver yourself. It is loaded automatically by the software sound renderer. In fact, you normally don't have to deal with the sound driver in any way.

Sound Renderer

System-dependent sound renderers. The sound renderer is the component you use to play sounds. There is currently only a software sound renderer.

Sound Loader

This module is used to load sound files. There is currently only one implementation of this module; it supports a number of common sound file formats.

Sound Loader

The sound loader plug-in is used to load sound files and create sound data objects (iSndSysData) from them. A sound data object represents the sound in its raw form; it plays the same role for sound that iImage plays for graphics.

Looking at the sound loader's methods, you have probably noticed that it takes an object of `csSoundFormat' type. This object tells the sound loader in which format (frequency, 8 or 16 bits per sample, stereo or mono) the samples should be supplied. You can obtain this format from the sound renderer.
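
As a rough sketch, loading a sound might look like the following. The registry helpers (csQueryRegistry, csRef) follow the usual Crystal Space conventions, but the sound-specific names (iSoundRender, iSoundLoader, GetLoadFormat, LoadSound) are assumptions here; check the actual headers for the exact interfaces and signatures.

  // A minimal sketch, not verified against the real headers.
  // Assumed names: iSoundRender, iSoundLoader, GetLoadFormat(), LoadSound().
  csRef<iSoundRender> renderer =
    csQueryRegistry<iSoundRender> (object_reg);
  csRef<iSoundLoader> loader =
    csQueryRegistry<iSoundLoader> (object_reg);
  csRef<iVFS> vfs = csQueryRegistry<iVFS> (object_reg);

  // Ask the renderer in which format it wants the samples.
  const csSoundFormat* format = renderer->GetLoadFormat ();

  // Read the file and let the loader turn it into sound data.
  csRef<iDataBuffer> buf = vfs->ReadFile ("/data/sounds/shot.wav");
  csRef<iSoundData> data =
    loader->LoadSound (buf->GetData (), buf->GetSize (), format);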

Sound Data

A sound data object can be either static or streamed. The two are fundamentally different; they share the same interface only because this is more convenient for the user. A static sound is essentially a buffer of samples, while a streamed sound is essentially a callback function that reads samples. One important consequence is that you can play many instances of a static sound at the same time (each with its own position counter), but only one instance of a streamed sound. This does not mean, however, that you cannot create more than one sound source from a streamed sound. Read on!
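
Conceptually the difference looks like this. These structures are purely illustrative; the real interface is iSndSysData.

  // Purely conceptual, not the real interfaces.
  struct StaticSound
  {
    short* samples;   // the complete, decoded sample buffer
    size_t length;    // number of samples
  };

  struct StreamedSound
  {
    // Called repeatedly to produce the next block of samples.
    virtual size_t Read (short* out, size_t maxSamples) = 0;
    virtual ~StreamedSound () {}
  };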

Sound Handle

The next step towards playing the sound is to hand the sound data over to the sound renderer. After doing this you may no longer use the sound data itself. The renderer may now convert the sound data to an internal format, load it into sound card memory, and so on. In return you get a sound handle, which is what you work with from now on.
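
A sketch of this hand-over, again with an assumed method name (RegisterSound):

  // Hand the data to the renderer; use only the handle afterwards.
  // RegisterSound() is an assumed name for this operation.
  csRef<iSoundHandle> handle = renderer->RegisterSound (data);
  data = 0;  // do not touch the sound data anymore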

For static sounds, the sound handle is just that: a wrapper around the sound data. For streamed sounds it is more. We said before that there can be only one instance of a streamed sound; this is not entirely correct. There may be several instances, as long as they all play the same sequence of samples at the same time (like several speakers connected to the same recorder).

Here, then, is a big difference between static and streamed sounds. For static sounds, every instance has its own idea of which part of the sound to play, and you control this on a per-instance basis. For streamed sounds, all instances of the same sound handle play the same samples, and this is controlled on a per-handle basis. The sound handle interface therefore also contains methods to control playback, but only for streamed sounds.

In short, the sound handle wraps the renderer-side version of the sound data, lets you create sound sources from it, and, for streamed sounds, controls playback for all instances at once.

Sound Source

The sound source is one instance of a sound. It controls position and velocity for 3D sounds. For static sounds it also controls the playback position and whether the source is active; for streamed sounds it controls only whether the source is active.
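
A sketch of creating and placing one such instance. CreateSource(), SetPosition(), SetVelocity() and Play() are assumed names, and the 3D mode constant is also an assumption.

  // Sketch: create one instance of the sound and place it in 3D.
  csRef<iSoundSource> source = handle->CreateSource (SOUND3D_ABSOLUTE);
  source->SetPosition (csVector3 (10, 0, 5));
  source->SetVelocity (csVector3 (0, 0, 0));
  source->Play ();  // static sounds: playback is controlled per source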

The Sound Listener

The sound renderer uses one global listener object. It controls how you hear sounds: your own position and velocity (for 3D sounds only) and environmental effects. (Whether environmental effects also apply to non-3D sounds is currently an open question.)
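
A sketch of keeping the listener in sync with the player. GetListener() and the setters are assumed names, and `camera' is a hypothetical iCamera from your application.

  // Sketch: move the global listener to the camera every frame.
  iSoundListener* listener = renderer->GetListener ();
  listener->SetPosition (camera->GetTransform ().GetOrigin ());
  listener->SetVelocity (playerVelocity);  // hypothetical variable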

Sound System Example

Here is an example of how to set up the sound system:

Assume you have five players running around in an FPS game. Three of them have a pistol, two have a shotgun. You want to play a sound when they fire their weapons, and you also have a sound file for footsteps. Finally, you want to play some music from three big speakers that are placed in the level.

First you load the sounds and register them. After that you have three static sound handles (pistol, shotgun and footsteps) and one streamed sound handle for the music. (Note that there is currently no loader that creates streamed sound data, but let us assume there is.)

Now you create the sound sources. As said before, a sound source is one instance of a sound; if you want to hear the same sound from two directions, it must come from two sources. We actually have some freedom in how we create the sound sources. One thing is certain: we need one source for each of the three music speakers. We create the sources, set them to active, set each source's position to its speaker's position, and start stream playback on the music's sound handle.
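
A sketch of the music setup. StartStream() is an assumed name, and `musicHandle' and `speakerPositions' are hypothetical variables; note that stream playback is started per handle, as described above.

  // Three sources, one per speaker, all fed by the same streamed handle.
  csRef<iSoundSource> speakers[3];
  for (int i = 0; i < 3; i++)
  {
    speakers[i] = musicHandle->CreateSource (SOUND3D_ABSOLUTE);
    speakers[i]->SetPosition (speakerPositions[i]);
  }
  musicHandle->StartStream (true);  // true = loop the music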

For the footsteps we have several options. As one player cannot produce two footstep sounds at the same time, we can create one sound source per player and start an unlooped playback on every step. Alternatively, we could start looped playback when the player starts walking and stop it when the player stops walking.

Instead of starting and stopping playback all the time, we could also create a new sound source on every step and destroy it afterwards. Currently this should be considered slow, but that may change. On the other hand, every sound source consumes some memory, possibly sound card memory; this may also change.

What about the guns? This is similar to the footsteps: a player can only fire one shot at a time, so you can create one source per player for the shooting sound.
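
A sketch of the per-player sources, reused for every shot or step. That Play() restarts a static sound from the beginning is an assumption here.

  // One static source per player and per sound, created once.
  struct PlayerSounds
  {
    csRef<iSoundSource> footsteps;
    csRef<iSoundSource> gun;
  };

  void OnShot (PlayerSounds& p, const csVector3& playerPos)
  {
    p.gun->SetPosition (playerPos);
    p.gun->Play ();  // restart the (non-looped) static sound
  }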

Advanced Sound Effects

Are the environmental effects of the listener not enough for you? Do you want to generate sound dynamically? This is possible: you have to create your own implementation of iSndSysData, usually as a streamed sound.

To create special effects you may, for example, load a sound as a streamed sound. Instead of passing it to the sound renderer, you create an object of your special-effects class that takes the original stream and is itself a sound stream, and pass this object to the sound renderer. When the sound renderer asks it for sample data, it could, for example, read data from the original stream, apply an echo effect and return the modified data. Note that it has to copy the data before modifying it; otherwise you may seriously damage the original stream.
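
A sketch of such a filter, using the conceptual StreamedSound interface from the earlier sketch rather than the real iSndSysData interface:

  #include <cstddef>
  #include <vector>

  // Conceptual interface, repeated from the earlier sketch.
  struct StreamedSound
  {
    virtual size_t Read (short* out, size_t maxSamples) = 0;
    virtual ~StreamedSound () {}
  };

  // Wraps another stream and mixes in a delayed copy of the signal.
  class EchoStream : public StreamedSound
  {
    StreamedSound* source;
    std::vector<short> delayLine;  // remembers past samples
    size_t pos;
  public:
    // delaySamples must be greater than zero.
    EchoStream (StreamedSound* src, size_t delaySamples)
      : source (src), delayLine (delaySamples, 0), pos (0) { }

    virtual size_t Read (short* out, size_t maxSamples)
    {
      // The original data is copied into our caller's buffer here;
      // the wrapped stream's own data is never modified in place.
      size_t n = source->Read (out, maxSamples);
      for (size_t i = 0; i < n; i++)
      {
        short dry = out[i];
        out[i] = dry / 2 + delayLine[pos] / 2;  // mix dry + delayed
        delayLine[pos] = dry;
        pos = (pos + 1) % delayLine.size ();
      }
      return n;
    }
  };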

