The Audio Streaming API is the interface for streaming sampled audio data to and from the low level audio controller part of the Multimedia Framework (MMF).
Streamed audio data is sent and received incrementally. This means:
sound clips sent to the low level audio controller (audio play) can be sent as they arrive rather than waiting until the entire clip is received.
The user of the API should maintain the data fragments in a queue before sending them to the server. If the user attempts to send data faster than the server can receive it, the excess data fragments are held in another, client-side queue (invisible to the user) whose elements are references to the buffers passed to it. The server notifies the client using a callback each time it has received a data fragment, indicating to the client that the data fragment can be deleted.
sound clips that are being captured by the low level audio controller (audio record) can be read incrementally without having to wait until audio capture is complete.
The low level audio controller maintains the buffers into which it places the audio data being captured. The client uses a read function to copy the received data into destination descriptors.
The client is also notified (for audio play and record) when the stream is opened and available for use (opening takes place asynchronously), and when the stream is closed.
This API can only be used to stream audio data, with the data being stored in, or sourced from, a descriptor. Client applications must ensure that the data is in 16-bit PCM format, as this is the only format supported. The API does not support mixing. A priority mechanism is used to control access to the sound device by more than one client.
How the audio streaming classes interact with other components of MMF is shown below.
The Audio Streaming API is related to the following APIs:
CMdaAudioPlayerUtility - audio data is played directly from a specified local file or descriptor rather than from an audio stream. Use CMdaAudioPlayerUtility in preference to audio streaming if no streaming is required.
CMdaAudioRecorderUtility - audio data is recorded directly to a file or descriptor rather than into a streaming environment. Use CMdaAudioRecorderUtility in preference to audio streaming if no streaming is required.
Using the output stream interface involves opening the stream, setting audio properties, writing to the stream and closing it. It is implemented by the CMdaAudioOutputStream and MMdaAudioOutputStreamCallback classes.
Each stage of using an output stream is described below (a code sketch illustrating these stages follows the steps):
Create the new audio streaming object using NewL() and optionally set the priority of the audio streaming object in relation to other clients that may attempt to use the audio hardware.
Open the stream using Open(). Once the stream has been successfully opened, an MMdaAudioOutputStreamCallback::MaoscOpenComplete() callback is issued to indicate that the stream is ready for use.
Set the audio and mobile equipment properties. Use SetAudioPropertiesL() to set the sampling rate and number of audio channels. Values must be specified as enum values, for example TMdaAudioDataSettings::ESampleRate8000Hz rather than 8000. It is not possible to reset these values once the stream is playing.
Use the volume and balance functions to determine current settings or set new ones. Volume and balance can be set while the stream is open with any new settings taking effect immediately.
Use WriteL(), specifying the buffer to use, to send audio data to the lower layers of the MMF. Once the buffer has been successfully copied, a pointer to its location is returned in an MMdaAudioOutputStreamCallback::MaoscBufferCopied() callback. Upon reception of this callback the buffer can be deleted as it is no longer required. The callback can also be used as an opportunity to issue additional WriteL() calls, although they can be issued at any time as the MMF maintains its own list of user buffers to play.
Use Stop() to stop audio playback. The MMdaAudioOutputStreamCallback::MaoscPlayComplete() callback is issued, indicating successful closure of the stream.
Note: MMdaAudioOutputStreamCallback::MaoscPlayComplete() may also be called if there is no more audio data to play. In such circumstances the audio stream is automatically closed and the aError parameter of the callback is set to KErrUnderflow.
The user of the API is faced with a trade-off between using small and large buffers to store the audio data. Using smaller buffers increases the chance of an underrun (where the sound device finishes playing before the next buffer of sound data has been sent to it), but reduces the initial delay before sound begins to play.
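The sketch below pulls these stages together in a skeleton output stream client. It is a minimal illustration only, not a definitive implementation: the class name CAudioStreamPlayer, the buffer size and the header names are assumptions that may need adjusting for the SDK in use, and error handling is reduced to comments.

#include <e32base.h>
#include <mdaaudiooutputstream.h>
#include <mda/common/audio.h>

// Skeleton output stream client (illustrative names and sizes).
class CAudioStreamPlayer : public CBase, public MMdaAudioOutputStreamCallback
    {
public:
    static CAudioStreamPlayer* NewL()
        {
        CAudioStreamPlayer* self = new (ELeave) CAudioStreamPlayer();
        CleanupStack::PushL(self);
        // Default priority; an overload of NewL() also accepts a priority
        // and priority preference for contention with other clients.
        self->iStream = CMdaAudioOutputStream::NewL(*self);
        CleanupStack::Pop(self);
        return self;
        }

    ~CAudioStreamPlayer() { delete iStream; }

    void StartL()
        {
        iStream->Open(&iSettings); // asynchronous; wait for MaoscOpenComplete()
        }

    // From MMdaAudioOutputStreamCallback
    void MaoscOpenComplete(TInt aError)
        {
        if (aError == KErrNone)
            {
            TRAPD(err, StartPlayingL());
            // handle err as required
            }
        }

    void MaoscBufferCopied(TInt /*aError*/, const TDesC8& /*aBuffer*/)
        {
        // The copied buffer is no longer needed by the MMF; it can be
        // freed or refilled, and further WriteL() calls issued here.
        }

    void MaoscPlayComplete(TInt aError)
        {
        // aError is KErrUnderflow if playback stopped because no more
        // data was available; the stream is closed at this point.
        }

private:
    void StartPlayingL()
        {
        // Properties must be set before playback starts, using enum values.
        iStream->SetAudioPropertiesL(TMdaAudioDataSettings::ESampleRate8000Hz,
                                     TMdaAudioDataSettings::EChannelsMono);
        iStream->SetVolume(iStream->MaxVolume() / 2);
        iStream->WriteL(iBuffer); // iBuffer holds 16-bit PCM audio data
        }

    CMdaAudioOutputStream* iStream;
    TMdaAudioDataSettings iSettings;
    TBuf8<4096> iBuffer; // audio data to play (16-bit PCM)
    };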
Using the input stream interface involves opening the stream, setting audio and mobile equipment properties, reading from the stream and closing it. It is implemented by the CMdaAudioInputStream and MMdaAudioInputStreamCallback classes.
Each stage of using an input stream is described below (a code sketch illustrating these stages follows the steps):
Create the new audio streaming object using NewL() and optionally set the priority of the audio streaming object in relation to other clients that may attempt to use the audio hardware.
Open the stream using Open(). Once the stream has been successfully opened, an MMdaAudioInputStreamCallback::MaiscOpenComplete() callback is issued to indicate that the stream is ready for use.
Set the audio and mobile equipment properties. Use SetAudioPropertiesL() to set the sampling rate and number of audio channels. Values must be specified as enum values, for example TMdaAudioDataSettings::ESampleRate8000Hz rather than 8000. It is not possible to reset these values once the stream is playing.
Use the gain and balance functions to determine current settings or set new ones. Gain and balance can be set while the stream is open with any new settings taking effect immediately.
Use ReadL(), specifying the buffer to use, to request recorded audio data from the lower layers of the MMF. Once the buffer has been successfully written, a pointer to its location is returned in an MMdaAudioInputStreamCallback::MaiscBufferCopied() callback. The MMF only starts recording audio data after the first ReadL() is issued (not after the Open()).
Use Stop() to stop recording. Two callbacks are issued after a Stop(): the first is an MMdaAudioInputStreamCallback::MaiscBufferCopied() pointing to a buffer that contains the last of the recorded audio data (and with an aError value of KErrAbort); the second is an MMdaAudioInputStreamCallback::MaiscRecordComplete() indicating successful closure of the audio stream.
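A corresponding sketch for recording is shown below. Again this is a minimal illustration: the class name CAudioStreamRecorder, the buffer size and the header names are assumptions, and error handling is reduced to comments.

#include <e32base.h>
#include <mdaaudioinputstream.h>
#include <mda/common/audio.h>

// Skeleton input (record) stream client (illustrative names and sizes).
class CAudioStreamRecorder : public CBase, public MMdaAudioInputStreamCallback
    {
public:
    static CAudioStreamRecorder* NewL()
        {
        CAudioStreamRecorder* self = new (ELeave) CAudioStreamRecorder();
        CleanupStack::PushL(self);
        self->iStream = CMdaAudioInputStream::NewL(*self); // default priority
        CleanupStack::Pop(self);
        return self;
        }

    ~CAudioStreamRecorder() { delete iStream; }

    void StartL()
        {
        iStream->Open(&iSettings); // asynchronous; wait for MaiscOpenComplete()
        }

    // From MMdaAudioInputStreamCallback
    void MaiscOpenComplete(TInt aError)
        {
        if (aError == KErrNone)
            {
            TRAPD(err, StartRecordingL());
            // handle err as required
            }
        }

    void MaiscBufferCopied(TInt aError, const TDesC8& aBuffer)
        {
        // aBuffer holds the latest fragment of recorded 16-bit PCM data.
        // Store or process it, then issue another ReadL() to keep recording.
        // After Stop(), aError is KErrAbort and aBuffer holds the final data.
        if (aError == KErrNone)
            {
            TRAPD(err, iStream->ReadL(iBuffer));
            // handle err as required
            }
        }

    void MaiscRecordComplete(TInt aError)
        {
        // Issued after Stop(); aError is KErrNone on successful closure.
        }

private:
    void StartRecordingL()
        {
        // Properties must be set before recording starts, using enum values.
        iStream->SetAudioPropertiesL(TMdaAudioDataSettings::ESampleRate8000Hz,
                                     TMdaAudioDataSettings::EChannelsMono);
        iStream->SetGain(iStream->MaxGain());
        iStream->ReadL(iBuffer); // recording only starts after the first ReadL()
        }

    CMdaAudioInputStream* iStream;
    TMdaAudioDataSettings iSettings;
    TBuf8<4096> iBuffer; // destination for captured 16-bit PCM data
    };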
Multi Media Framework Client Overview - general overview of the various components of the MMF Client API and the functionality they provide.
Audio Recording, Conversion and Playing - advanced audio file manipulation features; specifically, the ability to record, convert and playback sound clips as well as to manipulate meta data.
Audio Recording - introduction to the audio recording class.
Audio Conversion - introduction to the audio conversion class.
Audio Playing - introduction to the audio playing class.
Audio Streaming - the interface to streaming sampled audio.
Audio Tone Player - a simple interface for tone generation (synthesized sounds) that is supported on all audio-capable devices.
Video Recording and Playing - video manipulation features; specifically, the ability to record and playback video clips as well as to manipulate meta data.
Video Recording - introduction to the video recording class.
Video Playing - introduction to the video playing class.