Recently I have been tackling problems in OpenAL code for an iOS app. The trouble with OpenAL is that despite there being a spec, much of this audio API's behaviour is left implementation-defined. What works on one system can fail miserably on another.
In the case of iOS, no source code is provided, so the underlying implementation is partly a mystery, though we can infer from the documentation that it is built on the Audio Unit API.
Here are some best practices I've developed based on experimentation:
Do you really need to use OpenAL?
In some cases OpenAL may be overkill. For instance, if you are just playing a one-off sound effect when a button is pressed, it's probably a better idea to use AVAudioPlayer.
If, however, you are making a fully immersive 3D AAA shooter, you're probably better off using OpenAL. If you are really crazy, for ultimate control you could even try writing your own audio mixer.
Never re-use the same source twice
The implementation of OpenAL on iOS 5.x acts rather oddly when it comes to streaming sources. Let's say you make a music manager and decide to allocate a source. You allocate some buffers, queue them on the source, and then re-use that source to play various music tracks.
Problems arise as soon as the source is re-used, however. If you simply stop it and de-allocate its buffers, then when you queue up a new set of buffers for a new music track the OpenAL implementation seems to get confused and only plays the first buffer you queue.
Attempting to recover the source at this point is impossible. You can stop it, rewind it, throw buffers away… nothing seems to get it to work properly. Based on this I can only assume you are not meant to use the same source more than once, at least for streaming sources.
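Given that, my workaround is simply to throw the source away between tracks. A minimal sketch of that teardown, assuming the standard OpenAL C API (the helper name is my own, and error checking is omitted):

```c
#include <OpenAL/al.h>  /* iOS; use <AL/al.h> on other platforms */

/* Hypothetical helper: instead of re-using a streaming source for the next
   track, stop it, drain its queue, delete it, and hand back a fresh one. */
static ALuint restart_stream_source(ALuint src)
{
    ALint queued = 0;

    alSourceStop(src);
    alGetSourcei(src, AL_BUFFERS_QUEUED, &queued);

    /* Once stopped, every queued buffer can be unqueued and freed. */
    while (queued-- > 0) {
        ALuint buf;
        alSourceUnqueueBuffers(src, 1, &buf);
        alDeleteBuffers(1, &buf);
    }

    alDeleteSources(1, &src);  /* discard the troublesome source entirely */
    alGenSources(1, &src);     /* fresh source for the next track */
    return src;
}
```

Since sources seem cheap to generate (see below), paying for a new one per track is a small price for predictable playback.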
Don’t allocate sources you don’t need
Each source you allocate and play in OpenAL will add a mixing unit to the mixer, which will be mixed to produce the final stream. In addition every source you allocate uses memory.
Instead of allocating a bunch of sources up front, only allocate sources as you need them. From what I've been able to determine, sources aren't that expensive to allocate. It might make sense to cap your allocations yourself, however, as there seems to be no fixed limit on the number of sources you can allocate on iOS.
At one point, after seeing a general speedup from changing my source allocation, I theorized that every source you allocate in OpenAL is mixed regardless of whether or not it's playing. However, basic experimentation seems to indicate there is no major performance issue with merely allocating lots of sources.
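The lazy-allocation idea can be sketched as a small pool that only grows on demand. This is illustrative bookkeeping only: all the names are hypothetical, and `gen_source()` stands in for a real `alGenSources(1, &id)` call.

```c
#include <stddef.h>

#define MAX_POOL 32

typedef unsigned int source_id;

typedef struct {
    source_id free_list[MAX_POOL];
    size_t    free_count;  /* sources allocated but currently idle */
    size_t    total;       /* total sources ever generated */
} source_pool;

/* Stand-in for alGenSources(1, &id); returns a fresh id each call. */
static source_id gen_source(void)
{
    static source_id next = 1;
    return next++;
}

/* Hand out an idle source if one exists; otherwise allocate a new one. */
static source_id pool_acquire(source_pool *p)
{
    if (p->free_count > 0)
        return p->free_list[--p->free_count];
    p->total++;
    return gen_source();
}

/* Return a source to the pool once its sound has finished playing. */
static void pool_release(source_pool *p, source_id s)
{
    if (p->free_count < MAX_POOL)
        p->free_list[p->free_count++] = s;
}
```

The pool never generates more sources than your peak number of simultaneous sounds, which also gives you a natural place to enforce a cap.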
Cache your buffers to disk
If, for example, you choose to encode your sound effects in a format like Ogg Vorbis to save space, you might notice it takes a noticeable amount of time to decode the audio each time you buffer it, even when using Tremor. This is no fun for the end-user.
Using an easier-to-decode codec is one solution, but if you have long sound effects it's still going to take time to decode all the samples.
So it makes sense to keep buffers around for sound effects. However, if you keep too many buffers you will likely bump into low-memory warnings, forcing you to purge non-playing buffers. When you need those sound effects again, you have to decode them all over again.
One way to solve this is to cache your decoded buffers to disk in the temporary folder (NSTemporaryDirectory). That way, when you get a memory warning you can dump your in-memory buffers, and whenever you need them again you can quickly re-load them straight from disk.
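A minimal sketch of such a cache in plain C (the helper names are hypothetical; on iOS you would build the path from NSTemporaryDirectory(), and the raw bytes would come from your decoder before being handed to alBufferData):

```c
#include <stdio.h>
#include <stdlib.h>

/* Write decoded PCM bytes to a cache file. Returns 0 on success. */
static int cache_pcm(const char *path, const void *pcm, size_t len)
{
    FILE *f = fopen(path, "wb");
    if (!f)
        return -1;
    size_t written = fwrite(pcm, 1, len, f);
    fclose(f);
    return written == len ? 0 : -1;
}

/* Load a cached PCM file back into a malloc'd buffer (caller frees).
   Returns NULL on failure; on success *len_out receives the size. */
static void *load_cached_pcm(const char *path, size_t *len_out)
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return NULL;
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    fseek(f, 0, SEEK_SET);
    void *buf = malloc((size_t)len);
    if (buf && fread(buf, 1, (size_t)len, f) != (size_t)len) {
        free(buf);
        buf = NULL;
    }
    fclose(f);
    if (buf)
        *len_out = (size_t)len;
    return buf;
}
```

Re-reading raw PCM this way skips the codec entirely, so a purge-and-reload cycle costs only file I/O rather than a full decode.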
Despite implementation-specific bugs, OpenAL is actually quite a nice and simple library for playing both 2D and 3D audio. There are no licensing fees, the specification is open, and it's available for practically every relevant modern development platform. What more could you want?
Have fun using OpenAL!