A2ML Reference Implementation API (ARIA)

Version SVN 16-05-2011

About A2ML

Advanced/Audio Mobile Markup Language (A2ML) is an XML format used for describing interactive audio scenes. It's a versatile and lightweight format based on a model/instance paradigm that can be used in many application domains: video games, guidance systems, user interfaces, interactive music, etc.

More info on A2ML can be found on the Team WAM interactive audio project page: http://wam.inrialpes.fr/iaudio


This API uses A2ML as its base input format. The primary workflow is to load an interactive audio scene described in A2ML through the ARIA API, build the corresponding hierarchy of audio objects, and then communicate with the main program by means of events. For most uses, simple and complex audio scenes alike can be managed entirely through the event mechanism. When specific work must be done on audio objects, a DOM-like system lets the programmer retrieve audio objects by ID or by walking the object hierarchy. There are four directly identifiable kinds of audio objects: cues, chunks, animations and sections. The properties of the playable audio objects (cues, chunks and sounds) relate to their synchronization with the audio scheduler. The audio parameters that affect the playback of these objects, such as volume or pan, are modified through Controls. A Control object can be animated automatically by an Animation object, but such behavior should be described in the A2ML document itself.

While all audio objects can be created programmatically, doing so is strongly discouraged. The A2ML format should be used to describe the audio hierarchy and interactions, and the API's audio objects should be accessed only in specific cases where the complex parameter animation behavior of these objects cannot be described directly in the A2ML document.

Using ARIA

Most of the API usage goes through the Sound Manager object. It is a static class that controls all the behavior of the API and is the starting point of any program using ARIA.

Here is the basic scenario for ARIA usage:

 // -------------------------------------------------------------
 // App inits
 // -------------------------------------------------------------
 [AMSoundManager createSoundManager];
 NSError *myerror = nil;
 [AMSoundManager loadA2ML:@"myfile.xml" error:&myerror];
 if (myerror) {
     NSLog(@"A2ML loading failed: %@", myerror);
 }
 // ...

 // -------------------------------------------------------------
 // Main code
 // -------------------------------------------------------------
 AMScheduler *scheduler = [AMSoundManager getScheduler];
 [scheduler start];
 // do things...
 // -------------------------------------------------------------
 // App terminates
 // -------------------------------------------------------------
 [AMSoundManager releaseSoundManager];
 // ...

This is the most basic usage of this API: it loads an A2ML document, and starts the scheduler.

Now, if you need to send events to the scheduler, here is how it is done:

 NSNotificationCenter *notificationCenter = [AMSoundManager getNotificationCenter];
 [notificationCenter postNotificationName:@"example.myevent" object:self];

Listening in your program for internally generated audio events, like when a cue ended, works in a similar way:

 NSNotificationCenter *notificationCenter = [AMSoundManager getNotificationCenter];
 [notificationCenter addObserver:self selector:@selector(mycallback:) name:@"mycueid.end" object:nil];

This way, each time the cue with the ID "mycueid" ends, the method "mycallback:" will be called. Internal event names are derived from the ID of the audio object that generated them, as in "mycueid.end".
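
Note that NSNotificationCenter invokes the observer's selector with the posted NSNotification as its single argument, so the callback should be declared with one parameter, and the observer should unregister itself before it is deallocated. A minimal sketch (the handler body is illustrative):

 - (void)mycallback:(NSNotification *)notification {
     // Invoked each time the "mycueid.end" event is posted.
     NSLog(@"Audio event received: %@", [notification name]);
 }

 - (void)dealloc {
     // Stop observing before this object goes away.
     [[AMSoundManager getNotificationCenter] removeObserver:self];
     [super dealloc]; // manual retain/release, matching this 2011-era API
 }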

For special cases where you need advanced interaction with the audio objects described in the A2ML document, you can retrieve them programmatically and read or modify their properties directly, for example:

 AMCue *mycue = [AMSoundManager getObjectFromID:@"mycueid"];
 mycue.loopCount = 2;   // change loop count value
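
Putting the snippets above together, the skeleton of a program using ARIA might look as follows. This is only a sketch built from the calls shown in this document; "myfile.xml" and the cue ID "mycueid" are the placeholder names used in the earlier examples.

 // Sketch combining the calls shown above; assumes "myfile.xml"
 // defines a cue with the ID "mycueid".
 [AMSoundManager createSoundManager];
 NSError *myerror = nil;
 [AMSoundManager loadA2ML:@"myfile.xml" error:&myerror];

 // Wire the internal "mycueid.end" event to a handler on self.
 NSNotificationCenter *nc = [AMSoundManager getNotificationCenter];
 [nc addObserver:self selector:@selector(mycallback:)
            name:@"mycueid.end" object:nil];

 // Adjust a property before playback, then start the scheduler.
 AMCue *mycue = [AMSoundManager getObjectFromID:@"mycueid"];
 mycue.loopCount = 2;
 [[AMSoundManager getScheduler] start];

 // ... application runs ...

 [AMSoundManager releaseSoundManager];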

Generated on Mon May 16 16:29:48 2011 for ARIA by  doxygen 1.5.9