Sound Implementation for Non-Linear Media

This advanced course assumes a solid understanding of the topics presented in the Sound Design 2 course. Whereas Sound Design 2 examines individual, game-oriented applications of audio in various software packages, the Sound Project course focuses on integration and implementation.

The aim of this particularly technical course is to make students proficient in implementing various sound behaviors in non-linear and/or non-continuous storylines. This means that ambiences, vocalisations, voices, musical excerpts, sound effects and foley sounds need to be triggered based on in-engine occurrences, NPCs, game calls and conditional parameters. For example, a natural ambience will differ widely depending on a character’s whereabouts (e.g. whether in a forest or a desert). Likewise, something as simple as a footstep sound effect will differ widely – both in terms of the base sound and of the reverb – depending on the type of building, terrain or soil a character is walking in or on.
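The footstep example above can be sketched in pseudocode terms: one in-game query (the surface under the character) resolves to both a base sample and a reverb preset. This is a minimal, engine-agnostic illustration of the idea, not real middleware code; all names and values here are hypothetical.

```python
# Conceptual sketch: surface-dependent footstep selection, mirroring how a
# switch container in middleware such as Wwise resolves a sound variant.
# Bank contents and preset names are hypothetical.
FOOTSTEP_BANK = {
    "grass":    {"sample": "footstep_grass",    "reverb": "outdoor_open"},
    "concrete": {"sample": "footstep_concrete", "reverb": "urban_street"},
    "wood":     {"sample": "footstep_wood",     "reverb": "small_room"},
}

def resolve_footstep(surface: str) -> dict:
    """Return the footstep sample and reverb preset for a surface,
    falling back to a default variant when the surface is unknown."""
    return FOOTSTEP_BANK.get(surface, FOOTSTEP_BANK["concrete"])
```

In an actual project the same decision would typically be made by a switch or state inside the middleware, driven by a value the engine reports at each footstep.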

Both in-engine (e.g. Unity, Unreal) and through middleware (e.g. FMOD, Wwise), students are trained to anticipate the incidence, dominance, nature and behavior of their sound sources and how these need to be processed in-game. To this end, students are taught the basics of structuring and controlling audio parameters, either in the engine itself or through dedicated middleware. As students familiarize themselves with concepts such as sound objects and point sounds, they learn to think in terms of audio taxonomy and hierarchy and to set behaviors (e.g. proximal and distal cues, collision detection, localized emitters, …).
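One such behavior – the proximal/distal cue of a localized emitter – can be illustrated as a game parameter mapping listener distance to gain, the kind of curve an RTPC in Wwise or an attenuation setting in an engine would control. This is a hedged sketch under assumed, hypothetical values (a simple linear roll-off between a minimum and maximum distance), not any middleware's actual curve.

```python
# Conceptual sketch: distance-driven attenuation for a localized emitter.
# The linear roll-off and the distance bounds are illustrative assumptions.
def attenuation(distance: float, min_dist: float = 1.0, max_dist: float = 50.0) -> float:
    """Gain of 1.0 at or inside min_dist, 0.0 at or beyond max_dist,
    linear in between."""
    if distance <= min_dist:
        return 1.0
    if distance >= max_dist:
        return 0.0
    return 1.0 - (distance - min_dist) / (max_dist - min_dist)
```

Real attenuation curves are usually non-linear (e.g. logarithmic) and shaped per sound in the middleware, but the principle – a continuous parameter driving audible behavior – is the same.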

What sets the Sound Project 2 course apart from the Sound Design 2 course is that students no longer work on individual projects, but use middleware to facilitate audio behaviors in designated engines.

Throughout the semester, students regularly submit intermediary exercises dedicated to specific types of sound and their intended behaviors. To this end, students learn to work with animation blueprints in Unreal, to use the audio editor and scripts in Unity, and to construct conglomerate action events (i.e. sonic sequences triggered by an in-engine occurrence) in Wwise. The exam consists of a theory test and several simple practical applications and implementations that need to be built in the Wwise middleware as well as in the engines themselves.
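The idea of a conglomerate action event – one in-engine occurrence fanning out into an ordered sequence of audio actions – can be sketched as a simple event table. This mirrors the spirit of a Wwise event containing multiple actions, but the event names, action verbs and payloads below are entirely hypothetical.

```python
# Conceptual sketch: one game event mapping to an ordered list of audio
# actions, in the spirit of a multi-action event in middleware such as
# Wwise. All names are illustrative, not real API identifiers.
EVENTS = {
    "door_open": [
        ("play", "door_creak"),
        ("play", "hinge_squeak"),
        ("set_state", ("room_tone", "interior")),
    ],
}

def trigger(event_name: str, log: list) -> None:
    """Execute each action of the named event in order, appending each
    (action, payload) pair to `log`; unknown events do nothing."""
    for action, payload in EVENTS.get(event_name, []):
        log.append((action, payload))
```

In middleware the same grouping lets a single game call produce a whole sonic sequence without the engine knowing its internal structure.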

Graduates of this course fully grasp the fundamental concepts of in-game sound processing and audio signal flow, and are able to integrate and implement their Wwise projects in a variety of game projects.

Software: Unreal, Unity, Wwise
Hardware used: Audio Interface, stereo headphones