Game Audio - Sound in Game Engines

Summary

These notes cover sound implementation in game engines: PCM audio file playback, simulating sound sources in 3D environments (attenuation, panning, reverb, occlusion, Doppler), mixing and audio effects, engine limitations such as the audio budget and the maximum number of voices, and audio middleware such as FMOD and Wwise.

Full Transcript


8 - Game Audio - Sound in Game Engines
Sound Design - Games and Multimedia
©2024 Miguel Negrão, CC BY-NC-ND 4.0

In current digital games, audio is most often implemented using PCM audio file playback. Audio files allow for sounds and music of any type and arbitrary quality. Each game engine provides different audio features. Usually you are free to use the game engine's own audio engine or an external audio engine, which can be coded from scratch, open source, or proprietary.

Typical audio features in a game engine

File playback
- Start playing an audio file.
- Stop playing an audio file.
- Loop a sound file seamlessly.
- Importing files for audio playback: files are usually imported at 44.1 kHz or 48 kHz and 16 or 24 bits. In the shipped game the audio files are usually compressed (lossily) to save space.
- Files can be loaded into RAM or streamed from disk.
- It should be possible to change volume and pitch in real time.

Simulating the position of a sound source in a 3D environment
- There is a listener (like a microphone), usually positioned at the camera.
- Sound sources also have a position in the 3D environment.
- Audio level attenuation and panning are applied in order to simulate the position of the sound source.
[Diagram: a source object at distance d and angle θ from the character/listener.]

Level decrease with distance
The level of the sound emitted by a source decreases as we get farther away from it. The level is a function of the distance d, and different level curves can be used (linear, logarithmic, etc.); see the attenuation sketch at the end of this section.

Panning
The most common method of stereo panning for loudspeakers is applying a different attenuation to the left and right signals: the sound is split and multiplied by a gain a for the left channel and a gain b for the right channel. Usually a and b depend on the angle between the listener and the source (see the equal-power panning sketch at the end of this section).

Headphones - Binaural
For headphones a different technique can be used: HRTF/binaural. It is used in VR because:
- a VR headset usually already includes headphones;
- the player rotates their head around to view different directions, therefore pointing away from the loudspeakers, which doesn't work well with stereo loudspeaker systems;
- most people don't have surround systems, and binaural is an alternative.
We will see this in more detail in another class.

It should be possible to enable attenuation and panning separately. Use case: broad sources which occupy a large space (e.g. the sound of a forest). For these sources panning is turned off, possibly only while the listener is close to the source.

Reverb
If a virtual source is inside a building, this can be simulated using reverb. Usually reverb should only be applied while inside the building. Game engines usually provide a way to define the areas where reverb should be applied, and with what settings (in UE4: the audio volume). Usually only one reverb operates at a given time: reverb is an expensive effect, and we can only be in one enclosed space at a time. When transitioning between rooms, some interpolation or crossfade between two reverbs might be required.

Some newer systems use the actual scene geometry to simulate reverb. Examples:
- Valve's Steam Audio
- Google's Resonance Audio
- Microsoft's Project Acoustics

Low-pass filter
Air absorbs the high frequencies more than the low frequencies as the distance from the source increases. This can be simulated with a low-pass filter which depends on distance (see the filter sketch below).
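To make the attenuation curves above concrete, here is a minimal sketch in C++ of a linear and a logarithmic (inverse-distance) curve. The function names and the minDistance/maxDistance parameters are illustrative assumptions, not any particular engine's API.

    #include <algorithm>

    // Linear attenuation: gain falls from 1 to 0 between minDistance
    // and maxDistance.
    float linearAttenuation(float d, float minDistance, float maxDistance) {
        float t = (d - minDistance) / (maxDistance - minDistance);
        return 1.0f - std::clamp(t, 0.0f, 1.0f);
    }

    // Logarithmic (inverse-distance) attenuation: the gain halves each
    // time the distance doubles beyond minDistance (a 6 dB drop per
    // doubling), which mimics how level falls off in free space.
    float logAttenuation(float d, float minDistance) {
        return minDistance / std::max(d, minDistance);
    }

Engines typically let you choose the curve shape and its minimum/maximum distances per source.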
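The gains a and b from the panning slide are commonly computed with an equal-power pan law. The sketch below assumes a pan position in [-1, 1] mapped onto a quarter circle; this is one common convention, not a specific engine's implementation.

    #include <cmath>

    // Equal-power stereo panning: pan in [-1, 1], -1 = hard left,
    // +1 = hard right. The gains a (left) and b (right) satisfy
    // a^2 + b^2 = 1, so the perceived loudness stays roughly constant
    // as the source moves across the stereo image.
    void equalPowerPan(float pan, float& a, float& b) {
        constexpr float kHalfPi = 1.57079632679f;
        float theta = (pan + 1.0f) * 0.5f * kHalfPi;  // [-1,1] -> [0, pi/2]
        a = std::cos(theta);
        b = std::sin(theta);
    }

Each input sample s is then output as (a·s, b·s); in the slide's terms, the pan position would itself be derived from the listener-source angle θ.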
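For the distance-dependent low-pass filter, a minimal sketch: map the distance to a cutoff frequency and apply a one-pole low-pass. The distance-to-cutoff mapping constants are assumptions for illustration; real air absorption is more complex.

    #include <algorithm>
    #include <cmath>

    // One-pole low-pass whose cutoff falls as the source gets farther
    // away, approximating air absorption of high frequencies.
    struct DistanceLowPass {
        float z = 0.0f;  // filter state (previous output)

        float process(float input, float distance, float sampleRate) {
            // Assumed mapping: ~20 kHz cutoff up close, down to
            // ~1 kHz at 100 m.
            float cutoff = std::max(1000.0f, 20000.0f - 190.0f * distance);
            float a = 1.0f -
                std::exp(-2.0f * 3.14159265f * cutoff / sampleRate);
            z += a * (input - z);  // y[n] = y[n-1] + a*(x[n] - y[n-1])
            return z;
        }
    };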
Directional sources
Some sources, such as horn loudspeakers, are highly directional. Some game engines can simulate this type of source. Usually a low-pass filter and an attenuation curve are applied based on the angle between the front direction of the source and the listener-source direction (see the cone sketch after this section).

Occlusion
A physical object between a source and the listener will affect the sound. This is called occlusion. Some game engines can simulate this phenomenon. Examples:
- A sound inside a room, heard when standing outside the room.
- Listening to a sound inside a room when an object such as a wall or pillar is between the listener and the source.
Some game engines have automatic occlusion implemented using ray tracing: a ray is calculated between source and listener, detecting if there is any object in between (see the raycast sketch after this section).

Dynamic occlusion
When parts of buildings can be destroyed by the player, it must be possible to change occlusion and reverb settings in real time. (Image on the last slide from https://www.microsoft.com/en-us/research/project/project-triton/.)

Occlusion example
Quantum Break - audio occlusion and propagation using Umbra: https://www.dailymotion.com/video/x2wgj2f

Doppler
Some game engines allow simulating the Doppler effect for sound sources which are moving (fast). It is also possible to just record the Doppler effect in the sound itself (see the pitch sketch after this section).

Dealing with time
- Wait n seconds before the next action.
- Read through a curve (breakpoints) and affect a parameter.
These can be implemented using general programming techniques (e.g. co-routines, events, triggers) or be specific to the audio API (see the breakpoint sketch after this section).

Mixing
Grouping the output of certain sources together and affecting their level. Usually at least music + one-shot + ambient sound + dialog. Approaches:
- Using the equivalent of mixer VCAs (just control data). Less CPU; can't add effects to each group. UE: Sound Classes.
- Using the equivalent of mixer groups (mixing audio). More CPU; can add effects per group. UE: submixes.

Audio effects
Some game engines allow adding audio effects to either individual sources or buses.

Automatic mixing
- Ducking: lowering all other sounds when a dialog is played (see the ducking sketch after this section).
- Adding more layers of sound as the player approaches a source.
- Adding additional layers of music depending on the emotional/stress level.
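A minimal sketch of the angle-based attenuation for directional sources described above. The inner/outer cone and the gain floor are illustrative assumptions, loosely modeled on the cone parameters many engines expose.

    // Gain for a directional source: full level inside the inner cone,
    // fading to outerGain at the outer cone. cosAngle is the cosine of
    // the angle between the source's forward vector and the
    // source-to-listener direction (the dot product of the unit vectors).
    float coneGain(float cosAngle, float cosInner, float cosOuter,
                   float outerGain) {
        if (cosAngle >= cosInner) return 1.0f;       // inside inner cone
        if (cosAngle <= cosOuter) return outerGain;  // outside outer cone
        float t = (cosAngle - cosOuter) / (cosInner - cosOuter);
        return outerGain + t * (1.0f - outerGain);   // blend in between
    }

The same blend factor t could also drive the low-pass cutoff, so the source gets both softer and duller off-axis.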
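A sketch of the ray-traced occlusion test: raycastBlocked stands in for whatever the engine's physics system provides (e.g. a line trace) and is a hypothetical placeholder, as are the attenuation and cutoff values.

    struct Vec3 { float x, y, z; };

    // Placeholder for the engine's physics line trace (hypothetical):
    // returns true if any geometry blocks the segment between the points.
    bool raycastBlocked(const Vec3& from, const Vec3& to) {
        // A real implementation would query the physics scene here.
        (void)from; (void)to;
        return false;
    }

    // If the straight path from source to listener is blocked, attenuate
    // the source and darken it with a low-pass, rather than muting it.
    void applyOcclusion(const Vec3& source, const Vec3& listener,
                        float& gain, float& lowpassCutoffHz) {
        if (raycastBlocked(source, listener)) {
            gain *= 0.3f;              // assumed occlusion attenuation
            lowpassCutoffHz = 800.0f;  // assumed muffling cutoff
        }
    }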
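For the Doppler effect, the standard formula gives the observed frequency as f' = f (c + vL)/(c - vS), where vL and vS are the listener's and source's speeds along the line joining them, positive when approaching. A sketch computing the resulting pitch ratio:

    #include <algorithm>

    // Doppler pitch ratio: c is the speed of sound (~343 m/s in air),
    // vListener is the listener's speed toward the source, vSource is
    // the source's speed toward the listener.
    float dopplerPitch(float vListener, float vSource, float c = 343.0f) {
        float denom = std::max(c - vSource, 1.0f);  // avoid blow-up near Mach 1
        return (c + vListener) / denom;
    }

    // Example: a source approaching a still listener at 30 m/s sounds
    // about 9.6% higher: dopplerPitch(0.0f, 30.0f) == 343/313 ≈ 1.096.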
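A sketch of "reading through a curve" to drive a parameter over time, one of the timing techniques listed above. Breakpoints are (time, value) pairs and the reader interpolates linearly between them; all names here are illustrative.

    #include <utility>
    #include <vector>

    // Piecewise-linear breakpoint curve: evaluate(t) interpolates
    // between the surrounding (time, value) pairs, clamping at the ends.
    struct BreakpointCurve {
        std::vector<std::pair<float, float>> points;  // sorted by time

        float evaluate(float t) const {
            if (points.empty()) return 0.0f;
            if (t <= points.front().first) return points.front().second;
            if (t >= points.back().first) return points.back().second;
            for (size_t i = 1; i < points.size(); ++i) {
                if (t < points[i].first) {
                    auto [t0, v0] = points[i - 1];
                    auto [t1, v1] = points[i];
                    return v0 + (v1 - v0) * (t - t0) / (t1 - t0);
                }
            }
            return points.back().second;
        }
    };

    // Called from the game loop each tick, e.g.:
    //   volume = fadeCurve.evaluate(timeSinceTriggerSeconds);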
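A sketch of dialog ducking as described above: while dialog plays, the other groups' gains ramp toward a ducked level, then recover when it ends. The ramp speeds and the roughly -12 dB duck amount are assumptions.

    #include <algorithm>

    // Per-tick ducking: moves busGain toward the ducked level while
    // dialog is active and back toward 1.0 afterwards. Call once per
    // frame with the elapsed time dt in seconds.
    float updateDucking(float busGain, bool dialogActive, float dt) {
        const float duckedGain = 0.25f;    // about -12 dB, assumed
        const float attackPerSec = 8.0f;   // fast duck-in
        const float releasePerSec = 2.0f;  // slower recovery
        float target = dialogActive ? duckedGain : 1.0f;
        float rate = dialogActive ? attackPerSec : releasePerSec;
        float step = rate * dt;
        if (busGain > target) return std::max(busGain - step, target);
        return std::min(busGain + step, target);
    }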
Limitations
- Audio budget.
- Maximum number of voices.

Audio budget
The amount of space available for audio files. On a 4 GB game disc it might be 1 GB.

Lossy compressed audio
Compression which removes components of the audio that we (mostly) cannot hear, e.g. MP3, AAC, Vorbis.
- Can reduce file size about 10x.
- More CPU usage for decoding the file.
- Not possible to jump instantaneously to any sample within the file.
- Higher latency (~200 ms for MP3, AAC, Vorbis): the file doesn't start playing right away.

Audio compression
- Music/dialog: should be compressed; usually long files.
- Sound effects: can be uncompressed; usually small files.
Sometimes all you can choose is the compression quality level. Usually sound files are imported in WAV format, and the engine is responsible for compressing the file for a given platform.

Maximum number of voices
Sound files can be loaded entirely into RAM, or streamed from disk in real time. On each platform there are limits to how many sounds can be played at the same time due to disk or RAM constraints. Usually the game engine has a configurable maximum number of voices which can play at the same time. One voice = one sound. The default in Unreal Engine is 32 voices (Project Settings > Engine > Audio > Max Channels).

What happens if the game tries to play more sounds than the maximum number of voices? Some sounds will not play.

Prioritization
Assign a priority to each sound. When the maximum number of voices is reached, stop playing the sounds with the lowest priority. Another rule is to stop playing the softest sources (see the voice-stealing sketch below).

Audio culling
To cull = to choose, select, pick. Some sounds will not be heard because they are too far from the listener and, due to the 3D attenuation, their level is zero. These sounds are then not played, for efficiency.

Audio middleware
Game engines usually have somewhat basic audio features; it is possible to expand the audio capabilities using external middleware software. An audio middleware is an external library which can be integrated into the game engine:
- Support for major game engines.
- More advanced audio features.
- An application similar to a DAW for preparing content.
Most relevant:
- FMOD (https://www.fmod.org)
- Wwise (https://www.audiokinetic.com)
(FMOD image from www.fmod.org.)

An audio middleware is composed of:
- an API (Application Programming Interface), e.g. C++, or an engine integration (Unity3D, Unreal Engine, etc.);
- a GUI application which is user friendly and similar to a DAW.

Events in the game can trigger changes on the middleware side. Continuous parameters can be set at each tick by the game engine and used by the audio middleware. Example: the distance between listener and source.

Features:
- Random playback with random delay.
- Complex timeline logic (jumping to a different place in the file).

The GUI application can be used by sound designers without any knowledge of programming or even game engines. The sound playback, transitions, logic, etc. can be tested in the middleware without even opening the game engine. The audio programmer then takes the exported package from the middleware and imports it into the game engine. The sound designer and the audio programmer agree on a set of parameters to use.

Audio programmer / sound designer example: playing the sound of a car, which is different depending on the speed of the car. The speed of the car is determined by the player controls in the game engine. Create a parameter car_speed. The sound designer can now use this parameter to determine which sound to play, or to crossfade between sounds (see the FMOD sketch below).
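A sketch of the prioritization rule above: when a new sound would exceed the voice limit, steal the voice with the lowest priority, breaking ties by loudness. The struct fields and names are illustrative.

    #include <vector>

    struct Voice {
        int priority;  // higher = more important
        float level;   // current audible level after 3D attenuation
    };

    // Returns the index of the voice to steal when all voices are busy:
    // the lowest-priority voice, and among equals, the softest one.
    int chooseVoiceToSteal(const std::vector<Voice>& voices) {
        int best = 0;
        for (int i = 1; i < static_cast<int>(voices.size()); ++i) {
            const Voice& a = voices[i];
            const Voice& b = voices[best];
            if (a.priority < b.priority ||
                (a.priority == b.priority && a.level < b.level))
                best = i;
        }
        return best;
    }

Audio culling fits the same loop: a sound whose attenuated level is zero never claims a voice in the first place.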
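Finally, a sketch of the car_speed example against the FMOD Studio C++ API. The bank path, event path, and parameter name are the ones the sound designer and audio programmer would have agreed on (assumed here); error handling is omitted and exact signatures may vary between FMOD versions.

    #include <fmod_studio.hpp>

    // Minimal FMOD Studio setup: the game engine feeds car_speed every
    // tick, and the designer's event logic decides what it means in sound.
    int main() {
        FMOD::Studio::System* system = nullptr;
        FMOD::Studio::System::create(&system);
        system->initialize(32, FMOD_STUDIO_INIT_NORMAL,
                           FMOD_INIT_NORMAL, nullptr);

        // Bank exported from the FMOD Studio GUI (path assumed).
        FMOD::Studio::Bank* bank = nullptr;
        system->loadBankFile("Master.bank",
                             FMOD_STUDIO_LOAD_BANK_NORMAL, &bank);

        FMOD::Studio::EventDescription* desc = nullptr;
        system->getEvent("event:/car_engine", &desc);  // event path assumed
        FMOD::Studio::EventInstance* engine = nullptr;
        desc->createInstance(&engine);
        engine->start();

        // Game loop: set the agreed parameter each tick.
        float carSpeed = 55.0f;  // would come from the player controls
        engine->setParameterByName("car_speed", carSpeed);
        system->update();  // must be called regularly

        system->release();
        return 0;
    }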

Questions?