Researched Production Techniques

1. Production for Video Game Soundtracks

When I was helping compose music for a game, it became clear that planning, and treating the game itself as the primary source of inspiration, is one of the most important aspects of composition. This inspiration can come from screenshots, concept art, gameplay videos and, if the game is far enough into development, even play testing.

Most music for games is categorized as either "static" or "dynamic". Static music can be compared to most film and TV soundtracks: it's a piece of music created with the theme, setting and characters of the game in mind, but it mainly serves as background music, something to further establish the atmosphere of the game. Static music can also play during cutscenes or other parts of the game where the player has no control. Dynamic music, on the other hand, is triggered by a certain action or player interaction. It is key that this music can be looped, repeated and faded out at any point in the game. We took this into account when working on the gameplay music for the game "Your Team".
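
To make that looping requirement concrete, here is a rough Python sketch using the pydub library (the file names and timings are hypothetical; in a real game the engine or audio middleware handles this at runtime, so this offline version only illustrates the loop-and-fade idea):

```python
from pydub import AudioSegment

cue = AudioSegment.from_wav("combat_cue.wav")   # hypothetical dynamic cue

# Loop the cue enough times to outlast any plausible encounter;
# pydub repeats a segment with "*".
looped = cue * 4

# End it wherever the action actually ends, with a short fade-out
# so the music never cuts off abruptly mid-phrase.
fight_ends_at_ms = 37_500                        # would come from the game
ending = looped[:fight_ends_at_ms].fade_out(2000)
ending.export("combat_cue_rendered.wav", format="wav")
```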

For example, the track below starts to play in The Witcher 3 whenever the player spots a monster and walks towards it. The fights in the game are long enough that most of the time the whole track plays through, which allows the faster percussion and constantly repeating vocals to work with the gameplay and make the battle feel more intense.

Another key composition technique is to limit the repetition of any one theme, even if it has different variations. Constant repetition can become irritating to the player and even make the game as a whole seem stale and boring, which is the opposite of what the developers intended. Jesper Kyd, a composer for Borderlands 2, explained a solution to this problem in an interview with IGN: "For Borderlands 2, I wrote themes for the different areas. So there are themes for the Ice area, the Sanctuary theme is for your home town, and there is the Interlude theme which has a more Western feel." By giving each area the player comes across its own music, the player never gets bored and each area feels more unique. Composing for specific settings can also be extremely effective if the composition focuses on a certain action or interaction with other characters.
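
In engine terms this often amounts to little more than a lookup from the player's current area to its theme, swapping tracks when the area changes. A minimal Python sketch (area names borrowed from the Borderlands 2 example above; everything else is hypothetical):

```python
AREA_THEMES = {
    "ice_fields": "theme_ice.ogg",
    "sanctuary": "theme_sanctuary.ogg",   # the home-town theme
    "interlude": "theme_interlude.ogg",   # the Western-feel theme
}

class MusicDirector:
    """Swap area themes as the player moves, so no single theme wears thin."""

    def __init__(self):
        self.current_area = None

    def on_area_changed(self, area):
        if area == self.current_area:
            return                          # same area: let the theme keep looping
        self.current_area = area
        self.crossfade_to(AREA_THEMES[area])

    def crossfade_to(self, theme):
        print(f"crossfading to {theme}")    # stand-in for the engine's audio call
```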

An example of location-specific music is the track below, which plays whenever the character walks into a tavern. The dampened hand drums, fast strings, repetitious flute and upbeat tempo capture the setting perfectly, even without any explanation of where the scene takes place.

Here are the soundtracks for the two games we worked on:

Sources:

IGN. (2015). Behind the Music: How Video Game Soundtracks Are Made – IGN. Retrieved 2 August 2015, from http://au.ign.com/articles/2012/09/18/behind-the-music-how-video-game-soundtracks-are-made

Soundonsound.com. (2015). MUSIC FOR VIDEO GAMES. Retrieved 2 August 2015, from http://www.soundonsound.com/sos/nov01/articles/smartdog1101.asp

2. Recording ADR for Film

When it comes to film, ADR, or Automated Dialogue Replacement, is essentially recording dialogue separately from the shooting of a scene. This can be necessary for many reasons: the original shot may have had too much background noise, a quiet delivery from the actors may introduce too much noise when the level is brought up, or the audio equipment may simply have malfunctioned.

When setting up a session, the engineer must decide which type of looping to use. Visual looping consists of the actor listening to the take several times before tracking the dialogue. When they are ready to record, the audio from the previous take is muted and the video of the scene is played instead, so the actor can match the new dialogue to the original mouth movements. If the recording is not done in the control room, this requires a machine with a dual-head video card or some other way of sending a split video feed to where the actor can see it.

The second method is audio looping. Audio looping usually gets better results but is much more time-consuming. It is set up the same way as visual looping, except there is no video feed at all and the old dialogue keeps playing during the new recording. Looping just the audio lets the actors concentrate solely on the script and the original delivery and emotion, without worrying about the setting and visual sync. It can take a while for the actors to make the new dialogue sound authentic rather than "canned", because of the constant repetition and the effort of re-enacting the scene exactly. Most ADR engineers end up using a combination of both looping techniques over the course of a whole film.

When it comes to the recording stage, microphone placement usually depends on the position of the actor in the original take. For example, if the new audio is being recorded with a shotgun mic, it is best to recreate the mic placement of the original recording; that way less editing and processing is needed for the audio to match the camera angle. In any case, the best position for the mic is slightly off-axis and pointed downwards, a little to the left or right rather than directly at the actor's mouth. This decreases sibilance and gives a more consistent level and cleaner pronunciation. A pop filter may also be necessary if there is too much sibilance. Another way to get a better take is to encourage the actors to replicate the body language and mood of the original scene, if they are comfortable doing so. Lots of movement may introduce extra noise, so that also needs to be taken into account.

Sources:

The Beat: A Blog by PremiumBeat. (2014). ADR: Automated Dialogue Replacement Tips and Tricks – The Beat: A Blog by PremiumBeat. Retrieved 10 August 2015, from http://www.premiumbeat.com/blog/adr-automated-dialogue-replacement-tips-and-tricks/

Microfilmmaker.com. (2015). Microfilmmaker Magazine – Tips & Tricks – The Basics of ADR: Secrets of Dialogue Replacement for Video People, Pg. 3 of 4. Retrieved 10 August 2015, from http://www.microfilmmaker.com/tipstrick/Issue15/Bas_ADR3.html

3. Figure-8 Polar Pattern Vocal Recording

There are three main polar patterns microphones use: omnidirectional, cardioid, and figure-8. An omnidirectional mic picks up the sound source equally regardless of which direction the sound comes from. A cardioid pattern picks up sound best from the front of the mic, less from the sides, and not at all from the back. Figure-8, however, uses two opposite-facing diaphragms, one at the front and one at the back. Because of this, sound sources at the front and back of the mic are picked up easily, while nothing is picked up from the sides.
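
These idealised patterns are easy to describe mathematically: omni has constant gain, cardioid follows (1 + cos θ)/2, and figure-8 follows cos θ, where θ is the angle between the source and the front of the mic. A quick Python sketch (assuming NumPy) to illustrate the side rejection we relied on:

```python
import numpy as np

def pattern_gain(theta_deg, pattern):
    """Idealised pickup gain for a source at theta degrees off-axis.

    0 degrees = directly in front of the mic, 180 = directly behind.
    """
    theta = np.radians(theta_deg)
    if pattern == "omni":
        return np.ones_like(theta)          # equal pickup in all directions
    if pattern == "cardioid":
        return 0.5 * (1 + np.cos(theta))    # full at the front, zero at the rear
    if pattern == "figure8":
        return np.abs(np.cos(theta))        # full at front/back, zero at the sides
    raise ValueError(pattern)

# A source at the side (90 degrees) is fully rejected by a figure-8 mic:
angles = np.array([0.0, 90.0, 180.0])
for p in ("omni", "cardioid", "figure8"):
    print(p, pattern_gain(angles, p))
```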

Adib, Jordan and I decided to experiment with this polar pattern yesterday when we recorded dialogue for our project. Originally the game developers didn't want dialogue in the game, which we were fine with. However, one day after recording most of the assets we were just messing around in the Raven studio and decided to record some dialogue to see what it would sound like. Jordan and I both put on hillbilly accents and the result was hilarious. One of the developers, Anthony, came to ask us a question and we showed him what we had just recorded. He thought it was great and called over the rest of the team; they all changed their minds, asked for even more dialogue and offered suggestions on what we should say. We covered what they wanted and the rest was improvised. Altogether we ended up with around 20 lines of dialogue.

They implemented most of the dialogue into the game before the second play test. After the play test they asked us for even more dialogue, saying requests for it had come up in much of the feedback. We went back into the Raven a few weeks later and recorded around another 70 lines.

During the recording sessions we used an AKG C414 to record the dialogue. We set the polar pattern to figure-8, and Jordan and I stood on opposite sides of the microphone facing each other. Recording the dialogue this way made it sound more genuine and real, since the characters were meant to be talking to one another, and the minimal delay between responses made it easier for us to edit. After we finished recording we EQ'd the dialogue and added a bit of reverb so that it fit the setting of the game.

Recording Dialogue
AKG C414

The one downside of recording this way was that we had to manually cut up each line and move the dialogue onto two separate tracks, one for each character, to process and bounce out individually. This could become time-consuming with lots of dialogue, or if we ended up talking over one another. Fortunately we didn't need to record that much dialogue (only about 30 seconds' worth all up), and we paused for a second after each line so that no breaths or other noise from the other person came through.
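
For a session with a lot more dialogue, the cutting could be partly automated by splitting on those deliberate pauses. A rough sketch using the pydub library (my assumption; a DAW's strip-silence feature does the same job, and the file name and the strict speaker alternation are hypothetical):

```python
from pydub import AudioSegment
from pydub.silence import split_on_silence

session = AudioSegment.from_wav("dialogue_session.wav")

# Split wherever there is at least a one-second pause, i.e. the gap we
# deliberately left between lines.
lines = split_on_silence(
    session,
    min_silence_len=1000,              # ms of silence that ends a line
    silence_thresh=session.dBFS - 16,  # anything 16 dB under average = silence
    keep_silence=200,                  # keep a little air around each line
)

# If the actors strictly took turns, alternate lines belong to
# alternate speakers.
for i, line in enumerate(lines):
    speaker = "character_a" if i % 2 == 0 else "character_b"
    line.export(f"{speaker}_line_{i:02d}.wav", format="wav")
```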

Sources:

Disc Makers Echoes. (2015). Pros and cons of the Figure 8 mic. Retrieved 12 August 2015, from http://blog.discmakers.com/2015/03/pros-and-cons-of-the-figure-8-mic/

E-Home Recording Studio. (2012). Microphone Polar Patterns: A Beginner's Introduction. Retrieved 12 August 2015, from http://ehomerecordingstudio.com/microphone-polar-patterns/

Soundonsound.com. (2015). Using Microphone Polar Patterns Effectively. Retrieved 12 August 2015, from https://www.soundonsound.com/sos/mar07/articles/micpatterns.htm

4. Using Multiband Compression

Compression is one of the key processing techniques and is crucial for almost any sort of audio work. The general point of compression is to reduce the dynamic range of individual tracks, or of the final track when it comes to the mastering stage. During mastering, different parts of the frequency spectrum will need different compression settings to appropriately reduce the dynamic range; that's where multiband compression comes in.

Multiband compression separates the frequency spectrum into different bands (hence the name), or sections. This lets the mastering engineer apply individual compression settings to each band, compressing the different elements in the mix separately, which reduces the dynamic range of the overall track while remaining subtle and transparent. Most multiband compressors use 3–4 bands, though it isn't uncommon to find some with more.

Ozone Multiband Compressor

Correctly setting up the crossover frequencies makes the processing all the more effective. The right positions for the crossovers mainly depend on the instruments in the mix and how they were previously processed. For example, if you want to compress the band the vocals sit in, you would set the low crossover just below the vocals and the high crossover just above them, so as not to cut into the reverb tail, if there is one. Most multiband compressors have a solo function for each band, which can further help in setting up the crossovers.

From there the engineer can begin compressing. It's important that the track doesn't get over-compressed, as the individual tracks should already have been compressed during the mixing stage; multiband compression at mastering should remain subtle, if it's audible at all. With any processing at any stage, it's important to be constantly aware of how it changes the audio by A/Bing, or bypassing, the processing and comparing the processed audio with the original.
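
To make the signal flow concrete, here is a minimal Python sketch (assuming NumPy and SciPy; a real multiband compressor uses phase-matched Linkwitz-Riley crossovers and attack/release envelopes, which this toy version skips, and all the numbers are illustrative):

```python
import numpy as np
from scipy.signal import butter, sosfilt

def split_bands(x, sr, crossovers=(200.0, 2000.0, 8000.0)):
    """Split a mono signal into bands at the given crossover frequencies."""
    edges = [0.0, *crossovers, sr / 2]
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        if lo == 0.0:
            sos = butter(4, hi, btype="lowpass", fs=sr, output="sos")
        elif hi >= sr / 2:
            sos = butter(4, lo, btype="highpass", fs=sr, output="sos")
        else:
            sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        bands.append(sosfilt(sos, x))
    return bands

def compress(x, threshold_db, ratio):
    """Crude instantaneous compressor: attenuate anything over the threshold."""
    level_db = 20 * np.log10(np.abs(x) + 1e-10)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)
    return x * 10 ** (gain_db / 20.0)

# Test signal: a low tone plus a high tone, standing in for a mix.
sr = 44100
t = np.arange(sr) / sr
x = 0.5 * np.sin(2 * np.pi * 80 * t) + 0.2 * np.sin(2 * np.pi * 5000 * t)

# One (threshold dB, ratio) pair per band: firmer control on the lows,
# gentler settings on the mids and highs.
settings = [(-24.0, 4.0), (-20.0, 2.0), (-20.0, 2.0), (-30.0, 1.5)]
y = sum(compress(band, th, ratio)
        for band, (th, ratio) in zip(split_bands(x, sr), settings))
```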

Sources:

Soundonsound.com. (2015). MULTI-BAND WORKSHOP. Retrieved 12 August 2015, from https://www.soundonsound.com/sos/aug02/articles/multiband.asp

Volans, M. (2011). How to Use Multi-band Compression in Mixing and Mastering – Tuts+ Music & Audio Tutorial. Music & Audio Tuts+. Retrieved 12 August 2015, from http://music.tutsplus.com/tutorials/how-to-use-multi-band-compression-in-mixing-and-mastering–audio-1904

Izotope.com. (2015). Multiband Compression Basics | iZotope Mastering Tips. Retrieved 12 August 2015, from https://www.izotope.com/en/community/blog/tips-tutorials/2014/06/multiband-compression-basics-izotope-mastering-tips/

5. Creating Foley for Games

While foley was originally a key element of film, it has become increasingly important in games. Foley is now crucial to maintaining immersion in first-person games and, to a lesser extent, third-person games. For example, simply including the sound of footsteps in a first-person game can make a huge difference to how immersed the player feels, regardless of the actual gameplay. Foley has also become more common in games as better audio compression formats have been introduced.

The main difference between creating foley for film and for games is that film foley must match a specific scene and sync up correctly with the images. In games it is much harder to match the foley to the action, because the player controls the character's movement. It becomes even more complicated if the studio wants audio assets for sections of the game that aren't in development yet, though most studios will give the engineer captured gameplay footage, let them play the game themselves, or at the very least provide concept art.

Fortunately, the developers we were working with gave us some tips. They explained that we should create at least three variations of each sound so that they could randomise them in the game. We took their advice and created several different walking sounds and different variations of rocks falling.
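
A minimal sketch of how a developer might randomise those variations at runtime (Python for illustration; an engine like Unity would do this in C#, and the file names are hypothetical), picking a random variant while never playing the same one twice in a row:

```python
import random

class SoundVariants:
    """Pick a random variant, avoiding the same clip twice in a row."""

    def __init__(self, clips):
        self.clips = clips
        self.last = None

    def next_clip(self):
        choices = [c for c in self.clips if c != self.last] or self.clips
        self.last = random.choice(choices)
        return self.last

footsteps = SoundVariants(["step_01.wav", "step_02.wav", "step_03.wav"])
for _ in range(5):
    print(footsteps.next_clip())   # e.g. step_02, step_03, step_01, ...
```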

Setting up the recording session properly is extremely important for an efficient and smooth foley session. Aside from having a brief, it is a good idea to list all of the sounds you NEED to create, with some ideas on how to create them, and also list the optional sounds you would LIKE to create if you think of some the client forgot. We suggested sounds for the character death, water splashes and many others to the developers we were working with, along with a rough idea of how to create each one.

Knowing the setting where the foley will be triggered is also extremely helpful. It allows the engineer to apply the appropriate processing and fades so the sound fits the setting perfectly. For example, the game we worked on took place in a cave, so we added an appropriate amount of reverb to the foley to make it sound more legitimate. Some game engines, such as Unity, also include basic processing options, which makes it easier for the developers to fine-tune the sounds if they aren't satisfied with the engineer's processing.
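
One simple way to add that kind of space outside a DAW is convolution with an impulse response of the target environment. A rough Python sketch (assuming NumPy and SciPy; the file names are hypothetical and both files are assumed to be mono):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

sr, dry = wavfile.read("rockfall_dry.wav")         # the dry foley take
_, ir = wavfile.read("cave_impulse_response.wav")  # IR of a cave-like space

dry = dry.astype(np.float64) / np.max(np.abs(dry))
ir = ir.astype(np.float64) / np.max(np.abs(ir))

# Convolving the dry sound with the room's impulse response "places" it
# in that room; a heavier wet mix reads as a deeper, more cavernous space.
wet = fftconvolve(dry, ir)
wet /= np.max(np.abs(wet))

mix = 0.5                                          # dry/wet blend, to taste
dry_padded = np.pad(dry, (0, len(wet) - len(dry)))
out = (1 - mix) * dry_padded + mix * wet
wavfile.write("rockfall_cave.wav", sr, (out * 32767).astype(np.int16))
```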

Getting the sound of rocks falling and rolling around right was absolutely crucial for this game. We recorded these sounds with a pair of AKG C451 Bs in a stereo mic placement. Adib and I picked up handfuls of the stones and slowly dropped them back into the box. We dropped both small rocks and large ones, which resulted in a random mix of different-sized rocks hitting each other, as well as little pebbles falling more frequently, which gave a more constant sound amongst the clutter. After multiple takes of both approaches we had a collection of sounds that fit well with what we had seen of the game. We recorded and submitted multiple takes so that the developers could randomise them, and we added heavy reverb to the sounds so they fit the setting where they would be triggered.

Mic Set Up

Moving Rocks


We also needed to create walking and jumping sounds for the characters. For the walking we had Adib walk in place on the rocks, and for the jumps he raised his feet and brought them back down with some force. To give the jump a slightly more authentic, game-like feel, Adib later swung a piece of rubber tubing to make a swish sound, which we layered underneath the jump. This resulted in some great sounds that fit the game perfectly. We of course did three takes of each of these sounds for the developers to randomise.

Swish Sound

Jump Sound

When it came to creating the sound of a fire crackling, Adib grabbed a handful of the tape that was in the studio and rubbed it between his hands, which produced a perfect crackling sound. We just EQ'd out some of the high end, boosted some of the lows and mids, and it was almost a dead ringer. Stereo was unnecessary this time, so we recorded it with an AKG C414.

Tape Used To Create Fire Crackling

Setting, pre-production, processing, compression and variation of the sounds are just some of the aspects that need to be taken into account when creating audio for games.

Sources:

Develop-online.net. (2015). Audio Special: Foley for games. Retrieved 23 August 2015, from http://www.develop-online.net/analysis/audio-special-foley-for-games/0117620

Isaza, M. (2015). Andrew Lackey Special: Foley Sessions for Games | Designing Sound. Designingsound.org. Retrieved 23 August 2015, from http://designingsound.org/2009/12/andrew-lackey-special-foley-sessions-for-games/
