Saturday 27 April 2013

Editing and Mixing Sound: Optimising your Recordings

Now that I've covered most of the basics for getting a sound source recorded, we can begin to touch on the mountain that is post-production. There are many stages involved in getting your sound from a 'raw' form (that is, straight from the recorder) into something that can be used and integrated into a production, be that a film, a game, a user interface for a vehicle and so on. As this series of posts is going to cover quite a lot in terms of what you can do with sounds in post, I'll start with the step that all productions should start with - optimising recordings.

For the following post, I'll be using Audacity, just because it's a free tool that anyone can use on both Mac and PC. However, the methods shown can apply to most audio editing tools as well as DAWs (Digital Audio Workstations).

Step 1: Cut and Chop, but be wary of a Pop!
Inevitably when recording sound, there will be points of (almost) silence and sections of noise which will be of no use to you. The best thing to do with this audio is to either silence it or delete it altogether. To silence the sounds you don't need, Audacity has a 'Generate' feature which can replace a selection with silence. The below image shows a recording with two pieces of audio we want, and some noise that we don't.
This is the editing window for Audacity. Here, we have the recording and I've selected the noise that we want to remove.

With the default selection tool, you can drag across the area where you want to 'create' the silence, or remove the unwanted noise.


With this noise still selected, we go to Generate and Silence. A small prompt appears outlining the length of silence to generate (which should match the selected area).

The final product is shown below, now effectively without the unwanted noise.

The selected noise has now been removed from the recording.
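
If you ever need to do the same job outside an editor, the idea is simple enough to script. Here's a minimal sketch in Python using numpy and the soundfile library (both assumed to be installed; the file name and selection times are made up for illustration):

    import numpy as np
    import soundfile as sf  # assumed installed: pip install soundfile

    # Load the recording; data is an array of samples, sr is the sample rate
    data, sr = sf.read("raw_take.wav")  # hypothetical file name

    # Hypothetical selection: silence everything between 2.0s and 3.5s
    start, end = int(2.0 * sr), int(3.5 * sr)
    data[start:end] = 0.0  # the equivalent of Generate > Silence over the selection

    sf.write("raw_take_silenced.wav", data, sr)
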
You can (and should) take this further though. With all editing, there is the risk of interrupting a zero-crossing point. What is a zero crossing you ask? Well, when a sound wave fluctuates, it does so above and below a point of no movement - known as the zero crossing point. This is illustrated below, where the vertical line lies:


If you choose to remove a portion of your sound, you have to make sure that the start and end of the cut are on a zero crossing. Otherwise, you get what is shown below: a sharp jump in the wave's fluctuation. The result of these non-zero-crossing edits is a popping sound, which can cause a lot of issues, especially for loops.


So how do you prevent such horrible audio gremlins? There are two things you can do which will both keep the sound from popping and speed up your workflow.

1. Zero Crossing Snap: In a lot of audio editing software, there is an option to snap only to zero-crossing points. This means that, no matter where you click on an audio region, the cursor will only land on a point of no movement. This saves time, as you're not forever having to zoom in and fine-tune the selection at a waveform level. This is something I would recommend using most of the time, with only a few cases where you might need otherwise, like creating loops that start and end above or below the zero-crossing point.

2. Fading: Particularly when editing out noise and unwanted audio either side of the audio you wish to keep, an easy way to clean up zero crossings and unnatural-sounding cuts is to apply short fades: a fade in before the desired audio and a fade out at the end of it. For splicing audio together, you can use the same technique on both pieces of audio and overlay one on the other ever so slightly, so an almost 'seamless' transition occurs. Wherever possible, I would try to bake these fades into the audio file itself rather than leave them in a DAW session, as real-time fades take up vital CPU power, so having these edits in the file itself frees your processor for other work, such as track automation. A rough sketch of both techniques is shown below.
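
To make those two ideas a bit more concrete, here's a rough numpy sketch of snapping a cut point to the nearest zero crossing and applying a short linear fade. It assumes mono audio in a float array and is only meant to illustrate what the editor is doing for you:

    import numpy as np

    def snap_to_zero_crossing(data, index):
        """Move a cut point to the nearest sample where the wave crosses zero."""
        sign = np.signbit(data)
        crossings = np.where(sign[:-1] != sign[1:])[0]  # sign changes mark zero crossings
        if len(crossings) == 0:
            return index
        return int(crossings[np.argmin(np.abs(crossings - index))])

    def apply_fade(clip, fade_len, fade_in=True):
        """Apply a short linear fade to the start (fade_in) or end of a mono clip."""
        ramp = np.linspace(0.0, 1.0, fade_len)
        out = clip.copy()
        if fade_in:
            out[:fade_len] *= ramp
        else:
            out[-fade_len:] *= ramp[::-1]
        return out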

Separating Sounds
When you've made a clear separation for your sounds by deleting unwanted audio, it's time to create individual files for them. As far as I'm aware, Audacity doesn't have the most intuitive process for saving separate sounds, but Adobe Audition (which I use a lot of the time) allows you to separate sounds and save them accordingly with very little input. Regardless, we'll go over how to process these sounds in Audacity.

So at this point, we have two sounds that were recorded within the same take. The first thing we'll need to do is cut one sound out (or copy it if you're wary of data loss). The next thing to do is create a new session and add a mono or stereo track, depending on the type you copied from; stereo in this case, as the recording was made in stereo.

When this has been created, we simply paste the audio in and we have our sounds separated. Now, if you've recorded lots and lots of sounds in one session that you'd like to chop up, this is obviously going to take a bit of time. Unfortunately, Audacity doesn't have any tools to compensate for this, but another method is available where you can keep a single session and not have to chop up all your audio. It does require you to bounce each file directly out, but it means you can separate everything much more quickly. All you have to do is select the audio you want to separate and then Export Selection (make sure you have the WAV or PCM format selected; I'll explain why later).
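
As a rough illustration of the same 'Export Selection' idea, here's how you might slice a region out of a longer take and write it as its own WAV file in Python with the soundfile library (file names and selection times are hypothetical):

    import soundfile as sf

    data, sr = sf.read("session_take.wav")  # hypothetical multi-sound take

    # Hypothetical selection boundaries (in seconds) for the second sound in the take
    start_s, end_s = 4.2, 6.8
    selection = data[int(start_s * sr):int(end_s * sr)]

    # The equivalent of Export Selection, written as lossless 16-bit PCM WAV
    sf.write("impact_hit_01.wav", selection, sr, subtype="PCM_16")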

Removing Unnecessary Audio
This can apply to every kind of production, but is especially important for Game Audio. Due to the memory limitations on consoles and mobile devices, every aspect of a game must be optimised to the nth degree, including audio. The first means of optimising a sound in this instance is removing any wasted audio at the start and end of it. The most effective way of doing this is by selecting the silence up to the start of the waveform's movement and then zooming in to fine-tune the selection.

Here, I've zoomed in to fine-tune the selection. It's far too easy to delete too much by accident, so always listen to the sound as you're deleting.

There are some instances you must be wary of though. In the past, I've removed a little too much audio on either side, which has created a distinctive 'jump' in volume. More often than not, this occurs when removing audio from the end of a sound, as natural reverberation can be mistaken for unwanted noise. To avoid this, always listen to the sound as you're deleting; if you delete too much, just go back on yourself (undo is your friend!).

This is a good example of too much audio selected for deleting.
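
If you find yourself topping and tailing a lot of files, this kind of trim is easy to automate with a level threshold. Here's a rough sketch (numpy and soundfile assumed, file names hypothetical) that keeps a small safety margin either side so the natural tail isn't chopped off:

    import numpy as np
    import soundfile as sf

    def trim_silence(data, sr, threshold=0.002, margin_s=0.05):
        """Trim near-silence from the start and end, keeping a small safety margin."""
        loud = np.where(np.abs(data) > threshold)[0]
        if len(loud) == 0:
            return data  # nothing above the threshold; leave the file alone
        margin = int(margin_s * sr)
        start = max(loud[0] - margin, 0)
        end = min(loud[-1] + margin, len(data))
        return data[start:end]

    data, sr = sf.read("ray_gun_raw.wav")  # hypothetical file
    sf.write("ray_gun_trimmed.wav", trim_silence(data, sr), sr)

As always, listen to the result - a fixed threshold can't tell the difference between the noise floor and a quiet reverb tail.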

If for whatever reason the sound you have has a 'jump' in volume that you can't go back on, there is another solution - a short fade in or out, as I discussed earlier. For example, say you have a sound with a very short build-up before the body of the sound (a ray gun possibly?). Unfortunately, when the sound was separated from a session with a few takes, the build-up was cut off slightly and now sounds unnatural. To help improve this, you can select a short portion of the start and apply a 'fade in' to it.

If the cut off is at the start, select a small portion and apply a simple Fade In...

...Similarly if the end is cut off, apply a short Fade Out.

With other productions, such as sound for film or music, it's not a huge issue if silence is left in the file, as automation or gating can remove this from the final mix down. However, new technologies in the near future (I'm looking at you, Pro Tools 11) will mean that removing silence WILL have a benefit on CPU usage, at least in the post-production stages. This is due to the way plugins currently process sound: when the session is playing, all plugins are in constant use, regardless of whether there is any sound in the track to play, which is quite a large waste of processing power. In this new software, the DAW will look ahead in a track to see exactly where a plugin needs to turn on and off, vastly reducing usage.

Now that the sound has gone from being recorded to separated to optimised with the removal of unwanted audio, we can move on to the volume side of the waveform.

Step 2: Gain and Normalisation causes much less Frustration
Thankfully, this next step doesn't take too long to accomplish, but it has a few pitfalls that need to be avoided. Basically, when you record a sound, the signal strength generally doesn't take advantage of all the headroom available. This is for good reason, as it acts as a safety net in case the sound decides to jump in volume. However, with the sound recorded, we can now adjust the gain to take advantage of that space left behind.

The easiest, quickest and safest way to achieve this is by using the normalisation function in your audio editor. This will take the entire waveform (or a selection) and bring the gain up until the highest peak hits the limit that you set. For example, the below image shows an unchanged waveform on the left. If we normalise to 0dB, the volume is increased until the highest peak is exactly at 0dB, which gives us the waveform on the right.

This way, the waveform is increased in volume, but avoids distortion.
The second method is increasing the gain manually. This gives you a little more freedom and is great for when your sound source has a higher noise floor than anticipated, but it runs the risk of distorting the waveform... and once the sound is distorted in a destructive editing environment, you can kiss it goodbye. This is why you should ALWAYS keep backups, save your sessions often and save incremental copies as you progress!
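
Under the hood, normalisation is essentially just a scale factor worked out from the highest peak. Here's a minimal sketch of both approaches - normalising to a target peak and applying a manual gain with a quick clipping check (numpy and soundfile assumed, file names hypothetical):

    import numpy as np
    import soundfile as sf

    def normalise(data, target_db=0.0):
        """Scale the audio so its highest peak sits at target_db dBFS."""
        peak = np.max(np.abs(data))
        if peak == 0:
            return data  # silent file, nothing to do
        return data * (10 ** (target_db / 20.0) / peak)

    data, sr = sf.read("ray_gun_trimmed.wav")  # hypothetical file
    normalised = normalise(data, target_db=0.0)

    # Manual gain instead: +6dB, but check for clipping before committing to it
    gained = data * 10 ** (6.0 / 20.0)
    if np.max(np.abs(gained)) > 1.0:
        print("Warning: this gain would clip - back it off or normalise instead")

    sf.write("ray_gun_normalised.wav", normalised, sr)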

Step 3: Simple EQ, which is easy to do
So just from listening to your sound source, you should have an initial impression of the frequency range it covers and what you'll want to get from it in production. This is where you can use that impression to get a feel for which frequencies will ultimately be removed and kept. At this stage however, we just want some very simple EQ that will remove the 'invisible' frequencies. By this, I mean the frequencies outside of what the source uses. How is this possible? Surely if you record a source with a range between 500Hz and 5kHz, you'll only pick up those frequencies? Unfortunately, the nature of recording says otherwise - your microphone will pick up everything it is capable of. This is why we have shock mounts for condenser microphones, to help prevent deep rumbles and sharp movements coming out on the recording. In fact, no matter how well you set up a microphone, there will always be some form of unwanted frequency that needs removing in some capacity; this is why the HPF (high pass filter) is your best friend. It will cut away all that sub-frequency content that would otherwise come back to haunt you in post-production.

Now I've covered that, let's look at a real world example. Here again, I have my recorded sound, now cleaned up with all unwanted noise removed and gained correctly.

By going up to Effect and selecting Equalization from the toolbar, you can add a very simple HPF that will remove any unwanted low-frequency content. Below you'll see the interface built into Audacity. A lot of modern DAWs have a simple button which adds an HPF; all you have to do is adjust the frequency at which it starts to cut and how quickly the volume slopes off. Here though, you get a simple line tool which you can adjust by adding points and moving them around. I've drawn two points: one left at 0dB around 100Hz and the other dragged all the way down, removing everything from that frequency and below.
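
The exact curve you draw in Audacity isn't something I can reproduce in a couple of lines, but as a rough equivalent, here's a gentle Butterworth high-pass at 100Hz using scipy (scipy and soundfile assumed installed; file names hypothetical):

    import soundfile as sf
    from scipy.signal import butter, sosfiltfilt  # scipy assumed installed

    data, sr = sf.read("ray_gun_normalised.wav")  # hypothetical file

    # 2nd-order Butterworth high-pass at 100Hz - a gentle slope that removes
    # the sub-frequency content while leaving the audible range largely untouched
    cutoff_hz = 100.0
    sos = butter(2, cutoff_hz / (sr / 2.0), btype="highpass", output="sos")
    filtered = sosfiltfilt(sos, data, axis=0)  # axis=0 keeps stereo channels intact

    sf.write("ray_gun_hpf.wav", filtered, sr)

Swap btype for "lowpass" and you have the LPF mentioned further down.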

It's worth mentioning here that you don't really want to cut too much. Later on in post-production, or in another project, you may want to use a frequency range that was removed in this step. What you want to do is hear how the EQ affects the sound before applying it, as the whole point is for the HPF NOT to audibly change the sound. It may seem strange to say that after all I've explained, but it's more of a cleaning exercise than an attempt to make your source sound better. There are two simple examples of where this is a clear benefit later on in production:

  1. Layering sounds: When you get to a point in the production where multiple sounds start to pull together and play alongside each other, these sub-frequencies start to build up if you haven't removed them. They can cause a lot of problems with the bottom end, so you can save some time by removing them now.
  2. Pitch shifting: If you don't remove these frequencies and decide to pitch shift your sources up, you might start to hear what was once too low to hear naturally. E.g. if you have an 'invisible' frequency at 20Hz and pitch shift your sound up 3x, it would turn into noise at 60Hz, which is well within hearing range.
You can also use an LPF (low pass filter) to cut out a lot of the higher frequency content. This is best used on low-frequency sounds that don't have any mid or high frequency content to begin with, as you don't want to remove anything you might need in the future. Again, it's a cleaning exercise to make your life a little easier later in the post-production stages. Removing unwanted low frequencies is the more important task at this stage though.

Now that we've got our basic source all cleaned and ready to go, we can bounce the sound ready for use in a production.


Step 4: Bouncing your Sound and the King is Crowned
This is very important to get right for any production. At this stage, you don't want to lose any quality from the recording session, which should have been done at as high a quality as possible. The biggest mistake made here is thinking that mixing down or bouncing a sound to mp3 or another lossy format is OK as long as it has a high kbps rate. It is not! Only when your production has gone through a final mixing and mastering stage should you even think about mp3 or other lossy formats, and even then I don't condone it. When bouncing your sound, use a lossless format like .aiff or .wav.

As long as you've taken the steps above, this should be very easy to do. In Audacity, all you need to do is Export the sound from the File menu and make sure the format is WAV (or PCM [Pulse-Code Modulation], as some software shows it) as shown below.
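
For completeness, the same bounce outside Audacity is a one-liner with soundfile - the key point being a lossless PCM subtype rather than anything lossy (file names hypothetical):

    import soundfile as sf

    data, sr = sf.read("ray_gun_hpf.wav")  # hypothetical cleaned-up source

    # Bounce as lossless PCM WAV; 24-bit here to match a high-quality recording session
    sf.write("SFX_Gun_Ray_Verb1.wav", data, sr, subtype="PCM_24")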

I want to mention something that I feel is very important before I conclude this blog post. Consider how you name your sounds and where you bounce them, as you'll probably be saving a lot of them. It's far too easy to save sounds with odd names or ones that have no meaning. Before you know it, you're taking a significant amount of time to find what you need. It's therefore best to come up with a naming scheme that will allow for quick and easy searching of specific sounds. Let me explain what I mean.

First, you may want to start the file name with what kind of sound it is. If it's an ambience, you might put an 'A' at the start. Next, you may have many different types of ambience, like mechanical or forestry. For these, you can add '_Mec' for mechanical or '_For' for forestry. Finally, if you have a few ambience tracks with the same feel, you can number each one. The final name would therefore be along the lines of 'A_Mec1' or 'A_For3'. Another example, for an SFX of a big gun with natural reverb, could be 'SFX_Gun_Big_Verb1'. You get the idea.
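
If you want to keep yourself honest about the scheme, a tiny helper like this (purely illustrative - the function and its arguments are my own invention) makes it harder to drift into ad-hoc names:

    def sound_name(category, subtype=None, descriptors=(), take=1):
        """Build a searchable file name like 'A_Mec1' or 'SFX_Gun_Big_Verb1'."""
        parts = [category]
        if subtype:
            parts.append(subtype)
        parts.extend(descriptors)
        return "_".join(parts[:-1] + [parts[-1] + str(take)])

    print(sound_name("A", "Mec", take=1))                     # A_Mec1
    print(sound_name("SFX", "Gun", ("Big", "Verb"), take=1))  # SFX_Gun_Big_Verb1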

Conclusion: I hope what I've covered has made sense for the most part. As said earlier: at this stage, you don't really want to be altering what your source sounds like; you just want to clean it up and make it easy to work with once you get to your production. When we come to the DAW stage, with adding sounds and creating track groups, we can really open the doors to EQ, compression, effects and all the lovely bits of Sound Design that really make it worthwhile. Also, many apologies for the section titles; on reflection, they're in line with what a teacher might put on some slides to make their subjects more interesting. Please feel free to slap me through the internet.

Next time, I'll attempt to cover the bigger picture of a production and how to frame your mind for mixing: levels, avoiding things like over-compression, and the dreaded dynamic range, which even I struggle with. It is all in the planning!

Alex.
