Friday, January 20, 2012

Music in non-profit New York art education

Leading Leaders is a non-profit educational organization that teaches kids about art and understanding art, and helps them express themselves and their feelings through it. Our music is featured in a really nice YouTube film documenting the project. In the film we learn about the project and hear insights from teachers, volunteers and students alike.

The music you hear throughout the film is the track Day After Day, composed by Pawel Blaszczak. It is a royalty-free music track which - as you can see from the film - is not fingerprinted by YouTube and not in a Content ID program, and thus does not cause the video to be sullied by advertising or nasty copyright warnings from YouTube. (This is the case with all our music; we do not fingerprint our music - see this article for details.)

The film itself was created with the help of Peter Galperin and David Frieberg at Significant Films, a creative company that produces short films for big events: highly original, engaging, theme-based films that entertain and inspire at special events or online.

Wednesday, January 11, 2012

Mixing as part of the music composing process, Part 2

by Kole Hicks

In part 1 of this 2-part series, the idea of combining the mixing & composition processes was introduced... "Pre-meditated Mixing," if you will. Many of the potential benefits were laid out, along with a few arguments on when it may not be practical. Furthermore, a hypothetical Game Audio example was introduced and we went through my thought process for the track before a single note was played.

This article is dedicated to explaining how the track was created, why certain decisions were made, and reinforcing the idea that "Pre-meditated Mixing" can be quite beneficial.

If you’ve yet to read the first part of this article, please do so HERE. If you’ve yet to hear the track, please do so HERE, after reading a blurb about the track’s intentions below.

"If I were playing a fictitious/high-fantasy game as the great explorer "Marco Polo" while he wandered through the borderlands of Mongolia/Northern China on a foggy night... what would that sound like?"

I answered this question in as much detail as possible and to the best of my abilities before touching an instrument, so that I would have a better understanding of how to approach the track and accurately express that which is most important. However, like many things in life, once I "jumped" in it was easy to see which ideas worked great, which needed slight alterations, and which in a few cases needed to be removed or changed completely. Even though I didn’t stick "true" to all of my original thoughts, I believe the end product is better for it, and reaching that point would have been much more difficult if I hadn’t had a foundation to work from. With all of that said, let’s break this track down into "digestible" chunks: Form/Structure, Harmonic/Melodic Material, Instrumentation/Orchestration, and last but certainly not least... Mixing choices.

I. Form/Structure

In the previous article, I hinted that this example track would follow a standard dynamic curve that could easily be found in many different games. The only difference in this track is that I had to accomplish that within a few minutes, whereas if this were composed for an actual game, each layer would be a few minutes long, loop-able, and have different "cue-able" sections that interact with one another based on the player’s choices. With that said, I decided to structure the piece into 3 main sections building to an overall peak: Ambient/Mood, Exploration, & Action, with a happy resolution at the end.

The Ambient/Mood section runs from the beginning of the track until about :36, where there is a transition to the Exploration section. The Exploration section starts around :36, but doesn’t technically end until 1:28. However, between 1:11 – 1:28 there is added tension and increased pace to suggest we’re about to discover something BIG. This short passage can be considered an extended transition... perhaps something that would be used during a structured game sequence or within a cinematic.

This means that from 1:28 until around 1:46 we have the Action section. This could be used for combat, intense platforming challenges, etc. Near the very end I decided to end on a happy note to suggest a resolution to whatever action just took place. The ending you’ve heard says "Congratulations, you lived through it!" However, it could easily resolve to something darker if the player failed in their quest or perhaps died. As a side note, the track never changes tempo (in part for "syncability" within an interactive music system), but the intensity increases over time using many of the "tools" I’ll discuss below.

II. Harmonic/Melodic Material

In all honesty I could talk about Harmony/Melody all day, but we’re not here for a theory lesson. So, I’ll just quickly describe the most important parts in this piece and why I think they accurately express our overall intent.

Harmonically this piece develops over time as we transition between each of the sections described above. At the beginning of the track, our focus is around "E." Because there isn’t really an identifiable "chord" happening during this Ambient section, yet we still focus on "E," I consider this section a static build on the Dominant. Melodically, the most apparent thing in the Ambient section is the female vocal part.

If the Ambient section is considered a Dominant Prolongation, this would then imply that our tonic is "A" (specifically A Minor), which is exactly where we resolve when transitioning into the Exploration section. It is at the beginning of this section that we hear fragments of Marco Polo’s theme in the piano. Harmonically it hovers around A minor until the second repetition of Marco’s theme, when both the melody and harmony adjust to G minor (specifically at 1:02).

During the passage we identified earlier as an "Extended Transition" (1:11), the harmonic progression is heard in its entirety. Specifically, it moves between: Am, CMaj.b5, D13, Gm, B+7b9.

Marco’s Theme and our entire harmonic progression repeat in the Action section (although the parts are distributed differently). The only change comes at the very end, when we resolve to E Major: an Authentic Cadence for the "B+7b9" and a resolution that sets us up for a repeat back to the Dominant Prolongation in the Ambient section at the beginning of the track.

III. Instrumentation/Orchestration

If you’ve read through the previous article, then you’ll have noticed that a few additional instruments were added in this track. I did so for two main reasons...

The first is that the piece "called" for additional instruments to enhance its emotion, and the second is that I had just bought all of Impact Soundworks’ libraries and wanted to test them out! (My hat is off to those guys for creating some great sounding libraries at an affordable price.)

The additional instrument choices I made were: Koto, Bass Koto, Ambient Patch, Metallic Hits (All ISW!), Piano (NI), and a solo female vocalist (Bulgarian from EWQL Voices of Passion).

I know some of the more clever composers out there right now are saying... "But Hey, wait a minute Kole. The Koto is a Japanese Instrument and Bulgarian vocals... What!?" So, if you’ll allow, let me justify these choices.

Even though the Koto is a Japanese instrument, the Chinese have something similar in the "Guzheng." However, the other (and main) reason for choosing this instrument was that I needed an "exotic" replacement for the guitar. As you’ll hear in this piece, the nylon guitar is sparse, but the Koto is present in quite a few sections.

I chose the female Bulgarian vocals because they intensified the emotion in the piece tenfold. Sure, it’s not traditional Asian folk vocals or Italian opera, but (to me) it sounds like a possible mix between the two... evoking both the spirit of Marco’s Italian roots and the exotic flair of this new land in Northern China.

As I mentioned above, the guitar didn’t play as big of a role in the piece as I had originally thought, so I needed the Koto. Likewise, I introduced the piano as a replacement to represent Marco’s side.

IV. Mixing Choices

Last but certainly not least, let’s go over some of the mixing decisions made throughout the piece. To keep it organized, we’ll go by sections again.

I felt that the mixing decisions in the Ambient layer were the most important, as they set the mood for the game state and must create tension without adding much movement. A gradual fade over all of the instruments at the beginning is helpful when trying to introduce music without being overly invasive (especially important in a game, where so much focus is probably being placed on other tasks). The Hulusi is gradually panned from left to right to help create motion without directly adding any more notes. As mentioned in the previous article, the overtone vocals have been equalized so that the majority of the sound coming through is the overtone rather than the fundamental. It was also very important that this beginning ambient section felt like it was in a huge room, to subconsciously convey the enormity of the situation the gamer would currently be in.
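To make the idea of "motion without new notes" concrete, here is a minimal numpy sketch of an equal-power pan sweep. The sine wave, sweep duration and sample rate are all stand-ins for illustration, not anything from the actual session.

```python
import numpy as np

SR = 44100                     # sample rate (illustrative)
DUR = 4.0                      # seconds for the full left-to-right sweep
n = int(SR * DUR)

# A mono placeholder standing in for the Hulusi part (a plain sine).
t = np.arange(n) / SR
mono = np.sin(2 * np.pi * 220.0 * t)

# Pan position moves linearly from full left (0.0) to full right (1.0).
pos = np.linspace(0.0, 1.0, n)

# Equal-power pan law: cos/sin gains keep perceived loudness constant
# across the sweep, so the part moves without pumping in level.
left = np.cos(pos * np.pi / 2) * mono
right = np.sin(pos * np.pi / 2) * mono
stereo = np.stack([left, right], axis=1)
```

Because cos² + sin² = 1, the combined power of the two channels matches the mono source at every sample, which is exactly why this law is preferred over a plain linear crossfade.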

At the beginning of the Exploration section, the Koto playing tremolo is panned from left to right a bit faster than the Hulusi originally was. If you remember from the first article, this is a role I originally assigned to the guitar, but I decided that the Koto’s timbre would fill it better. When mixing the strings together, I decided that a more intimate recording was necessary (as I wanted to hear the bowing better). So, I chose to use & boost the "Close" mic positions for my EWQL Orch. strings. This is most easily heard when the Cello/Double Bass are playing the melody together around 1:13.

In the Action section, the acoustic guitar’s presence is felt the most (although it’s not the focus). It’s a simple arpeggiated chord progression played in unison between the guitar, koto, and bass koto. We had so much motion in the section before that I wanted to continue (yet vary) that motion. Furthermore, the panning in this section allowed the strings to play the melody/bass in their respective areas as the arpeggiated line and female vocal fills sat comfortably in the middle.

Lastly, I would like to discuss the way I mixed the percussion in this track as it applies to each section. I originally mentioned that I would like to have a Taiko control the momentum of the piece, but quickly found that a single Taiko recording was too small and didn’t capture the essence of the piece. So, I created and mixed together other Taiko/drum samples from different libraries to create a "Big Drum" Hybrid sound. This included some of the metallic hits I mentioned earlier from the ISW libraries.

This concludes part 2 and the entirety of "Mixing as Part of the Composition Process." I hope you’ve taken something away from these articles and will try out "Pre-meditated Mixing" when writing your music in the future. Remember to listen to the example I composed (link located at the top of this article) and keep composing, fellow artists!

Tuesday, January 10, 2012

Mixing as part of the music composing process, Part 1

By Kole Hicks

**In this article I refer to Mixing as including all aspects of track production before mastering.

Mixing is typically considered by most people to be the thing you do to "sweeten" the music after it has been written and recorded. Furthermore, the definition of "Mixing" tends to refer only to what we think of as "modern mixing": balancing volume levels, EQ, ducking, etc. via a Pro Tools (or other) session.


However, as I see it, what we think of as modern "Mixing" existed before the time of computers or electronics and was well known by many of the greatest musical minds throughout history. It is certain that these composers cared deeply about the way their music would be heard by others, and I bet that a large majority of them would be right there in the mixing booth tweaking knobs if they had the tech available to them. In fact, many composers wrote very detailed notes on the score about how each instrument should be performed (Tone/EQ), where its physical location relative to everyone else is (Panning), etc. In this article, I would like to challenge the notion and supply arguments against the idea that Mixing & Composing have to be two separate processes.

*I always have the "philosophy" that if it ends up influencing the way the music will sound, then it’s important enough to think of while I’m composing the piece.

The role and importance of a mixing engineer has become ever more apparent in newer styles of music (Pop, Electronica, Hip-Hop, etc.) where the professional is not only "fixing" up the vocal/instrumental parts and placing them in their appropriate "pockets," but adding unique filters, FX, EQs, etc. that directly influence the arrangement and are absolutely essential to the composition. It is my belief that these engineers don’t get nearly the credit they deserve... listen to anything in the Top 40 on the radio and I can guarantee you’ll hear how important Production is to many of those songs.

Also, if you’ve ever attended a composers’ convention/conference of some kind, then it’s no secret that many of the musicians there hold the belief that they should never mix their own music and that it is better left to a professional. While this can be true in some situations (Ex: orchestral music that is supposed to sound as "classic" as possible), and I always advocate collaboration with other professionals, there are fantastic benefits to not only mixing your own music, but also becoming aware of all the mixing tools at your disposal.

So what happens when we no longer use Equalizers, Compressors, etc. in the way they were "intended," but instead think of and use them as creative tools? Something that you’re aware of and thinking about not after the music has been recorded, but BEFORE... My main intention with this article is to help you become aware of this possibility and guide you through one of my examples. There are way too many available directions you can go in with this new approach (and I’m sure many of you are already starting to generate ideas), so I’d just like to show you an example from one of my projects.

(All of the following has been written BEFORE composing a single note.)

To introduce this project, it would probably be best to start by explaining the overall goal of the music. As a specific challenge for myself to come up with new and creative stylistic combinations (which I recommend every Composer do as often as possible), I asked the question...

If I were playing a fictitious/high-fantasy game as the great explorer "Marco Polo" while he wandered through the borderlands of Mongolia/Northern China on a foggy night... what would that sound like?

The use of certain "ethnic" instruments indigenous to these areas would be an obvious choice for the instrumentation of the piece, but we must remember that Marco Polo has yet to visit these areas and wouldn’t really know what any of those instruments look or sound like. Furthermore (adding to the tension), we’re not only in a foreign (possibly hostile) land, but it is a foggy night, with only the moon’s light to relieve the lack of true visibility. There are a million different directions you can go in and none of them are necessarily "good or bad," merely different and more or less effective depending on the situation and audience. I’ll explain some of my choices below...

Let's say in this game that the developers feel it is important to aurally depict each character, place, and the time in history. However, they also would like to keep the score rather modern so that people can still connect to it (nothing overly abstract). In the "scene" described in the question above, it has been decided that the following should be included in the piece of music: Marco Polo’s Origins/Theme, The Foreign Lands (Mongolia/Northern China), and the Fear/Tension/Excitement of being an explorer in a new land on a foggy night.

I’m a huge fan of Mongolian folk music (part of the reason I created this challenge) and more specifically throat/overtone singing. Also, one of my good friends is from China, and she brought back a Hulusi (flute) for me that I’ve always wanted to use. Along with the overtone vocals and Hulusi (representing the foreign lands), I’ll use a Taiko drum to control the momentum/pulse of the piece. Furthermore, I need to introduce Marco Polo to the piece. To represent him, I’ll use a string section and nylon string guitar. The specific way I will be using them isn’t exactly "historically accurate" to the time Marco Polo was around, but it sounds familiar to the player and can easily be used to give a sense of "Home" rather than Italy specifically (especially to most Westerners... assuming this is the main demographic of the game).

Now that we have our instrumentation chosen for this "zone/scene," we have to think of how we can bring in the Fear/Tension/Excitement... this is when "Pre-Meditated Mixing" is useful. To further enhance the "creepy" atmosphere (and add to the "unknown" sound quality of the instruments used in the foreign lands), I’ll place an EQ on the overtone vocals so that I cut most (if not all) of the fundamental and instead focus on the overtone. Furthermore, I’ll add massive delay to the Hulusi part and change the panning at random intervals (not drastic enough to draw the player’s attention away from the game... just enough to enhance the unpredictability of the piece/zone). The Taiko’s part will develop over time, and the volume/panning would be interactive based on the player’s proximity to danger (something we could control via Wwise/FMOD if the Taiko part were its own separate layer).
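The EQ trick on the overtone vocals, cutting the fundamental so that mostly the overtone comes through, can be sketched very crudely in numpy. The 150 Hz fundamental, 900 Hz overtone and 400 Hz cutoff below are invented for illustration; a real vocal would of course call for a gentler, more surgical EQ than this brick-wall cut.

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR  # one second of audio

# Stand-in for an overtone-singing voice: a 150 Hz fundamental
# plus a strong reinforced overtone at 900 Hz (pure sines only).
fundamental = np.sin(2 * np.pi * 150 * t)
overtone = 0.5 * np.sin(2 * np.pi * 900 * t)
voice = fundamental + overtone

def cut_below(signal, cutoff_hz, sr):
    """Brick-wall 'EQ': zero every frequency bin below the cutoff."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    spectrum[freqs < cutoff_hz] = 0
    return np.fft.irfft(spectrum, len(signal))

# Fundamental gone, overtone untouched.
filtered = cut_below(voice, 400, SR)
```

The same `cut_below` idea is what a steep high-pass on the vocal bus achieves: the ear loses the pitch anchor of the fundamental, leaving the eerie overtone content exposed.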

As mentioned above, chances are high that we’re dealing with a Western audience, and while you can get away with a lot aurally when there is a picture in front of someone, it’s wise to know whether you’re writing something bizarre because the material calls for it, rather than "out for out’s sake." With that in mind, I’ll keep the harmony intact and this will NOT be an atonal piece. It will be tonal, although with its slightly "out" instrumentation, mixing, and other elements, the piece will still sound "foreign" without actually being completely foreign to the gamer.

The guitar will play a consistent "pulse" of tones and fade in/out while gradually panning around the "aural environment." The string section will act as the "Western foundation," establishing something more traditional that the listener’s ear can cling to while concurrently representing Marco Polo himself. While there will only be one underlying harmonic progression, the progression (in its entirety) will be developed over time. Because this is purely an example (and not exactly a cue from an actual game), I will attempt to move through different game states within a few minutes. However, if this music were to be fully interactive, then each section would be a few minutes long, comprised of multiple layers, and fade in/out from one another depending on the player’s predicament.

All of this I have thought of before I’ve even written a note down and while it may not be practical in every situation, I recommend you try something like this at least once (if only to learn that it doesn’t work with your writing process). It would be cruel for me to list off everything, show pictures, and not provide you with a link to what I created... so click on this link to listen to the piece.

This article continues in Part 2

Make sure to read through Part 2 of this article RIGHT HERE to find out what changes the piece went through and why certain decisions were made. Thanks for reading and keep composing fellow artists!

Sunday, January 8, 2012

Sound for Picture - Faking it

by Terry Wilson

As a sound editor and designer, the most important thing is reinforcing the visual landscape presented to the audience: providing an audible focus and creating sonic cues to match the pictures.

The vast majority of low and mid budget film & TV productions are set in the real world, in the everyday life we see around us, so the sounds in the final production should be sounds that we have all heard before in some way, shape or form.

This article is about some basic principles to follow and techniques to apply in creating a sonic landscape to complement a director's vision. It's about faking what people perceive to be real, as opposed to creating new sounds from nothing, and it assumes you are in a position of limited resources and don't have the time and money of a Hollywood film budget to get the sound mix done. I have deliberately avoided talking too much about working with dialogue or ADR, as it's a subject that deserves an article in its own right, even though many of the principles here still apply.

In real life, people have an incredible ability to filter out a huge amount of the surrounding sonic landscape and zero in on what they need to hear. We all do it unconsciously, and we only really appreciate it when we're challenged with doing the opposite, such as trying to listen to two conversations at once (see the cocktail party phenomenon). In the world of reproduced audio, with dynamic range, harmonic content and spatial awareness all dramatically reduced if not gone, it's your job to be the "human filter" and help decide what the audience needs to hear in order to make sense of the world they're being presented.

The two key priorities which you should remember to help you do this are:

  • Create a world with distractions removed as much as with effects put in.
  • Create a world which is believable, as opposed to "real".

A director of photography uses light, framing and depth of field to get the audience to focus on the most important part of the picture. The basic principle of mixing sound is the same: The focus should be clear, crisp and sharp while the background is more indistinct, helping to create the required sense of space and time.

Removing Distractions

Backgrounds stop being backgrounds if they contain sounds which too readily pique your consciousness. Everyday sounds like ambulance sirens, construction, airplanes, car horns, even the unnecessary rustling of clothes should all be removed if they're not part of the story, as they and many other sounds can potentially divert the audience away from the action.

You may be limited by what got recorded with the dialogue, but you should do all you can to get rid of potential distractions. This often means creating a soundscape from scratch and layering sounds together; a bit of ambient wildtrack, a few footsteps and a couple of bird calls may be all it takes to recreate a lot of outdoor ambiences. But it's important to remember whether it's background or foreground sounds you're creating. A shot of people going down a flight of stairs can be a noisy foreground, but the moment two people on the steps come into picture and start having a conversation, the footsteps must be part of the background and not intrude on the dialogue.

The other key to a realistic soundscape is to think of the one or two "key sounds" that help the audience very easily and quickly identify the environment. Airports and train stations have distinct tannoy sounds; coastal areas have seabirds; offices have phones and photocopiers; shops have cash registers and scanners; cafés have an FM radio on in the background. Throw in just one or two of these in the right place (particularly with the establishing shots at the beginning of a scene) and it's often all you need to help convey the location and atmosphere. Use people's preconceived expectations of how something should sound to your advantage. But be sparing and don't intrude on the pictures.

This segues neatly into the second principle of believability. People often expect things to sound a certain way for film / TV, even when they don't in real life. I have a really old pair of Nike sneakers which make a "clip clop" sound when I walk on a hard surface, but if anyone were to see a close up of them and hear that "clip clop" sound alongside them it would be pretty off-putting; the audience expects trainers to sound like Michael J Fox's sneakers in Back to the Future, not like a pair of high heels! Similarly, if I ask you to imagine the sound of a car being remote locked, you'll conjure up the same one or two sounds that everybody knows they make. Except in the "real world" most car locks don't make that sound; they make a rather boring "thud". So in order to meet the audience's expectations you need to make your shot of the car being locked produce the expected sound, especially if the car being locked is relevant to the story. The obvious exception would be if you had a close up of the lock moving; then you could use the "thud" because it sonically matches what the audience sees.

No prizes for guessing what he's doing...

or what sound this is going to make!

The other situation that is never real but has to be faked for the sake of the story is when you're witnessing the receiving end of a phone call. In real life, someone standing at the camera's point of view could never hear the other side of the conversation, but for the sake of the audience and the story you have to present that "unreality" so people understand what's going on.

Actors can't keep phone calls secret from the audience 

Get some perspective

Where a sound occurs in relation to the action can have as much effect on its believability as the sound itself. This is where having sounds recorded from multiple perspectives really helps, but often you don't have them, so you need to improvise.

Degrading a sound and making it more distant-sounding is much easier than doing the opposite, so it's important to get hold of the cleanest and closest version of a sound available. Then it's a case of matching the perspective to the picture. Take, for example, a phone ringing on a desk. An establishing shot with the phone 3-4 metres away needs to sound different from a close-up.

Changing the volume is the first step, but there are other tricks to help fake the positioning. A bit of reverb on the phone sound (to match the room acoustic) will help; a "wetter" reverb for the more distant shot will help shift the perspective. It may be that the story demands that even the distant sound stays relatively "clean", in which case making the predelay change more noticeable from one shot to the other may help more than the dry/wet balance.

The other thing that also helps change perspective is EQ. If a sound is further away, it's generally perceived to have less prominent lower frequencies. Try using a high pass filter or parametric EQ with a node around 100-200 Hz and subtly change it from one shot to the next, with more removed for the more distant shot. It should produce a noticeable difference which is immediately more subtle yet more believable than reverb alone.
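As a rough sketch of that near/far trick, here is a simple first-order high-pass standing in for the parametric EQ node, plus a level drop. The 150 Hz cutoff, the halved volume and the "phone ring" test tones are illustrative assumptions, not prescribed values.

```python
import numpy as np

SR = 48000

def one_pole_highpass(x, cutoff_hz, sr):
    """First-order RC high-pass: gentle 6 dB/octave roll-off below cutoff."""
    dt = 1.0 / sr
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    alpha = rc / (rc + dt)
    y = np.zeros_like(x)
    for i in range(1, len(x)):
        y[i] = alpha * (y[i - 1] + x[i] - x[i - 1])
    return y

# A phone-ring stand-in: some low "body" at 120 Hz plus a ring tone at 1 kHz.
t = np.arange(SR // 2) / SR
ring = 0.6 * np.sin(2 * np.pi * 120 * t) + np.sin(2 * np.pi * 1000 * t)

close_shot = ring                                   # untouched, up-front
far_shot = 0.5 * one_pole_highpass(ring, 150, SR)   # quieter, less low end
```

The far version loses proportionally more of its 120 Hz body than its 1 kHz ring, which is the subtle tilt the article describes: the distant phone thins out rather than just getting quieter.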

Sometimes what you can see in a picture necessitates a less rigid adherence to perspective, and more attention to simply what's in the shot. The two shots below show two POVs of a busker in an underground station. In one you can see the cavernous background, but in the other all you see is the busker against a wall.

It looks like a big reverberant space from here... 

but what about from here? 

Even though the distances from the camera are similar, it makes more sense to have the front shot "cleaner" because there's less visual information to back up a more echoey sound.

The great outdoors

Another tricky problem with faking perspective is when you need to take a clean studio sound and make it sound like it's outdoors. Reverb becomes a big no-no because it immediately creates a sound associated with interiors. But EQ is still a powerful tool, and most outdoor sounds naturally come across as less bass-heavy than their interior counterparts. Exterior spaces generally sound a little rougher and less defined, and the sense of greater space naturally creates a feeling that sounds don't need to come through as clean as in an enclosed indoor environment.

If the action takes place in an environment surrounded by hard or reflective surfaces, adding a small delay to the sound helps it become part of the nearby environment. But be very subtle and don't overdo it. As with interiors, it's more about the change of perspectives to match the action. Be led by the pictures and go with what feels right and doesn't jump out as "wrong".

A tiny bit of delay helped place the violinist in a world of concrete 

Sound from boxes

An often required trick is to take something that's clean, like a piece of dialogue or music, and make it sound like it's coming from something else, like a radio, mobile phone or PA system. Most of these sources are fairly straightforward to mimic. Phones, radios, answering machines and other such devices need generous amounts of EQ, with most of the lower and higher frequencies removed and the midrange frequencies cranked up. What also helps is heavy compression or limiting, and sometimes some overdrive or distortion to help create that sense of poor quality playback you expect from small speaker devices. If you are doing a lot of this kind of compression, just make sure you're adjusting the final volume to compensate.
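As a rough illustration of that chain, here is a numpy sketch assuming a telephone-style passband of roughly 300-3400 Hz, with a tanh stage standing in for the heavy compression/overdrive; the drive amount and trim level are arbitrary starting points.

```python
import numpy as np

SR = 44100

def small_speaker(signal, sr, low=300.0, high=3400.0, drive=4.0):
    """'Tinny speaker' chain: band-pass EQ, drive into soft clip, level trim.

    The band edges roughly mimic a telephone passband; tanh stands in
    for heavy compression/overdrive on the surviving midrange.
    """
    # Band-pass via FFT: keep only the midrange frequencies.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / sr)
    spectrum[(freqs < low) | (freqs > high)] = 0
    band = np.fft.irfft(spectrum, len(signal))

    # Drive into soft clipping, then trim the level back down so the
    # "louder" squashed sound sits at a sensible volume.
    return np.tanh(drive * band) * 0.5

# Demo input: a "full range" stack of sines at 80, 440 and 6000 Hz.
t = np.arange(SR) / SR
dry = (np.sin(2 * np.pi * 80 * t)
       + np.sin(2 * np.pi * 440 * t)
       + np.sin(2 * np.pi * 6000 * t)) / 3
radio = small_speaker(dry, SR)
```

Only the 440 Hz component survives the band-pass; the tanh stage then flattens its peaks, the same "squashed midrange" character a dedicated phone-effect plug-in produces.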

It's not subtle, but it works. Typical “Small speaker effect” EQ

With louder sounds like PA systems, it's not that different. Use similar EQ & compression but add lots more distortion and finish off with delay / reverb to match the environment's acoustics.

You'll often find digital editors have dedicated plug-ins (such as "phone effect") tailored for these kinds of effects. By all means try them out but I find doing it from scratch is just as easy and generally more controllable.

Sounds from other rooms

If you've got dialogue or sounds that need to sound like they're coming from rooms not directly shown in the picture then again, EQ and reverb are the tools of choice.

Start with reverb and use a fairly neutral algorithm like a plate reverb. Get rid of any predelay, move the wet / dry mix to around 50% and use a short decay time of between 150 and 300 ms. If you have a diffusion option I'd recommend switching this off and using your pan controls to dictate where the sound should be coming from.

With your reverb working, use a low pass filter on your EQ to get rid of higher frequencies. The more you remove, the more it will feel like it's behind a solid wall, but if the story necessitates easily distinguishable dialogue then it's going to have to be subtle. Again, losing some lower frequencies may also help, especially if you need a change in emphasis. And finally, go back to your reverb's wet / dry control and adjust it till you've got the right level of "distance".
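The whole "next room" chain can be sketched like this. Note the hedges: the 30 ms feedback comb is a very crude stand-in for a short plate reverb, and the 1200 Hz low-pass cutoff is an arbitrary starting point for the "muffling wall", not a prescription.

```python
import numpy as np

SR = 44100

def behind_the_wall(dry, sr, decay_s=0.2, wet=0.5, lp_cutoff=1200.0):
    """'Next room' sketch: short comb 'reverb' with no predelay,
    ~50% wet mix, then a one-pole low-pass for the muffling wall."""
    # Feedback comb: a crude stand-in for a ~200 ms plate tail.
    delay = int(0.03 * sr)  # 30 ms loop
    # Feedback gain chosen so the tail falls 60 dB over decay_s.
    fb = 10 ** (-3 * delay / (decay_s * sr))
    tail = np.copy(dry)
    for i in range(delay, len(dry)):
        tail[i] += fb * tail[i - delay]
    mixed = (1 - wet) * dry + wet * tail

    # One-pole low-pass: the wall soaks up the high frequencies.
    dt = 1.0 / sr
    rc = 1.0 / (2 * np.pi * lp_cutoff)
    alpha = dt / (rc + dt)
    out = np.zeros_like(mixed)
    for i in range(1, len(mixed)):
        out[i] = out[i - 1] + alpha * (mixed[i] - out[i - 1])
    return out

# Demo: run an impulse through the chain to see the muffled "room" response.
impulse = np.zeros(SR // 2)
impulse[0] = 1.0
muffled = behind_the_wall(impulse, SR)
```

Lowering `lp_cutoff` pushes the sound further "behind the wall"; raising `wet` pushes it further away within the room, mirroring the wet/dry adjustment the article ends on.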

No digital audio workstation has an infinite number of tracks so when you're working on processing lots of sounds in multiple different ways you'll have to decide on a workflow to deal with it. The main choice is whether to keep effects as real time or to render them off as processed audio files. It's a trade off between track count (processor demand) and future flexibility to make changes, and it's a result of not having the perfect sounds in the first place.

Systems like Pro Tools and Soundtrack Pro give you the ability to automate just about every parameter of every plug-in, and automation can help you stay in a real time environment when otherwise you'd have run out of tracks. The drawback is the increased complexity of automating several parameters of many plug-ins across multiple tracks, which, if you're not very attentive to detail, can get very confusing.

If you're less keen on going down the real time route then an option is to use a clean project and create around 4 - 8 perspectives for each sound you'll need in each environment or scene. While this sounds straightforward, you've got complexity issues here too; you'll need an efficient method for labelling and accessing sounds, and the quantity of audio in your project is going to increase quite a bit.

My preferred option is to stay in a real time environment as much as possible. Coming from a predominantly audio background, I've got comfortable with having every parameter adjustable with instant real time results, in a way that the processing demands of video still struggle to keep up with. However, if you have more of a video background you might be more at ease with the "render off and import" style workflow.

One big benefit of keeping processing real time is when you have to gradually transition a sound from one perspective to another; for example, a tracking shot that follows a subject entering a kitchen with a boiling kettle, or a change in atmos due to a door being opened / closed. This is where plug-in automation comes into its own, giving you the ability to fluidly track the perspective with the action.
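Under the hood, that kind of automation amounts to ramping a wet/dry parameter across the length of a shot. In this sketch a delayed, attenuated copy stands in for a reverb return, and the 3-second shot and 10% to 60% mix values are invented for the example.

```python
import numpy as np

SR = 44100
shot_len = SR * 3                      # a 3-second tracking shot (illustrative)

t = np.arange(shot_len) / SR
playing = np.sin(2 * np.pi * 440 * t)  # stand-in for the performer's part

# A crude "wet" signal: the same part delayed 40 ms and attenuated,
# standing in for a reverb return.
d = int(0.04 * SR)
wet_signal = np.zeros_like(playing)
wet_signal[d:] = 0.6 * playing[:-d]

# Automation curve: the wet/dry mix ramps from 10% to 60% across the
# shot, mirroring the camera pulling away from the subject.
wet_mix = np.linspace(0.1, 0.6, shot_len)
out = (1 - wet_mix) * playing + wet_mix * wet_signal
```

A DAW's automation lane does exactly this per-sample (or per-block) interpolation for you; writing it out makes clear why a rendered-off static file can't follow the camera the way a real time ramp can.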

As he walks away, her playing gets wetter


If you don't happen to work next to a foley studio or haven't got the kit to record sounds yourself, there are a number of sites that allow you to search, preview and download the sounds you're after. Here on our site we have a rich and varied library of professional, royalty-free sound effects for instant download. Visit our main front page to get started.

Mixing real world environments can be a time consuming but rewarding process. I think the trick is not to get too hung up on obsessive attention to detail, but to get the "feel" right. Spend the time on the things that are going to make a difference to the audience: the backgrounds that will subconsciously help them know where the scene is, and the foregrounds that leave them in no confusion as to plot and direction.