Sunday, December 15, 2013

New release: "Pure Luxury, Vol. 7"

We're releasing a new music collection today: "Pure Luxury, Vol. 7".

This collection features 11 tracks composed by Dan Phillipson, Bjorn Lynne, Felipe Adorno Vassao and Danny Duberstein.

This is a chilled-out and smooth-sounding collection of music that's not quite as "floaty" or ambient as our "Relaxation & Meditation" series, but is rather rich in sound and usually features a delicate beat and a sense of introspection and comfort.

These tracks will go well with luxurious footage, TV commercials for fashion, style and holidays, and as background music in places like spas, wellness centers, and upper-class hotels and resorts.

As with all our collections, each track is available to license individually (just search for the track title in the quick-search box on our site, and the track will come up), or as a whole collection.

Thursday, December 5, 2013

Thoughts and insights on our latest collection: "Dark Cues Vol. 7: bZombie"

We have just released a new album of horror / mystery underscore music to our catalogue: "Dark Cues Vol. 7: bZombie". This is a collection of 12 eerie and creepy film score / underscore tracks for use with death and horror scenes. Each track is available to license separately, or as a complete collection at $99.95 with a Standard License.


We caught up with composer / producer Pawel Blaszczak and had him share with us some background and insight on the creation of this music:

"Writing the bZombie album was my idea after I finished Dead Island Soundtrack (Ed.note: Pawel is a composer of music for triple-A video games). I like horror movies and I like watching pictures or movies from abandoned places. 

I had a couple of tracks ready in 2012 and wanted to release them on Bandcamp as a short four-track album. Then I asked my friend Grzegorz Jonkajtys if he could draw a cover artwork image for me. He designed a great bZombie cover artwork picture, which was very inspiring for me. I then decided to make a full 12-track album.

I like audio editing and synth experimenting. I also love piano sounds. So I decided that bZombie would be more like synths and piano meeting experimental sounds. My previous album, "Days and Dreams", didn't explore the dark side of synthesizers. At the beginning of 2013 I bought some great instruments from Elektron: the Octatrack and the Analog Four. Those two are the main instruments I used for creating the album. I had always dreamed of deep editing of piano sound. The Octatrack gave me the opportunity to really make something new with piano sounds, something more than just typical piano playing. A couple of tracks on bZombie have a style of piano playing that isn't possible with normal piano playing.

I used the Octatrack not only for piano mangling; I also used it to create a lot of dark, experimental loops, and drums made from synthesizer sounds as part of those loops.

I also used the Elektron Analog Four. I like this instrument very much. It is analog, but, unusually, it sounds very modern and, importantly for me, very dark. Almost every sound from the Analog Four has something dark inside. It is the perfect synth for making mysterious music.

Composer Pawel Blaszczak
That was not the end of my search for interesting, modern, dark and haunting new sounds. I prepared a couple of tracks with the Subtle Noise Cacophonator Noir. It is a very rare machine for sound creation, and it is hard to create something melodic with it. But I tried, and on the track Electricity Broken the Cacophonator Noir plays with a piano sound for almost the full track.

I also used a Moog Sub Phatty. Like all Moogs, it has a great bass sound. I also used a Nord Wave with breathing samples. The piano comes from a Nord Stage 2, which is a great-sounding piano.

What was important for me is that I didn't want to create music that was too experimental. I like melody and I like experimenting with synths, and I wanted to merge those two things.

I also wanted to really give listeners the feeling that they are in abandoned, empty places. Just close your eyes and feel that you are in some dark zone, with no people, with only ghosts from the past.

A listener who bought this album from me mentioned that it is perfect background music for RPG horror games. Myself, I like to listen to this album in my car when I'm driving home through forests and empty spaces."

Listen to the album here

Wednesday, November 27, 2013

Maximizing composer agreements for the mutual benefit of composer and company

Kole has written a nice article for our site about how composers of original music for games, apps, TV and more can maximize their income and the benefits of working under contract with a production company. In this article he shares several ideas and methods for setting up agreements that are mutually beneficial for composer and company alike. Click here to read: Maximizing composer agreements.

Saturday, November 23, 2013

New article on how to create depth and space in the mix, for musicians

Piotr Pacyna, a seasoned music performer and producer, has written this highly useful two-part article for Shockwave-Sound.com about how to make a good-sounding mix and how to bring out certain elements in the mix using a combination of levels, compression, EQ, reverb and pre-delay. A recommended read for all music producers out there. Check it out - here is part 1 of the article and here is part 2.

Saturday, October 19, 2013

Zero Emission Shockwave-Sound-mobile :-)


Forget the Batmobile - here is the "Batt-mobile". :-) Shockwave-Sound.com landed its own environmentally sound, funky Nissan Leaf vehicle this week. Here is the zero CO2 emission company car running on clean, renewable, electric energy.



The car is obviously fitted with a Bose high-end stereo system with a considerable portion of the trunk space taken up by a large subwoofer. :). Here's where we take our new music releases for a good spin before you can hear them on our site.

Wednesday, September 11, 2013

Recording Sound for Perspective

A sound design tutorial by Paul Virostek

Why Record for Perspective?


I remember when I first began editing, struggling to make a car door slam match the picture on film. I shifted the sound earlier, then later, added and removed elements, and it still didn't fit. The editor who was mentoring me said:

If you're trying too hard to make a sound fit, then you're using the wrong sound.

He told me why: the car door sound effect should have been correct (it was the proper model and year), but it had been recorded inches away from the car. In the scene the camera was a few meters away from the car. This difference made the sound jarringly wrong.

In other words, no matter how you synchronize the sound with the picture, if the actual nature of the sound is wrong, it will never work. This taught me how important it is to use the proper sound:

• correct volume
• correct timbre
• correct perspective or apparent distance

Cheating the Effect


Of course, volume is easy to adjust. If the timbre is wrong, you can choose another sound from the same family. However if the sound's perspective or apparent distance doesn't match the picture, no matter what you try it will never completely fit.

Simply raising or lowering the volume of the sound may seem to make the sound closer or further away, but this is only a 'cheat'. The match will be close, but will invariably seem subtly, disturbingly, off.

What About Using Reverb?


One common trick is to apply reverb to closely-recorded sounds to make them seem further away. Even the best reverb plug-ins cannot replicate perspective perfectly, however, and the result will sound slightly odd. How do we solve this problem? Read on.

What is Perspective?


Perspective describes how close a sound appears. Typically a sound's perspective is described in four ways:

  • Close (anything under 10 feet/3-4 meters)
  • Medium Distant (roughly 10 feet/3-4 meters away)
  • Distant (anything more than 10 feet/3-4 meters away)
  • Background or BG (quiet and muted)

Also:

  • MCU or Major Close Up (inches away from the microphone, although this term is being phased out in favor of Close)

A close/medium distance mic setup


Here are some examples of a smoke alarm recorded at various perspectives. NOTE: it may help to wear headphones to hear the perspective or 'space' or 'room' properly (and have the volume down, the sound is sharp):

Notice how the Distant alarm has more echo, even though it is slightly louder than the Medium Distant alarm? The difference between these recordings is how much 'air' or space is apparent in the recording. 'Air' is created by a) the space where the recording takes place (also known as 'room') and b) the amount of reverb.


An Example: Woof

Imagine a dog barking in a city alley. A close recording will have prominent barks, and very little of the echo of the barks in the alley.

The further the dog is from the microphone, the more 'air' or 'room' will appear on the recording. The dog will seem quieter since it is further from the microphone. We will also hear more of the barks reverberating or bouncing off the alley walls.

It is exactly this aspect of the sound that we want. This distance, or perspective, will make it match perfectly with medium distant camera shots.
A recording of the close dog and one of the distant dog, although they are the same animal, will sound completely different.

Me, between a close/medium mic setup

Which Recording Perspective is Best?

So which perspective do you choose if you are going to record a sound effect? The short answer: all of them.

With today's digital multi-track recorders you can record all perspectives at once. Patricio Libenson, the recordist for Hollywood Edge's Explosions library, told me his setup involved multiple microphones, all at different distances and angled mathematically to account for phasing. The result is an incredibly rich collection.

Let's return to our dog in the alley. We can set up one mic at the end of the alley, and have another next to the dog, both plugged into the same recorder. When the dog barks, we'll have recorded both perspectives at once.

Match the Recording to Your Project


If you have to choose one perspective over the others, consider the project you are working on:

  • Multimedia or Radio - it is always best to record close perspective for these projects. The reason? Distance has little value when you won't be using picture or visuals. Also sound designers like the immediacy and power of close effects.
  • Film or Television - most film editors prefer their effects recorded Medium Distant. The idea is that most camera shots are typically Medium Distant or further. Also, in a pinch you can fake a close perspective by raising the volume. In a perfect world they would like to have a Close version available as well.

Unfortunately, most commercial libraries are recorded close. Imagine you are trying to use a close dog bark in a scene where the dog is across the yard. It won't fit.

That's why at Airborne Sound we record two perspectives: close and medium distant, even if it requires multiple takes.

Conclusion

When you record sounds to match the requirements of your project, you'll find the sounds fit more easily and require less editing. And, of course, it just sounds right.


Tuesday, September 10, 2013

Using Sound Effects within music composition and production for increased overall effect

Excuse Me, You’ve Got Some Sound Effects in My Music

About using sound effects in music production and how the line between sound effects and music is blurring


by Kole Hicks

The use of elements we consider "Sound Effects" in Music is much more common than we may think. Whether it's nature ambiances heard lightly in the background of a New Age track or the aurally unpleasant bang of a trashcan lid in Industrial music, the line we draw between Sound Effects and Music is rapidly blurring.

 


I became more aware of this progression earlier this year when tasked to compose an eerie / ethereal background track for a horror game. The piece most definitely had to set a mood and have direction, but never really intrude on the player's "consciousness" enough for them to recognize, "Oh hey, there is music being played now." So, in a way, the music was to act in a role we may consider more common to sound design.

Now, this practice in and of itself is not new, but the questions I asked myself while approaching this problem, and the previously closed "doors" the answers opened up for me, are new and unique enough that I want to share my findings with you.

I. Approaching the Issue & Asking the Right Questions

Before I even attempted the traditional "sit down & start writing" phase, I tried to think of and answer all of the necessary questions that are unique to a piece like this. Should there be any thematic material... would it "get in the way"? How dynamic can the piece be? Will I be using "traditional" instruments? What role will the mixing process play in this piece? Etc...

Asking and answering all of these questions was absolutely critical for taking an accurate first step towards fully expressing my intent with the piece. That is why I often take this step and recommend many others do as well (especially if you need to be very articulate about what you want to express).

 

II. Answering the Questions

Let’s go through the process of asking and answering a few questions unique to a piece of music like this.

First, let's look at "Will I be using traditional Instruments?"

Since there is no right or wrong answer to this question, I only felt compelled to organize and understand my instrumentation choices enough to justify their usage in the piece. So, I decided that my approach to this piece had to be focused more on timbre and mood, and that writing standard musical phrases easily identifiable as "music" by the human ear was off limits. At least initially, as I also decided that "sneaking in" the main theme from time to time would be okay (as long as its full introduction was gradual). However, for the most part, I "justified" the usage of some traditional musical instruments by challenging myself to use them in a unique way that wouldn't immediately be perceived as a musical phrase by the listener. "Typical" sound design elements (impacts/crashes/scrapes/etc.) were also allowed, but they had to be organized in such a manner that they would have a perceived direction.

Which brings us to our next question… “What role will Form play in this piece?”

As I mentioned before, the line between what could only be considered Sound Effects and what could only be Music is rapidly blurring. Impacts, soundscapes, and other "sound design elements" are being used so often in modern music that I believe the only clear distinction between the two is the way each one is structured.

This is not to say that Sound Effects can't be organized in a way that tells a story, for they surely can, but rather that the way in which we approach and organize our sounds for music is different. Repetition and imitation are two of the most common techniques used in music from almost anywhere in the world, at any time in history. When you're lacking tonality, melody, and other "common" western musical constructs, more often than not you revert to repetition and imitation to structure your music (both for your sake and the listener's ears). Oftentimes, when you're creating Sound Effects to picture, it's not ideal to use only one punch/kick sound for an entire fight scene. However, I can also imagine the argument that the variety in those punch/kick sound effects is the equivalent of musical imitation. So, perhaps the only real thing separating Sound Design and Music is our perception and preconceived notions of what each one "should" be.

With that said, I decided that the role of Form in this piece was to take these isolated sound ideas/motifs and repeat/imitate them in a manner that felt like it was going somewhere (The repetition/imitation itself not having to be structured, but perhaps more organic or improvised). Complex and strict forms like Sonata or even Pop wouldn’t accurately achieve this goal. So, it was determined that the form must be even more basic (remember we don’t want the listener to immediately recognize this as music). My solution was to introduce and eventually repeat/imitate these “themes/motifs” as they were applied throughout the changes in the dynamic curve.

Last but not least… “What role will the Mixing Process play in this piece?”

I feel very strongly about the role of Mixing in the Composition process, as it’s unavoidable in modern times. However, I’ll save the majority of what I have to say about this topic for a separate article.


As it applies to this question, though, I determined that the subject matter and the piece itself needed "mixing forethought." Simply thinking about what pitches, rhythms, or articulations to use would not be enough, so I went a step further and asked myself questions like... "Is a High Pass Filter needed in this piece? If so, when and for what part(s)? How much distortion should be used on the guitar... what pickup? Should I automate the reverb to the dynamic curve or keep it consistent throughout the piece?"

It’s through questions like these that some of my most creative answers originated. When you become more aware of exactly what frequencies you want expressed at a certain point in a piece of music or how you plan to creatively pan each instrument, your music will immediately benefit from the original answers you come up with.

I always like to say that if it affects the way your music sounds at the end of the day, then it's a part of the Composition Process that should be taken into consideration. That goes for Mixing and even your state of mind prior to writing (make sure it matches the mood you want to express in the piece of music!).



III. Applying the Answers


Now that we have some unique answers to work with, it's all about performing and capturing their essence. For instrumentation, it was decided that everything was permitted, but most "standard" writing practices would not apply.

Bend a string of the guitar beyond its “comfortable point” and play your theme. Play the Piano with socks on your hands or breathe into the mic and apply massive reverse delay. Place a huge pillowcase over your mic/head and start to sing. Record your harp in the bathtub or pitch up/down kitchen pan impacts and organize them to build a triad.

The options available to you are only restrained by your ability to ignore the fear of "What will others think?" The answer to "What is Music?" is growing every day, with new ideas from creative composers willing to push the boundaries of sound and a more accepting audience that's aching for something new and original. With that said, I'd like to wish all of you the best. Keep composing, fellow artists!

If you’d like to listen to the piece of music I finished, click here and tell me where to send it.


Monday, September 9, 2013

Interactive Music in Videogames: We look at the options, the methods and the impact

By West B. Latta

Whether you're a game developer, a game player, or have only a passing interest, it is plain to see the growth and advancement of the video game industry over the past decade. Robust graphics systems, ample disc space, bountiful system memory, and dedicated DSP have all become increasingly common on today's game platforms.


While this continues to drive the look, feel, gameplay, and sound of games, it can be said that, to a large degree, high-profile, large budget games have increasingly looked to film as their benchmark for quality. Achieving a true 'cinematic' feel to a game seems to be the hallmark of what we now consider 'AAA' games.

As game technology progresses, it is useful to look at not only the ways in which the technological aspects have improved, but also how design and artistic approaches have changed in relation to changing technology. With regards to music, what is it, specifically, about cinematic music that works so well? In this brief article, we'll take a look at how changing technology has altered our perception and application of what music in games should be.

Where We've Been

In the early years of games, music was predominantly relegated to relatively short background loops, generated by on-board synthesizer chips and various systems of musical 'control data' that would trigger these pre-scripted musical sequences. While not unlike our use of MIDI today, these systems were typically proprietary, and learning the language and programming of these systems was no mean feat for a workaday composer.

And yet, these were the 'iconic' years for video game music - where the Super Mario jingle, the Zelda theme, and many other melody-heavy tunes were indelibly imprinted on the minds of a generation. The limitations of the sound systems in these consoles were, in themselves, a barrier to creating anything other than relatively simple, catchy tunes.

As we progressed into the mid and late 1990's, technologies afforded us higher quality sounds - with higher voice counts, FM synthesis and even sample playback through the use of wavetable soundcards. Though the sounds were often highly compressed, the playback of real, recorded audio was a leap forward for home consoles and computer games. PCs and even some consoles moved to more MIDI-based or tracker-based musical systems, and so were somewhat easier to compose for than their earlier predecessors. Even so, musical soundtracks didn't drastically advance beyond the simple, background loop modality for quite some time.

In the mid to late 1990's, however, we began to hear a shift in game soundtracks. While simple backgrounds were still the norm, there was a sort of "mass-exodus toward pre-recorded background music"(1), and a few higher-profile titles were afforded a greater percentage of budget, disc space, and system resources. This all added up to a slow, but perceptible, shift toward the elusive 'cinematic' feel of film. I still remember watching the opening cinematic for Metal Gear Solid 2: Sons of Liberty and thinking to myself, "This can't be a videogame!" The quality of the voice acting, the soundtrack - the entire game felt, to me, like a dramatic leap forward. This was but one example among many titles that set out to push the boundaries of audio in games.

During the past 10 years, we have seen rapid and dramatic changes in the technology, artistry, and application of music in video games. Disc-based game platforms came to the fore with the release of the Sony Playstation 2 and Nintendo Gamecube early in the decade, and higher powered consumer PCs became increasingly more affordable. As a result, we hear a definite shift in musical scores, with significantly longer runtime, more complexity, more robust instrumentation and arrangement, higher quality samples, and even CD-quality orchestral recordings.

Where Are We Now?


At present, we're steeped in the current generation of gaming systems. Xbox 360, PS3, Nintendo Wii, and PC gaming have grown to include full HD video resolution and high-quality 5.1 surround sound. Low-fidelity, synthesized or sample-based soundtracks have given way to fully arranged and orchestrated scores, recorded by world-class symphonies. While they haven't yet become household names like Zimmer, Williams or Goldsmith, well-known game composers are highly sought after as developers continue to strive for a more cinematic feel to their games. Truly, some game soundtracks rival those of major motion pictures in quality, scope and performance. This trend has even given rise to a small 'video game soundtrack' industry, with record labels devoted specifically to releasing and promoting game soundtracks to the mass market via CD and digital download.

Moreover, the sound of classic and contemporary video games has increasingly gained mainstream popularity as the synthesizers of old platforms such as the Game Boy, C64, and NES have made their way into popular music by some of today's biggest musical artists. Likewise, game soundtracks are increasingly being presented to the public in unique ways. Bands such as The 1-Ups, The Minibosses and Contraband present re-arranged versions of old game tunes on live instruments, while live orchestras perform soundtracks at events such as Video Games Live.

While it is undeniable that the quality and scope of game music have, in some cases, grown to match that of film, it simply isn't enough. Games are an interactive medium, and as such, the presentation of musical soundtracks must also be able to adapt to changing gameplay. To get a truly immersive experience, the music in games must change on-the-fly according to what is happening in the game, while still retaining a cinematic quality. Rigidly scripted musical background sequences can't impart the same level of depth as music that truly matches the moment by moment action.

Surprisingly, adaptive and interactive music schemes have been used in games for longer than we realize. Even the original Super Mario Brothers music changed tempo as the player's time was running out. Yet making highly interactive, high-quality orchestral scores adds a layer of complexity seldom attempted by game developers. Instead, many continue to rely on simple geographic and 'event' triggers for our accompaniment, rather than a truly adaptive music system.

While some developers have attempted to tackle this issue themselves, many of their solutions are proprietary. To go a bit deeper into interactive music, we will instead turn our attention to the two premier audio middleware providers for today's most popular AAA titles: Firelight Technologies, makers of the FMOD Ex audio system, and Audiokinetic, makers of Wwise.

FMOD Ex

Firelight has taken a unique approach to dealing with interactive or adaptive music. Their FMOD Designer system allows two distinctly different approaches. Through their Event system, the composer can utilize multichannel audio files, or 'stems'. This allows certain individual instruments or sections to be added or subtracted based on game states, or any other dynamic information fed into the FMOD engine such as player health, location, proximity to certain objects or enemies, etc. This technique was used to great effect in Splinter Cell: Chaos Theory, where, depending on the level of 'stealth and stress' of the player, different intensities of music would be brought in. This type of layering is often called a 'vertical' approach to music system design.
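To make the 'vertical' idea concrete, here is a minimal sketch in generic C++ (not FMOD's actual API; the stem names and thresholds are hypothetical) of how a single game-driven intensity value could be mapped onto the gain of each pre-composed stem:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// One pre-composed stem (e.g. ambient pad, percussion, full combat kit)
// that should only become audible above a certain intensity.
struct Stem {
    std::string name;
    float fadeInStart;   // intensity at which the stem begins to fade in
    float fadeInEnd;     // intensity at which the stem reaches full volume
    float gain = 0.0f;   // current linear gain, 0..1
};

// Map a game-driven "intensity" value (0..1) onto stem gains. In a real
// engine this would run every tick and the gains would feed the mixer.
void updateVerticalMix(std::vector<Stem>& stems, float intensity) {
    for (Stem& s : stems) {
        float t = (intensity - s.fadeInStart) / (s.fadeInEnd - s.fadeInStart);
        s.gain = std::clamp(t, 0.0f, 1.0f);
    }
}

int main() {
    std::vector<Stem> stems = {
        {"ambient pad", 0.0f, 0.1f},
        {"percussion",  0.3f, 0.5f},
        {"full combat", 0.6f, 0.9f},
    };
    updateVerticalMix(stems, 0.7f);  // e.g. the player has just been spotted
}
```

The point is simply that all stems keep playing in sync; the game parameter only decides how much of each layer is heard.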

 

The second approach FMOD takes is through their Interactive Music system. This system takes a more 'logic-based' approach, and allows the designer to define various cues, segments and themes that transition to other cues, segments or themes based on any user-defined set of parameters. Moreover, this particular system allows for beat-matched transitions, and time-synchronized 'flourish' segments. In this way, a designer or composer might break down their various musical themes into groups of smaller components. From there, they would devise the logic that determines when a given theme, for example "explore" is allowed to transition to a "combat" theme. This segment and transition based approach is often referred to as a 'horizontal' approach.

A system of this kind was used in the successful Tomb Raider: Legend. For that particular project, composer Troels Folmann used a system he devised called 'micro-scoring', crafting a vast number of small musical phrases and themes that were then strung together in a logical way based on the player's actions throughout the course of the game. For example, the player may explore a jungle area with an ambient soundtrack playing. As they interact with an artifact or puzzle, a seamless transition is made to a micro-score that is specific to that game event.
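A minimal sketch of the 'horizontal' idea, using made-up segment names rather than anything from FMOD Designer or Tomb Raider: Legend, is a small state machine that accepts transition requests from the game but only executes them on a musical boundary:

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>

// A tiny "horizontal" music logic sketch: named segments, a table of
// allowed transitions, and a rule that switches only happen at bar ends.
struct MusicLogic {
    std::string current = "explore";
    std::string pending;  // requested next segment, if any
    std::map<std::string, std::set<std::string>> allowed = {
        {"explore", {"tension", "combat"}},
        {"tension", {"explore", "combat"}},
        {"combat",  {"explore"}},  // combat resolves back to exploration
    };

    void request(const std::string& next) {
        if (allowed[current].count(next)) pending = next;
    }

    // Called by the playback engine each time a bar of the current segment ends.
    void onBarBoundary() {
        if (!pending.empty()) {
            std::cout << "transition: " << current << " -> " << pending << "\n";
            current = pending;
            pending.clear();
        }
    }
};

int main() {
    MusicLogic music;
    music.request("combat");  // game event: enemy spotted
    music.onBarBoundary();    // the switch lands on the next bar, beat-matched
}
```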

Audiokinetic Wwise


Wwise is relatively new to game development, gaining popularity over the past several years with its first major debut in FASA Interactive's Shadowrun. Since that time, Audiokinetic has rapidly enhanced their system, and their interactive music functionality takes a 'best of both worlds' approach.
With Wwise, it is possible to have both multichannel stems and a logic-based approach to music design. A composer can create a series of themes with time-synchronized transitions based on game events or states, while simultaneously allowing other parameters to fade various stems in and out of the mix. This system incorporates both a horizontal and a vertical approach to music design, and it has resulted in an incredibly powerful toolset for composers and audio designers.


What's next?

The term 'videogames' now seems to encompass an entire spectrum of interactive entertainment in all shapes and sizes: casual web-based games, mobile phone games, multiplayer online games, and all manner and scope of console and PC games. It seems impossible to predict the future of interactive music for such a variety of forms, and yet we have some clues and ideas about what might be next for those AAA titles.

First and foremost, we can be sure that the huge orchestras and big-name composers aren't going away any time soon. In fact, as more games use the 'film approach' to scoring, it seems likely that it will continue to be the standard for what we consider blockbuster games. Fortunately, middleware like FMOD and Wwise has given composers and audio designers robust tools to adapt and modify their scoring approach to be more truly interactive with the game environment. As this generation of consoles reaches maturity, we will yet see some of the finest and most robust implementations of interactive music, I'm sure.

Even so, pre-recorded orchestral music - however well designed - will still have some static elements that cannot be changed or made truly interactive. Once a trumpet solo is 'printed to tape', it cannot easily be changed. Yet the technological leaps of the next-generation of consoles may present another option. It isn't unreasonable to think that we may see a sort of return to a hybrid approach to composing, using samples and some form of MIDI-like control data. While at first this may seem like a step backward, consider this: with the increasing quality of commercial sample-libraries of all types, and the extremely refined file compression schemes used on today's consoles, it is possible to think that the next Xbox or Playstation could, in fact, yield enough RAM and CPU power to load a robust (and highly compressed) orchestral sample library. The composer, then, is hired to design a truly interactive music score in a format akin to MIDI - note data, controller data, as well as realtime DSP effects. This score, then, would not only adapt in the ways we've described above (fading individual tracks in and out, and logically transitioning to new musical segments) - but because we have separated the performance from the sample data, we would now have control over each individual note played. The possibilities are nearly endless - realtime pitch and tempo modulation, transference of musical themes to new instruments based on game events, and even aleatoric or generative composing, which assures that a musical piece conforms to a given set of musical rules, yet never plays the same theme twice.
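As a toy illustration of that last point, generative composing "within a given set of musical rules", here is a sketch (purely hypothetical, with no particular console or sample library assumed) that picks each note from a fixed scale while favouring stepwise motion, so every run follows the same rules but never repeats exactly:

```cpp
#include <algorithm>
#include <iostream>
#include <random>
#include <vector>

// Toy generative phrase: notes are drawn from a fixed scale with only small
// melodic steps allowed, so the "rules" hold but no two runs are identical.
int main() {
    const std::vector<int> scale = {60, 62, 63, 65, 67, 68, 70, 72};  // C natural minor, MIDI notes
    std::mt19937 rng(std::random_device{}());
    std::uniform_int_distribution<int> step(-2, 2);  // stepwise motion and small leaps only

    int index = 0;  // start on the tonic
    for (int i = 0; i < 16; ++i) {
        index = std::clamp(index + step(rng), 0, (int)scale.size() - 1);
        std::cout << scale[index] << " ";  // in a real engine: schedule this note for playback
    }
    std::cout << "\n";
}
```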


Indeed, these possibilities and more are surely coming, and it is an exciting time for composers, audio designers, and gamers alike. For now, we can enjoy a new level of attention and awareness on game music. We are treated to truly orchestral experiences, if not completely adaptive and interactive ones. And yet, in the coming years, interactive music technology will continue to mature, and we will assuredly hear more sophisticated implementations of these technologies across the full spectrum of games. I encourage you to listen closely to the games you or your friends play over the next few years. The tunes you hear today are helping to shape a musical revolution for tomorrow.
 
Footnote: 1 - Gamepro - Next Gen Audio Will Rely On Midi



Tuesday, September 3, 2013

Music for the Boxset Generation

by Simon Power


Game of Thrones, Breaking Bad, Nashville, The Sopranos. American drama series are a huge influence on the way television looks, feels and sounds in contemporary entertainment. A big part of that enjoyment comes from their music: the soundtracks and scores. In this article we take a look at the music used in some of these series and find out what makes it such an essential part of the viewing experience for the boxset generation.


Game of Thrones (HBO)


music composed by Ramin Djawadi.

Synopsis

Based on the fantasy novels A Song of Ice and Fire by George R. R. Martin, Game of Thrones is set in the fictional continents of Westeros and Essos, with storylines encompassing civil unrest, exile and the impending threat of a very, very long winter.

The music

An important part of the mood-setting in Game of Thrones is its long title sequence and theme tune at the beginning of every episode. A mechanical, three-dimensional map unfolds alongside Djawadi’s evocative music: a rich orchestral theme featuring cantering Eastern-style cinematic percussion, a solo cello and an assortment of brass, string and woodwind instruments. This theme sets the tone for the narrative and returns in a variety of versions throughout the series.

GOT is heavy on complex dialogue which tends to govern the role of the music. Moody orchestral swells offer support to the atmosphere of the dialogue, helping define the importance of what’s being said, rather than overwhelming it.

During battle scenes the music comes to the fore, often heavy orchestration with deep resonant percussion.

Although it may be played on real instruments mixed with samples, there are very few obviously synthetic sounds to detract from the medieval feel of the series.
In fact, with its austere orchestral washes, the role of the music could be termed ‘transparent’, in as much as it’s sympathetic to the dialogue and offering a supporting role to the storyline.

Aside from the incidental score, the series also buddies up with a number of indie bands: The National, Sigur Ros and The Hold Steady give a contemporary feel to the music palette. It's a good way to reach the Game of Thrones audience on another level, offering a connection to the present day through familiar artists and bands.

---------

 Breaking Bad (AMC)


Music composed by Dave Porter
Music supervisor: Thomas Golubic

Synopsis

Walter White, a struggling chemistry teacher, is diagnosed with inoperable lung cancer and turns to a life of crime in order to secure his family’s future before he dies.

The music

The hugely successful cult series Breaking Bad has an entirely different approach to its score and soundtrack, with the music taking on a much more upfront role across the series' five seasons.

There are a number of different ways in which the music appears in the show. Firstly, there are composer Dave Porter’s contemporary-sounding, synth-based cues that appear during key moments such as scene-setting, great drama, tenderness or suspense. Unlike traditional orchestral music, Porter uses synths and electronic sounds mixed with real instruments like guitar, piano and woodwind.



With a variety of arpeggios, swells, breakbeats and loops the score takes on a much more contemporary feel in line with producers like Trent Reznor, Brian Eno or Mogwai.

The second way that music appears is with published tracks by established and unknown artists, sprinkled through each episode and adding an almost ‘music video’ feel to a scene, the music becoming every bit as important as the visuals it accompanies. It can be anything from 60’s lounge jazz or hip hop to indie rock or Mexican Mariachi music. The variety of music used is dynamic, eclectic and quite often full to bursting point with humour and irony. Take, for instance, the scene where we see meth addict and prostitute Wendy S. going about her daily business to the jaunty overtones of The Association’s Everyone Knows It’s Windy!

------------

Nashville (ABC)


Executive music producer, T-Bone Burnett
Managing producer, Buddy Miller

Synopsis

Nashville chronicles the lives of a variety of fictitious singers from Nashville, Tennessee as they deal with the ruthless, cut-throat world of Country Music stardom.

The music

Nashville is an example of a series where songs are being recorded and performed as the drama unfolds and are often lyrically and musically intertwined with the on screen drama. It’s an example of how music, visuals and narrative can be gelled together to appear almost seamless.

The added incidental music is, of course, Country-flavoured as well: a few bars of acoustic picking as we scan across the Nashville skyline, or a well-judged slide guitar lick in a minor chord to signify moments of melodrama. As a kind of self-fulfilling prophecy, the soundtrack albums have become best sellers, making the music almost as popular in reality as it appears to be fictitiously in the series!







-----


The Sopranos (HBO)


produced by David Chase

Synopsis

New Jersey mobster Tony Soprano turns to psychiatry as he struggles to balance the conflict between his home life and his job as boss of a criminal organisation.

The music

Perhaps the original series that kicked off the boxset generation back in the late 1990’s, The Sopranos was hugely influential with its bold portrayal of the American Dream turning into a spiralling nightmare. If you were a fan, you’ll remember the dust-ups, the shoot-outs, the car chases, the brutal assassinations. But it may surprise you to learn that there was no original music composed for the show in its entire six-series run.

The music choices were all carefully chosen popular songs that fitted the mood perfectly, often in complete opposition to the on-screen violence or gory melodrama. This approach to scoring was a fairly new device on TV and was perhaps more in line with the feature films of Martin Scorsese, who features end-to-end popular music in gangster films like Casino, Goodfellas and The Departed.

One recurring use of music in The Sopranos was a well placed eclectic song playing as the end credits rolled out. Elvis Costello, Ben E. King, The Chi-Lites, Van Morrison. Even John Cooper Clarke’s Chicken Town featured in this highly coveted spot.

Then, of course, there’s the show’s popular signature tune, Woke Up This Morning by Alabama 3, chosen when producer David Chase heard it on daytime radio while driving to work.


 

So just within these few examples we have seen widely diverse ways of using music in a TV series: the supporting role of Ramin Djawadi’s orchestral score in Game of Thrones, Dave Porter’s synth-based incidental music for Breaking Bad, Nashville’s total integration where the music becomes part of the show, and The Sopranos’ reliance on popular music to make up a memorable score.

There are of course many other examples. Mad Men’s heady mix of 60’s pop, Boardwalk Empire’s prohibition era Jazz and Blues. Even The Handsome Family’s eerie title track to True Detective.

All these and many more add flavour, depth and atmosphere to the excitement of American TV dramas, enjoyed on TVs and other devices around the world by a new breed of dedicated fans: the Boxset Generation.


Wednesday, August 14, 2013

Three Ways to Build a Sound Library


by Paul Virostek


What’s the best way for a new field recordist to begin building a sound library? How can a sound designer grow a folder of scattered samples into a collection with heft and weight?



Huge sound clip libraries roam the Web. Some have tens of thousands of sound effects. New sound pros are easily intimidated. Perhaps you want to sell your sound clips on the Internet. Maybe you just want to grow your collection to use in your own projects. How can you grow a similar sound library? Most of us don’t have thousands of dollars to spend doing so.

I know the feeling. I began building my sound library with only a handful of DAT tapes. Now it numbers over 20,000 samples. You can do this, too.

So, today I’ll share three ways to start building a sound library. I’ll explain the difficulties, how to avoid them, and the pros and cons of each method.

What You Need to Get Started

What do you need to start a sound collection?

A good library demands endless intangible qualities: ideas, creativity, flexibility, and originality. We’ll look at things more directly, though. What tools do you need to begin building a good collection?

  • Gear.
  • Sound isolation.
  • Original recordings and copyright.
  • Cash.
  • Time.

Your choice of the following three options depends on how much of these you have, and want to use.

1. Do It All Yourself







 The simplest way to get started is to do everything yourself. This means you’ll provide the gear. You’ll shape the recording space (whether a sound booth, or a clean atmosphere outside). You’ll find the cash to fund everything, and the time to get things done.

The major benefit of this option is control. You can record in your home at two in the morning. There’s no need to schedule studio time, or rely on assistants to show up.

And, since you produce every clip, you’ll own all of them. You can twist them, remix them, or even give them away however you like. Your collection will be perfectly legal and 100% yours.

Many recordists on a budget are able to find free software and plug-ins to achieve the same effect as commercial options. You’re free to adapt your home for the best recordings: shut off the HVAC, unplug the fridge, and so on.

Just the same, the recording environment won’t be as pristine as a studio. That may mean you’ll have to alter what you record. For example, you may not be able to record quiet props. Loud, more prominent recordings will work well, however. You may wish to focus on exterior atmospheres, too. Just ensure a substandard recording space doesn’t sacrifice the quality of your recordings. Sound isolation and quality are extremely important for a high-quality collection.

Pros
• You learn a lot.
• You improve your craft.
• You have complete control.
• The only expense is time.

Cons
• Lack of sophisticated equipment.
• Possibly a noisy environment.
• Takes longer.

2. Record in a Studio




Major cities will have dozens of recording studios. They’ll feature the latest software, and plug-ins. They’ll stock a mixture of modern equipment and classic vintage gear. These studios will be soundproof, and acoustically treated. This allows you to capture delicate, quiet sounds. This is a good choice to ensure you have clean recordings. You also have access to superior microphones.

However, this benefit comes with a cost. Studios are expensive. Research options. Big studios charge $200 an hour. There are cheaper, smaller studios that charge as low as $50 an hour. Weekend rates are cheaper. Night rates are cheaper still.
         

If you decide to work this way, make sure that you are fully prepared. Make a list of everything you want to record. Gather all your props beforehand. This means you will need less time in the studio to record what you need. That makes it cheaper.

It’s a good idea to explain to the engineer that you must own all recordings. Most of the time they don’t care. They’re selling the space, not the artistic work. It’s critical to have this discussion, nonetheless.

Do you have your own recorder? Comfortable choosing and arranging microphones? Record everything yourself. Inform the facility you don’t need an engineer. This will save a bit more in studio costs.

Remember to bring your own hard drive. Don’t use theirs. Others may use the studio later, and mistakenly use sound effects you own.

This option takes care of the most fragile part of the process: the recording itself. Once the recordings are captured you can return to your home studio and master all the final clips yourself.

Pros
• Professional, modern equipment.
• Pristine recording space.
• Engineer’s expertise with acoustics, microphone quality, and so on.
• Creative advice from a sound pro. Collaboration.

Cons
• Need to pay whenever you want to record.
• Dependent on others.
• Must ensure ownership of files with studio.

3. Hire an Artist.


A third option is to pay someone else to build a sound library for you. There are hundreds of sound pros who are happy to record or design sound effects for your collection. These pros are highly talented people who will deliver superior recordings.

In this case, you’ll send them a list of tracks you need. You may choose from a selection of existing tracks and “buy out” the rights to own them yourself. The fee for this will be based on an hourly rate, a bulk package, or a price based on quantity.

This is a quick, effortless way to build your sound library. An appealing side effect of hiring others is that they’ll provide a fresh take on sound recording.

There are two issues, however.

First, this is usually expensive. The cost of labour makes it a bit too pricey; you may never make up the cost of two days of artist labour in sound effect sales. You may find someone cheaply, though. That is key. Perhaps you can hire a talented film school student at a lower rate. The second issue is that your freelancer must sign a contract transferring ownership of the work to you. This is called work-for-hire. It ensures you own the creations, can use them in your own projects, and resell them if you like.

It’s important to realize that you’re working with creators, just like you. You have worked hard to create your own tracks, and they are precious to you. The freelancers you hire will feel the same. Most artists are reluctant to give up ownership of their creations. They’re usually emotionally invested in them. It’s completely understandable.

This arrangement certainly can work; you just need to make the issue of ownership clear. Tell them you are buying out the sounds, and that you plan to sell them later. Mention this before you begin the work. This ensures everyone is beginning the project with the same understanding.

Pros
• Fast.
• High-quality, professional work.
• Fresh recordings.

Cons
• Expensive.
• Creative ownership must be guaranteed.
• Must ensure freelancers own the copyright of the clips they are selling you.

Which Do You Choose?

No single option is better than the others. Instead, your best choice is whatever balance of cash, time, and access to sound isolation and gear works for you. Your choice may be influenced by people, too. Are you more comfortable working alone, or do you like bouncing ideas off of others? Perhaps you feel it’s easier to let someone else do all the work instead. Involving others can be inspiring. It adds expense, but saves time.

Be aware that beginning a sound library is a long journey. It takes time to record and polish sound effects. The initial up-front investment in time and cash is real, however it will pay off handsomely over the years of your sound career. Use these three options to begin your sound library now. Why?

A strong collection represents your skill and inspiration in every clip you record, master, and publish. As your sound library grows, it will become part of every project you join, amplify it, and share your creativity with everyone who hears your work.



Monday, August 5, 2013

Things to Consider When Scoring for Games, part 5

By Kole Hicks

These articles are not intended to be a master source for everything one must consider (and how to prioritize it) when scoring a game; rather, this is a series of articles based on my experiences with each newly completed project. As I learn from the process and the other developers involved, and write about the experiences here, I hope the information will help better guide your future scoring efforts for games.

I recently had the pleasure of composing the original soundtrack to a fantastic 2D pirate Simulation/RPG called Pixel Piracy. The experience itself was fantastic and I couldn’t have asked for a better developer to collaborate with, so there weren’t really any unexpected issues later in the process. However, as with every project, there were a few unique musical situations that I had to consider.



I. Defining Musical Roles


The first thing I had to consider was the role music would play in Pixel Piracy. The developer and I had a few discussions beforehand, but they were very open to my suggestions, which let me be more confident in my decisions. This is something that can’t be overstated, as it directly affected the quality of the music. It was through this very open line of communication that we decided on two main roles for the music to play.

The first was to "set the stage", so to speak, and operate as any normal background score in a game, subtly enhancing the action on screen while subconsciously influencing the player’s mood. The second role was for some of the music to be consciously heard as music, and possibly even participated in, by the listener. Each main role was split into multiple sub-roles that ultimately defined Pixel Piracy’s "Musical Identity".

For the subconscious background score role, we split it into two distinct categories: Combat & Neutral.

The Combat sub-role covered all potential combat situations in the game, both in the water and on land. Here’s a Combat Example. The Neutral sub-role’s purpose was to serve as light background music during relatively placid moments in the game. Here’s a Neutral Example.

For the Conscious Musical Role, we also split it into two separate categories: Tavern Tunes & Sea Shanties. Both sub-roles could be instrumental only, but the thing that makes them unique, and thus consciously thought of by players more often, is the Lyrics/Vocals featured in many of the pieces.

The Tavern Tunes sub-role only plays when the player is on an island with a Tavern & his captain is inside of it. Here’s a Tavern Tune Example. The Sea Shanties sub-role only triggers when the player is sailing across the sea on his/her ship. Here’s a Sea Shanty Example.

II. Creating that Authentic Pirate Sound


I needed to find the very essence of what it’s like to sail the rowdy pixelated seas with your merry band of salty dogs. To accomplish this, I frequently played early builds of the game and had some in-depth discussions with the developer, which helped inform me of the overall tone of the game. Beyond that, I looked for inspiration in various styles of music and other pirate-related media.



Pixel Piracy is a fun and adventurous game, so Irish jigs and reels immediately popped into my mind as a base for the game’s music. I love the style’s jaunty disposition and the unique instrumental colors that comprise its traditional ensemble. However, for some gameplay situations it wasn’t appropriate to stick strictly within the parameters of this style’s guidelines. It was in these situations, like walking along the beach of a new island or raiding a rival’s pirate ship, that I snuck in other influences: specifically, unique world wind instruments (like a Gemshorn), period instruments (like a Hurdy Gurdy), and of course the bombastic symphonic flavors used in other popular pirate projects.

Over the years I've invested quite a bit in my own personal rig so that I’d have access to the best virtual instruments and sample libraries on the market. However, no matter how much time I spent behind the computer programming, certain instruments just wouldn’t sound as raw or beautiful as I wanted. So I had the pleasure of hiring a handful of fantastic session musicians (Cello, Violin, Accordion, and a unique instrument specialist) who brought the music to life. Their input, knowledge of their instruments, and interpretation of what I wrote was immensely valuable, and Pixel Piracy is the beneficiary. In fact, I don’t think I (or anyone else) could have pulled off this score with the same amount of energy and authenticity if live musicians hadn’t been hired.

At this point you might be asking, "But the graphical style is very pixelated. This seems like a big part of the game and would justify the use of 8-bit/Chiptune music; so why did you ignore it?" The answer is, we didn’t ignore it. We actively discussed its usage in the game and came to the conclusion that an 8-bit style score would push the "nostalgia factor" too much and wouldn’t allow the raw, gritty emotion derived from acoustic instruments to filter through to the gamer. For some games chiptunes work perfectly, but for us it felt like an unnecessary stereotype for the music to follow, one that would ultimately limit the score’s effectiveness.



III. Making the Most of What You Have Available


Some games necessitate the design of a highly interactive and complex music system; Pixel Piracy is not one of those games. That’s not to say a simple music system is less effective than a complex one, but rather that each game requires its own unique music solution. For Pixel Piracy, we felt comfortable in its system’s simplicity and rather than worrying about creating various stems or mixes of each piece and hoping it would implement correctly, I could just focus on writing a good, solid piece of music.

Even though our music system was relatively simple, in that it only looped full tunes and faded them in/out when necessary, we added some depth to it without magnifying our workload. For example, the jaunty tunes inspired by Irish jigs/reels will only play when your Captain is on an island with a Tavern. Also, when in combat, rather than looping the same song over and over again, the system will cycle through a handful of appropriate combat songs after one of them has ended. It’s not perfectly seamless, but I composed the combat tracks in a way (similar tempo and the exact same key) so that transitioning from one piece to another is relatively smooth.
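A rough sketch of such a cycling system, with hypothetical file names rather than the game's actual assets, could be as simple as advancing through a list of same-key, similar-tempo combat tracks whenever the current one finishes:

```cpp
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

// Minimal "cycle through combat tracks" sketch: because the tracks share a
// similar tempo and the same key, simply starting the next one after the
// current one ends sounds acceptably smooth.
class CombatMusic {
public:
    explicit CombatMusic(std::vector<std::string> tracks)
        : tracks_(std::move(tracks)) {}

    // Called by the game when the currently playing track finishes.
    const std::string& nextTrack() {
        current_ = (current_ + 1) % tracks_.size();  // cycle, never repeat back-to-back
        return tracks_[current_];
    }

private:
    std::vector<std::string> tracks_;
    std::size_t current_ = 0;
};

int main() {
    CombatMusic combat({"combat_a.ogg", "combat_b.ogg", "combat_c.ogg"});
    for (int i = 0; i < 4; ++i)
        std::cout << "now playing: " << combat.nextTrack() << "\n";
}
```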

Although Game Music is very important to setting the tone of a game & carrying or transitioning a player through various areas/game states, it is still near the bottom of a programmer’s priority list. This is not because they don’t think your work is important, but rather (if you’re not implementing it yourself) they have so many other tasks to focus on that they rarely have time to dedicate solely to music. Programmer time is a rare resource, so use it wisely; we were very fortunate that our music system could be quite simple and still have everything sound top notch.

In addition to the rarity of the Programmer’s time, I had a pretty short amount of time to write, record, and mix/master the entire score. So it was essential that I scheduled out my weeks in a manner that would allow me to work efficiently. As Composers know, being inspired & writing great music isn’t simply a switch you turn off/on, so when it was difficult for me to write I would focus on other tasks like finishing charts for musicians or uploading the Pro Tools session to my server so the Recording Engineer could pull it down. Mixing up my tasks & staying busy kept the momentum up; allowing me to finish the score right on time without any "crunch" whatsoever.

As mentioned in the italics at the beginning of this article, this is by no means a complete list and I’m still a young professional with many ups/downs ahead in my career, but nevertheless I believe this information can be beneficial to many composers no matter their experience level. Thanks for reading and keep composing fellow artists!



Sunday, August 4, 2013

Things to Consider When Scoring for Games, part 4

By Kole Hicks


* These articles are not intended to be a master source for everything one must consider (and how to prioritize it) when scoring a game; rather, this is a series of articles based on my experiences with each newly completed project. As I learn from the process and the other developers involved, and write about the experiences here, I hope the information will help better guide your future scoring efforts for games.

In this article I would like to discuss a few of the things I had to consider when scoring the latest release from Ender’s Fund, "Rabid Rascals". This is a head-to-head mobile game with a unique hyper-violent "stuffed animal" type art style where you can take out opponents and level up to get better gear. It’s free to play, so I’d recommend checking it out HERE!



I. Making the Most of your Music


It’s important to understand before we start that, because this is a mobile game, we’re going to have restrictions on how much music can be used in the game. In the case of "Rabid Rascals," this is especially important, seeing as there are a large number of Sound Effects and Voice Over "Effects" that will all need to share the same space. So early on in development, I recommend figuring out where music is absolutely necessary and how much is needed for that segment of the game to feel "enhanced."

For example, if we understand that most battles in-game will be around 60 seconds long, then a 15-second combat cue on a loop could get absolutely irritating. It may even get to the point where a player would turn the sound off entirely. My rule of thumb (for mobile games) is to figure out the average amount of time a player spends on a certain screen (or in a specific game mode/state) and then create a cue around 1½ times that length.

This covers the "standard" situation, but also lasts a little bit longer before looping for those epic battles. However, sometimes this isn’t an option, as some games will keep a player in the same game mode/state for 5 minutes or more. For "Rabid Rascals" though, and many of the other mobile titles I’ve worked on, my rule of thumb worked quite well.
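As a quick back-of-the-envelope version of that rule of thumb (illustrative Python only; the 3-minute cap is my own addition to reflect the long-mode caveat above, not a hard rule):

    def suggested_cue_length(avg_state_seconds, cap_seconds=180):
        """Write the cue roughly 1.5x the average time spent in a game
        mode/state, capped so very long modes don't demand huge cues."""
        return min(avg_state_seconds * 1.5, cap_seconds)

    # A ~60-second average battle suggests a ~90-second combat cue:
    print(suggested_cue_length(60))   # 90.0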

In addition to the average amount of time a player spends in each game mode/state, I recommend figuring out how that game mode/state evolves over the player’s experience with it. In "Rabid Rascals," we knew that battles would get more intense over time as each player landed shots and their character’s health dwindled. So, for "semi-timed" game modes/states like this, I was able to build the intensity of the piece over its entirety to help enhance what is happening visually. Although this is a simple loop and not in any way interactive, it can help give the illusion of an adaptive score, which ultimately adds a bit more tension to each battle.

Last but not least, don’t be afraid to write less music than you originally intended. For example, we could have easily decided that the inventory management/shop screens should have their own specific music cue. However, because the player may switch between the main menu and inventory screens often, we felt it was better to just have the Lobby/Main Menu piece play throughout both screens so as not to break the momentum of the piece. Otherwise it could get tremendously annoying to consistently hear the first 5 to 10 seconds of both the Lobby and Inventory/Shop cues as you switch between the screens.

**It’s also worth mentioning the importance of clearly defining the feel of each game mode/state/screen with your music in mobile games. You often have less music to work with and can’t always develop/evolve motifs over multiple pieces. Get to the point, but be clever with your usage of these themes. For "Rabid Rascals" we only used 3 pieces of music: Lobby (mischievous/fun), Versus Screen (impending danger), and Battle (a chaotic blend of the previous two feels).

II. Loops and Loops and Loops...


Some game engines, especially if they allow FMOD or Wwise integration, can support highly interactive music scores. However, in my experience, mobile games for the most part either don’t call for this amount of interactivity or don’t have the financial resources available to license third-party audio middleware like FMOD or Wwise. So, it is up to us to do the best with what we have available... which in most situations tends to be loops alone.




As I mentioned in a previous article, first it is important to create a piece of music that is rich in interesting material so that the player hears something new each time, or at the very least doesn’t mind hearing it over and over again. Beyond that though, there are a few other things we can do to help our loops sound "better."

MP3s tend to be the most common playback format for mobile music tracks, and if they are meant to loop, this can cause some issues, most notably the tiny "bubble" of silence the encoder inserts before each loop repetition, which makes seamless loops nearly impossible. There are a few technical things you can do to eliminate this unwanted space, but I’d like to talk about something you can do compositionally that can help the situation.
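One of those technical checks, sketched below: decode the bounce and measure how much near-silence pads the head and tail, so you know whether the encoder (or your export) has added space that will be audible at the loop point. This is an illustrative Python sketch using only the standard library, and it assumes a 16-bit mono WAV decode of the loop; it isn’t from the "Rabid Rascals" pipeline.

    import array
    import wave

    SILENCE_THRESHOLD = 200   # 16-bit amplitude treated as "silence" (an assumption)

    def padding_samples(path):
        """Return (head, tail) counts of near-silent samples in a loop bounce."""
        with wave.open(path, "rb") as wf:
            # Expects a 16-bit mono WAV decode of the loop.
            assert wf.getsampwidth() == 2 and wf.getnchannels() == 1
            samples = array.array("h", wf.readframes(wf.getnframes()))

        def leading_silence(seq):
            count = 0
            for s in seq:
                if abs(s) > SILENCE_THRESHOLD:
                    break
                count += 1
            return count

        return leading_silence(samples), leading_silence(samples[::-1])

    # Example: head, tail = padding_samples("battle_loop.wav")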

For "Rabid Rascals," all of the tracks in the game feature a mostly percussive "outro" section. The natural decay of percussive sounds can work quite well for loops, especially if you write a piece so that the final hit allows a few beats for the sound to fade out before we loop back to beat 1. In addition to this, when applicable and expecting the piece to loop a few times, it is sometimes wise not to resolve the harmonic tension until the loop starts over again, so feel free to sit on the dominant at the end of a piece.

III. Identifying What’ll Make your Score "Pop"


What do I mean by "Pop?" Well, it’s purposely vague, as it could truly mean anything. For "Rabid Rascals," we determined the music style early on and I was able to identify a few characteristics of the style that absolutely needed to be executed correctly for it to be convincing.

The main thing I needed to achieve was making sure this highly active and "bouncy" orchestral style was articulated with an immense amount of passion so that it would feel alive. However, when you’re working on mobile games you rarely (if ever) get to work with a live Orchestra, especially if you only need to record a few minutes of music.



Although samples are getting better every year, I still prefer using live players whenever possible. So, since it was within budget, I hired a fantastic violinist to record herself playing all of the 1st violin lines a few times. Stacking and layering in these recordings made a huge difference, not necessarily because you could "hear" the violin more, but because the aggression captured from the live recordings "trickled" into all the other parts and made them feel more real.

For your project though, the thing that makes your music "Pop" may not be a soloist at all. It might be a creative way of mixing your piece, altering your recording methods, or something else entirely. Whatever it may be, I advise you to actively search for and define it. This will help keep your score cohesive and allow it to better fulfill its role in the game.

As mentioned in the italics at the beginning of this article, this is by no means a complete list and I’m still a young professional with many ups/downs ahead in my career, but nevertheless I believe this information can be beneficial to many composers no matter their experience level. Thanks for reading and keep composing, fellow artists!

Saturday, August 3, 2013

Things to Consider When Scoring for Games, part 3

These articles are not intended to be a master source for everything one must consider (and how to prioritize them) when scoring a game; rather, they are a series of articles based on my experiences with each newly completed project. As I learn from the process and from the other developers involved, and write about those experiences here, I hope the information will help better guide your future scoring efforts for games.

Hello Again! It’s been a little while since I’ve written one of these articles, but I just finished scoring a new game and learned quite a bit from the experience that many fellow Game Composers may find useful. The game is called "Cities of Legend" and it’s a Social Game for Facebook based on the New York Times bestseller "Legend" by author Marie Lu (developed by Wicked Sweet Games & published by CBS Films). I’ve worked on Flash games for Facebook before, but have never had the opportunity to create the "Audio World" for an established IP. That in and of itself was quite a fun challenge, but I also learned a few other things I’d like to share on what a Composer should keep in mind when scoring for games.

 

I. Platform Constraints


This is the third article in this series and the first subject of each one has been Platform Constraints. It is so tightly tied to video games, and to how your music system actually performs, that I’m fully expecting to learn something new on each project (especially considering there are so many platforms to create games for!). "Cities of Legend" is a Flash game for Facebook, so I’d like to talk a little bit about some of the constraints we had to work around.

First and foremost, people expect the loading times for their Facebook games to be minimal. The longer it takes your game to load, the higher the probability of the player just closing the window and ignoring your game altogether. This expectation directly limits the amount of music (and the quality at which it plays back) you can have in your game. For "Cities of Legend" we decided on three sixty-second loops: The Rebels Home, The Republic Home, and Combat/Mini-Game.

Another factor that limited us to three music tracks was Flash’s lack of support for even a basic interactive music system. Rather than spending our time trying to force the engine into something it isn’t built for, we decided to invest our time into reinforcing the most important moments in the game with strong, thematic music that can easily loop.

My friend and talented Composer, Gerard Marino, shared his way of thinking about music loops with me a while ago. I’m going to paraphrase a bit here, but he basically said, "If you only have a minute of music to work with and that minute is going to be looped over and over again, put so much detail and interest in that single minute that the player can hear something new each time it repeats."



I directly applied this concept to the music of "Cities of Legend" and ultimately it does a better job of reinforcing the game world than a few ambient loops would. It may be harder to pick out the looping point in ambient tracks, but we felt that approach wouldn’t be appropriate for a game of this size. The whole game flow is very fast, and ambient tracks (in that short a time frame) have nothing to add or say to the game, which conveniently moves us to the next subject...

II. Get to the Point


Not all games feature a twenty-hour single player campaign that develops your hero from rags to riches. So the "Symphonic Composer" mindset of taking your time to cleverly develop your motifs in unique ways over longer periods of time may be completely ineffective (depending on the style, game play, demographic, etc.).

While "Legend" (the Novel) is rich with detail and must take its time to dramatically crescendo, "Cities of Legend" (the Social Game) is meant to hastily throw the player in the world and have them immediately grasp almost everything that’s going on. Even if the player is unfamiliar with the world, within the first minute of playing they should understand the following: Pick a side to fight for, understand the function of the UI, and realize how to jump into battle. So as the Composer, we initially have about a minute to help reinforce or describe the tone of the faction they chose to fight for. After that amount of time, the player will most likely jump into a battle.

So we have a minute: what can we say in that amount of time?

Early on we decided that each faction (Republic vs Rebels) should have its own designated theme, instrumentation, and overall tone. I had read "Legend" before working on the score, but meeting with the Producer and Author was very helpful. For the sake of organization, I’ll provide a visual breakdown of how I determined which musical elements I assigned to each faction.





III. Work your Themes into Trailers or Promo Videos


I mentioned in the previous section that developing your motifs over the course of a game may not be the best option (or even possible) in some scenarios. However, if you’re able to negotiate and work on the Trailers/Promo Videos (and have time to compose the necessary themes before the trailers are released), this is a good place to do it.

Rather than writing stereotypical trailer music that would just serve to move the action along, I was able to sneak bits and pieces of each theme (Rebel & Republic) into our various Trailers. Furthermore, I was able to "stamp" the ending logo with the Rebel motif, which just so happened to align with the overall tone of the first novel (and thus served well as a "Main motif"). I found that this was not only musically satisfying, but helped establish the world of "Legend." This is especially effective since you won’t hear those themes (or probably that same combination of sounds/instruments) anywhere else.

Trailer 1 (Republic theme sneak in)
Trailer 2 (Rebel theme sneak in)

I thoroughly enjoyed my time working on "Cities of Legend" and would highly recommend both the game and novel to anyone interested in near-future dystopias with strong characters. Thanks for reading, fellow Game Composers, and I hope you’ve found this useful!

Friday, August 2, 2013

Things to Consider When Scoring for Games, part 2

By Kole Hicks

These articles are not intended to be a master source for everything one must consider (and how to prioritize them) when scoring a game; rather, they are a series of articles based on my experiences with each newly completed project. As I learn from the process and from the other developers involved, and write about those experiences here, I hope the information will help better guide your future scoring efforts for games.

In the first article on this subject we went through three different items to consider when scoring for a mobile game. Coincidentally, the latest game I finished scoring was also a mobile game, but many of the challenges and priorities differed. In this article I’d like to share three of the main things I took into consideration when composing the music for ‘Bag It!’

 

I. Platform Constraints


This has always been an issue for Game Audio people, but as technology has developed over time it’s become less of a burden for those working on Console/PC games. However, with the emergence of mobile games (and the restrictions of certain ‘stores’), this issue has once again reared its ugly head... or perhaps I should be the optimist & say this ‘unique challenge’ has once again reared its ‘special’ head.

Interestingly enough, the previous mobile game I worked on (mentioned in the first article) didn’t mind if we went over the allotted 20mb limit for 3G downloading. However, the developers at Hidden Variable Studios made it absolutely clear that they wanted to keep the whole game under 20mb.

After looking at the proposed asset list & discussing the audio ‘budget’ (around 3mb for all of the Audio), it became clear that this was a case of quantity versus quality. I always tend to favor quality, and as the game progressed it just so happened that many of the proposed audio assets were not needed. This gave us room to use slightly higher quality audio files. In addition, working with people who had a clear vision of what they wanted really helped me develop the appropriate audio in a timely manner.

Specifically, the SFX were bounced at 16-bit/22 kHz (mono .wav) & the Music at 16-bit/44.1 kHz (stereo .wav), then compressed in Unity. Not absolutely ideal, but still pretty good and ultimately effective for the game. Also, there are issues when looping MP3s, so fortunately (with a little tweaking from the Programmer) we were able to use features in Unity to create seamlessly looping music with .wav files.
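To see why compressing in Unity matters against a roughly 3mb audio budget, here’s a quick back-of-the-envelope calculation of the uncompressed sizes at those bounce settings (illustrative Python; the durations are placeholders, not the actual asset lengths):

    def wav_size_mb(seconds, sample_rate, channels, bit_depth=16):
        """Uncompressed PCM size: rate * channels * bytes-per-sample * duration."""
        return seconds * sample_rate * channels * (bit_depth // 8) / (1024 * 1024)

    # A 30-second stereo music loop at 16-bit/44.1 kHz:
    print(round(wav_size_mb(30, 44_100, 2), 2))   # ~5.05 MB uncompressed
    # A 1-second mono SFX at 16-bit/22 kHz:
    print(round(wav_size_mb(1, 22_050, 1), 3))    # ~0.042 MB uncompressed

A single uncompressed 30-second stereo music loop is already larger than the entire audio budget, which is why the .wav bounces get compressed on import.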

II. Creating the Appropriate ‘Feel’


This is always a tricky one for anyone working in audio for media, as the sound/music really sells the visual. There are a million different directions to go in and you have to be aware of how other people perceive sound (more detail on this in III). With that said, the artwork for ‘Bag It’ was fantastic and really helped me when developing the appropriate mood with the music.

Fortunately, the CCO I worked with directly already had a good idea of the feel of the music they wanted (even some instrumentation too!). So building the rough foundation was rather easy... especially considering that one of the reasons they decided to work with me was the spec demo I submitted. So identifying the elements we wanted from my piece and from other music in a similar style was quite painless.




After collecting all of the appropriate instrument colors for our foundation (Pizzicato Strings, Acoustic Guitar, Piano Synth, French Horn, & Light Percussion), it was time to discuss the role our music was to fill in this game. ‘Bag It!’ is a challenging but light-hearted game driven by unique characters. We knew that the SFX would help identify the characters & realized quickly that the music should play a supporting role during game play. However, during the menu we needed the music to be a bit more active & engaging... thus the more liberal usage of melody there.

Furthermore, although we backed off on our usage of melody in the game play music, we decided that it should be semi-interactive. Realizing the constraints of our platform & the feel of the game, we decided on a simple approach that allowed the music to develop ‘organically.’ Essentially it is one single piece of music (30s long); however, we don’t hear the ‘big picture’ until the 2nd layer comes in. So, about 30s into playing (by then the pace has picked up a bit) you’ll hear a more active & engaging piece of music.

However, I would also like to note that our original intent was to include 3 different layers in our game play music system. Unfortunately, this would have put us over the 20mb limit, so we restricted it to 2. Perhaps this will change as more downloadable content is made available.
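For illustration, here’s a minimal, engine-agnostic sketch of the kind of two-layer behavior described above (plain Python; the timing, fade length, and names are placeholders rather than the actual ‘Bag It!’ implementation):

    LAYER_IN_TIME = 30.0   # seconds of play before the second layer is heard

    class LayeredLoop:
        """One short loop played on two synchronized layers; layer 2's volume is
        raised once the player has been in the game state long enough."""

        def __init__(self):
            self.elapsed = 0.0
            self.layer_volumes = [1.0, 0.0]   # layer 1 audible, layer 2 muted

        def update(self, dt):
            self.elapsed += dt
            if self.elapsed >= LAYER_IN_TIME:
                # Fade layer 2 in over a couple of seconds instead of snapping on.
                self.layer_volumes[1] = min(1.0, self.layer_volumes[1] + dt / 2.0)

A third layer would simply be another entry in the volume list, which is exactly the addition the 20mb limit ruled out for now.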

I recommend buying the game to hear the ‘music system’ in action (admitting my obvious bias, it’s also quite fun); however, I’ll also supply a link below to our trailer, which happens to feature a decent chunk of the music used in the game.




Bag It! Game trailer video at YouTube


III. Being Aware of Other People’s Perceptions of Music


Iterations are a part of life and business in every corner of the world. Trying new things, developing new ideas... it’s how we grow. With that said, I highly recommend having multiple iterations be a part of your initial bid/contract as they’re inevitable and you’ll thank yourself later.

Previously I mentioned the importance of being aware of how other people perceive sound. This is especially important when working with producers who are extremely involved in the process and enjoy experimenting.




On this project I was fortunate to work with a CCO who not only had a good idea of what he wanted, but also knew how to speak music (or could at least express the ideas he couldn’t put into musical terms). However, there were a few cases of miscommunication based purely on our different perceptions of music.

In one such case we were having an issue identifying elements of our main theme that sounded a little too ‘childish’ for the feel we were going for. Since I was creating the tracks based almost purely on our references, it was hard for me to identify what our testers considered ‘childish,’ as that critique had not come up before when reviewing any of those reference tracks.

Eventually, after a little discussion back and forth, we found the culprit in a Pizz. String Harmony and octave doubling with Glockenspiel on the second pass through the theme. It seems so obvious to me now that I look back at it, but why was it difficult at the time?

Well, the reference tracks included Glock/Pizz. String Harmonies, but not that high in their register & not so exposed. The solution we found was to continue playing the main theme in its original range on the Piano Synth, but with Arco String accompaniment, a Guitar doubling underneath, and no Glockenspiel up top on the 2nd pass through. This helped move the piece forward while achieving our goal of keeping the piece light-hearted and playful, but mature.




As mentioned in the italics at the beginning of this article, this is by no means a complete list and I’m still a young professional with many ups/downs ahead in my career, but nevertheless I believe this information can be beneficial to many composers no matter their experience level. Thanks for reading and keep composing, fellow artists.