Friday, December 19, 2014

Pruning the catalogue - cleaning out some older tracks today

Hi all. We're doing a bit of Christmas cleaning. Today we're saying goodbye to the following older tracks. As always, rest assured that we're adding at least twice as much new music as we're removing older music, so our catalogue is still growing, even though we're pruning it a bit from time to time. Here are the tracks that we are de-listing today. If you need to license one of these tracks, please contact us.

  • A Brand New World
  • A Night In Paris
  • Afrindia Drumfest
  • Afternight
  • Around The World
  • Autumn Skies
  • Bahia Blue
  • Band Of Gypsies
  • Beer Bottle Blues
  • Body Language
  • Bold Strategies
  • Bollywood Bounce
  • Bottleneck Blues
  • Bouncing Sound
  • Break Trance
  • By The River
  • Calling Houston
  • Cappucino Tequila
  • Celestial Spirit
  • Chicken Boogie
  • Child Of Calcutta
  • Cruise Control (ZiS)
  • Dance Of Life
  • Deep Blue
  • Deep Echo
  • Deep India
  • Deep Indigo
  • Deep Sorrow
  • Down Home Funk
  • Dreaming Fields
  • Drifting Along
  • Easy As That
  • Easy Midnight
  • eKustic
  • El Salvador
  • Ethetes
  • Extensive Research
  • Form And Function
  • Friend Or Funk
  • Funk Delight
  • Funk Proof
  • Funktastic
  • Gentle Rain
  • Groove Digger
  • Hard Power
  • Havana Royale
  • Hearts And Minds
  • High Noon Showdown
  • Himalayan Passage
  • Humble Beginnings
  • India Crossing
  • Innertech
  • Invitation
  • Joyous Jewel
  • Last Minute
  • Latin Kiss
  • Latin Lament
  • Lifestyles (D Saric)
  • Lights Out
  • Like Minds
  • Love Song (D Saric)
  • Lovers Touch
  • Malibu Bingo
  • Marimba Mood
  • Meet the Beat (ZiS)
  • Midsummer Beauty
  • Mirrorball
  • Mmm Hmm Oh Yeah
  • Morning Sunlight
  • My Arabian Girl
  • My Blue Truck
  • New Commerce
  • No Going Back
  • Noble
  • On Schedule
  • On The Strip
  • Oudo Flutu
  • Outside In
  • Punjabi Wedding
  • Rasta Africa
  • Reaching the Stars - D Saric
  • Rebirth (D Saric)
  • Relaxed Suit
  • Reverberation
  • Riverbed
  • Road to Nashville (ZiS)
  • Romantic Sonata
  • Roughneck Strut
  • Russian Hoedown
  • Said & Done
  • Searching - D Saric
  • Second Date
  • Secret Moment
  • Secrets Of India
  • Sitar Meditation
  • Sitar Star
  • Sitcom Samba
  • Skys The Limit (Zis)
  • Slide
  • Smiling
  • Sock It To Ya
  • Soul Breakfast
  • Spooky
  • Stay On Track
  • Stratosphere - D Saric
  • Streethopper
  • Summer Dream
  • Sunrise in the Sahara
  • Sunshine (D Saric)
  • Sweet Snarl
  • Sweet Sunday
  • Swingin Time
  • Tabla Manners
  • Techno Mash-Up
  • The Corporate World
  • The Sneak
  • The Storm (ZiS)
  • The Winner
  • Tiki Party
  • Tomorrows Endeavors
  • Unified Theory
  • Up And Coming
  • Waltz For Josie
  • Week Review
  • World Beat Jam
  • You Cant Keep Me Down

    Thursday, November 20, 2014

    Country music focus at Shockwave-Sound.com

    There is never a bad day for a bit of country music, and at Shockwave-Sound.com we are lucky and privileged to be able to work with some fine country artists, making their music available to license for Apps and Games, TV and Commercials, Videos and Internet etc. We have just been adding some new country music to the site today, and at the time of writing, we have a catalog of:

    * 91 Vocal country tracks:
    http://www.shockwave-sound.com/genre/259
    * 332 Instrumental country tracks:
    http://www.shockwave-sound.com/genre/42

    Tuesday, November 11, 2014

    Tips and curiosities from computer games music, part 2


    By: Piotr Koczewski




    Polish - Lithuanian Connection


    Now I would like to share some of my personal experience, summarizing in a few paragraphs my achievements as a composer (I hope it may serve as an inspiration for some of you). In May I had the pleasure of giving a presentation on music making and sound to Lithuanian students. I started by playing live some of the music from a Modern Warfare 2 trailer. Then I showed the notation for the winds section of my own piece “Dust of War”. Having finished the piece, I was astounded to get a round of applause from the audience. It was quite a surprise. Well, I was so tired that I seriously considered the possibility that it was merely a hallucination. Apart from that, I did not really expect the piece I presented to be inspiring for people of my generation. Then I created a 30-second tune from scratch in about 5 minutes, explaining all the time what instruments I was playing. I got another round of applause when I saved the music as an mp3 file and replayed it. I also answered some questions, for example about how long I had been composing, or whether I had had any formal training (I said I had not, which got me another round of applause). I finished with an animation by Aleksander Wasilewski set to my music (which was received with laughter rather than fear).


    Piotr Koczewski and Latvian Translator


    My Polish connection came during a business trip to Wroclaw (I was negotiating with an investor about an MMORPG), which I used as an occasion to meet my fellow composer Piotr Szwach. We spent time until 5 am composing war game music, discussing equipment, music and computers, and exchanging experience. I think both of us learned something new.


    From Independent projects to AAA games


    If you ever get an e-mail asking you to make a sample of music for an “AAA” game teaser, enjoy the very fact that you were contacted (even if they have to postpone publishing the teaser trailer for technical reasons), because a lot of musicians dream of cooperating with big companies that work on large-scale projects.

    At the beginning, I treated my adventure with music as a hobby, something like Sunday fishing at my favourite lake. After some time of distributing my tracks to friends, I noticed that they liked my work; what’s more, one of them persuaded me to create my own album (to my own surprise it got high ratings and good reviews). The initial problem with creating music was: who will listen to it? And at some point you realise that someone is actually listening to your music – and, what is more, they are willing to pay for it!

    Take advantage of every day to learn something new. After finishing work, recall your music. If you can do it and you remember it – congratulations! It’s one of the major keys to success in video game music creation.

    However, before you decide that your work is a hundred percent finished and ready, listen to it a few more times. If you don’t have any objections after that, you can send it to your publisher or boss. Usually, the day after recording you can hear your shortcomings – it’s a normal sign of creativity; all it means is that you strive for perfection!

    Learn what ASAP is!


    If your employer expects you to do a music track, and in the conversation, or an e-mail, he uses the acronym ASAP (as soon as possible, in case you did not know), then focus on the recording (even if the mess in your kitchen resembles that from the movie “7” and there is a family meeting tomorrow). By sending even a sample of the tune (1 minute) you will calm the nerves of your boss, who wants nothing more than to yell "Jetson, you're fired!".

    The Power of Marketing


    It is a good idea to appear at expos, conferences and meetings related to game development, presenting yourself and handing out business cards or demo CDs (in my case, several free singles from my Wasteland Theme album increased the interest in me). You should also stay in touch with your old teams, employers and companies, because someday you may receive a call or an e-mail with a job offer. In game development, as a rule, clients work with people they remember and with whom they had no problems during previous cooperation (the more contacts you establish, the greater your chances of employment). It’s highly probable that some past job will result in a contract for another game in the future. Even if you submitted an application and didn’t get the job because they took someone else, it does not mean that they will not call you again in a few months, because they remembered you and your savoir-vivre.

    Be open to constructive criticism (and be critical of yourself). Never be afraid to alter or add some instruments to your track, even if you’ve been working on it for three days. Learn something new every day. Experiment with instruments. Creating the "Little Boy" track, related to the 6th of August 1945 (the nuclear attack on Hiroshima), I was wondering how to capture the character of that event using only an orchestra. When I was choosing suitable instruments, the Wagner bass drum caught my attention (I used the sound of scraping and light, long beats for the sound of the shockwave).

    There is another thing that I discovered while creating the WWII music. French horns playing long notes can, depending on the velocity, imitate the sound of an air raid siren. A few months ago I racked my brains over how James Hannigan created the electronic background effect in "Yoriko Theme". After an hour of experimenting with effects and the rhythm of a shaker, I finally found out how to record something closely resembling the original song. Not only did I find a good way to create background music, but I also discovered how to make the sounds of futuristic computers.

    For about five years I have been devoting myself to an ongoing effort of improving my keyboard skills, expanding my knowledge of orchestral articulation, and continuously developing my technical skills. After so many years of working with music, I finally became a recognizable person in the game industry (but before I achieved my present status, I had gathered experience on amateur projects). Professionals from all around the world write to me and invite me to music expos. One very motivating thing for me was the positive opinion about my music that came from a western musician who creates music for commercials and AAA games. As I already mentioned, I once had the chance to create the music for a teaser trailer of an AAA game. Then, after a few months, I got one more such offer – for the E3 expo, too! By the end of May, I was to create the soundtrack for a teaser trailer for Afterfall Insanity, which meant I had to meet the team in a studio to discuss the music. E3 – this acronym left me sleepless for days. Even though I did not make it in time for the presentation (someone else had to fill in for me), I created a few versions of the music for the teaser trailer, which served as my demo reel.




    The making of Afterfall Insanity Trailer Music


    It is also a very pleasant feeling to receive good reviews of your work from the project manager. I was pleasantly surprised to hear from the head of the Russian game project PostWorld that after their last meeting they decided that they had to have my music at any cost. For an unbelievable moment I felt like I was the new Hans Zimmer. When strangers ask me what I do for a living, I proudly answer: "I create music for computer games" (so far I have not discovered why there is always a slight expression of surprise on their faces).


     Post-World Unity Engine and Gameplay Screenshot

    Piotr Koczewski`s Home Studio


    When I play games I sometimes add my own music in my head. Some ideas I write down as notes, for future use. A few years ago I was supposed to create music for a Space Opera comic. Unfortunately, the piece which I thought was perfect for it was rejected. However, a few years later it found its place in another project.

    Finally, a few pieces of advice from me: listen to music as much as you can. After some time you will start recognizing instruments and be able to place them on the world map. As for the most important piece of advice, which served me well in music design (it changed the way I work) and in life in general – spend money to develop, not to impress (Michael Dell). After a few years of work I try to overcome my own limitations. For example, I created a 13-minute piece (inspired by Modern Warfare 2) for my second Wasteland Theme album. Remember to sign up on portals like LinkedIn, MySpace and ReverbNation, and to create your own homepage and keep it up to date. In the future, in order to protect your copyrights, you should register your tracks with organizations such as PRS (Europe) or similar.

    I hope that my tips will be helpful, and we will meet at the Video Games Live concert this year.

    You may also want to read part 1 of this article.


    Friday, October 31, 2014

    Doing some Fall cleaning, pruning the catalogue of some old tracks

    The good people reading this blog may get the impression that all we do is remove tracks, without adding much new stuff. We do not post on the blog every time we publish new music, because we publish new music all the time: every single week, sometimes several times per week – even multiple batches in a single day – of new, fresh material. We can't post about it here on the blog every time we release new tracks. For each old track that is sent off into the annals of history, about 20 new ones arrive. So it's not like we're downsizing.

    Why do we do this? Because it's central to our mission and our whole way of doing business that our site does not start to "sound old". When we first started out in 2000, there were already some libraries out there with a lot of music that "just sounded old". We refuse to become one of those. So we remove old tracks.

    Today we are saying goodbye to the following tracks that have been in our library for years. We thank them for their service. If you need to license one of these tracks, please contact us. We can set you up.
    • A New Love
    • A Spot of Light Entertainment
    • Afterglow
    • As the World Turns
    • At Leisure
    • Australia Didgeridoo
    • Awaken the Stone Shadows
    • Background One
    • Bat and Pad
    • Beautiful Paradise
    • Blanche Louve
    • Blues Rock Stings
    • Body And Blues
    • Burning Sun
    • Chachechur
    • Cleanse
    • Cruise the Strip
    • DF Sweating
    • Dark Corn
    • Digitale
    • Digitale 2
    • Dinner Groove
    • Dreamland
    • Elektrostep Idents
    • Entrevistas
    • Fast Food
    • Floating
    • Floripa
    • Full Speed
    • Funk Rock Stings
    • Funny Business
    • Gigi
    • Go Ahead
    • Gone
    • Gravitons
    • Guitarra In Bb Minor
    • Hidden Past
    • In Waiting
    • Infiltration
    • Intense
    • Irish Rose
    • Journey
    • Kitchen Garden
    • Kool Krush
    • Le Passage
    • Making a Pledge
    • Message on the Mirror
    • Mind Your Matter
    • Morning Ballad
    • NYC Delivery
    • Naked Blues
    • Nashville Bound
    • New Ideas
    • New Orleans Funk
    • No Trace
    • On the Town
    • Open Seas
    • Panama
    • Pineapple Fizz
    • Ray of Sun
    • Romantica
    • Round Trip
    • Scotland Bagpipes
    • Sensual Noon
    • Sexy and Edgy
    • Shock Her
    • Single Combat
    • Spanish Mood
    • Spanish Reggaeton
    • Streaming From My Heart
    • Surfin the Tube
    • Tender Love
    • The Andalucian Incident
    • The Cat Kladniew
    • The Element 47
    • The Golden Age
    • The Open Road
    • The Unknown Superhero Chase
    • The Y Factor
    • Town Beat
    • Travel
    • Up And Away
    • Waiting Time
    • Wake Up
    • Wallys Place
    • Wolfs
    • X Agent
    • XM Modules by Bjorn Lynne
    • XM Modules by Adam Skorupa

    Tuesday, October 21, 2014

    Orchestral MIDI Arrangements

    The absolute beginner's guide to the Orchestral MIDI Mockup


    By Thomas James Slater

    Like many composers in the more analog field of pencil and paper, I was entirely unaware of the other side of music production. For years I thought that the bleep and bloop of midi, compliments of my music notation software, were all I could get from a computer and the only way to get a decent recording was to have it performed on stage. As I discovered a few years ago, this is most assuredly not the case.

    Rather than give a long history of the leaps and bounds of music production and midi innovation, I'm going to go over the basics of producing a midi mockup in a short amount of time for the least amount of money. This is written chiefly with aesthetic purposes in mind, not to teach you how to navigate any particular software. Now that we know that, let's get down to it, shall we? First on our checklist are what I consider a couple of essentials. You'll need a sequencer (listed below). Ideally you want a program that is specifically good at midi sequencing. While Pro Tools is an audio champion and the industry standard for recording audio, you may find it lacking for midi sequencing.



    Secondly, a midi input device. Usually this is in the form of a keyboard of 49, 61 or 88 keys. Smaller keyboards may be quite limiting for impromptu realization of wider orchestration, so I recommend the larger ones. You don't need anything fancy mind you, no need for a vast array of buttons, triggers, knobs or nozzles. Since these are not synthesizers, you won't need them and the ones with the extra gizmos are going to cost you more.

    Thirdly, the sample software. If you buy Logic, it comes with a library that will be good enough to start with. If you buy something like Digital Performer, then you'll have to buy separate software.

    There are many options to choose from as far as instrument samples go (listed below). They all have their positives and their negatives, and you can find demos of any of them online. For future reference, when talking about midi mockups, "samples" refers to the wavetable patches available in a given library. These are actual instruments that have been recorded and loaded into a particular sample library. Each patch is then triggered by your midi controller at your discretion. If you're new to this and your budget is small because you're a student, just starting out as a freelancer, or a little of both, then this is all you need to get started.

    Now that you're set up in your studio, you should probably take some time to get to know it. Test out all of your samples and note which ones sound especially good in your library. In doing this you get a good feel for a sample library, and a good knowledge of what you can possibly do with it. For instance, when testing out Garritan's jazz & big band sample library, I found I absolutely loved the solo clarinet in it. I wouldn't have known that if I hadn't explored the samples first. Get to know your sequencer too; you can probably get away with knowing only a few key things about it to get started. Among them are velocity control, quantization, and midi limiting.

    Next, what is your goal with the orchestration? Do you intend to use real instruments in the final production of the score? If you are, you probably don't need to be quite as meticulous about your mockup, since it is merely a reference for a director or a producer. If not, then there are a lot of details to pay attention to.

    What we'll focus on now is for all of you who are making the mockup the final score. If you're new to this, you're probably used to sequencing the music in notation software such as Sibelius or Finale. This is a fine method to begin with; I use it for more contrapuntal work to make sure my lines are clear. If you've chosen the import method you may be disappointed at first, but don't fret. All is not lost.

    You may find your mockup 'too precise': articulations and dynamics have suddenly disappeared, or there's an ineffable 'something' that just takes the fun out of the music. This is no surprise, as you've turned a computer performance into another computer performance! Those crescendos that you once had in your notation software may not necessarily have made an appearance in its general midi export. You'll have to draw those in yourself. I use Digital Performer to do this, and in its midi editor I can open the tools menu and draw a parabola for the rising volume. This arc is a more 'human' crescendo than a straight line, in my opinion. A similar option exists in any of the recommended sequencers.
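    If it helps to see the idea outside of any particular editor, here is a minimal sketch in Python of a parabolic crescendo drawn as a series of controller values. The CC target, the value range and the step count are assumptions for illustration, not Digital Performer specifics.

```python
# A minimal sketch of a parabolic crescendo: a curve of controller values
# (0-127) that rises slowly at first and faster at the end, instead of a
# straight-line ramp. All numbers here are illustrative assumptions.

def parabolic_crescendo(start=40, end=110, steps=32):
    """Return a list of CC values rising along a parabola from start to end."""
    values = []
    for i in range(steps):
        t = i / (steps - 1)          # 0.0 .. 1.0 across the crescendo
        curve = t * t                # parabola: slow start, faster finish
        values.append(round(start + (end - start) * curve))
    return values

print(parabolic_crescendo())         # e.g. feed these into CC#11 (expression)
```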


    Your articulations are suddenly gone, aren't they? What was a staccato passage is now a smear of notes. Oh bother. This is where sequencing separates a little bit from notation. For that passage you will need one of two things: 1. a separate staccato patch loaded, or 2. what is called a 'keyswitch' sample. This means that a change of samples is triggered by a key at the far left end of the midi controller keyboard. However, let us assume the former, and you load a staccato version of the instrument into your sampler. Now you have two patches of your instrument loaded up: a legato and a staccato. You can move that staccato passage down to it via the midi editor and hear it play back. This is of course valid for any change, be it from legato to marcato, stopped horn to horn rip, or Bartok pizzicato to harmonics.
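    As a rough illustration of what a keyswitch change amounts to in the underlying midi data, here is a small sketch using plain note tuples. The keyswitch pitch (C0 = note 24) and the passage itself are invented for the example; every library documents its own keyswitch notes.

```python
# A rough illustration of the keyswitch idea using plain note tuples
# (start_beat, midi_note, velocity). The keyswitch pitch and the passage
# are made-up assumptions.

STACCATO_KEYSWITCH = 24   # hypothetical: C0 selects the staccato articulation

passage = [(8.0, 67, 90), (8.5, 69, 92), (9.0, 71, 88)]  # notes to articulate

def add_keyswitch(notes, keyswitch, lead=0.25):
    """Insert a short keyswitch note just before the first note of a passage."""
    first_start = min(start for start, _, _ in notes)
    return [(first_start - lead, keyswitch, 1)] + notes

print(add_keyswitch(passage, STACCATO_KEYSWITCH))
```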

    Now that we have our articulations and dynamics fairly under control, let us move on to the 'human quality' of the performance. If you've imported your midi, you may find that it's 'too robotic'. If you've played in the music via the midi controller, then you may find it's a little 'too human'.

    When I posit 'too human', this means that the timing may seem a bit off after you record with the midi keyboard into the sequencer. Even if you have a fantastic sense of tempo, it will most likely not be as perfect as you like. Latency of even 5ms can make a track lose musicality. The other problem is inconsistent dynamics. If you have a less expensive midi controller, the keys won't be weighted and so it will be difficult to get a steady sense of dynamics.

    Between those two deficiencies, it won't sound remotely professional in that state. That's ok! We can fix that!

    Let's get a hold of those timings first. In your midi editor there is an option to 'quantize' your notes. Quantizing means aligning notes to a particular grid set by the user. This may take some practice to do well. In the quantize settings you control how strongly you want to quantize. The side effects include making short notes too long, ending up with a rhythm you never intended, and/or the passage simply becoming too precise. One last bit of advice on durations: full instrument sections never change notes at exactly the same time. I find it sounds smoother if I have a slight overlap of the notes in the sequencer.
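    For readers who like to see the arithmetic, here is a bare-bones sketch of what a quantize command does to note start times, with a strength control so the result is not completely rigid. The grid size and the "performance" are made-up values.

```python
# Snap note start times (in beats) toward the nearest grid line.
# A grid of 0.25 means sixteenth notes in 4/4; strength=1.0 is fully rigid.

def quantize(starts, grid=0.25, strength=1.0):
    """Pull each note start toward the nearest grid line by 'strength' (0..1)."""
    out = []
    for t in starts:
        nearest = round(t / grid) * grid
        out.append(t + (nearest - t) * strength)
    return out

played = [0.02, 0.27, 0.49, 0.76, 1.01]          # slightly sloppy performance
print(quantize(played, grid=0.25, strength=1.0))  # fully snapped
print(quantize(played, grid=0.25, strength=0.5))  # half-way, keeps some feel
```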




    So if you're happy with your note durations, let us move on to velocity (or dynamics) after recording in through a midi controller. There are a few methods to get a sound you'll like. You may try, most tediously, to change the value of every single note between 0 and 127. I wouldn't suggest doing this until later, and even then only if you really need to. A better idea is to select a passage whose volume you want to change, then find in your midi editor the way to 'limit' the velocity of all the notes in that passage. This method is especially good if the passage you're editing doesn't change volume that much. There's very little fuss. Once again, this is something to try a few times until you get a good feel for how it affects the patches you have loaded. I tend to aim fairly high in volume for this: highest for strings, a little softer than that for woodwinds, and then brass, where velocity typically affects timbre and the instruments naturally stand out more anyway. For crescendos, as I've suggested earlier, a parabola curve is typically available in the midi sequencer of your choice. Don't just take my word for it, experiment!

    As far as midi goes, there is only one step left: freeze! Before you can bounce everything down to a single audio track, you must convert the midi tracks to audio (wavs). If you're happy with your midi mix, go right ahead. Remember though, it's easier to fix individual errors in midi before you bounce the tracks down. This is also the point where you should do any mixing and then mastering (which I highly recommend), but that's another article entirely. There are many more wonderful things you can do with midi mockups, but we'll save those techniques for another day!
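    Going back to the velocity-limiting step for a moment: here is a minimal sketch of the same idea outside any particular editor, clamping every velocity in a selected passage into a target range. The ranges are only examples of the "aim fairly high" guideline above, not fixed rules.

```python
# Clamp every velocity (0-127) in a selection into [low, high] rather than
# editing notes one by one. The example ranges per section are assumptions.

def limit_velocities(velocities, low, high):
    """Clamp each velocity into the [low, high] range."""
    return [max(low, min(high, v)) for v in velocities]

strings = [34, 120, 77, 15, 99]
print(limit_velocities(strings, 90, 115))   # aim fairly high for strings
print(limit_velocities(strings, 70, 100))   # a little softer for woodwinds
```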

    Sound clips MP3:

     

    Recommended Sequencers


    These are what I like from most to least for symphonic mockups only; by no means are they ranked best to worst. It is merely a list of my tastes, and your tastes may differ. For midi I tend to use a Mac, but I do quite like Cubase on PC. Apologies to Linux users, I don't know the software well enough to recommend anything in particular.

    MAC

    PC


    Recommended virtual instruments

     

    There are more out there and I'm sure some are nice, so by all means check around!

    Tuesday, August 19, 2014

    Some old tracks being pruned today

    Unlike most stock music libraries, here at Shockwave-Sound.com we actually remove some tracks. We consider the track's age, its sound, its production, its sales and its genre, and a few times per year we "prune" some oldies that we feel we are replacing with more fresh new material.

    Why do we do this? Because it's central to our mission and our whole way of doing business that our site does not start to "sound old". When we first started out in 2000, there were already some libraries out there with a lot of music that "just sounded old". We refuse to become one of those. So we remove old tracks.

    Keep in mind that we add much more new material than we remove old material, so the actual size of our online catalogue is always increasing.

    We've been doing a little bit of housecleaning, and here are the tracks that we are saying goodbye to today:

    • African Electro Breakbeat
    • All New
    • Blown
    • Blue Rose
    • Born of Fire
    • Bossa Cabana
    • Breathe
    • City Of Loneliness
    • Close Encounter
    • Crab Walk
    • Daft Appliance
    • Electrolite
    • Fragments
    • Grooveroo
    • Guitar Slinger
    • Happy Motion
    • Hardwire
    • Helena
    • Impulse
    • In a Good Mood
    • Infector
    • Insect Planet
    • Iyogin
    • Jazzie Waggle
    • Jazzy O
    • London Calling
    • Made By Man
    • Passive Aggressive
    • Retroactive
    • River Adventure
    • Singularity
    • Sitar Banghra Rock
    • Skydiving
    • Stealth
    • The Extract
    • The Phantom Mirage

    Tuesday, June 10, 2014

    Depth and space in the mix, Part 2

    by Piotr Pacyna

    < Go to part 1 of this article

    So, how to start?


    With a plan. First off, I imagine in my head or on a sheet of paper the placement of individual instruments/musicians on the virtual stage and then think about how to "re-create" this space in my mix. Typically I’d have three areas: foreground, mid-ground and background. Of course, this is not a rule. If we make a raw rock mix with a sparse arrangement and an in-ya-face feel, we don’t need much space; in a dense, multi-layered electronica track, on the other hand, depth is crucial.

    So, I have divided all my instruments into, say, 3 spatial groups. Then, in my DAW, I set the same colour for every instrument belonging to a given group, which is wonderfully handy - I immediately see everything at a glance.

    The tracks that I usually want to have close are drums, bass and vocals. A bit deeper and further back I’d have guitars, piano, strings. And then, in the distant background, I’d have synth textures or perhaps some special vocal effects. If there are string or brass sections in our song, then we first need to learn how the orchestra instruments are placed on stage in order to reproduce it. Of course, this only matters if we are aiming for realism.



    But sometimes we don’t necessarily need the realism, especially in electronic music. Here almost anything goes!

    Going back to our plan...


    Whether we are striving for realism or not, I suggest starting the plan by pinning down which element will be the furthest away - you need to identify the "back wall" of the mix. Let’s assume that in our case it is a synth pad. From this point, any decision about placing instruments closer or farther away has to be based on our back wall.

    At this point we have to decide what reverb we will use. There are basically two ways of thinking. Traditionalists claim that we should use only one reverb in the mix, so as not to give misleading information to the brain. In this case we have the same reverb on each bus (in terms of the algorithm), changing only the settings - especially pre-delay, dry/wet ratio and EQ. Those of a more pragmatic nature believe that it’s not always realism that matters, especially in electronic music, and that only the end result counts. Who is right? Well, they both are.

    I’d usually use two, maybe three different reverb algorithms. The first would be a short Room type of reverb; the second, longer, would be a Plate; and the third and farthest would be a Hall or Church. By using sends from the individual tracks I can easily decide how far or close the instrument will sit on our virtual stage.

    Do not add reverb to every track; the contrast will allow you to enhance dimension and imaging even more. If you leave some tracks dry, the wet ones will stand out.

    Filtering the highs out of our returns not only sinks things back in the stereo field, but also helps to reduce sibilants - reverb tends to spread them out in the space, which is very irritating. An alternative method of getting rid of sibilance from reverb is to use a de-esser on the sends.

    Compression and its role in creating a depth


    Can you use compression to help create depth? Not only can you, but you must!

    As we all know, the basic use of the compressor is to "stabilize" the instrument in the stereo field. It means that the compressor helps to keep the instrument the same distance away from the listener throughout the whole track. To put it in other words - its relative volume is stable. But of course we don’t always need it to be. This is particularly important for instruments placed at the back of the sound stage, because otherwise they will not sound clear. Now, how can compression help us here? As we all know, the unwritten rule says that gain reduction should not exceed 6 dB. This rule works, for instance, for solo vocals. Bigger reduction can indeed "flatten" the sound. Yet this is not necessarily the case when it comes to backing vocals or, generally, instruments playing in the background. Sometimes these get reduced by 10 dB or even more. In a word - everything that is further away from the listener should be compressed more heavily. The results may surprise you!

    There is one more thing I advise you to pay attention to - the two basic detection modes: RMS and Peak. Peak mode "looks" at signal peaks and reduces the signal accordingly. What sound does it give? In general - more squeezed, soft, sometimes even pumping. It’s useful when we want the instrument to pulse softly instead of dazzling the listener with its vivid dynamics. RMS mode causes the compressor to act more like the human ear, not focusing on signal peaks that often have little to do with perceived loudness. This gives a more vibrant, dynamic and natural sound. It works best if our aim is to preserve the natural character of the source (and that’s often the case, for example, with vocals). RMS mode gives a lively, more open sound, good for pushing things to the front of our sound stage.
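    If you want to see why the two detection modes react so differently, here is a tiny illustration in Python (not any particular compressor's algorithm): a brief spike dominates the peak reading but barely moves the RMS reading, which sits much closer to perceived loudness.

```python
# Peak vs RMS measurement of the same signal: quiet steady material with
# one short transient spike. The signal itself is an invented example.

import numpy as np

signal = np.full(1000, 0.1)      # quiet, steady material
signal[500:505] = 0.9            # one brief spike (a transient)

peak = np.max(np.abs(signal))
rms = np.sqrt(np.mean(signal ** 2))

print(f"peak level: {20 * np.log10(peak):.1f} dBFS")   # reacts to the spike
print(f"rms level:  {20 * np.log10(rms):.1f} dBFS")    # barely notices it
```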

    The interesting fact is that built-in channel compressors in SSL consoles are instantly switchable between Peak and RMS modes. You can find something similar in the free TDR Feedback Compressor from Tokyo Dawn Records.


    Delay


    Another very popular effect is delay. It is, one might say, a very primitive form of reverb (as reverb is nothing more than a series of very quick reflections).

    As you may remember from the earlier part of this article, I mentioned the pre-delay parameter in reverb. You can use it in pretty much the same way in a delay plugin to create a sense of depth in the mix. Shorter pre-delay times will make instruments sound further away from the listener; longer times will do the opposite. But you can of course use delay in many different ways. For instance, very short reflection times with no feedback can also thicken and fatten the sound nicely. Try it!
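    For the curious, here is a minimal feedback delay line in Python, just to make the mechanics concrete. The delay time, feedback amount and wet level are arbitrary example values, not a recommendation for any particular mix.

```python
# A classic feedback delay: the output is the dry signal plus a delayed copy
# that is fed back into the delay buffer with some attenuation.

import numpy as np

def feedback_delay(x, delay_samples, feedback=0.4, wet=0.5):
    out = x.astype(float).copy()
    buf = np.zeros(delay_samples)          # circular delay buffer
    for n in range(len(x)):
        delayed = buf[n % delay_samples]   # what went in delay_samples ago
        out[n] += wet * delayed
        buf[n % delay_samples] = x[n] + feedback * delayed
    return out

sr = 44100
click = np.zeros(sr); click[0] = 1.0       # a single impulse
echoed = feedback_delay(click, delay_samples=int(0.25 * sr))  # 250 ms delay
```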

    The thing I like the most about delay is that it gives the mix a certain context of space. Music played in an anechoic chamber would sound really odd to us, as we have heard all sounds in a context since birth (and the situation is of course no different with music). No matter whether you listen to a garage band, a concert at a stadium or in a club - the context of the place is essential to appreciating the space in which the music is playing.

    Now, how to use all this knowledge in practice


    And now I will show you how I use all of this information in practice, step-by-step.

    1. The choice of reverbs.


    As I said before, the first thing we have to consider is whether we are aiming for realism or not.

    I always struggle when it comes to reverb. Like, what are the best settings for which instrument or sample? Should I use a Hall or a Plate? Should I use it on an aux or as an insert? Should I EQ after or before the reverb, etc.? I don’t know why, but reverb seems to be the hardest thing for me to understand, and I wish it were not.

    And then comes another big question. How much reverb should be applied to certain tracks? All decisions made during the mixing process are based on what makes me feel good. One good piece of advice is to try monitoring your mix in mono while setting reverb levels. Usually, if I can hear a touch of it in mono it will be about right in stereo. If I get it too light in stereo, the mix will sound too dry in mono. Also - concentrate on how close or distant the reverberated track sounds in the context of the mix, not on how soft or loud the reverb is (a different perspective).

    2. Creating the aux tracks including different reverb types.


    3. Organizing the tracks into different coloured groups.

     


    At the top of the session I have a grey coloured group - these are the instruments that I want to have really close and more or less dry: kick, bass, hihats, snare, various percussion loops. I have Room reverb going on here, but it is to be felt, not heard.

    Then I have the blue group. These are the "second front" instruments with Hall or Plate type reverb on them.

    And then I have the background instruments, the back wall of my mix. Everything that is here is meant to be very distant: synth texture, vocal samples and occasional piano notes.

    4. Pre-delays, rolling off the top, the bottom, 300Hz and 4500 Hz.


    My example configuration would look like this (a short sketch after the list shows how these note values translate into milliseconds at a given tempo):

    • Room: 1/64 note or 1/128 note pre-delay, HPF rolling off from 200 Hz, LPF from 9 kHz
    • Plate: 1/32 note or 1/64 note pre-delay, HPF rolling off from 300 Hz, LPF from 7 kHz,
    • Hall: no pre-delay, HPF rolling off from 350 Hz, lowpassing is usually quite low, in the 4k - 5k zone (remember the air absorbs high frequencies much more than it absorbs lower ones).
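    Since the pre-delays above are given as note values, here is the simple conversion to milliseconds at a given tempo; the 120 BPM figure is just an example.

```python
# Convert a note value (1/denominator) into milliseconds at a given tempo.

def note_value_to_ms(denominator, bpm):
    """Length of a 1/denominator note in milliseconds at the given tempo."""
    quarter_ms = 60000.0 / bpm            # one beat (quarter note) in ms
    return quarter_ms * 4 / denominator   # scale from a whole note

for d in (32, 64, 128):
    print(f"1/{d} note at 120 BPM = {note_value_to_ms(d, 120):.2f} ms")
# 1/32 = 62.50 ms, 1/64 = 31.25 ms, 1/128 = 15.63 ms
```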

    5. Transients


    The distance eats transients. It attenuates the direct sound, the first arrival of the initial transient, while the reverberation picks up and amplifies the steady, tonal part of the sound. The distant sound is much less transient-laden, far smoother, far more legato, far less staccato, less "bangy" and "crunchy" than close-up sound. It is also harder to understand words at a distance. That’s why I often compress the longest reverb to flatten or get rid of transients. Set a fast attack if you want less of a transient at the start and the parts to be squashed more. I also use a transient designer (such as the freeware FLUX Bittersweet) and move the knob anticlockwise to soften the attack a little.

    [Example.mp3]
    • Foreground: drums, percussion, bass and saxophone.
    • Mid-ground: piano, acoustic guitar.
    • Background: synth pad, female voice.

    Summary


    For a long time I had a tendency to put way too much reverb on everything. You know, I thought I would get a sense of depth and space this way, but I was so wrong… Now I know that if we want one track to sound distant, another must be very close. The same goes for volume and every other aspect of the mix - to make one track sound loud, others need to be soft, and so on.

    There are some more sophisticated methods that I haven’t tried myself yet. Like a smart use of compression for instance. Michael Brauer once said: "I’m using a lot of different sounding compressors to give the record depth and to bring out the natural room reverbs of the instruments".

    Some people also get nice results by playing around with the Early Reflections parameter in reverb. The closer a sound source is to boundaries or large reflective objects within an acoustic space, the stronger the early reflections become.

    Contrast and moderation - I want you to leave with these two words, and I wish you all successful experimenting!


    Monday, June 9, 2014

    Depth and space in the mix, Part 1

    by Piotr Pacyna

    "When some things are harder to hear and others very clearly, it gives you an idea of depth." - Mouse On Mars

    There are a few things that immediately allow one to distinguish between an amateur and a professional mix. One of them is depth. Depth relates to the perceived distance of each instrument from the listener. In amateur mixes there is often no depth at all; you can hardly say which instruments are in the foreground and which are in the background, simply because all of them seem to be the same distance from the listener. Everything is flat. Professional tracks, in turn, reveal careful attention to precisely positioning individual instruments in a virtual space: some of them appear close to the listener’s ear, whilst others hide more in the background.


    For a long time people kept telling me that there was no space in my mixes. I was like: guys, what are you talking about, I use reverbs and delays, can’t you hear that?! However, they were right. At the time I had problems understanding the difference between using reverb and creating a space. Serious problems. The truth is - everyone uses reverbs and delays, but only the best are able to make the mix sound three-dimensional. The first dimension is of course the panorama - left/right spread, that is. The second one is up/down spread, achieved by proper frequency distribution and EQ. The third dimension is depth. And this is what this text is going to be about.

    There are three main elements that help to build the depth.

    1. Volume level


    The first, most obvious and pretty much self-explanatory one is the volume level of each instrument track. The way it relates to the others allows us to determine the distance of the sound source. When a sound is coming from a distance, its intensity is necessarily smaller. It is widely accepted that every time you double the distance, the signal level drops by about 6 dB. Similarly, the closer you get to the sound, the louder it appears.
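    If you like to check such rules with numbers, the 6 dB figure follows from the inverse-square law for a point source in free field, and a one-line calculation is enough:

```python
# Level change when moving from a reference distance d_ref to distance d,
# for an idealised point source in free field: 20 * log10(d_ref / d).

import math

def level_change_db(d_ref, d):
    """Change in level (dB) when the listener moves from d_ref to d metres."""
    return 20 * math.log10(d_ref / d)

print(round(level_change_db(1, 2), 2))   # -6.02 dB  (twice as far)
print(round(level_change_db(1, 4), 2))   # -12.04 dB (four times as far)
```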

    It is a very important issue that often gets forgotten...

    2. Time the reflected signal needs to reach our ears


    The second is the time taken by the reflected signal to reach our ears. As you all know, in every room we hear a direct signal and one or more reflected signals. If the time between these two signals is less than 25-30 ms, the first part of the signal gives us a clue as to the direction of the sound source. If this difference increases to about 35 ms or more, the second part of the signal is recognized by our ears (and brain) as a separate echo.

    So, how to use it in practice?


    Because pan knobs only move from left to right, it's easy to fall into the trap of habitually setting everything up in the same, dull and obvious way - drums here, piano there, the keys here... as if the music were played in a straight line from one side to the other. And we all know that is not the case. When we are at a concert we are able to hear a certain depth, a "multidimensionality", quite brilliantly. It is not hard for us to say, even without looking at the stage, that the drummer is located at the back, the guitarist slightly closer on the left side, and the singer in the middle, at the front.



    And although the relative loudness of the instruments is of great importance for creating a realistic scene, it’s the time the signal needs to reach our ears that really matters here. These very tiny delays between certain elements of the mix get translated by our brain into meaningful information about the position of a sound in space. As we know, sound travels at a speed of approximately 30 cm per millisecond. So if we assume that in the case of our band the snare drum is positioned 1.5 m behind the guitar amps, the snare sound reaches us about 5 ms later than the signal from the amplifier.
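    Here is the same back-of-the-envelope arithmetic in code, if you prefer it that way. Note that 30 cm per millisecond is a round figure; the exact speed of sound (about 343 m/s) gives roughly 4.4 ms for 1.5 m rather than 5 ms.

```python
# Convert a distance on the virtual stage into an arrival-time difference.

def distance_to_delay_ms(metres, speed_of_sound=343.0):
    """Extra time (ms) sound needs to travel 'metres' at the given speed."""
    return metres / speed_of_sound * 1000.0

print(f"{distance_to_delay_ms(1.5):.1f} ms")   # ~4.4 ms for a snare 1.5 m back
```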

    Let's say that we want to make the drums sound like they are standing at the back of the stage, near the back wall. How do we do that? When setting reverb parameters, remember to pay attention to 'pre-delay'. This parameter allows us to add a short delay between the direct signal and the reflected signal. It separates the two signals, so we can manipulate the time after which we hear the echo. It’s an extremely powerful tool for creating a scene. A shorter pre-delay means that the reflected signal will be heard almost immediately after the direct signal; the direct and the reflected signal will hit our ears almost at the same time. A longer pre-delay, on the other hand, moves the direct signal away from the reflective surface (in this case the rear wall). If we set a short pre-delay of a few milliseconds on the snare, a longer one on the guitar and an even longer one on the vocals, it is fairly easy to hear the differences. Vocals with a long pre-delay sound a lot closer than the snare drum.

    We can also play around with pre-delay when we want to get a real, natural piano sound. Let’s say we place the piano on the left side of our imaginary stage. When sending it to a stereo reverb, let’s try to set a shorter pre-delay for the left channel of the reverb, because in reality the signal would bounce back from the left side of the stage (from the side wall) first.

    [Pre-delays.mp3]

    First we have a dry signal. Then we are in a (quite big) room, close to the drummer. And then we are in the same room again, but this time the drummer is located by the wall, far from us.

    3. High frequency content


    The third is the high-frequency content of the signal. Imagine that you are walking towards an open-air concert or a pub with live music. Which frequencies do you hear most of all? The lowest, of course. The closer we get to the source of the music, the less dominant the bass becomes. This allows us to conclude that the fewer high frequencies we hear, the further away the sound source is; hence a fairly common practice for moving an instrument into the background is a gentle roll-off of the high frequencies (rather than a bass boost) with an LPF (low-pass filter).



    I often like to additionally filter the returns of reverbs or delays - the reflections seem to be more distant this way, deepening the mix even more.

    Speaking of frequency ranges, we should also pay attention to the frequencies somewhere around 4-5 kHz. Boosting them can "bring the signal up" towards the listener. Rolling them off will of course have the opposite effect.

    "It is totally important when producing music that certain elements are a bit more in the background, a bit disguised. It is easiest to do that with a reverb or something similar. In doing that, other elements are more in focus. When everything is dry, in the foreground, it all has the same weight. When some things are harder to hear and others very clearly, it gives you an sense of depth. And you can vary that. That is what makes producing music interesting for us. To create this depth, images or spaces repeatedly. Where, when hearing it, you think wow, that is opening up so much and the next moment it is so close again. And some times both at the same time. It is like watching... you get the feeling you need to read just the lense. What is foreground and background, what is the melody, what is the rhythm, what is noise and what is pleasant. And we try to juxtapose that over and over again." (Mouse on Mars)

    Problematic Band


    All modern pop music has one thing in common: it is recorded at close range, using directional microphones. Yes, you’re right, that's typical near-field recording. This is the way most instruments are recorded, even those you don’t normally put your ear right up to - bass drum, toms, snare, hihat, piano (does anyone put their head inside the piano to listen to music?), trumpet, vocals... And yet even the musicians playing these instruments (and certainly the listeners!) hear them from a certain distance - that's the first thing. And second - the majority of studio and stage microphones are cardioid, directional, close-up mikes. Okay, these two things are quite obvious, but you may be wondering what the result is. It turns out that we record everything with the proximity effect printed onto the tracks! Literally everything. In short, the idea is that directional microphones pick up a lot of the wanted sound and are much less sensitive to background noise, so the microphone must handle loud sounds without misbehaving, but doesn't need exceptional sensitivity or very low self-noise. If the microphone gets very close to the sound source - within ten or so microphone diameters - there is a tendency to over-emphasise bass frequencies, but between this limit and a maximum of about 100 cm, the frequency response is usually excellent.

    Tracks with the proximity effect printed on them sound anything but natural.

    Everyone has got used to it, and even to the musicians their instruments sound okay recorded from a close distance. What does this mean? That all of our music has a redundant frequency hump around 300 Hz. Some say that it’s rather 250 Hz, others 400 Hz - but it’s more or less there, and it can be concluded that almost every mix would benefit from taking a few dB off (with a rather broad Q) in the low-mids.

    Rolling off these frequencies will make the track sound more "real" in some way, and it’s also something commonly done on the mix bus. The mix gets cleaned up immediately, loses its muddiness, and despite the lower volume level it sounds louder. The low mids appear not to contain much important musical information.

    And this problem affects not only music recorded live - samples and sounds are all produced the way customers want them, which means they are "compatible" with the sound of close microphones. So it is worth getting familiar with this issue even if you only produce electronic music.

    The bottom line is: if you want to move the instrument to the back - roll off the freq’s around 300 Hz. If you want to get it closer, simply add some extra energy in this range.
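    If you want to experiment with this outside a DAW, here is a sketch of the kind of cut described above: a broad, gentle peaking-EQ dip around 300 Hz built from the standard "audio EQ cookbook" biquad formulas. The frequency, gain and Q values are the article's ballpark figures, not fixed rules.

```python
# A peaking-EQ biquad (RBJ cookbook) used to take a few dB off around 300 Hz.

import numpy as np
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q):
    """Return (b, a) biquad coefficients for a peaking EQ."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
    a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
    return b / a[0], a / a[0]

fs = 44100
b, a = peaking_eq(fs, f0=300.0, gain_db=-3.0, q=0.7)   # broad, gentle dip
noise = np.random.randn(fs)                             # stand-in for a mix
less_muddy = lfilter(b, a, noise)
```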

    Continue to part 2 of this article >


    Sunday, June 1, 2014

    Cleaning up noisy dialogue: Get rid of background noise and improve sound quality of voice recordings

    Tools and techniques for removing unwanted noise from vocal recordings


    by Richie Nieto

    One of the biggest differences between film and documentary sound versus animation and video game sound is that, usually, in films and documentaries, the recording environments are not fully controlled and often chaotic. When shooting a scene in the middle of a busy street intersection, for instance, the recorded audio will contain much more than just the voice of the subjects being filmed. Even in a closed filming environment in a “quiet” set, there is a lot of ambient noise that will end up in the dialogue tracks.



    Traditionally, in the film world, dialogue lines that are unusable due to poor sound quality are replaced in a recording studio, in a process called ADR (Automated Dialogue Replacement). Documentaries don’t have the same luxury, as interview answers are not scripted and the interviewed subjects are not actors (and wouldn’t easily be able to duplicate their own previously spoken words accurately in the studio). There are also budget limitations – ADR is expensive, and most small-budget productions can’t afford to replace every line that needs it.

    So, the next best option is to clean up what is already recorded, as best as we can. I’ll explain some of the tips and techniques to ensure that you get the most out of the material you have. As a brief disclaimer, keep in mind that some of these techniques are divided up between dialogue editors and re-recording mixers on most professional-level projects, so if you’re not mixing, make sure to consult with your mixer before doing any kind of processing.

    As an example of bad-sounding dialogue, we have the following clip:


    [702_RN_NoisyClip_01.mp3]


    This clip has a number of problems (aside from the poor performance by yours truly). There is hum and hiss in the background and, due to improper microphone technique, there are loud pops from air hitting the diaphragm too hard, and the voice sounds very boomy. This would be an immediate candidate for ADR. However, we will assume the budget doesn’t allow for it to be replaced, or the actor is not available. I’ve had situations where the actor just doesn’t want to come into the studio to do ADR, even though it’s in their contract, and no amount of legal threats will convince them otherwise, so the only course of action in those cases has been to make the bad-sounding lines good enough to pass a network’s quality control.

    Okay, so let’s get to it. The first step is to filter out some of the boominess with an EQ plugin. For this example, I’m just using one of the stock plug-ins in ProTools. All of the processing here is file-based, as opposed to real time processing, mostly to be able to show how each step affects the clip. 


    By listening and a bit of experimenting, we can hear that there is a lot of bass around 100 Hz in our audio file. Here’s how the clip sounds after removing some of the offending low frequency content:

    [702_RN_NoisyClip_02.mp3]
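    That step was done with a stock EQ plug-in, but if you want to prototype the same kind of low-frequency cleanup outside a DAW, a simple high-pass filter around 100 Hz is a reasonable stand-in. The sketch below uses SciPy and the soundfile library; the file names are hypothetical.

```python
# Remove low-frequency boominess from a dialogue recording with a gentle
# Butterworth high-pass around 100 Hz. File names are placeholders.

import soundfile as sf
from scipy.signal import butter, sosfilt

audio, sr = sf.read("noisy_dialogue.wav")          # hypothetical input file
sos = butter(2, 100.0, btype="highpass", fs=sr, output="sos")
cleaned = sosfilt(sos, audio, axis=0)
sf.write("dialogue_highpassed.wav", cleaned, sr)
```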

    Next, we’ll use a noise reduction plug-in to get rid of some of the constant background noise. There are plenty of other options in the market, but I’ll use Waves’ X-Noise for this example. The trick here is to not go overboard; if you start to hear the voice breaking down and getting “phasey”-sounding, you need to pull back on the Threshold and the Reduction parameters. You won’t get rid of all the noise with this step, but I find it yields better results to use moderate amounts of processing in different stages instead of trying to cure the problem by using a single tool.



    After having the plug-in “learn” the noise and then adjusting the Threshold, Reduction, Attack and Release parameters, we process the clip, which now will sound like this:

    [702_RN_NoisyClip_03.mp3]

    There is still a fair bit of background noise in there, so now we’re going to use a multiband compressor/expander to deal with it. In this particular case I’ll use Waves’ C4, but, once again, there are many equivalent plug-ins to choose from. I am just very familiar with the C4 and how it behaves with different kinds of sounds.



    We need to set the parameters for expansion, which does the exact opposite of compression: it makes quiet things quieter. That’s why we apply it after the noise reduction plug-in, so that the noise level is much lower than the voice when it goes through the expander. A normal single-band expander will not work as well because the noise lives in different areas of the frequency spectrum, and those need to be addressed independently with different amounts of expansion.
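    To make the expansion idea concrete, here is a tiny single-band sketch of the gain computation (the processing described above is multiband, and the threshold and ratio here are invented example values): below the threshold, the level is pushed down further according to the ratio, so residual noise that is already quieter than the voice gets quieter still.

```python
# Downward expansion in one band: extra attenuation below a threshold.

def expander_gain_db(level_db, threshold_db=-45.0, ratio=2.0):
    """Extra attenuation (dB) applied to a signal at 'level_db'."""
    if level_db >= threshold_db:
        return 0.0                                   # above threshold: untouched
    return (level_db - threshold_db) * (ratio - 1.0)

print(expander_gain_db(-30.0))   #  0.0 dB  (voice stays as it is)
print(expander_gain_db(-60.0))   # -15.0 dB (noise floor pushed further down)
```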

    Now there is a vast improvement on the noise level on the clip, as we can hear:

    [702_RN_NoisyClip_04.mp3]

    Okay! The following step is to tackle the pops caused by the microphone’s diaphragm being slammed hard by the air coming out of my mouth. Obviously, this is a problem caused by bad planning, and it is replicated here to illustrate a very common mistake in recording voiceovers. In this case, the most offending pops are in the words “demonstrate” and “process”.

    A solution that has worked really well for me many times is actually very simple. It involves three quick steps. First, select the part of the clip that contains the pop, and be sure to include a good portion of the adjacent audio before and after the pop in the selection.





    Use an EQ to filter out most of the low end of that selection. This will automatically create a new region in the middle of the original one.



    Then crossfade the resulting regions to eliminate any clicks and to smooth the transitions. You will need to experiment with the crossfades’ proper positions and lengths, and the exact frequency and the amount of low end content to be removed, based on the severity of the pop.
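    For anyone who prefers to script this sort of repair, here is a rough numpy/SciPy version of the same three manual steps: isolate a region around the pop, low-cut just that region, and crossfade it back into the original so there are no clicks at the edges. The cutoff, fade length and region position are all assumptions for illustration.

```python
# De-pop a region of a voice recording: high-pass only that region, then
# crossfade it back into the untouched audio on both sides.

import numpy as np
from scipy.signal import butter, sosfilt

def fix_pop(audio, sr, start, end, cutoff_hz=150.0, fade=0.01):
    """Low-cut the region [start, end) (in samples) and crossfade it back in."""
    region = audio[start:end].copy()
    sos = butter(2, cutoff_hz, btype="highpass", fs=sr, output="sos")
    filtered = sosfilt(sos, region)

    fade_len = int(fade * sr)
    ramp = np.linspace(0.0, 1.0, fade_len)
    # fade from original into filtered at the start of the region...
    filtered[:fade_len] = region[:fade_len] * (1 - ramp) + filtered[:fade_len] * ramp
    # ...and back to the original at the end, so the edges stay seamless
    filtered[-fade_len:] = region[-fade_len:] * ramp + filtered[-fade_len:] * (1 - ramp)

    out = audio.copy()
    out[start:end] = filtered
    return out
```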



    And now, without those loud bumps, our clip sounds like this:

    [702_RN_NoisyClip_05.mp3]

    Finally, I do a second pass through the multiband expander to remove the rest of the noise, and some EQ tweaks to restore some of the brightness lost in the process.

    [702_RN_NoisyClip_06.mp3]

    If you compare the first version of our audio file to this last one, you’ll hear the huge difference in sound quality that is accomplished by using several different steps and combining tools and tricks. As you know, there are always better and more affordable software applications being created for dealing with noisy audio, and some of the newer ones are able to cover several of the stages that I’ve described here. Others, which use spectral analysis algorithms, can even isolate and eliminate incidental background noises that happen at the same time as the dialogue, like a glass clinking or a dog barking. So the game is constantly changing.

    In closing, hopefully this article will serve as a guide on how to tackle some problems with audio material that, for any reason, can’t be recorded again. It’s by no means a definitive approach to eliminating noise, since the number of variables and tools out there is staggering. So experiment, and have fun!


    Wednesday, May 14, 2014

    Drum tips for music producers

    by Piotr Pacyna


    Here are a few tips for anyone thinking about spicing up his drum parts. Some of them are for more advanced producers, while some of the others will be suitable for less advanced readers. However, this article is not for beginners. You have to know at least the basics of MIDI programming, EQ or compression.


    1. Humanization and de-quantization


    For most of us drum humanization means only two things - "dequantization" of the notes and randomizing velocity values (within a specified range of course). Usually we just take the various drum notes and slightly offset them by milliseconds. We think about it in realistic terms and our thinking goes like this: when the drummer sits down behind the drums and starts playing, does he hit each drum in perfect timing? Of course not. The liveliness is what gives the drums their flavor. And so on, and so on...

    This is all true, but it’s not enough.

    We can’t really make any drum track sound "real" this way. All we can get is an impression of a lumpy, raunchy drummer who has no control over what he plays. It’s actually not that bad when it comes to punk music, but imagine that you make a jazz drum beat - such cheap humanisation tricks will simply not work.

    There is a sort of workaround. When you're recording your track, consider playing your hi-hats or snares directly from your MIDI keyboard without quantizing them. You want to be very careful doing this, as you still need to be close to the correct rhythm, but sometimes having some variation gives your drums life.

    But it’s still not enough...

    The thing that can really change your drum programming is paying attention to the so-called natural rhythmic tendencies of a live musician. It is worth remembering that every musician:

    • naturally tends to slow the tempo down when he plays quietly,
    • speeds up when playing louder,
    • tends to slow down when playing more sparse rhythms,
    • speeds up when playing busy grooves.

    Most musicians try to eliminate these tendencies in the process of learning to play with a metronome, but it’s impossible to get rid of them completely. They are always present (and that’s a good thing!). The best musicians are simply able to control them. But they are also perfectly aware that total elimination of these natural tendencies is not necessary.

    Keep all of this in mind when programming the drums - you can really benefit from it. For example, if the song contains a quiet section or a part in which the drum beat stops, keeping the tempo perfectly steady may even sound unnatural.



    If your audio sequencer allows tempo changes within a project, I encourage you to experiment - in the softer part, try slowing the tempo down by anything from 1-2 ticks (ballads) to as much as 10 BPM (fast songs), and then, when the groove kicks in again, return to the original tempo. If you don’t overdo it, the result will sound much more natural than keeping exactly the same tempo throughout the whole song.
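    As a rough illustration only (the bar numbers and tempo amounts below are made up), a tempo map for this idea might look something like this:

        # A hypothetical tempo map: ease off slightly in the quiet section,
        # then return to the original tempo when the groove kicks back in.
        base_bpm = 128

        tempo_map = [
            (1,  base_bpm),      # bars 1-16: full groove
            (17, base_bpm - 4),  # bars 17-24: quiet breakdown, a touch slower
            (25, base_bpm),      # bar 25 onwards: back to the original tempo
        ]

        for bar, bpm in tempo_map:
            print(f"from bar {bar}: {bpm} BPM")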

     

    2. Overheads


    Electronic music producers often underestimate the role of overhead microphones.

    Those mikes are used in sound recording and live sound reproduction to pick up ambient sounds, transients and the overall blend of instruments. They are used in drum recording to achieve a stereo image of the full drum kit, as well as in orchestral recording to create a balanced stereo recording of full orchestras.



    In the real world, drummers often record their tracks in a special drum room, where the spill between the microphones creates a certain atmosphere. Our brain interprets this as a "real" sound.

    Live acoustic drums sound impressive. Sound engineers have a lot of trouble isolating each instrument from the others: they use a separate mike to capture each drum, try various microphone settings and placements, and play around with special plexiglass walls and partitions. Full separation is not possible, though, and it is precisely because each instrument bleeds into the other instruments’ mics that your brain tells you that you are listening to live drums. It’s not easy to get such an effect using samplers and programmed drums.

    So, how can you emulate overhead drum mics in your DAW?

    You can try to run the drums through three carefully tweaked reverbs.

    I send the kick, snare, hats and loops to the AUX 1 bus and highpass it so that only frequencies above 3 kHz remain. Then I apply a drum room type reverb - this way the high frequency band sounds as it would if it had been recorded through an overhead mic.

    On AUX 2 I use high- and lowpass filters to remove frequencies below 400 Hz and above 2.5 kHz, then apply the same drum ambience reverb. This gives me the sound of the middle band, typical of snare mic bleed.

    On the AUX 3 bus I deal with the bass. Here I use a lowpass filter to remove frequencies above 300 Hz, and then apply the drum ambience reverb again.
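    For anyone who wants to prototype this outside a DAW, here is a rough sketch of the three-bus idea in Python with scipy. The "reverb" here is just convolution with a short burst of decaying noise standing in for a real drum-room reverb plug-in, and the filter orders and blend amount are arbitrary.

        import numpy as np
        from scipy.signal import butter, sosfilt, fftconvolve

        def room_ir(sr, length_s=0.4, decay=8.0, seed=0):
            """A crude stand-in for a drum-room reverb: exponentially decaying noise."""
            rng = np.random.default_rng(seed)
            t = np.linspace(0, length_s, int(sr * length_s))
            return rng.standard_normal(t.size) * np.exp(-decay * t) * 0.1

        def aux_send(drums, sr, band):
            """Filter the drum bus into one band, then run it through the 'room'."""
            sos = butter(4, band["cutoff"], btype=band["type"], fs=sr, output="sos")
            return fftconvolve(sosfilt(sos, drums), room_ir(sr), mode="same")

        def fake_overheads(drums, sr, amount=0.25):
            auxes = [
                {"type": "highpass", "cutoff": 3000},         # AUX 1: above 3 kHz
                {"type": "bandpass", "cutoff": (400, 2500)},  # AUX 2: 400 Hz - 2.5 kHz
                {"type": "lowpass",  "cutoff": 300},          # AUX 3: below 300 Hz
            ]
            wet = sum(aux_send(drums, sr, band) for band in auxes)
            return drums + amount * wet  # blend the "overhead" track with the dry drums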

    [OverheadsON.mp3]
    [OverheadsOFF.mp3]

    In the first example a normal reverb is applied to each track, which is pretty much what everybody does. It’s OK, but something is missing. The second example is the same drum track with the "overhead" track blended in with the original signal.

    Remember that you can always come back to each channel and set a different send level for every AUX.

    What else can one do with the overheads track to make it more realistic? You can use tips from my previous article about creating space and depth in the mix. For example, by using the various tricks I write about there, you’ll be able to make the "overhead mics" sound as if they were a bit more distant from the listener than the "close" mics.


    http://www.shockwave-sound.com/Articles/G04_Depth_and_space_in_the_mix_part_2.html

    You will learn from it, among other things, how the following factors help to make an instrument sound close and in-your-face, or "deep" in the mix:

    • pre-delay parameter in reverb,
    • high frequency band,
    • proximity effect (around 300 Hz),
    • PEAK and RMS modes of the compressor.

    What else?

     

    Sidechain the overheads


    A range of cool effects can be created by putting a compressor or gate on the overheads channel and keying it from the kick, snare or hi-hats. Key a frequency-conscious gate from the snare to emphasize the airy overhead signal with each snare hit, or compress the overheads via the kick to draw the sides in with each kick hit.
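    As a rough illustration of the keying idea (a sketch, not any particular plug-in), here is the kick ducking the overheads; the threshold, ratio and release values are made up, and both signals are assumed to be mono numpy arrays.

        import numpy as np

        def envelope(x, sr, release_ms=80.0):
            """Simple peak follower: instant attack, one-pole release."""
            coeff = np.exp(-1.0 / (sr * release_ms / 1000.0))
            env = np.zeros(len(x))
            level = 0.0
            for i, sample in enumerate(np.abs(x)):
                level = sample if sample > level else level * coeff + sample * (1 - coeff)
                env[i] = level
            return env

        def duck_overheads(overheads, kick, sr, threshold=0.2, ratio=4.0):
            """Compress the overheads using the kick's envelope as the key signal."""
            env = envelope(kick, sr)
            gain = np.ones(len(env))
            over = env > threshold
            gain[over] = (threshold + (env[over] - threshold) / ratio) / env[over]
            return overheads * gain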

    What else?

     

    Overhead processing


    Applying a stereo widening plug-in to the overheads channel can work wonders for the sense of breadth. Two things to avoid with overheads, though, are overly heavy EQ boosts above 10 kHz and compression (though a touch of compression can be effective for a vintage-style sound). And if your overheads unexpectedly sound weird alongside the close mic channels, don’t forget to try flipping the phase.

     

    3. Tracker groove!


    Today you can find a swing parameter in almost every DAW and on most drum-related electronic instruments. Swing is a function that applies most easily to a quantized beat: the percentage of swing that you apply moves certain hits of your rhythm "off the grid" just enough to create a swinging feeling in the drums. Most devices offer settings from very subtle to very extreme. It’s worth noting that swing functions behave differently on different instruments and programs. For MPC-style swing, Akai’s hardware is hard to beat, but Propellerhead’s Reason does come loaded with groove templates that emulate the Akai MPC 60 (as well as numerous other machines). Ableton Live also offers groove quantization that can read imported audio, MIDI, and groove template files. Native Instruments’ Maschine platform offers extensive swing settings that can be applied to groups in your project as well as to individual sounds.
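    As a rough illustration of what a swing function does under the hood (the tick resolution and the "percentage" convention below are just one common way of defining it), every second 16th note is simply pushed late by a fraction of the step:

        def apply_swing(notes, ticks_per_16th=120, swing=0.58):
            """notes: (tick, pitch, velocity) tuples; swing=0.5 is straight, ~0.66 is triplet feel."""
            shift = int((swing - 0.5) * 2 * ticks_per_16th)
            swung = []
            for tick, pitch, velocity in notes:
                step = tick // ticks_per_16th
                if step % 2 == 1:  # the off-beat 16ths get delayed
                    tick += shift
                swung.append((tick, pitch, velocity))
            return swung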

    However, there is another, less known yet very interesting way to create a swinging, funky groove - using a tracker-type program. Unfortunately, you need to learn something about trackers first. That may be difficult for those who have never had any experience with them, but on the other hand, many of today’s producers took their first musical steps playing around with Amiga Protracker in the early 90s. And those who are not familiar with the topic can easily find suitable tutorials.




    Trackers use a very peculiar speed system that is based on 'ticks' rather than BPM. The 'tick' is a subdivision of a pattern row. A speed of 6 makes each row last for 6 ticks' worth of time, while a speed of 7 is slower, with each row lasting 7 ticks. To make things even messier, trackers also have a BPM-based setting, but let's ignore that for now and focus only on speed. The "F" command alters the speed of the song; by alternating speeds quickly in this way you get a kind of "swing" feel. Try ratios such as 8 and 4, or 3 and 5, for a more pronounced swing.
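    As a worked example (assuming the classic Amiga convention where one tick lasts 2.5/BPM seconds), you can see how alternating speeds 8 and 4 gives a long-short pair that takes the same total time as two straight rows at speed 6:

        tick_seconds = 2.5 / 125  # at 125 BPM a tick lasts 20 ms

        for speed in (6, 7, 8, 4):
            print(f"speed {speed}: each row lasts {speed * tick_seconds * 1000:.0f} ms")

        pair = (8 + 4) * tick_seconds
        print(f"an 8/4 pair: {pair * 1000:.0f} ms - the same as two straight rows at speed 6")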

    The question is how to incorporate the tracker-made groove into our DAW.

    It’s pretty simple.

    Some trackers (e.g. Renoise or ModPlug) allow you to use VST instruments and effects, so you can actually produce the whole song in the tracker.

    Another way is to export a MIDI file from a tracker such as ModPlug Tracker. Yet another is to use XM2Midi, which converts the tracker's file format to MIDI.

             

     

    4. Some EQ Tips


    Some producers share the opinion that if you need to reach for an EQ while mixing the drums, something has already gone wrong: the microphones were badly chosen or placed, the drums were not tuned properly, and so on. With the huge variety of samples available today, it’s really difficult to point to methods that will work in every situation. However, by carefully analyzing the sound of a typical drum kit, you can easily identify the key problem areas for each instrument.

    Let's start with the kick drum. Although most of its energy lies in the low frequencies, it often covers almost the entire audible spectrum. The frequencies responsible for the deepest bass and powerful sound are located in the 30-100 Hz range - and if there is no really low bass in your mix, it is usually the kick that covers this area. Be careful with high pass filters here - sometimes setting the cut-off frequency just a few Hz too high can make the kick lose its punch. The actual "hit" of the kick drum is most noticeable between 100 Hz and 200 Hz - these frequencies are responsible for the "thump". Use a narrow bandpass filter or... a tuner (read below) to find its most distinctive frequency, identify the kick's note value, and then check that it does not conflict with the bass line. Fix the problem if needed. Then you need to have a look at the 200-1000 Hz range: too much energy here muddies up the mix fairly quickly. But if your bass drum occupies mostly 250-300 Hz, that doesn’t necessarily have to be a reason to worry - this is how the warm, soft kick drums on 70s soul records sound. It is also worth checking whether your bass drum has a "click" somewhere around 1000 Hz, because that is what makes it more present on small laptop or portable radio speakers. Then, between 1000 and 4000 Hz lies the attack, which actually determines the character of the kick drum. One good trick is to boost the 2.5 kHz area a bit; this way we can add more presence to the sound without changing its overall character. If you find the sound too dark, try adding a few dB between 4 and 8 kHz. Frequencies above 8 kHz usually bring very little or nothing to the sound of the bass drum - in most cases all we have here is noise, and the best we can do is apply a low pass filter to make more room for, say, the hi-hats.



    The snare sounds best (rich and full, that is) when the 120-250 Hz range is somewhat emphasized. Everything below this range can be EQ’ed out with a high pass filter. If, however, we find the snare sound too powerful, we can always set the cut-off frequency at 120 Hz or even higher. In club music there is often a clap instead of a typical snare drum, and it hits together with the kick, so lighter, brighter sounds work best. A general rule is: the slower the pacing, the deeper and longer the tail of the snare should be. Beware of the 300-400 Hz range, which is responsible for the so-called "boxy" sound. Unlike the kick drum, the snare’s attack is located slightly higher, typically in the 2-5 kHz range, and this is where one should look for that resonant, crisp snare sound, a bit like a branch cracking. The 5-10 kHz range is responsible for the brightness - I’d recommend checking it if the snare is overpowering the hats. Also pay attention to frequencies above 10 or 12 kHz - too much energy here results in a messy drum sound.

    It’s worth noting that toms and congas are often treated similarly to snares - it all depends, however, on their pitch and character. Just remember that the base sound of toms is usually located slightly lower than that of snares. Try making some cuts around 300 Hz first and boosting the 5 kHz range. For fullness of sound, look around 100 Hz.

    Hats usually don’t need the low end at all. Boosting the lower frequencies at around 150-300 Hz makes sense if you need to emphasize the sound of the drum stick. If the hi-hats sound sharp, annoying or simply unpleasant, the easiest fix is to find the problem frequencies (usually located somewhere between 1 and 5 kHz) and simply remove them with a narrow EQ cut. The real challenge is shaping the sound of the hi-hats in the top end - it’s always a matter of taste and artistic vision. If you are aiming for a bright, airy sound you can play around with 8-12 kHz and above. Remember that this range is also extremely important for vocal tracks, though! Sometimes it’s also beneficial to play with 15-16 kHz. But do not go insane with the top end - too much above 10 kHz ends up sounding very amateurish.

    Many pop producers start the mixing process with the drums and work on them until they get a full, dynamic sound. It’s hard to imagine a club banger without a solid rhythm base, right?

    And what if, despite our best attempts, we can’t get satisfactory results? Well, perhaps we should look for another set of samples then...

     

    5. Tuning


    When I listen to my old mixes, I’m sometimes genuinely shocked at how detuned the snares or kicks are. They often don’t fit the key of the song at all. Figuring out drum tuning a few years later was like discovering a new world for me.

    I usually just use a spectrum analyzer with a high resolution - with most drums you can fairly easily see where the fundamental resonant frequency is. Oftentimes I use all sorts of miscellaneous sounds, detuned and lowpassed, to layer under the kick, and under certain circumstances it can be crucial to tune those sub hits to whatever other sound they're adjacent to, frequency-wise, so they aren't atonal or causing dissonance from phase interaction. It's something you don't notice on average nearfield monitors, but that dissonance becomes very apparent when listening with a subwoofer.
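    If you like to do this offline, here is a rough sketch of the same idea as an FFT peak search, assuming a mono numpy array and a sample rate; the band limits are just a guess at where kick fundamentals usually live.

        import numpy as np

        NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

        def fundamental_hz(kick, sr, lo=25.0, hi=250.0):
            """Return the strongest frequency between lo and hi Hz."""
            spectrum = np.abs(np.fft.rfft(kick * np.hanning(len(kick))))
            freqs = np.fft.rfftfreq(len(kick), 1.0 / sr)
            band = (freqs >= lo) & (freqs <= hi)
            return freqs[band][np.argmax(spectrum[band])]

        def nearest_note(freq):
            """Name the nearest equal-tempered note (A4 = 440 Hz)."""
            midi = int(round(69 + 12 * np.log2(freq / 440.0)))
            return f"{NOTE_NAMES[midi % 12]}{midi // 12 - 1}"

        # e.g. f = fundamental_hz(kick, 44100); print(f, nearest_note(f))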

    Here’s C-Tuner from C-Plugs. Simple, good and free.


    Using an EQ, you bring out the frequency corresponding to a note. You can use a calculator, such as "ToneCalc" - or just do an online search for frequency / Hz / tone calculator.
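    If you would rather skip the calculator, the arithmetic behind it is simple - equal temperament with A4 at 440 Hz:

        def note_to_hz(midi_note):
            """Frequency of a MIDI note number in equal temperament (A4 = 69 = 440 Hz)."""
            return 440.0 * 2 ** ((midi_note - 69) / 12)

        # e.g. E1 (MIDI note 28), a common kick drum fundamental:
        # print(round(note_to_hz(28), 1))  # about 41.2 Hz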

    This is NOT a golden rule - not all kicks or drums need tuning, only the ones that display a drone note; usually it's longer or more resonant kicks that have this quality. A simple test: if you are having difficulty putting a bass line to your track because things sound out of tune, it might be because the resonant note of your kick is clashing with the bass note. In this case, tune your kick to the root key of your song - it will sit low enough not to get in the way.

    A really cool effect I learned when pitching toms is to duplicate the tom, shift it up an octave (plus a 5th or 7th, if you like) and use a pitch envelope to modulate it back down (usually over a 1/16 or 1/8 note interval). It gives the sense of "tightening the head" as you "strike" the tom. The more prevalent you make the effect, the more 80's it sounds; the less prevalent, the more "real" it sounds. The prevalence factor is how much you choose to modulate the pitch of the 5th and/or 7th layer, and of course how loud it is compared to the original.

    Another thing worth considering is an autotuner. It can work well for more than just vocal work. Try it!

    6. Pultec trick


    It's an EQ trick based on a common usage of the overlapping bands on the Pultecs, where the cut is narrower than the boost but both bands are centered at the same frequency. So what you'd do is boost a little at 100 Hz, then cut at 100 Hz as well. The result is a wide boost with a notch in the middle to de-emphasize the center frequency, so you end up with two bumps above and below 100 Hz, with a shape that can't be replicated by pushing two different frequencies.

    Why cut and boost at the same time? Either you want more or you want less - so what is the purpose of using both at once? Or is it that the ATTEN control lowers everything above the selected frequency, and vice versa on the low EQ?

    In theory, yes, cut/boost would tend to cancel each other out, while in the case of the Pultec the reality is a bit different. But in a cool way.

    The venerable Pultec EQP-1A Program Equalizer and its sibling, the MEQ-5 Mid Band Equalizer, provide a well-rounded EQ palette when used together (the "Pultec Pro" combination). This combination is still standard fare in recording studios and was once widely used in mastering sessions. The first Pultec, the EQP-1, was introduced in 1951, and through many iterations the basic design remained essentially unchanged into the late 70s/early 80s. Every Pultec was hand-made to order, and the build quality and design of all the Pultec products were unparalleled.

    The Pultecs offer a unique simultaneous boost and cut: you can dial in dangerous amounts of boost with incredibly musical results, a smooth, sweet top-end character, and artifact-free EQ even at high boost settings. They are known as magical tools that improve the sound of audio simply by passing signal through them - but who wants to leave it at that?

     

    Cool trick:


    In the documentation supplied with the hardware version of the EQP-1A, it is recommended that Boost and Attenuation not be applied simultaneously to the low frequencies because, in theory, they would cancel each other out. In actual use, however, the Boost control has slightly higher gain than the Attenuation has cut, and the frequencies they affect are slightly different. The EQ curve that results when boost and attenuation are applied simultaneously to the low shelf is difficult to describe, but very cool: perhaps the sonic equivalent of a subtle low-midrange scoop, which can add clarity. A great trick for kick drums and bass instruments.
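    For the curious, here is a rough sketch of that curve using RBJ-cookbook peaking filters in scipy - not a Pultec model, just an illustration of how a wide boost plus a narrower cut at the same centre frequency produces the double-bump, scooped-middle shape:

        import numpy as np
        from scipy.signal import freqz

        def peaking(f0, gain_db, q, fs):
            """RBJ cookbook peaking EQ coefficients (b, a)."""
            A = 10 ** (gain_db / 40)
            w0 = 2 * np.pi * f0 / fs
            alpha = np.sin(w0) / (2 * q)
            b = np.array([1 + alpha * A, -2 * np.cos(w0), 1 - alpha * A])
            a = np.array([1 + alpha / A, -2 * np.cos(w0), 1 - alpha / A])
            return b / a[0], a / a[0]

        fs = 44100
        boost = peaking(50, 6.0, 0.6, fs)   # wide boost at 50 Hz
        cut = peaking(50, -5.0, 2.0, fs)    # narrower cut at the same frequency

        w, h_boost = freqz(*boost, worN=2**16, fs=fs)
        _, h_cut = freqz(*cut, worN=2**16, fs=fs)
        combined_db = 20 * np.log10(np.abs(h_boost * h_cut))
        # combined_db shows two bumps around 50 Hz with a scooped centre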

    I’m using NastyLF from Bootsy, which, again, is good and free. Unfortunately it is no longer developed, and you have to spend some time digging around the Internet to find it. Try setting the cut and boost frequency to 40-50 Hz - it gives a nice "oomph" to the kick drum. See the examples below.

    [PultecON.mp3]
    [PultecOFF.mp3]


    Monday, April 21, 2014

    Buying Shockwave-Sound.com music just for personal listening

    We occasionally get emails from people browsing our site who wish to buy our music just for personal listening. People who are used to buying music tracks for a couple of dollars at iTunes, Amazon, etc. find it hard to pay our lowest license fee of around $30 just to buy a music track to listen to. We can understand that.

    In answer to this, we usually tell people: Shockwave-Sound.com is a Music Licensing business. We are in the business of licensing music for commercial and in-public use. When you buy our music, you get rights with it, allowing you to use the music in things like online videos, TV and radio broadcasting, games, apps and more. Quite simply, we are not in the business of selling music to people just for listening to it.

    Having said that, if you really want to buy some of our music just for personal listening, we can set it up for you manually. We charge $1.65 per individual music track and $15 per CD-collection. Please contact us and let us know what you would like. We'll get back to you by email, ask you to send us payment by Paypal, and then have the file(s) sent to you. We will ask you to confirm in writing that you will be using the music only for personal listening.

    Wednesday, April 9, 2014

    Explanation of YouTube Content-ID for Stock Music / Production Music composers

    Recently, one of our artists wrote to me with questions about YouTube and his right to receive compensation when his music was used in YouTube videos. I ended up writing a pretty long explanation around the whole YouTube / Content-ID issue, and I just thought it was worth sharing here, in case it can help clear some things up. So here it is. If you already know all of this, great. :-)

    Let me try to explain the YouTube / Content-ID situation


    YouTube (or rather, their owners, Google) developed an "audio recognition" program called Content-ID, into which it invited large music publishers such as Sony, Universal, Warner Brothers etc. to submit audio recordings of all their album releases. So these companies sent their music into Content-ID, and now every video that uses music by these companies (say, Justin Timberlake music or whatever) is automatically "detected" as including this music. As soon as the video is uploaded to YouTube, the audio in that video is scanned and compared with the millions of audio recordings they have on file. When a match is found, the person who uploaded the video receives a "copyright notice" in his inbox. It says something along the lines of "Your video is found to contain music copyrighted to Sony" (or whichever company). Now, advertising is put on the video. This advertising is paid for by the various companies who advertise there (obviously), and the adverts can be for anything from movies to cars to shampoo, but oftentimes the advertisement will be somehow related to the content of the video. For example, if it’s a holiday video, the advertisement could be for some kind of holiday resort. Now, YouTube obviously makes money on that advertising, and a small portion of that money is paid out to the company that owns the music detected in that video. So if the music was Sony’s, Sony are now making money on each video, and I've heard this amounts to approximately $1.00 - 1.25 for every 1,000 views that video achieves.

    A side effect of this program is that the person who created the video and uploaded it to YouTube is not able to monetize his own video. By this I mean that the video creator can't sign the video up with the YouTube partnership advertising program and receive his own advertising income from it, because the advertising money from that video is already "taken" by the company that owns the music playing in the video.

    Some people also decided that it would be a good idea to let independent musicians and bands into this whole setup. So they started Content-ID programs for independent musicians, where the aggregator (CDBaby, Rumblefish, AdShare, AdRev, IODA, The Orchard, INDMusic, Rebeat, Tunecore, AudioSparx, Magnatune, to name a few) feeds the independent music into YouTube’s Content-ID system, and starts to make money on the videos that contain this music. What happens now when people use this independent music in their videos is that they get a message from YouTube saying that their video “contains music owned by Rumblefish” (for example) and advertising starts appearing on the video. About $1 - $1.25 per 1,000 views is sent to that company (for example Rumblefish or INDMusic). Some of this is sent on to the aggregator (for example CDBaby or TuneCore), and some of this is sent on to the artist. Exactly how much is left for the artist, I'm not sure, but what started as $1.00 - $1.25 per 1,000 views has now passed through another couple of companies before it got to the artist, so it’s definitely considerably less. And now, the guy who created the video is not able to monetize his own video, because the monetization on that video is done by the Content-ID company who claims to own the music.

    Another negative effect it will have on the customer’s video is that the video is actually blocked in some countries - for example, in Germany. This is because YouTube and the German royalty collection society GEMA (who control music broadcast and performance in Germany) have not reached an agreement on payments, so GEMA simply forbids YouTube to broadcast registered music in German territory. There are also some other countries that have this problem, but Germany is the most publicized one. So if you upload a video to YouTube and that video is found to contain music that is in Content-ID, the video will be blocked for all German viewers.

    Content-ID clean music

    This is exactly why people come to a place like Shockwave-Sound: to license music that is not in Content-ID. The music is "clean" and is not automatically recognized by YouTube. When people put our music in a video and upload that video to YouTube, nothing special happens. The customer does not get any email with a copyright notification. The video is not automatically monetized by a third party. The video is "clean" and the customer is able to monetize his own video. He can sign the video up to the YouTube Partnership program and start to receive advertising money from his video. And his video won't be blocked in any countries.


    When a conflict happens


    What has happened sometimes is that artists have not understood this whole setup, and they have had their music both licensed via a stock music site like ours, and also monetized via a Content-ID system through Rumblefish, CDBaby, AdShare etc. And that is a conflict.

    As you can imagine, when a customer buys a license to your music track from Shockwave-Sound, wants to use the music in a video that they wish to monetize, and uploads the video to YouTube only to be told by YouTube that their video "contains music owned by Rumblefish", the customer is not happy, and we here at Shockwave-Sound are definitely not happy.

    • Firstly, it creates a big problem for our customer. He is likely to be angry and he will want a pretty good explanation for the music that he thought he licensed from us.
    • Secondly, it makes us look very bad in front of our customer, because it looks like we are a fraudulent company trying to sell music that is owned by somebody else. A competing company, no less.
    • And thirdly, the Content-ID "owner" of the music (our competitor) now actually starts to make money on our customer. We've spent years building a customer base, working our guts out day in and day out, and spent hundreds of thousands of dollars on Google advertising to tempt customers to our site. We finally land that customer, he buys something from us... only for the Content-ID company to "leech" onto that sale and start making money on our customer’s video, even though they did no work in regards to that customer, that sale, or that music. All they do is "piggy-back" on our sale and our customer, and make money for absolutely nothing, other than having allowed the independent artist to put their music in their systems.

    Can Content-ID make us rich?


    It is my strong opinion that independent artists will make more money selling/licensing their music via Shockwave-Sound than they ever will with the Content-ID system. Unless your music "goes viral" in some crazily popular video, what started as $1 - $1.25 per 1,000 views, after passing through one or two other companies, leaves hardly anything for you. I have never heard of any artist, except in such "crazy popular" cases, who made any money worth mentioning via Content-ID. The guys I spoke with made just "pennies". Of course, if you're Bruce Springsteen and you have your music used in 30 million videos, things will start to build up. And for the aggregators it’s pennies from millions and millions of videos, because they have SO many artists and tracks in their database. But for one independent artist who is part of that setup, the money is likely to be almost nothing. I feel strongly that you guys will make more money by occasionally earning royalties from sales via Shockwave-Sound, or indeed other stock music outlets, than by receiving "pennies" through a YouTube Content-ID system. But that’s up to each artist to consider, of course.

    What we're saying is that you can't have it both ways. You can't ask us to sell a license to a customer to use your music track, and at the same time also want to make money through the Content-ID system, having your music flagged as "Owned by AdShare" (or another such company) and denying the customer the chance to monetize his own video.

    Sorry this turned out a little long, but this whole thing isn't simple or straightforward to explain. It’s quite a complex issue.

    But if somebody "just took" our music and used it in a YouTube video, are we not entitled to any income for that?


    If you find your music being used in a video, you have the right to ask (nicely) whether the video uploader has a license to use that music in their video. They may claim that they don't need a license because the video is only a personal, non-profit, non-commercial video, but in fact, whether they are making money or not is beside the point. The point is that they (1) put your music to film, and (2) are distributing your music via YouTube, and both of these are things you are entitled to be compensated for. You created the music that is helping his video, whether financially or artistically. You have the right to ask the customer to buy a license or compensate you in some way. We would of course like you to send the customer to Shockwave-Sound to buy a license from us, but if you wish, you can sell him a license directly (as long as you are prepared and able to give him a proper license document, which he should rightly expect when he pays you for a license).

    If the video creator refuses to buy a license to your music, you have the right to issue a “Takedown notice” to YouTube. You can do that via this form: http://www.youtube.com/yt/copyright/copyright-complaint.html. Fill in details about yourself and your song. It gets passed to YouTube’s copyright dept., and the video gets taken down, unless the customer can document that he has purchased a license.

    This is what you should do if you find your music in a YouTube video and you suspect that the customer has not bought a license.

    But the video uploader claims to have bought the track from iTunes...


    Remember, if the customer bought the music track from iTunes, Amazon and other such places that sell a track for a dollar -- or if he bought the CD in a record store -- that purchase does not include the rights to distribute the music, to Sync it to video, or to perform it through any public website, public space, broadcast, or anything like that. The iTunes / CD purchase includes only the right to personally listen to the music.

    I hope this helps. It’s important for me that people understand all of this, which is why I decided to spend some time explaining it properly. Feel free to link to this article if you like; here is a permanent link directly to this article.