In this tutorial I'll show you how to recreate the high-pitched wobble bass used by Skrillex in the first half of his track "Make It Bun Dem". I'll help you understand the techniques used to create this kind of sound, so you can apply them in any other software you may use.
Here's a preview of the patch that we'll build:
We'll build this bass line in three big steps:
Before we actually start building the sound, let's take care of a few details that will make the process more efficient. Working with initialized patches by default will save a lot of time: go to Edit > Preferences > General and uncheck Load default sounds in new devices. Also, to keep the workspace clean, we'll work inside a Combinator.
Create an MClass Maximizer, a Line Mixer 6:2 and an instance of Thor. The routing should look like this:
As you may already know, we'll need the mixer later, when we add the sub bass and another version of the main sound.
The whole timbre will come from Thor1, using frequency modulation between its oscillators. Its high-pitched nature is due to the modulators' high octave values.
Oscillator 1 will be used as a carrier, and the other two as modulators. The first oscillator will be a Wavetable Osc, with the MixedWaves 1 wavetable selected, octave set to three, and Pos at 34. The second oscillator will also be a Wavetable Osc, but with the Basic Analog wavetable selected, octave set to eight, and Pos at maximum. The last one will be an Analog Osc set to octave eight, with a square waveform and its PW (Pulse Width) at 22.
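If you're curious why the modulators' high octave settings matter so much, here's a minimal two-operator FM sketch in Python. This is not Thor's exact algorithm, and the carrier frequency, ratio and modulation index are purely illustrative, but it shows how a high-ratio modulator throws sidebands far above the carrier, which is where the piercing character comes from:

```python
import numpy as np

SR = 44100
t = np.arange(SR) / SR  # one second of time

carrier_hz = 130.81  # C3 carrier (illustrative)
ratio = 32           # modulator several octaves above the carrier
index = 2.0          # modulation depth

# Two-operator FM: the modulator wobbles the carrier's phase.
# The higher the modulator frequency, the further the sidebands
# spread above the carrier, giving a bright, piercing timbre.
modulator = np.sin(2 * np.pi * carrier_hz * ratio * t)
fm = np.sin(2 * np.pi * carrier_hz * t + index * modulator)
```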
This is how it sounds:
On the left side of the modulation matrix, do the following routing:
Notice the modulation between LFO1 and CV Out1. This connection will allow us to synchronize the sub bass with the main sound movement, by using the same signal source.
To create the wobble effect, we'll modulate the gain with the LFO1 signal. Using the same LFO1 to modulate the pitch of each oscillator also gives the sound more personality.
It should sound like this:
We'll use the filter routing only to create another version of the sound, smoothly altered by a Comb filter. To do this, bypass Filter1 and route Osc1 to the second filter. Make Filter2 a Comb Filter with the drive set to 50 and the frequency at 7.62 kHz. Since we don't want the frequency to change over time, turn the Env and Vel rotaries down to zero. To send the sound to the outputs, enable the Filter2ToAmplifier button.
It should sound like this:
Before leaving Thor, let's take care of the details to get the maximum potential out of our patch. First, enable the Shaper, set it to Hard clip, and increase its drive amount to 39. In the LFO1 section, enable both Key sync and Tempo sync, and increase the rate to 1/8T. Enable the Chorus to start gaining some movement in the stereo field, and reduce the dry/wet amount to 18. Lastly, in the Amp section, turn the Decay and Sustain all the way up for a constant gain over time, and increase the Attack time to 40.7ms.
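As a side note, it's easy to express that tempo-synced 1/8T rate in plain Hz. A quick sketch, assuming 4/4 and the usual convention that eighth-note triplets give three LFO cycles per beat:

```python
def lfo_rate_hz(bpm, cycles_per_beat):
    """Convert a tempo-synced LFO division to a rate in Hz."""
    return (bpm / 60.0) * cycles_per_beat

# 1/8T = eighth-note triplets = 3 cycles per beat.
# At this track's 140 BPM, that's a 7 Hz wobble.
print(lfo_rate_hz(140, 3))  # 7.0
```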
It should sound like this:
By now, it sounds pretty interesting, but also very muddy. Let's make it cleaner by applying some discreet distortion, and EQ it a bit.
Create an instance of Scream 4 right under Thor. We don't want to destroy the sound with too much distortion, only to add some personality, just like we did inside Thor. Turn the Damage Control down to 30, and set the damage type to Overdrive. Set P1 to maximum and P2 to two.
Create an MClass Equalizer, and enable its Param1, Param2, and High Shelf sections. For Param1, turn the gain down to -6.6dB and its Q to 1.4 to eliminate those unneeded mid frequencies. For Param2, only reduce the gain to -6.9dB, for a much cleaner presence. In the High Shelf, just turn the gain rotary up to 1.7dB.
It should sound like this:
A very important step in the mastering process is stereo imaging, and for satisfying results each instrument should have its own amount of stereo activity. A well-made Combinator patch, or good sound design in general, is one of the most important parts of electronic music production, so we have to take care of every aspect of our sound.
Usually, just inserting an MClass Stereo Imager would be enough, but in this case these devices simply won't work that easily, because they need a stereo signal and our sound is more like a dual-mono signal. The problem can be solved with some parallel processing, and thanks to Reason's routing we can do it very easily, by splitting our initial sound into two parts. The first will remain unaffected, while the second will be processed a little more; the two parts will then be merged back together by the line mixer.
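If the routing is easier to grasp as signal math, here's a minimal sketch of the parallel-processing idea in Python. The widen function is a crude, purely illustrative stand-in for the UN-16/Stereo Imager chain we're about to build, and the signal is assumed to be a stereo numpy array:

```python
import numpy as np

def widen(stereo, delay_samples=220):
    """Crude Haas-style widener, standing in for the processed
    chain (UN-16 Unison -> Stereo Imager). Illustrative only."""
    wet = stereo.copy()
    wet[:, 1] = np.roll(wet[:, 1], delay_samples)  # offset the right channel
    return wet

def parallel_blend(dry, wet_gain=0.5):
    """Split the signal, process one copy, and sum both copies,
    like the Spider splitter feeding two line-mixer channels."""
    return dry + wet_gain * widen(dry)
```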
So, let's start by creating a Spider Audio Merger & Splitter right under the MClass Equalizer. Flip the rack by hitting the Tab key, or from the menu by selecting Options -> Toggle Rack Front/Rear. Disconnect the equalizer from the mixer and route its outputs to the Splitter audio inputs. Next, route the first pair of outputs from the Spider Splitter to the mixer's channel one inputs to re-establish the signal flow, like in the image below:
Now that the dry version of the sound is set, we can focus on the effects we'll apply to enhance its stereo field. Holding the Shift key to prevent unwanted auto-routing, create a UN-16 Unison device and a Stereo Imager. The second signal chain runs from the second pair of Splitter outputs -> UN-16 Unison -> Stereo Imager, and ends at the mixer's second channel inputs, like in the following image:
So far so good. Flip the rack again so we can configure these devices. For the UN-16 Unison, just turn the Dry/Wet amount down to 90. Set the Stereo Imager X-Over Freq to 1.45 kHz and turn all the frequencies below this threshold to mono. Lastly, set the Hi Band to maximum width.
Even though these parameters have extreme values, the result can be finely adjusted from the line mixer. For now, I turned down the second channel volume to 67, but you can set this value as you wish, depending on the context.
It should sound like this:
We're almost there. Back at the Line Mixer 6:2, create another instance of Thor right under it. Make sure Thor2's audio outputs are routed to the third channel of the mixer. Flip the rack, and route the CV1 Output of the first Thor into the CV1 Input of the second (sub bass) Thor, as in the image below:
CV1 is now the equivalent of LFO1, and we’ll use it to modulate the amplitude of the sub bass sine wave.
Inside the second Thor make the Osc1 an Analog Osc with a sine waveform and set its octave to 4. Also, make sure to turn all the way up the Decay and the Sustain in the Amp section.
In the matrix section, make the following routing:
In general, the use of this type of sound is quite basic, and the only automation going on is the wobble rate changing over time. In Reason terms, it's a very simple connection between Rotary1 and the LFO1 Rate. To set it up, open the Combinator's Programmer section and select Thor1 from the device list. In the Modulation Routing section, on the right side, just select the target, like in the image below:
Rename the Rotary1 label to something relevant, like "Speed" or "Rate".
To recreate Skrillex's bass line, set the project tempo to 140 BPM and turn Rotary1 up to 106. As for the notes, they are as follows:
Next, use eighth-note quantization and draw the notes like this:
I hope you've found this tutorial helpful. If you have any questions or suggestions, please put them in the comment section below.
Have you ever wondered how good you are at playing rhythm guitar, only to go and record a demo with your band and realize you can’t actually play in time? It happened to me, and it was really awkward. In this tutorial I'll give you some advice on how to avoid ending up in the same boat.
We can’t talk about time and rhythm without considering our best friend (and worst enemy), the metronome. If you’re a guitar player, there's a good chance that you already know why you should use a metronome. There's also a good chance you’ve never actually done it. It took me a long time to start using a metronome, but nowadays I use it every time I practice, and often when I compose.
The metronome is the judge that will help your timing so you can become a solid rhythm guitarist. But before you start messing with it, you need to know how to use it.
Let me explain: If you want to play a specific rhythmic figure in time, you need to know where that figure is placed between the clicks of the metronome. Otherwise you’ll never be able to tell if you’re late, ahead, or perfectly on the beat.
Before jumping to the application part, I'd like to mention another underestimated aspect of playing solid rhythm guitar: consistency. Have you ever tried to play constant 8th notes for five minutes? You probably should.
It can be beneficial for your muscles, and also your timing. Keep strumming a chord for five minutes without stopping. It's not easy, even if the rhythm you’re playing is easy.
First you'll start losing volume, then you'll start playing late. The overall sound will become less uniform; then you'll actually notice you’re playing late and try to speed up to recover from the mistake.
Don’t forget that being a solid rhythm player is not just a matter of using a metronome, or playing complicated rhythms. It’s also about how long you can play the rhythm consistently.
If you want to start using a metronome, you need to know where to place your rhythm. That way you can decide whether you want to be ahead of the beat, laid back, or perfectly on the beat, depending on the genre you’re playing.
Let’s start with the most basic rhythmic figures. In these examples I will assume that the metronome is clicking quarter notes, and we are going to stay in 4/4.
An eighth note is half of the value of a quarter note, which means that you’re going to play two notes every click. Every note will have the same duration.
Before you start practicing with your instrument, make sure to have the rhythm in your brain, and make sure you can say it out loud. The subdivision I use for eighth notes is “1-and-2-and-3-and-4-and...” Honestly, you can use any words you can break into two syllables, but make sure you can articulate the rhythm with your mouth before playing.
"Stir it up" is a great example of eighth notes. There's a chart below the video so you can start getting the feel of it.
Now we’re getting a bit more complex, since we have four different placements for a note every beat. The subdivision I use is “1-e-and-a-2-e-and-a-3-e-and-a-4-e-and-a...”. This time you can choose a word that breaks into four syllables, and it will work out fine.
This song is a perfect example of 16th notes. In the middle of the song, the band drops down and only the drummer and the guitar player keep playing. Notice how the guitarist keeps his rhythm solid, while the rest of the band starts dancing and makes the performance outstanding.
Here we completely change the feel of the rhythmic subdivision. Instead of playing an even number of notes, we play an odd number. So we will have to divide the beat into three equal spaces.
If you're playing guitar, you will notice that your alternate picking will reverse every beat. The subdivision I use in my brain for triplets is "1-uh-let-2-uh-let..."
This great song from Black Sabbath will give you an idea of how triplets sound.
Also known as sextuplets, these are the least common figure I've listed. They double the speed of an eighth-note triplet, so you'll have six notes between every click.
This rhythm is associated with the sound of a march, as you will hear in the audio example. I use this subdivision for it: "1-uh-let-and-uh-let-2-uh-let..."
It has to be said that, even if these are the main rhythmic figures, they're not the only ones. For example, you may have a rhythm with a dotted quarter note, or a dotted eighth note. Or again, quarter note triplets.
You need to think of rhythm as a grid. The smaller the value of your notes, the tighter the grid will be. If you set your imaginary grid to eighth notes and you have to play a dotted quarter note (which equals three eighth notes), it won't be hard to place that note.
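If the grid idea feels abstract, here's a small Python sketch that lays out an eighth-note grid in seconds at a given tempo. The tempo is illustrative; the point is that a dotted quarter note simply spans three adjacent slots:

```python
def grid_times(bpm, slots_per_beat=2, beats=4):
    """Onset time, in seconds, of every slot in one 4/4 bar,
    with the grid set to eighth notes (two slots per beat)."""
    slot = 60.0 / bpm / slots_per_beat
    return [i * slot for i in range(beats * slots_per_beat)]

# At 120 BPM each eighth-note slot lasts 0.25 s, so a dotted
# quarter (three eighth notes) lasts 0.75 s and always starts
# and ends exactly on a grid line.
print(grid_times(120))  # [0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
```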
I'd like to get to a direct application of what we've discussed so far. In order to improve your rhythm and your rhythmic knowledge, you need to become really familiar with your grid. And to do that, you need to be able to place accents on whatever note you want. So if you play and subdivide your grid into eighth notes, you can accent at eight different spots. If you play sixteenth notes, you will have sixteen spots, and so on.
Here is how you should practice:
Don't forget that even when the note falls on an odd spot, you need to tap your foot with the metronome. To improve your rhythm skills, you have to develop independence between your feet and your hands. This ability is not just for drummers; your brain has to be able to split up and coordinate two separate motions.
In terms of practice, you can work on placing different accents inside one beat, instead of trying to relate them to the whole measure (four beats). This reduces the possibilities but, in the end, has the same effect.
Here are a few exercises you can do to warm up with eighth notes.
Here are all the permutations for sixteenth notes. Notice I place one note at every possible space.
Here's the same thing for eighth note triplets.
I'd like to suggest a trick for triplets. Since you're playing an odd number of notes, if you use alternate picking your picking direction will reverse every beat: if you start the first beat with a downstroke, you'll hit the first note of beat two with an upstroke. It's just the nature of this technique.
While doing this kind of training, it can get inconsistent and confusing to always have a different picking direction. So instead of alternate picking, you can play triplets following this sequence: down-up-down-down-up-down... (playing two downstrokes in a row). That way you always use a downstroke on the beat.
Working on these concepts will step up your rhythm skills—I guarantee it. Don't forget to practice every day. It only takes 10 minutes to go over some of these patterns. You can use it as a warm up, and in a couple of weeks you will realize your rhythm has become more solid.
I tried to give you an overall basic approach to rhythm. Using a metronome every day will improve your timing and give you more awareness of rhythm. Then you won't panic when you get to the studio and the engineer hits the record button. Enjoy.
Today we'll discuss something really interesting. I discovered that electronic dance melodies can be really cool if you add a multiband compressor after the effects chain.
This technique can be handy when you are playing a melody in different octaves. These melodies can have a wide dynamic range—especially if you're using distortion effects on your synth. But EDM lead melodies are not meant to have much dynamic range.
It can also help when some notes are not loud enough compared to others. With this technique, you can easily rewrite the dynamic range of melodies.
According to Wikipedia:
Multiband (also spelled multi-band) compressors can act differently on different frequency bands. The advantage of multiband compression over full-bandwidth (full-band, or single-band) compression is that unneeded audible gain changes or "pumping" in other frequency bands is not caused by changing signal levels in a single frequency band.
Multiband compressors work by first splitting the signal through some number of band-pass filters or crossover filters. The frequency ranges or crossover frequencies may be adjustable. Each split signal then passes through its own compressor and is independently adjustable for threshold, ratio, attack, and release. The signals are then recombined and an additional limiting circuit may be employed to ensure that the combined effects do not create unwanted peak levels.
Multiband expansion is similar, but widens the dynamic range, rather than narrowing it.
Every note has its own frequency; for example, middle C is 261.63Hz. Every octave spans a frequency band; for example, C1 to C4 covers 32.7-261.63Hz.
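Those numbers all come from the equal-temperament formula, so you can generate them rather than memorize them. A minimal sketch, assuming standard MIDI numbering with A4 (MIDI note 69) at 440 Hz:

```python
def note_freq(midi_note, a4=440.0):
    """Equal temperament: each semitone is a factor of 2**(1/12)."""
    return a4 * 2 ** ((midi_note - 69) / 12)

print(round(note_freq(60), 2))  # C4, middle C: 261.63 Hz
print(round(note_freq(24), 2))  # C1: 32.7 Hz
# Each octave doubles the frequency, so the octave starting at
# any note runs from f to 2 * f.
```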
It's good to become familiar with these frequencies, so you can more easily set up the frequency bands of the multiband compressor based on your melody notes. Of course, you could always set them up with your ears.
Create the melody you see in the image below. As you can see, all notes are between the F2-F4 octaves. I used a basic Sylenth patch. If you'd like to use it, you can download it in the Play Pack above.
Use a lot of distortion, but give different bands different distortion values, as in the image below.
Add some reverb and delay. Listen to the sound without any compression:
You can hear that the lower notes are not as loud compared to notes in the mid and high range. This is especially true for notes in the F3 octave.
Drag and drop any kind of multiband compressor onto the project. A good choice is FabFilter Pro-MB, which is easy to calibrate.
Split the frequency spectrum into four bands with the following crossover frequencies:
I decided on these frequencies by ear.
There are a lot of approaches to reaching the desired effect, which here is to make the lower frequencies louder than the higher ones. I decided to use an expander on those frequencies, and compression on the middle bands to decrease their volume. My aim was for every frequency band to end up at the same level.
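To demystify what a multiband processor does under the hood, here's a hedged offline sketch of the split/process/recombine idea from the Wikipedia description above, using SciPy Butterworth filters as crossovers. The band edges are placeholders rather than my actual settings, and a static per-band gain stands in for each band's compressor or expander:

```python
import numpy as np
from scipy.signal import butter, sosfilt

SR = 44100
EDGES = [150.0, 800.0, 3000.0]  # placeholder crossover frequencies, Hz

def split_bands(x, edges=EDGES, sr=SR, order=4):
    """Split a mono signal into len(edges) + 1 adjacent bands."""
    bands, lo = [], None
    for hi in list(edges) + [None]:
        if lo is None:
            sos = butter(order, hi, btype="lowpass", fs=sr, output="sos")
        elif hi is None:
            sos = butter(order, lo, btype="highpass", fs=sr, output="sos")
        else:
            sos = butter(order, [lo, hi], btype="bandpass", fs=sr, output="sos")
        bands.append(sosfilt(sos, x))
        lo = hi
    return bands

def multiband(x, band_gains):
    """Apply a per-band gain (a stand-in for each band's dynamics
    processing) and sum the bands back together."""
    return sum(g * b for g, b in zip(band_gains, split_bands(x)))
```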
Use the parameters in the images below. If you're not familiar with the basic compressor/expander parameters, check out Mo Volans' tut on multiband compression, or browse our creative session on compression.
Listen to the melody now:
You can hear that the lower notes are now a lot louder, and the middle ones are not as loud, especially when the melody jumps from C3 to E4. In the original melody you can hear a huge volume increase at that jump. Now you can't: the multiband compressor is doing its job.
Listen to the melody in context:
In that last audio example I used different settings, and expanded all of the bands, but with different values.
I hope you learned a lot from this quick tip. It was quite advanced, but if you want to become a good producer, you'll need to know how to use multiband compressors and expanders on your melody to add impact to your track.
Happy music making!
Today's tutorial is about sound design, and we'll cover how to create some great EDM lead sounds using Ableton Live's built-in synths. First we'll create the pluck sound from Inna's "Hot". Then we'll do the same with "Seek Bromance" by Tim Berg (better known as Avicii).
This riff contains a very distinct sound, called a pluck, which plays the song's main melody. It's a simple sound which you can easily produce with Ableton's built-in Operator synth, which has all of the functions we'll need. We'll use three oscillators: two squares and one saw wave.
Drag and drop an Operator, and choose the last algorithm from the global shell. This means the oscillators are not modulating one another, so Operator acts as a subtractive synth rather than an FM synth, with the oscillators working independently.
Next, set the parameters of the oscillators:
To give the sound a pluck timbre, you need to create a strong transient, so use a quick filter envelope:
Listen to the dry sound (without effects):
Our sound is a little dry right now, but with EQ and some effects we can colour it.
We can apply a rhythmic pulse to our sound with a delay. We can do that with a Simple Delay with the Sync button turned on.
Create this simple four-bar melody:
Listen to the finished sound:
The next sound is very interesting - kind of like a pan flute - which you can hear in Tim Berg's "Seek Bromance" at the break. The sound has two parts, and we'll separate them to make sure we keep the sound close to the original:
For this sound, we will use the Ableton built-in Analog synth, because we have to use the unison feature.
The spectrum of a flute isn't exactly the same as a square wave, so we'll give the sound some dissonance with a sine wave.
Listen to our flute sound:
Now we'll create the main transient part, which simulates the breath sound when someone blows into a flute.
Listen to our transient here:
Turn all the oscillators back on, along with the other Analog we turned off earlier. Now we can hear an interesting flute-like sound. It's too dry, though, so we have to spice it up with some effects.
Now put a Filter Delay on top of it.
Finally, give the sound some reverb.
Create the chord melody in the image below.
Listen to the finished sound:
That’s all, guys! I hope you've enjoyed and learned from this tutorial. Happy music making!
The ability to walk into a venue armed with nothing more than mics, cables, stands, and a small road case is quite an alluring idea. Imagine being able to cut down your rig by half, and have even more flexibility. Sounds too good to be true? Well the ability to mix live shows on your laptop is a very real thing, but not without risk.
Today we are going to look deep into the pros, cons, and pitfalls of live mixing on a laptop. From saving our backs to risking our skins, the benefits must be very carefully weighed against the shortcomings. We'll also analyze just what makes a solid live computer rig, how we can better prepare ourselves, and whether our gear is truly up to the task.
If the thought of live laptop mixing has ever crossed your mind before, read on, and see if the plunge is worth it to you.
The greatest problems we typically face when doing live sound are having the wrong amount of equipment—too little, or too much—and the financial investment that comes with the territory. Thankfully a live laptop rig can remedy any and all of these problems very easily.
Let's see just how:
There is always a catch, isn't there? While laptop rigs can provide us with a plethora of goodies, they also present a wide variety of potential issues. From latency, to stability, to having the right rig to begin with, setting up a laptop rig can be tricky and risky.
Let's see what we're working against with laptop rigs:
So, you decided you want to go ahead with a laptop rig, but what do you need? Thankfully a computer rig is fairly simple to set up.
Let's take a look!
Before you ever do a show with a laptop rig, you will need to test the system to its absolute limits. Why? Because if it crashes in the middle of the show, it is your fault!
How can we test the system? By setting up a little experiment:
While the system is running, you need to look out for a variety of things. If you encounter any of these problems, you will need to tweak your system, or upgrade to a better interface or computer.
Do not walk away during this test! You need to keep an eye on the system for at least two hours, preferably four or more. Also, run the test more than once! Try it with a local band who needs to rehearse. You do not want to be caught with a glitchy system!
As you can see, there is a lot to keep in mind with a laptop rig for live mixing. If you can make it past the setup and potential stability issues, then you can find yourself in a wonderful world of minimal equipment and greater flexibility.
However, do not fool yourself into thinking the rig works if you did not do the necessary testing. If everything goes swimmingly, then you can look into having MIDI control surfaces for mixing, local area WiFi for on-stage mixing via tablet computers, and a whole host of other goodies.
Just remember, live laptop mixing is a dangerous place if you do not prepare. Thanks for reading!
I am a sucker for the singer/songwriter style of writing. My lifelong love of bands like America and The Eagles gives testament to my fondness for gentle, story-oriented, acoustic songs. Today’s critique is based on just such a wonderful song by songwriter Anthony Quails, titled “Scarlett.”
This lovely song follows a basic verse, verse, chorus, verse, chorus form, followed by an eight-bar instrumental section and a final sparse revisiting of the first verse. The initial finger-picking intro that precedes everything was most evocative, and set the scene for Anthony’s story nicely.
I think the form works well, but I wouldn’t have minded hearing a bridge (third melodic section) in lieu of the final verse. This would require some lyrical rerouting, but I will get to that in the lyric section with a few alternative suggestions.
I very much like the simple narrative melody used in the first verse. The second verse begins to meander quite a bit from the first verse though, and I personally would have waited till later in the song to improvise a bit. I think it’s a good idea to get the tune established in the listener’s brain a bit more before deviating too much. Also, the places in the second verse where the melody climbs and leaps detract from those same types of spots in the second half of the chorus, in my opinion.
I like the rising elements of the third line of the chorus and might have used them in the first line as well. Keeping a few melodic tricks in the bag leaves you somewhere to go as the song reaches a lyrical crescendo.
Overall, the lyric is effective, but there were a few spots that I feel need a little work. Right off the bat, the phrase “where I used to stay” bothered me because it suggested a temporary situation, and yet the singer appears to be privy to Scarlett’s entire life story. “From my yesterdays” would work better, in my opinion and leaves fewer questions.
Soon after this area, some of the information felt out of sequence to me. To reveal that Scarlett danced for rent money in the first verse gave the story little time to evolve naturally. I do like the couplet though. Maybe you can foreshadow what is to come without coming right out and saying it. Perhaps, something like this:
"As a child she danced for the joy of it,
Not knowing one day it’d pay the rent."
I would assume her father’s leaving and her mother’s using drugs preceded her grown-up dancing. Yet, at eighteen, when she finds herself alone, she gets involved with the older man. So when did the dancing chapter occur?
I would recommend clarifying the sequence of events a little more. I also would like to know a little more about Scarlett’s connection to the singer. Did she grow up down the street from him?
"Scarlett was a name that a father gave
To a girl on a street where I used to play."
That way, it is clear that the singer had some first-hand knowledge of the events of Scarlett’s life and a clearer connection is made.
I like the chorus lyric. The only change I would make is in lines three and four.
"But, Scarlett, that cloud of darkness
Has a silver lining on the other side."
The third verse references “He” meaning the older gentleman. The trouble is, there is some distance between the line about him and the pronoun. The listener could easily assume the “he” was in reference to her father. Clarify here again.
Lines three and four of verse three could be much clearer too. Based on what is written, I could not envision the scenario. Was she living with him? This can be a very simple fix. Just explain that she eventually found out that he had a wife he forgot to mention. We don’t need to know how she found out, only that she eventually did.
Before the instrumental, I might include a two-line bridge that waxes a little philosophical. You might try something that exemplifies the unforeseen light at the end of the tunnel, but in terms of your cloud metaphor. Then finish with an eight-bar instrumental and the first two lines of your final verse.
In my opinion, that says just enough without overkill. If you do keep your whole final verse, be aware that you are not following the rhyme scheme of the other verses.
The sort of song that immediately comes to mind when I think of this song’s genre is “Delilah.” There was a time when songs like this only found life in folk, country, and what we call "Americana" on this side of the pond, but those times are a-changing!
Indie, alternative music is everywhere, and I think its market is only going to get bigger. Music always swings back in this direction when it gets away from it for a while. The genre appeals to something genuine and basic in all of us. All the flash and dash gets old when one has a steady diet of it!
I applaud Anthony’s creation and hope my comments will be helpful. I really enjoyed listening to it and dissecting it as well! The simple demo was completely effective and sells the song. In cases like this one, less is definitely more!
I felt compelled to write about mastering audio for 2014 and onwards. Of late, there have been developments which suggest file formats and music distribution channels are likely to evolve over the forthcoming years, and as such you need to have some basic information about these changes.
Historically, mastering has been a final production service with multiple goals before the music is released. One of those goals has been multiple release formats, such as CD, vinyl and cassette.
More recently, the files for a CD release and the digital (online) release have often been one and the same—typically a 16 bit .wav, or .aiff uncompressed PCM file. So the file used to create the master CD has often doubled up as the digital online version.
This is not exclusively the case, but it is certainly my own experience, and seems to be how most online distribution has been geared up over the last five years.
In tandem with single-master creation, the loudness wars have constantly pumped up the perceived volumes of music releases over the last decade or so. One loud master has become the norm for some music genres, whether it is for CD or digital online release.
With anything related to mastering, it is not wise to generalise. It is, after all, a bespoke service. Some genres that benefit from wide dynamic range have largely been exempt from taking part in the so-called loudness war. I would say dance, pop, rock, R'n'B and hip hop music in its various forms are probably the genres that have followed the trend of increases in perceived volumes. Jazz, country, acoustic, folk and classical have suffered less.
For those in it to win the "loudness race", the continual improvement of digital limiters has allowed higher and higher perceived volumes, with fewer side effects. Of course, there are multiple methods to achieve loud end results, but the limiter is ubiquitous in loud mastering. In short, the modern look-ahead limiter is capable of relatively transparently arresting amplitude peaks in a music mix, thereby creating a higher average level to the human ear.
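To make "look-ahead" concrete, here is a toy brickwall limiter in Python. It peeks a few samples into the future so the gain can already be pulled down when a peak arrives, then recovers slowly. A real limiter uses far more sophisticated attack and release smoothing, so treat this strictly as a sketch:

```python
import numpy as np

def lookahead_limit(x, ceiling=0.9, lookahead=64, release=0.999):
    """Toy look-ahead peak limiter for a mono float signal."""
    y = np.zeros_like(x)
    gain = 1.0
    for n in range(len(x)):
        # Peek at the next samples for incoming peaks.
        peak = np.max(np.abs(x[n:n + lookahead])) + 1e-12
        target = min(1.0, ceiling / peak)
        # Drop the gain immediately for a peak; recover gradually.
        gain = target if target < gain else min(1.0, gain / release)
        y[n] = x[n] * gain
    return y
```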
Rightly or wrongly, this has been the case. The often-cited reasoning is that a band, musician or label fears that their track will be quieter than the one played before or after. This situation is understandable, especially in a nightclub/DJ setting, or a publisher/radio commissioning round. Sadly, many DJs are no longer pre-fade cueing the incoming track and adjusting gains so the volumes match.
No one can deny the damage to transients, punch, depth, space, clarity, detail, dynamics and musical involvement that high perceived volumes have done to some music genres over the years. Extremely loud masters do not tend to fare well from the perspective of fidelity when broadcast on FM radio, and when converted to compressed audio file formats.
There are currently some interesting developments that indicate multiple release files may once again be a prudent move. At this time, the goal of mastering is to be a well-judged, multi-faceted compromise that lives within the aperture of best translation across many different sound systems and formats.
Car stereos, FM stereo/mono broadcasts, online compressed formats, in-store systems, iPods, smartphones, multimedia speakers, and audiophile hi-fi systems may all be used to play your music. The goalposts for how this judgement is made are moving.
This is important, and whilst there is no certainty about what might eventually happen, it makes sense to discuss the possibilities and the changes coming. The main changes, as I see them, are as follows, each distinct from the others in its goals:
The Mastered for iTunes requirements involve very specific tweaks during mastering, and some post-production compliance work. In short, Apple is attempting to ensure the best possible fidelity from their lossy 256kbps AAC encoding algorithm. In addition they will store a high quality version of your file in their archives.
The guidelines recommend more headroom than most masters have been created with, and that the files should be a minimum of 24-bit in resolution, and at least 44.1kHz in sample rate. Higher sample rates are encouraged.
I recommend reading the Apple guidelines for mastering engineers even if you only work semi-professionally, and perform DIY self-finalising, as it is indeed a development worth knowing about.
As a mastering engineer, I am occasionally posed the question, "Will these masters be OK for iTunes?" The answer to this question is that unless a mastering job has been specifically requested as being "MFiT compliant", the tracks will not typically be mastered with those very specific guidelines. If the client is asking if the masters will be accepted by Apple for upload to iTunes, they of course will. However they cannot be submitted or marketed as specifically "Mastered for iTunes".
If this is of interest to you as an independent artist or small label, you should ensure that your distributor can accept high resolution files. At the time of writing, quite a few online distribution companies are not geared up to accept audio of greater resolution than 16-bit/44.1kHz. So before you request tracks mastered to the MFiT specification, you should enquire whether your chosen online distributor can accept higher resolution files.
In addition, you should enquire as to whether or not there are any additional fees for submitting more than one set of mastered files. This may be applicable whether you are going it alone or employing a professional mastering engineer for your project. So it may have budgetary considerations as well as sonic considerations.
Currently, some media players and proprietary streaming interfaces have an option to check a box to "Play all files at same volume," or something similarly worded. Currently these are optional, but some streaming services may switch their volume normalisation options on in software by default in the future.
These systems work a little differently from each other, but they amount to tagging each piece of music with information that cues the media player to attenuate or increase its internal playback volume, depending on how loud the sensing algorithm believes a track is.
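The arithmetic at playback time is just a static offset. A minimal sketch, assuming a ReplayGain-style scheme and an illustrative -16 LUFS playback target:

```python
def playback_gain_db(track_lufs, target_lufs=-16.0):
    """Static gain, in dB, a normalising player would apply."""
    return target_lufs - track_lufs

# A crushed -6 LUFS master is simply turned *down* 10 dB, while
# a dynamic -18 LUFS master is turned *up* 2 dB:
print(playback_gain_db(-6.0))   # -10.0
print(playback_gain_db(-18.0))  # 2.0
```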
We will have to wait and see. Although not officially announced, it appears that iTunes Radio (at time of writing available in the US only) has a version of Soundcheck switched on, which as a radio service would make a lot of sense. If not Soundcheck specifically, then some similar volume normalising system.
It is worth mentioning that FM broadcasters have a type of volume normalisation, in that they multiband compress and limit the music they broadcast. Of course this dynamic control is much more drastic than mere volume balancing, but for such services and playlists it at least demonstrates the use of some kind of perceived volume management. After all, constantly changing volumes is annoying for the vast majority of listeners.
If we produce one loud master, volume normalisation will simply push those tracks down in level. Everyone's music will play back at approximately the same volume, irrespective of whether your tracks average -16dB RMS or -5dB RMS.
If these changes do occur, and slowly find their way into more and more media consumption outlets, loudness maximisation may eventually become less important. These algorithms are quite sophisticated at calculating the perceived volume of a track.
However, even if these volume normalisation systems are taken up on multiple media players/streaming services and online radio globally, we could assert that limiting may remain for two main reasons:
Currently the situation is ambiguous and dependent on specific situations, but it makes a lot of sense to consider the various situations in which your music may be heard for both promotional purposes and actual release. In any event, the perceived volume—be it high or low—is likely to become a more important factor to consider as time moves on.
As such, musicians and record labels have to make open-minded decisions about the perceived volumes and formats that they should request at the mastering stage. It would be prudent to ensure that all bases are covered.
Mastering for iTunes is a personal choice for a band, musician or record label. It may relate to budget and perceived audibility of improved fidelity, and whether chosen distribution channels can accept high resolution files. It is a free choice for the individuals concerned.
The choice of required formats should be considered in depth when you start DIY finalising, or when you choose professional mastering services.
The following information is given on the basis that every mastering job, whether it is performed as a DIY self-finalised job or a professional mastering job, is unique in its goals.
Firstly, I recommend you look into a meter that is capable of measuring Loudness Units relative to Full Scale (LUFS for short, also known as LKFS: Loudness, K-weighted, relative to Full Scale). This metering system provides an average loudness reading which more accurately bridges the common disparity between what is measured and what a human hears.
Digital metering in the past has not correlated well with the perceived volume to human ears. Two very different pieces of music could read similarly on a digital peak meter, and yet have rather different perceived volumes to the ear. Some modern sequencers ship with an LUFS meter and you can find third party plug-ins as well.
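If your DAW doesn't ship with one, you can also measure integrated loudness offline. A minimal sketch using the third-party soundfile and pyloudnorm Python libraries (the filename is hypothetical, and you should check each project's documentation for the current API):

```python
import soundfile as sf     # pip install soundfile
import pyloudnorm as pyln  # pip install pyloudnorm

data, rate = sf.read("master.wav")  # hypothetical file
meter = pyln.Meter(rate)            # ITU-R BS.1770 K-weighted meter
loudness = meter.integrated_loudness(data)
print(f"Integrated loudness: {loudness:.1f} LUFS")
```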
It has been reported that iTunes Radio seems to play music at around -16.5LUFS. This would mean that music at this level would not be subject to large volume manipulations either upwards or downwards. At this level you are likely to have a fairly dynamic master with all transients, clarity, depth and space unrestrained by limiting. By not limiting the track, the mix can retain punch, depth, detail and power that is easily lost with excessive limiting, and yet is played out at roughly equal volume as other tracks.
In my opinion as a professional mastering engineer, it's currently prudent for musicians to request (or create) at the very least a 24-bit unlimited master, as well as any "loud" versions. This would include any equalisation, compression (and other dynamic processing), stereo image manipulations, characterful processing etc. This unrestrained dynamics version (if this is what you choose sonically) would sound good on volume-normalised playback systems.
Or rather, non-conclusion: we cannot conclude this complex topic, because these changes are in flux and subject to wider music industry uptake. It is too early to say if, how, by whom and when these changes may be implemented. What is sure is that making a lower-level 24-bit version (probably at around -16LUFS) in preparation is a good plan, especially for artists whose music is being sold today and not used purely for promotional purposes.
It is early days, and neither guarantees nor predictions can be made with any certainty, so it's a good plan to ensure you cover as many options as possible (relative to your budget), until we can all see where the music industry ends up on this issue. As we know, the music industry is quite fragmented, and it can take quite a while to respond to new technology, ideas and changes.
Hopefully this article inspires thought about your music production, files, volumes and mastering, and helps you prepare for new developments. I for one embrace these new developments, as I believe that lower perceived levels will potentially allow high-end mastering equipment to show what it is truly capable of, without being masked by digital distortion and other extreme dynamic processing side effects. I look forward to listening to high-resolution formats in the future that are truly "audiophile" in nature: detailed, clear, and communicating exactly what the artist intended.
Yes, it is ambiguous at the time of writing, and there is a lot of education, system changes, and industry-wide sharing of knowledge still to happen, but I cautiously believe it is heading in the right direction for the betterment of musical fidelity.
Article supplied by Push Mastering.
In this series we will check out a few different guitar/pedal/amp setups to help you in the quest for better tone! In this video we are going to sit down with Evan Thorpe to talk about his setup and what he uses to create different sounds and textures. We will look at some examples of how to combine pedals and pickups to create really nice pop/rock tones.
If you're a regular reader of Audiotuts+, you'll notice that things look a little different. That’s because we're now a part of the new Tuts+ site. I’m really excited to be able to announce this change—it’s a long time coming, and a huge step forward for us.
For the last six years, Tuts+ has been made up of many different individual sites, covering a range of topics. This has served us well, but it made it difficult to expand into new areas. Now, instead of us launching a whole new site when we want to teach a new subject, it’s all in one place.
You can browse all the topics from the Tuts+ Hub, then narrow your view down to just the Music & Audio topic to see all the content that was previously on Audiotuts+. You can also easily jump to other areas that might interest you. If you want to learn how to program, switch to Development via the Topics menu.
David Appleyard, our Editorial Manager, has gone into more detail on the benefits of the new structure in the initial announcement post. Check it out if you’re interested in more of the backstory. He also talks about the redesign of the site, and some of the new features that accompany the change.
We want to tailor this site to you, so your feedback is incredibly valuable. If you notice anything that looks wrong, or something that doesn’t behave the way you want it to, we’d be grateful if you took the time to fill out this feedback form.
This track has been submitted for your friendly, constructive criticism. What useful feedback can you give the artist? The floor is yours to talk about the track and how they can fix problems in, and improve upon, the mix and the song.
Description of the track:
This song is about an experience I had when my friend passed away last year. After his death, I was having terrible nightmares. During this time, I met someone new who inspired me... before I knew it, the nightmares subsided. Make what you will of it, but I give him credit for stopping the nightmares. I woke up one morning and the song just came out of me.
On the recording, my producer and I sampled a kick drum to create the backing rhythm, which I thought was essential. There are some jingle bells in there as well. The only other instrument involved is guitar. I'm not certain what software my friend used for recording.
Artist's website: katrinabarclayofficial.bandpage.com
Now I can get some peace
And now I can rest easy
I felt abandoned
Everyone I had loved
Left me to fend for myself
I thought it was a hoax
The one thing I desired most
Love that would bring me to my knees
Now I can get some peace
And now I can be released
You stopped the nightmares
You stopped the nightmares
You stopped the nightmares in my head
You broke the silence
You broke the silence
You broke the silence in my head
I thought I was a lost cause
You may be proving I was wrong
Have a listen to the track and offer your constructive criticism for this Workshop in the comments section. Feel free to offer any type of advice - arrangement, mix, lyrics, performance. And remember to play nice - be constructive!
In this tutorial we'll be approaching the popular subject of mastering from a different angle. I'll show you the bare essentials, not just in plugins, but also in workflow. We'll take an EDM track that doesn't have much wrong with it, and quickly get it ready to play in a club or to friends. Note: This is not the advised route if you're going for a full release through a label.