Sunday, March 10, 2013

Recording signal flow

(I'm writing this article as part of the Introduction to Music Production course on Coursera.)

Let me give you a rough overview of what happens every time I hit my snare in my studio.

My brain generates a signal. It starts in the motor cortex, racing along each neuron as an action potential and crossing each synapse via the release of specific neurotransmitters. It travels down through my brainstem and spinal cord, then out to the nerves in my arm, contracting muscles, which in turn flex and extend my joints. A whiplike motion starts at my shoulder and moves through my arm, elbow, wrist, and fingers, exerting force on the drumstick in my hand. The drumstick accelerates downward, making contact with the batter head of my snare:


When the drumhead begins oscillating, it alternates between compressing and rarefying the atmosphere, creating longitudinal waves of sound pressure variation. These sound waves travel through the air and reach the diaphragm of this Audio Technica M4000S dynamic microphone:
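(If you like to see things in numbers, here is a toy model of that pressure wave in Python. The 200 Hz fundamental and the decay rate are invented for illustration; a real snare hit contains many overlapping modes plus the noise burst of the snare wires.)

```python
import numpy as np

# A decaying 200 Hz sine as a stand-in for the drumhead's pressure wave.
fs = 44100                           # time resolution: points per second
t = np.arange(0, 0.5, 1 / fs)        # half a second of time points
pressure = np.exp(-8 * t) * np.sin(2 * np.pi * 200 * t)

# Positive values are compressions, negative values are rarefactions,
# both measured relative to ambient atmospheric pressure.
```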




A microphone is an input transducer: it converts energy from sound waves into an electrical signal. In a dynamic mic, the pressure variations in the atmosphere cause the diaphragm of the microphone to vibrate in sympathy, and a coil attached to the diaphragm travels back and forth past a magnet, which generates voltage variations. This low-level (mic-level) audio signal then travels out of the microphone. In my case, it travels over an XLR cable, which plugs into the back of my MOTU 8 pre interface:
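(Again as a rough sketch: by Faraday's law, the induced voltage tracks the velocity of the coil, i.e. the derivative of the diaphragm's displacement. The field strength and wire length below are made-up numbers, just to show the relationship.)

```python
import numpy as np

# Toy model of dynamic mic transduction.
fs = 44100
t = np.arange(0, 0.01, 1 / fs)
displacement = 1e-4 * np.sin(2 * np.pi * 200 * t)  # diaphragm motion (m)
velocity = np.gradient(displacement, 1 / fs)       # coil velocity (m/s)
B, l = 1.0, 5.0                                    # tesla, meters (made up)
voltage = B * l * velocity                         # the mic-level signal (V)
```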



Many things happen at the interface. On mine, a mic preamp raises the audio signal to line level; a -20dB pad can be engaged (very often necessary when recording drums, which can generate extremely high sound pressure levels); mic gain can be fine-tuned with a trim knob; and 48V phantom power can be switched on for condenser mics (I use condensers for overheads).
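(The decibel arithmetic behind the pad and the preamp is simple: every 20 dB is a factor of ten in amplitude. The 40 dB below is just a typical amount of preamp gain, not a spec of the 8 pre.)

```python
def db_to_gain(db):
    """Convert a change in decibels to a linear amplitude multiplier."""
    return 10 ** (db / 20)

print(db_to_gain(-20))  # the pad: amplitude x 0.1
print(db_to_gain(40))   # typical preamp gain: amplitude x 100
```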

Perhaps most important, however, is the analog-to-digital converter (ADC). A computer cannot process the analog audio signal coming directly from the mics, so it must be converted to binary numbers. This is achieved through a process called sampling, in which the input stream of voltage variation is measured at a fixed rate, and the amplitude of the signal is represented by a discrete number at each interval. Once the ADC on my MOTU 8 pre converts the audio to a digital signal, it is transferred out the back via a FireWire cable, and into my iMac:
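(Here is the same idea as a sketch, using CD-style settings of 44.1 kHz and 16 bits for illustration; the interface can run at other rates and depths.)

```python
import numpy as np

fs = 44100     # sample rate: how often the voltage is measured
bits = 16      # bit depth: resolution of each measurement

t = np.arange(0, 0.01, 1 / fs)                # the fixed measurement times
analog = 0.8 * np.sin(2 * np.pi * 200 * t)    # stand-in for the mic voltage

# Quantization: round each measurement to one of 2**16 = 65,536 levels.
samples = np.round(analog * (2 ** (bits - 1) - 1)).astype(np.int16)
```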



In the computer, the digital audio signal gets sent into the digital audio workstation (DAW) that I am using, Logic Pro 9:



Inside Logic, I can change almost anything I want about how the snare sounds. I can EQ it, sound-replace it, play it backwards, apply reverbs, and so on. Generally speaking, for a snare, I will EQ it, compress it, limit it, bus it to a reverb, and parallel-compress the whole drum kit. Then I will EQ, compress, and limit the whole project.
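(To give a flavor of what one of those steps does under the hood, here is a bare-bones compressor. The threshold and ratio are arbitrary, and this is nothing like Logic's actual implementation, which adds attack/release smoothing, makeup gain, and more.)

```python
import numpy as np

def compress(samples, threshold_db=-20.0, ratio=4.0):
    """Toy compressor: any level above the threshold has its
    overshoot divided by the ratio."""
    level_db = 20 * np.log10(np.maximum(np.abs(samples), 1e-12))
    overshoot = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -overshoot * (1 - 1 / ratio)      # gain *reduction*, in dB
    return samples * 10 ** (gain_db / 20)
```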

After all of that, the audio gets sent back out of the computer via FireWire, and into the interface. The interface's digital-to-analog converter (DAC) then converts the digital signal back to an analog line-level signal, and sends it to my KRK Rokit 8 monitors over balanced TRS cables:




KRK Rokit 8s are active monitors. This means that they are not only powered (they have built-in amplification), but bi-amplified, with a separate amplifier for each driver. So when the line-level signal comes into the monitor, a crossover splits it: the higher frequency range goes to the amplifier for the tweeter (the small driver), and the lower frequency range goes to the amplifier for the woofer (the large driver).
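(A real crossover uses much steeper filters than this, and the 2 kHz split point below is just a guess, but a first-order low-pass plus its leftovers shows the idea.)

```python
import numpy as np

def one_pole_lowpass(x, fs, cutoff):
    """First-order low-pass filter; crude next to a real crossover."""
    a = np.exp(-2 * np.pi * cutoff / fs)
    y = np.zeros_like(x)
    y[0] = (1 - a) * x[0]
    for n in range(1, len(x)):
        y[n] = (1 - a) * x[n] + a * y[n - 1]
    return y

fs = 44100
signal = np.random.randn(fs)               # one second of stand-in audio
low = one_pole_lowpass(signal, fs, 2000)   # the woofer's share
high = signal - low                        # the tweeter's share
```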

A speaker driver is an output transducer that functions very much like a microphone in reverse. Each cone has a coil attached to it, and a permanent magnet is attached to the frame behind the cone. When the electrical current of the audio signal reaches the coil, it generates a magnetic field that causes the coil to alternate between being attracted to and repelled by the magnet. Since the cone is attached to the coil, the cone moves with it, compressing and rarefying the atmosphere, once again generating longitudinal waves of sound pressure variation.

These sound waves then travel through the air until they reach my eardrum. My eardrum begins vibrating in sympathy with the sound waves, which causes movement of the ossicles. These tiny bones transfer the vibrations to the inner ear and the fluid of the cochlea. There, the motion triggers electrochemical reactions in thousands of tiny hair cells (whose hair-like bundles are called stereocilia), which convert mechanical motion into electrical signals. These signals pass to the nerve fibers of the cochlea, and then to the auditory nerve, which goes straight to the brain.

We're back where we started, and the whole round trip has taken on the order of ten milliseconds, most of it down to my interface's buffer settings.
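(If you want to sanity-check that figure, the arithmetic is quick. Every number below is an assumed, typical value, not a measured spec of my rig.)

```python
fs = 44100           # sample rate
buffer_size = 64     # samples per I/O buffer (a low-latency setting)

converter_ms = 1.5                          # rough AD + DA converter delay
buffer_ms = 2 * buffer_size / fs * 1000     # one buffer in, one buffer out
acoustic_ms = 1.0 / 343 * 1000              # ~1 m from the monitors

print(converter_ms + buffer_ms + acoustic_ms)   # about 7.3 ms
```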