FAQs - Quinn Keon

Here are some frequently asked questions that I receive. If you have a question other than the topics covered here, you may submit it to me by clicking here.


I have an old tape of me playing that has a lot of background noise. Can you clean it up?

Yes. Through a series of frequency-selectable filters, I can dramatically improve the sound from a noisy tape. I can eliminate background noise such as hiss, but I cannot improve a damaged tape. Many tapes become wavy sounding after exposure to harsh environments. These are damaged, and I cannot fix them. Sorry.

Back to the Top


I have some recordings of my band on a cassette. Can you transfer them to CD?

Yes. This is a common request that is fairly easy to accomplish. Aside from just transferring the sound, I will also perform time alignment of the high and low frequencies, noise filtering, EQ adjustments (if necessary), normalization, and limiting through a separate recording process that will ultimately produce the best sound possible from your source tape.

Back to the Top


I hear a lot about mp3. Is it better than CD quality? Should I save my PC recordings in mp3 format?

MPEG Layer 3 (mp3) files are compressed WAV files. What this means is that the file size is dramatically reduced with a slight loss in audio quality. The mp3 encoder reads the audio file and determines which sounds are audible to the human ear and which are not before encoding the file. The inaudible portion of the file is removed, leaving a much smaller file size.

You should record your masters at a 16-bit 44.1kHz sampling rate, as this is the standard for CDs. If you do not record at this sampling rate, you will not be able to encode an audio CD from your master. Some people choose to record their tracks at a higher resolution (e.g. 24-bit 96kHz) and then re-sample to 16-bit 44.1kHz to create their masters. The theory behind this practice is that they are preserving the best possible sound quality through the mixing stage until the final step, when they are forced to re-sample to the lower resolution of 16-bit 44.1kHz. While this may sound like it carries some merit, keep in mind that the highest quality sound available is the sound that is present before digital conversion takes place. (See Digital vs. Analog) Therefore, I question the aforementioned theory in regard to keeping the best possible sound until the final master.

My recommendation is to sample the sound once at 16-bit 44.1kHz during the initial capture and work to get the best possible sound from there. Since this practice creates smaller audio files, it will save you storage space and time (smaller files process faster), and you will not have any unpleasant surprises that may occur by re-sampling to a lower resolution.

There are mp3 players now that will read mp3 files from a CD, but if you want the general public to be able to hear your music, you need to record at 16-bit 44.1kHz. From there you can convert your audio to a separate mp3 file if you wish.

Back to the Top


Can you remove a voice from a recording and put mine in instead?

The short answer is no, I cannot. The recording has been mastered to a 2-track mix, and I cannot manipulate one track from that mix; I do not have the capability to isolate individual waves once they have been mastered. The long answer is that in some cases it is possible to run an algorithm that sums the left channel with the inverse of the right and places the result into both channels. In theory, if the vocal track is centered equally between both channels, it will disappear. I have had mixed results using this method.
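For the curious, the channel-summing trick can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions (16-bit interleaved stereo samples; the function name is my own), not my actual studio process:

```python
import array

def remove_center(stereo):
    """Center-channel cancellation: put (L - R) / 2 into both channels.
    `stereo` is interleaved 16-bit samples [L0, R0, L1, R1, ...].
    Anything panned dead center (often the vocal) cancels out."""
    out = array.array('h')
    for i in range(0, len(stereo), 2):
        diff = (stereo[i] - stereo[i + 1]) // 2  # halve to avoid clipping
        out.extend((diff, diff))
    return out
```

A sample that is identical in both channels (a centered vocal) becomes zero, while anything panned off-center survives at a reduced level, which is also why the results are so hit-and-miss on real mixes.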

Back to the Top


How do I keep from getting distortion when recording loud vocal passages?

This is the exciting topic of Dynamic Range! Dynamic range is the difference in decibels from the loudest point of the program material to the noise floor. In other words, dynamic range defines the maximum change in audible program levels. The human ear has a dynamic range of approximately 140 dB and can detect very minute changes in sound pressure levels. Electronic recording equipment has limitations as to what it can reproduce. A typical digital recorder has approximately 100 dB of dynamic range, whereas a cassette deck has approximately 70 dB. The difference between real-world program material and the limitations of recording gear can cause an overload on tape.

To control signal peaks during recording, you need to decrease the dynamic range of the program material. This is done with a tool called a compressor. A compressor does just what it sounds like: it compresses the dynamic range of the program material. I typically use several compressors during the multi-track recording process. Each output from my mixer is patched into a compressor before the signal goes to the recorder. I begin by setting the threshold level. Since the line level of my recording equipment is +4 dBu, I set my threshold control just below that. Next, I set the compression ratio high enough to remove excessive signal peaks without affecting the overall dynamics of the program material. A setting of 2:1 means that a 2 dB rise in input level above the threshold produces only a 1 dB rise in output. If your compressor has LED indicators showing signal attenuation, use these as a guide; typically I do not want more than 6 dB of attenuation. Digital recording requires more care with signal peaks than analog, because digital has virtually no headroom: any signal peak above line level (+4 dBu) will cause distortion. If I set the threshold on each compressor to 0 dBu and the compression ratio to infinity, the compressor acts as a limiter and effectively blocks any signal peak from ever getting above 0 dBu. I then set the input levels on my recorder to match the output levels of my mixer, and I can get the maximum amplitude from the signal without any fear of distortion. I could set the threshold to +4 dBu, but I like to keep a small margin to be safe.
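The threshold-and-ratio behavior described above can be sketched as a static gain curve. This is an illustrative model only (it ignores attack and release times, and the function name is my own):

```python
def compress(level_db, threshold_db, ratio):
    """Static compressor curve, with levels in dB relative to any reference.
    Below the threshold the signal passes unchanged; above it, each
    `ratio` dB of input rise yields only 1 dB of output rise.
    ratio = float('inf') turns the compressor into a brick-wall limiter."""
    if level_db <= threshold_db:
        return level_db
    if ratio == float('inf'):
        return threshold_db  # limiter: nothing gets past the threshold
    return threshold_db + (level_db - threshold_db) / ratio
```

With a 2:1 ratio and a 0 dB threshold, a peak at +10 dB comes out at +5 dB; with an infinite ratio it is clamped to 0 dB, matching the limiter setup described above.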

Another common use for compressors/limiters is in live sound reinforcement. Ever notice how your live sound gear doesn't seem as loud as you think it should? Try limiting your signal peaks with a compressor and your system will come to life! I like to sub-mix using my busses and then compress each bus based on the program material it is handling. Once the signal is compressed, you can then RAISE the level of each bus, because your signal peaks will be controlled and your overall program material will be much louder. Also, by applying a compressor as a limiter just prior to the signals reaching the power amplifiers, you have effectively placed insurance on your system. No signal peaks will ever reach the power amps, and as a result, no speakers will be blown. You can get the maximum sound pressure levels from your system without the worry of overload.

Compressors are used every day in modern recording. Every track that you hear on the radio has been compressed or limited prior to going to tape. Ever notice how even the levels are on a vocal track, even though the singer is changing the amplitude of their voice? This is because a compressor threshold was set prior to the signal going to tape. The quiet passages and loud passages both sound like they are at the same level because the loud passages were limited. Even radio broadcasters have their mic signals routed through a compressor before reaching the amps.

In my opinion, compressors are the single most useful tool in the recording studio. You can use the side-chain feature on many compressors to perform specific tone-shaping tasks such as de-essing, which removes excess sibilance from a vocal track (the high-treble "S" and "T" sounds). By patching an equalizer into the side chain, you can compress frequency-specific signals. I used to use a compressor for de-essing every vocal track - not so much anymore - it depends on the voice and the mic being used. There are several other functions that compressors can accomplish; I just wanted to give you some insight on the subject since I feel that they are so valuable.

Here are some formulas for calculating dynamic range and headroom:

Dynamic Range

Dynamic Range = (peak level) - (noise floor)
A typical rock concert: 130 dB SPL - 40 dB SPL = 90 dB dynamic range

Scenario: Let's say that when sound levels reach 130 dB SPL at the mic, the maximum line level of your mixer outputs is reached at +24 dBu (12.3 volts), and maximum output from your power amplifier peaks at 250 watts. When levels at the mic fall to 40 dB SPL, the minimum line level falls to -66 dBu (388 microvolts) and the power amplifier output falls to 250 nanowatts (250 billionths of a watt). The dynamic range can be calculated as follows:

dB = 10 log (P1 / P0)
   = 10 log (250 / 0.000000250)
   = 10 log (1,000,000,000)
   = 10 log (1 x 10^9)
   = 10 x 9
   = 90 dB

Headroom

A typical line level in professional audio gear is +4 dBu (1.23 volts) corresponding to an average sound level of 110 dB SPL at the microphone. Given the same mixer as previously described, the formula would be:

Headroom = (Peak level) - (nominal level)
(+24 peak) - (+4 nominal) = +20 dB of headroom

This can also be calculated for the power amplifier:
dB = 10 log (P1 / P0)
   = 10 log (250 / 2.5 watts)
   = 10 log (100)
   = 10 x 2
   = 20 dB
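The two power-ratio calculations above can be verified with a few lines of Python, using the same figures from the scenario:

```python
import math

def db_power_ratio(p1_watts, p0_watts):
    """dB = 10 * log10(P1 / P0) for power quantities."""
    return 10 * math.log10(p1_watts / p0_watts)

dynamic_range = db_power_ratio(250, 250e-9)  # 250 W peak vs 250 nW floor
headroom = db_power_ratio(250, 2.5)          # 250 W peak vs 2.5 W nominal
```

dynamic_range comes out at 90 dB and headroom at 20 dB, matching the hand calculations.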

Back to the Top


Digital vs. Analog - Is digital recording better than analog recording? Is analog recording better than digital recording?

For years many professional and home recording studios have used analog recording equipment for multi-track recording. The digital age has, however, brought multi-track recording to a whole new level through ease of use and increased clarity of the sound. Though many would argue this point, it is my opinion that the use of digital equipment for the recording of music is superior to analog recording technology and techniques.

Though the terms analog and digital may sound familiar, many people do not understand how the two signals differ from each other. I will start by explaining about analog signals and analog recording before I get too heavy into my reasoning for supporting digital recording.

All sound is caused by vibration and all vibration causes sound, though some of it may be inaudible to human hearing. When something vibrates, it moves the surrounding air, and our ears perceive this air motion as sound. The resulting waveform has peaks and valleys that correspond to the back-and-forth motion of the vibration. Take a guitar string, for example. The string is fixed at each end and, when plucked, vibrates back and forth. The speed at which the vibrations occur determines the frequency of the waveform. The faster the vibration, the higher the frequency, or perceived pitch, of the sound. This type of waveform is an analog signal.

The way analog recording works is that when an audio signal is captured by a microphone, it is converted into an electrical representation of the sound in the form of a fluctuating voltage. The voltage fluctuates at exactly the same rate as the acoustical energy it represents, and the amplitudes of the acoustical sound wave and the electrical audio signal are scaled proportionately. The electrical signal travels through the tape head on the recorder, which produces fluctuating magnetic fields on the tape. The waveform remains intact throughout this entire process, so the sound stays true to the original signal. This is the way recordings have been made for years. There are many who feel that analog is the true representation of the sound and are adamantly opposed to digital. Neil Young was quoted as saying, "Digital is a disaster. It's an insult to the brain and heart and feelings." Another interesting quote I once read was, "Digital preserves music the way that formaldehyde preserves frogs. You kill it, and it lasts forever." Admittedly, one advantage the analog waveform has over digital is a genuinely warmer feel to the recording. I will explain why later when I discuss the format of digital signals.

Another advantage that analog recording has is its wonderful headroom. Without going into a lot of mathematical calculations, headroom can best be described as the ability for program material to handle loud signal peaks when compared to the nominal signal level or line voltage being produced. What this means to the listener is that the higher the headroom, the more dynamic the music can be. When constricted by lower headroom, the sound may be more flat sounding and high signal peaks will cause audible distortion. Many people like analog recording because of the fact that they can drive the signals and saturate the tape. If any distortion does occur, it will usually have a warm effect and not sound flawed. Digital, however, has virtually no headroom. If distortion occurs it is very brash and sounds very flawed. So if analog sounds so much better, why would anyone want to use digital? Let me start by explaining how digital signals work before I discuss why I choose to use them.

Digital signals use a numeric representation of the sound. Digital uses a binary coding scheme with two digits, 1 and 0; each 1 or 0 represents one bit. These bits are used to represent values, which in the case of music correlate to sound pressure. The standard format for digitized music on a compact disc is 16-bit audio at a 44.1kHz sampling rate. 16-bit means that there are 16 ones and zeros lined up in a row to represent one sample. The mathematics behind binary coding is based on base 2, which means there are 65,536 (2 to the 16th power) different levels of sound pressure that can be coded. The sampling rate refers to how often the sound is measured during recording. The rule of thumb is to sample at two times the highest frequency that you want to reproduce, plus ten percent. Human hearing extends to 20,000 cycles per second (20kHz), so this is where the 44.1kHz sampling rate became standardized.
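The arithmetic behind those numbers can be checked in a couple of lines (the roughly 96 dB value is the theoretical dynamic range of 16-bit quantization, close to the 100 dB quoted earlier for digital recorders):

```python
import math

bits = 16
levels = 2 ** bits                      # 65,536 distinct sound-pressure values
dyn_range_db = 20 * math.log10(levels)  # about 96.3 dB theoretical dynamic range

# Twice the 20 kHz limit of hearing plus ten percent gives 44,000 Hz,
# which the industry rounded up to the 44.1 kHz standard.
min_rate_hz = 20_000 * 2 * 1.1
```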

So digital signals are not really sound, are they? The answer is no. They are recorded and played back through the use of an analog-to-digital/digital-to-analog (AD/DA) converter. The converter samples the music during recording and then reproduces the sound during playback based on the binary data. Remember that the sound we hear is analog. The difference in sound between analog and digital comes from the nature of the recording process. An analog signal is a continuous waveform with an infinite number of sound pressure levels, whereas a 16-bit digital signal is a series of snapshots of that waveform limited to 65,536 sound pressure levels (see figure 1). The digital signal is not continuous; if drawn, it would have a slight stair-stepped appearance where each sample was taken (see figure 2). However, the sampling rate is fast enough that most people cannot distinguish any difference between the two. Okay. Now that I have explained how digital signals work, let me discuss some of the advantages that digital has over analog.

The first advantage is the fact that you can use a home computer as a multi-track recorder and for mix-downs. A good home computer system for recording can be purchased for as little as $400 nowadays. With the addition of $100 in recording software, it is possible to do virtually anything that can be done in a professional studio. If you are serious about multi-tracking, it would be wise to invest in a sound card that supports multiple inputs and outputs. Good cards like this are available from $200 and up. So for under $1,000 you can be set up with a good recorder that will allow you to handle all aspects of the recording and mixing process. The last time I checked, a good 16-track analog recorder cost around $8,000, and you would still need to purchase a quality 2-track recorder to mix down to. The difference between $1,000 and $10,000 becomes clear in a hurry. If you are not comfortable using a computer, there are several multi-track digital recorders available starting around $600. Many of these have built-in mixing controls, and they all connect to many types of storage media such as Zip or Jaz drives as well as interfacing directly with a computer.

So the cost difference is great, but what about sound quality? This is where a good debate begins, because analog produces such rich, warm tones, but digital produces clearer sound. Analog recordings transfer sound directly to tape, and along with the sound there is noise in the background: an audible hiss is present whenever you play a cassette in your tape player. Since digital samples the signal and then recreates the sound from the binary coding, there is virtually no background noise at all! This is a big advantage that digital has over analog for making professional-sounding recordings at home. The most common file format for digital recordings on a computer is the Microsoft WAV file. This has become an industry standard, and virtually every software program available will allow you to save in WAV format.

Another aspect of digital recording does not record actual sound at all. MIDI, which stands for Musical Instrument Digital Interface, creates wholly digital files that do not contain recordings of sound; instead, they contain the instructions that the audio hardware uses to create the sound. MIDI is a powerful protocol developed in the '80s to permit electronic musical instruments to communicate. Typically a keyboard is used as the input device to create, store, edit, and play back music files on your computer. Because MIDI does not use any actual recording of sound, the files it creates are very small compared to those of a digitized analog recording. An hour of MIDI takes up approximately 500KB, whereas one minute of a WAV file takes up approximately 10.5MB.
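The WAV figure above is straightforward arithmetic: sample rate, times bytes per sample, times channels, times seconds. A quick sketch:

```python
sample_rate = 44_100   # samples per second (CD standard)
sample_bytes = 2       # 16 bits = 2 bytes per sample
channels = 2           # stereo
seconds = 60

wav_bytes = sample_rate * sample_bytes * channels * seconds
wav_mb = wav_bytes / 1_000_000  # about 10.58 MB per minute, as quoted above
```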

A MIDI file can be thought of as a representation of a musical score. It is composed of separate channels, each of which represents a different musical instrument. This allows the user to use a keyboard to synthesize any number of other instrument sounds such as drums, guitars, violins, etc. All sound cards available today support MIDI, and many have found it to be a valuable tool for recording. Much Rap and Hip-Hop dance-club style music is entirely MIDI based, with little or no actual instruments playing. I have never used MIDI, as I prefer to record live instrumentation. When there are real people playing real music together, it creates a lot more mood in the music than can be accomplished with synthesized tones.

Some of the things that can be done with digitized music nowadays are pretty astounding. How about the fact that for another $100 or less invested in your computer, you can create your own music CDs? That's right! In my opinion, this is the biggest advantage that digital recording has brought to musicians. There are several types of CD burners available; the one I mentioned connects like a normal CD-ROM drive to your computer. There are also stand-alone CD recording decks that cost from approximately $600 and up. The most expensive internal writable CD drives that I have seen lately have been around $325. That's a pretty good price for being able to produce professional demos on CD from your own home. When recording music for use on a CD, you will need to make a 16-bit stereo recording at a 44.1kHz sampling rate. As you recall, this is the standard for CD recording, and audio CDs can only be encoded from this format. Interestingly enough, several audio software programs support much higher sampling rates, which in turn produce superior sound quality. For example, my audio software allows 96kHz sampling rates (DVD). This is more than double CD quality. However, CD technology is currently limited by the standards of the electronics industry, so even though you have the capability of creating a better recording, it can only be used on your computer system right now.

The final advantage of digital recording that I want to mention is the creation of mp3, Windows Media, and Real Media files for web applications. Mp3 files are compressed WAV files, meaning the file size is dramatically reduced with only a small loss in audio quality. The mp3 encoder reads the audio file and determines which sounds are audible to the human ear and which are not before encoding the file. The inaudible portion of the file is removed, leaving a much smaller file size. I recently converted a WAV that was just over 56MB in size to mp3; after conversion it was just over 5MB. You may wonder what the advantage of a smaller file size is if some quality is lost. The answer is the Internet. Mp3s are the hottest thing going right now. Their small file size enables users to download and play them easily, and there are many free mp3 encoders and players available on the Internet which can be easily downloaded and installed on your computer. Mp3 files have given independent musicians an edge for promoting their own music, enabling an artist to act as their own label, distributor, and retailer, and ultimately retain control of their music and profits.

Windows Media and Real Media files are known as streaming formats. Their functionality is great for web applications since they provide a smaller file size with faster execution time. The way streaming works is that rather than the entire file being downloaded and then played, portions of the file are downloaded and played as the player receives them. The player performs a calculation on the entire file size and the rate at which portions of the file are being received, and from there determines when it can begin playing the file without having to stop and wait for more of the file to download. Unlike mp3s, the audio file is not saved to the user's computer; after each stream is played, it is discarded. This makes streaming a viable option for musicians to promote their copyrighted material from their websites without so much fear of copyright infringement by unauthorized users.
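The roughly eleven-to-one shrinkage in my 56 MB example lines up with the bitrates involved. A quick sketch (the 128 kbps mp3 bitrate is an assumption, as a common encoding setting of the time):

```python
# Uncompressed CD-quality PCM: 44,100 samples/s * 16 bits * 2 channels
cd_kbps = 44_100 * 16 * 2 / 1000   # 1411.2 kbps
mp3_kbps = 128                     # assumed typical mp3 encoding rate
ratio = cd_kbps / mp3_kbps         # about 11:1, matching 56 MB -> ~5 MB
```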

So ultimately, is digital better than analog? I think both have their merits, but with the options that musicians have with digital over analog, I think it becomes pretty clear that digital is better. I do, however, fully agree that analog produces superior sound quality, though not necessarily clarity. Prior to changing to a PC-based multi-track system, my recording setup consisted of an analog multi-track recorder along with analog mixers and signal processing equipment. I would then mix down my masters through the analog gear to a computer to digitize the sound. Many mastering engineers keep analog sound in the recording pipeline as long as possible to preserve as much of the warmth as possible, and this is the technique that I employed regularly to create my recordings. However, if I did not already have an analog multi-track recorder, I would go all digital with a new setup. From ease of use, compatibility with multiple software programs and hardware devices, to the ease of creating web-based files and your own CDs, digital is definitely the way to go.

UPDATE: Since writing this in the year 2000, I no longer use analog at all. AD/DA converters have taken a huge leap in quality and I now prefer to use digital only. I am getting GREAT sounds and no longer feel that analog has an edge in sound quality over digital. Of course, you need to be using high-quality converters, not your typical PC sound card, to realize this quality.

Back to the Top


My software will record at 24-bit 96Khz sampling rate. Since this is twice that of CD quality, shouldn't I use this for my recordings?

No. If you are recording for the purpose of distributing your music, your master needs to be recorded at a 16-bit 44.1kHz sampling rate. You will not be able to encode an audio CD from your master if your files are not in this resolution. Some people choose to record their tracks at a higher resolution (e.g. 24-bit 96kHz) and then re-sample to 16-bit 44.1kHz to create their masters. The theory behind this practice is that they are preserving the best possible sound quality through the mixing stage until the final step, when they are forced to re-sample to the lower resolution of 16-bit 44.1kHz. While this may sound like it carries some merit, keep in mind that the highest quality sound available is the sound that is present before digital conversion takes place. Therefore, I question the aforementioned theory in regard to keeping the best possible sound until the final master. My recommendation is to sample the sound once at 16-bit 44.1kHz during the initial capture and work to get the best possible sound from there. Since this practice creates smaller audio files, it will save you storage space and time (smaller files process faster), and you will not have any unpleasant surprises that may occur by re-sampling to a lower resolution. Since the 44.1kHz sampling rate is more than twice the range of human hearing, you should not be able to hear a difference anyway. For more info, see the section on Digital vs. Analog.

Back to the Top


What types of file formats can you produce my music into?

Pretty much anything that you can think of. The most common are Windows PCM (.wav), Windows Media (.asf), MP3 (.mp3), and Real Media G2 (.rm).

Here is a full list:

  • 8-bit signed (.sam)
  • A/mu-Law Wave (.wav)
  • ACM Waveform (.wav)
  • Ad Lib Sample (.SMP)
  • Amiga IFF-8SVX (.iff, .svx)
  • Apple AIFF (.aif, .snd)
  • Covox 8-Bit File (.V8)
  • Creative Sound Blaster (.voc)
  • Dialogic ADPCM (.vox)
  • DiamondWare Digitized (.dwd)
  • DVI/IMA ADPCM (.wav)
  • Microsoft ADPCM (.wav)
  • MP3 (.mp3)
  • Next/Sun (.au, .snd)
  • NMS vce (.vce)
  • Pika ADPCM (.vox)
  • Real Media G2 (.rm)
  • SampleVision (.smp)
  • VBase ADPCM (.vba)
  • Windows Media (.asf)
  • Windows PCM (.wav)
  • PCM Raw Data (.pcm, .raw)

Back to the Top

