Thoughts And Processes Relating To Recording And Mixing With Headphones.

(How Not To Hang Yourself By Mixing With Headphones)

by LXH2

It can easily be demonstrated that doing your final mixes with headphones can produce stellar results when the mixing engineer is familiar with the practice of mixing with headphones. But how is this possible? I have heard from so many mixing engineers that you cannot mix with headphones, for many reasons. The one heard most frequently is that things sound too good in headphones to get an accurate mix. How can this be true? If you can hear things better in headphones, then why not mix with them? The answer lies in the familiarity of the environment. Most likely the mixer has always mixed using loudspeakers, and can be rather confident that the mixes produced will sound correct on the largest variety of playback devices.

Then why mix with headphones? For one thing, loudspeakers interact with the room, so you hear the room’s effects while you are mixing. Sure, most of the best recording engineers can mentally ignore the room sound and create a mix that is not befuddled by room acoustics. But those of us with home studios usually do not have a professionally designed control room to work in. We also do not have the luxury of using the best-sounding room in the residence. So we are relegated to a little room that is virtually impossible to mix in. What sounded great while you were mixing it now sounds anemic when played elsewhere. What happened? All that bass piling up in such a small room, the inaccuracies of the “monitors”, and the total collapse of any semblance of a realistic soundstage. Wow!

Slip on a correctly calibrated pair of high quality headphones and listen to your favorite CDs for a while. Then listen to the mixes you made using speakers. Do you notice the difference? Are your mixes lacking clarity and life? Do not start blaming this piece of gear or that piece of gear. Most likely it is the result of mixing with loudspeakers in a bad room. Why not eliminate the room altogether? So let’s use the headphones!

Another advantage of mixing with headphones is the clarity of the sonic picture due to perfect time alignment inherent in headphones. You do not have to deal with the room resonances and phase cancellations due to reflections off the walls and the console. The time smear caused by the typical nearfield monitor situation can gloss over things you will want to hear clearly when you are mixing. This is where you can lose control of your mix and run into trouble using speakers.

Mixing with headphones will take some getting used to. You should make a copy of your mix and play it through various playback systems such as boom boxes and car stereos. Try listening in a variety of environments, not just the same little room. This will keep things in perspective, as a “reality check”. If your headphone mix does not sound good in these situations, then you are doing something wrong. But what are you doing wrong? Like any new skill, you must gradually familiarize yourself with the different acoustic space that headphones create. This takes some time and practice. You will know you have it down when your mixes sound correct on a variety of systems.

STEREO PLACEMENT

Good stereo placement in headphones requires that the mikes be placed in such a manner as to facilitate the sonic illusion you desire. There are ways to use stereo reverb to do this. This can be divided into two distinct arenas. One is how to create a convincing stereo image from a signal source that does not occur in real acoustic space, and the other is how to utilize stereo miking techniques to obtain a real stereo image that does not sound annoying in headphones. You would never be able to tell if a particular stereo field sounded really distressing in headphones if you only used the speakers in the control room.

Using a stereo miking technique during your tracking process:

There are several common stereo miking techniques, some of which are: M-S, spaced pair, ORTF, coincident pair and dummy head. For the purposes of this article, I will discuss a binaural “roving mike” method I have used with quite good results. I can record an entire song using only the roving mike system. Each instrument is recorded separately onto two tracks. When mixing such a song, I use no panning or reverb, just the levels of each stereo pair.

[Figure 1: the roving mike stand]

I built a special microphone stand from the base of an office chair, topped with a mike tee holding two short goosenecks (figure 1). This is quite easy to make yourself. The mikes can be moved around easily while maintaining the exact distance between them. This “Mike Tree” can be rolled around, and the height and spacing can be adjusted. With this method, you only adjust the spacing when it is required, not every time as you would with two conventional mike stands.

Now for the fun part. Start with a spacing of six inches between the mikes. Connect the mikes to a stereo monitoring amp driving a pair of headphones. I use the Beyer DT990 open-back phones. However, with these headphones, you will be able to hear what is going on in the room in addition to what is coming from the mikes. I am quite used to this effect and can easily differentiate the mike sound from the room sound. There is less of this effect with a pair of closed headphones. Move the roving mike stand around so that you get the sonic image you desire in the headphones. (Remember that rolling the mikes around while listening in the headphones at even reasonable volumes can damage your headphones due to the loud bass transients caused by the movement.) The way to do this is to turn your headphones down, move the mike tree to a location you think may sound good, give it a listen, and try to home in on the sweet spot that sounds best to you.

You can place an instrument anywhere from left to right naturally, without resorting to the panpots on your mixing console. Vary the distance between the instrument and the mike stand to change the perspective. The easiest analogy is to think of the mike tree as an aural camera. Use it as you would use a camera. You can get close-ups. You can place your subject anywhere in the sonic picture. Each sonic picture will take up two tracks. You can have several instruments playing together at the same time to save on tracks. Just get the balance you desire in the headphones.

ENHANCING MONO SIGNALS

So, what do you do with a mono signal source such as a keyboard, a close-miked vocal, a bass guitar plugged in direct and so on? A stereo reverb box can fool the ear into thinking the signal source was actually recorded in stereo. You may want the mono source to stay mono, but still create a sense of real acoustic space surrounding it. This is commonly done with the lead vocals on most pop albums.

For this purpose, I use the Alesis Nanoverb, at the current price of $99. Of course you can use a better and more expensive reverb, but I found the Nanoverb to do just fine if used judiciously. You certainly don’t want your vocals swimming in reverb. Remember, reverb is like certain spices: a little goes a long way. Should you put reverb on the bass guitar? Think of this. When you listen to a real band playing in a real room, do you think the room has absolutely no effect on the bass guitar? Just use a little, so that the bass is playing in the same room as the rest of the band, and not in some sound-proofed room somewhere. The same goes for any other mono source.
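As a rough illustration of the idea, the sketch below places a mono track into a small stereo space by convolving it with two decorrelated, decaying-noise impulse responses and mixing in a little of the wet signal. This is only a minimal stand-in for a real reverb unit such as the Nanoverb; the decay time, wet level and the use of noise bursts as impulse responses are assumptions made for the example, not settings from this article.

    import numpy as np
    from scipy.signal import fftconvolve

    def stereo_room(mono, fs, decay_s=0.8, wet=0.12, seed=0):
        """Add a light, decorrelated stereo reverb tail around a centered mono source."""
        rng = np.random.default_rng(seed)
        n = int(fs * decay_s)
        env = np.exp(-4.0 * np.arange(n) / n)            # exponential decay envelope
        ir_l = rng.standard_normal(n) * env              # two different noise sequences give
        ir_r = rng.standard_normal(n) * env              # decorrelated left/right tails
        ir_l /= np.sqrt(np.sum(ir_l ** 2))               # normalize the reverb energy
        ir_r /= np.sqrt(np.sum(ir_r ** 2))
        wet_sig = np.column_stack([fftconvolve(mono, ir_l), fftconvolve(mono, ir_r)])
        out = wet * wet_sig
        out[:len(mono)] += np.column_stack([mono, mono]) # dry source stays dead center
        return out

Because the dry signal is duplicated into both channels, the source itself remains mono while only the tails spread left and right.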

PLACING SOURCES TOO LOW IN THE MIX AND MIXING FOR DEEP BASS

One of the minor pitfalls of headphone mixing is placing things too low in the mix. This is because the clarity of headphones lets you hear things more easily than you would over speakers. You must learn to mentally compensate for this. Another pitfall is in the area of deep bass. Headphones will never give the visceral impact of good speakers in a good room. If you are not careful, you will tend to equalize the deep bass far too loud in your mix. You must develop an intuitive sense of how the bottom end will ultimately sound on a variety of playback systems. You could add substantial amounts of the deepest bass, but you will never hear it on those little portable systems. Experience will tell you what low-end characteristic is desirable.

TUNING THE BASS

The subject of “Tuning” the low end of the bass and the kick drum is a sticky one indeed. Many engineers have their individual ways to make the two fit together and sound correct in the mix. This can be done when miking the bass drum. Or it can be done after the fact with tools like equalization, compression, etc. The same holds true for the bass guitar. This is why many engineers mike the bass amp in addition to the direct sound. Experience works to your advantage here.

[Figure 2: the bass tuning circuit]

I have developed a circuit (figure 2) that tailors the low end of particular items such as the bass drum and bass guitar. It is used on individual tracks during mixdown, not across the entire mix. With the circuit, I can set the low-end rolloff characteristic and stored energy of each of these items separately. There is stored energy in the speaker/cabinet combination that adds body to the bass guitar, which you cannot get with typical equalizers. The tuning device allows you to get the bass drum to sit well with the bass guitar. There is little point in boosting the area below the cutoff of most speakers. As strange as it may seem, rolling off the bass content with a 12 dB per octave, tunable high-pass filter can actually give you a fatter and more defined bottom end than you would have if you let the response stay flat to 15 Hz. Of course you do not need to use any high-pass filters to get a great sounding mix. They are just one of many tools that are handy to have around.

You can build this circuit for each instrument that contains significant low frequency content. Of course it is far better to get these to sound correct by mike placement and actual tuning of the kick drum. It is also better to use the miked bass amp to get that punch you desire, if the room you are recording in allows. If your recording room is small and has the typical standing waves that most small bedrooms have, you are better off going direct with the bass. But you cannot go direct with the kick drum, drum machines notwithstanding.

The closer the mike is to the sound source in a given room, the less the room will have an effect on it. The tuning circuit applied to each of these low frequency signals can allow you to improve your bottom end. It is beyond the scope of this article to provide a primer on drum miking. Let’s just say, record it as best you can. It takes experience to get it right.

Using the tuning circuit:

There are two controls on the tuning circuit: Frequency and “Q”. The tuning circuit is a variable frequency, low cut filter with a resonance control, which is referred to as “Q”. Set the Q to minimum, and set the frequency to minimum. Things will sound pretty much as they did without the circuit.

Then increase the frequency of the filter. Notice how the sound gets thinner. Then increase the Q control until you hear an audible ringing at the frequency you selected. Now decrease the frequency control slowly until the desired tuning is achieved. Vary the Q control for the desired amount of resonance. Do this separately for the kick drum and the bass guitar. You will find settings that make them complement each other without stepping on each other sonically. In other words, the tunings should not be the same for both. You just have to fiddle around until it sounds best to you. Make a tape of this and listen to it on different systems, such as the car, the boom box, and other people’s stereos. Make a mental note of which tunings sound best in most places, and get used to how this sounds on your headphones!
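For readers who work inside a digital audio workstation rather than with an analog box, the sketch below approximates the same two-control idea in software: a second-order (12 dB per octave) high-pass filter whose resonance rises with Q. The biquad coefficients are the standard RBJ cookbook form, and the kick/bass test signals and the example frequency and Q settings are placeholders, not values from this article.

    import numpy as np
    from scipy.signal import lfilter

    def tuning_filter(x, fs, freq_hz, q):
        """Second-order (12 dB/octave) resonant high-pass, RBJ cookbook coefficients."""
        w0 = 2 * np.pi * freq_hz / fs
        alpha = np.sin(w0) / (2 * q)
        cosw0 = np.cos(w0)
        b = np.array([(1 + cosw0) / 2, -(1 + cosw0), (1 + cosw0) / 2])
        a = np.array([1 + alpha, -2 * cosw0, 1 - alpha])
        return lfilter(b / a[0], a / a[0], x)

    fs = 44100
    t = np.arange(fs) / fs
    kick = np.sin(2 * np.pi * 60 * t) * np.exp(-6 * t)   # stand-in for a kick drum track
    bass = np.sin(2 * np.pi * 82 * t)                    # stand-in for a bass guitar track

    # Give the kick and the bass different tunings so they do not fight each other.
    kick_tuned = tuning_filter(kick, fs, freq_hz=55.0, q=2.0)
    bass_tuned = tuning_filter(bass, fs, freq_hz=40.0, q=1.2)

As with the hardware version, raising the frequency thins the track, and raising Q adds an audible resonance at the chosen frequency.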

CHOOSING HEADPHONES

The advantages of mixing with headphones are the clarity of the sonic picture, the total lack of those annoying room resonances, and the portability of your mixes. But you must be sure to use only the highest quality headphones, free of tonal colorations. Otherwise mixing errors can occur.

You calibrate your brain to a particular pair of headphones by listening to many of your favorite CDs on them. As with any skill you develop, practice is essential. Learn to calibrate your ears to how a familiar CD sounds in the headphones as compared to speakers in various places. This is how you learn to get correct-sounding mixes using headphones. But you must use really good headphones. Go back to your reference CD often at first so that you can strengthen your aural memory. After a while you will not need to use your reference CD as often.

FINAL NOTES

Mixing with headphones will take some getting used to. One must refer back to speakers at times to keep things in perspective, as a “reality check”. If your headphone mix does not sound good in speakers, then you are doing something wrong.

Here are some mixing guidelines:

  • Keep bass frequencies in the middle and in mono. Be sure bass frequencies are not applied to the two channels out of phase, or else an unpleasant “pulling” sensation will occur. (A software sketch of both points follows this list.)
  • Try to record things in true stereo, using two mikes to maintain accurate sonic images. Use reverb judiciously to balance items placed to one side in the mix. Example: when putting a tambourine in the left channel, create real stereo space around it using a stereo reverb. Be sure to use enough reverb to create a convincing sonic image.
  • When using reverb on vocals, it is important to create a stereo reverb space around the vocals and not dead center. Real reverb does not occur dead center and in mono. Things should sound as they actually do in real space unless you are trying to create an effect.
  • Be wary of boosting lower bass. Headphones will not deliver bass to the entire body, so the usually pleasant sensation of bass is lacking. Excessive bass will have an annoying effect. Such excessive bass is also not good in mixes that will later be played through speakers, since most speakers will either be pushed into distortion or will reproduce this bass poorly, and amplifier power will be wasted. It is a good idea to check the mix with speakers to be sure that the bass tonal balance is correct.
  • If the bass tonal balance sounds good with speakers but has an annoying character with known high quality headphones, then the lowest bass frequencies should be cut in the mix. Such lower bass frequencies are usually wasted with speakers.
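The first guideline above lends itself to a simple software treatment. The sketch below, written under the assumption of a stereo mix held in a NumPy array of shape (n, 2), collapses everything below an arbitrary 120 Hz split point to mono and also computes a correlation figure for the low band, so an out-of-phase (“pulling”) bottom end shows up as a value near -1. The split frequency and the Butterworth crossover are illustrative choices, not a rule from the article.

    import numpy as np
    from scipy.signal import butter, filtfilt

    def mono_the_bass(mix, fs, split_hz=120.0):
        """Collapse everything below split_hz to the center; leave the rest untouched."""
        b_lo, a_lo = butter(2, split_hz / (fs / 2), btype="low")
        b_hi, a_hi = butter(2, split_hz / (fs / 2), btype="high")
        lows = filtfilt(b_lo, a_lo, mix, axis=0)
        highs = filtfilt(b_hi, a_hi, mix, axis=0)
        lows_mono = lows.mean(axis=1, keepdims=True)     # sum the low band to mono
        return highs + lows_mono

    def low_band_correlation(mix, fs, split_hz=120.0):
        """+1 means the lows are fully in phase; -1 means they largely cancel."""
        b, a = butter(2, split_hz / (fs / 2), btype="low")
        lows = filtfilt(b, a, mix, axis=0)
        left, right = lows[:, 0], lows[:, 1]
        return np.dot(left, right) / (np.linalg.norm(left) * np.linalg.norm(right) + 1e-12)

A correlation near +1 means the lows will sum cleanly to mono; a value near -1 is the out-of-phase case the guideline warns about.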

Of course, these are very general guidelines, and you should experiment and hear how what you do actually works. It will take some time, but you will gain the benefit of being able to get consistently good sounding mixes in a variety of control rooms without ever having to worry about the particular “sound” of the control room.

c. 1999, LXH2.
Author’s site: The Official LXH2 Website.

The Art of Monitoring and Mixing With Headphones.

Is it really an art to monitor and mix with headphones? There is certainly no dearth of discussion on the matter. Based on hours of reviewing newsgroup postings, articles, white papers and other pro audio literature, it must be so, since opinions on the subject range from gung-ho support of headphones for all monitoring and mixing to banning headphones outright from the studio and the stage. Performers worry about one set of concerns, while recording engineers fret over another. Musicians tend to be more accepting of headphones than recording engineers. Since consumers currently prefer loudspeaker fidelity over headphone fidelity, engineers who successfully monitor and mix with headphones must be able to correlate the two very different soundfields.

This article does not take a stance on whether headphones are better than loudspeakers in professional settings (even the most ardent champion of cans will say “it depends”), and instead examines techniques for using headphones in all monitoring situations. It may one day be possible to dispense with monitor speakers entirely, but the consensus is that day is not here yet. Nevertheless, when used with forethought and grounded in practical experience, headphones can challenge loudspeakers on every front. Further, in particularly demanding applications, a little technological wizardry can go so far as to fool the ears into believing that headphones are loudspeakers.

SETTING UP FOR A LIVE PERFORMANCE

Equipping The Performers

Hearing preservation as well as higher quality sound have been the driving forces behind the popularity of headphones and in-ear monitors for performers and audio engineers alike. Monitor speakers (wedges and sidefills) are the traditional means for musicians to hear themselves during a performance. The wedges are the primary sound sources, and the sidefills help to maintain uniform coverage across the stage. Most of the time, the sound image from such setups is washed out stereo, the image being best at a “sweet spot” on stage, which degrades if the performer moves around. Monitor speakers also have the bad habit of “spilling” off stage (due to wall reflections and the radiation pattern of the loudspeakers themselves) and compromising the soundfield from the house system. They are also prone to feedback if the microphones are not carefully placed. Perhaps the biggest concern is that monitor speakers (and concert sound systems generally) play so loudly that the musicians suffer hearing damage. Ear plugs are not an option since musicians onstage need to hear each other clearly.

Headphones can avoid most of these problems. The acoustic isolation of headphones allows each performer to listen to a monitor mix at a comfortable volume, with improved fidelity and dynamic range, and, when the equipment provides for it, to customize the monitor mix without affecting what others hear. During live performances, monitoring with standard headphones is pretty much restricted to rhythm sections and instrumentalists. Closed-ear headphones improve the intelligibility of a monitor mix and provide some attenuation of ambient noise as well. There is no feedback, since the output from the headphones is not audible to onstage microphones. For greater realism and interactivity, one or two stage monitors may be installed to supplement the headphone sound, or a performer may wear the headphones with one earcup off. However, these techniques can place hearing at risk.

Performers, such as vocalists, who like to move around the stage are not good candidates for headphone monitoring – primarily because headphones are too bulky. Instead, in-ear monitors permit mobility and provide acoustic isolation that is better than closed-ear phones. The popularity of in-ear monitors has displaced most, if not all, on-stage wedges and sidefill monitors. The earpieces of in-ear monitors are typically canalphones with excellent attenuation of ambient noise. (See A Quick Guide To Headphones and Preventing Hearing Damage When Listening To Headphones for more information about canalphones.) Performers are free to roam about the stage, toting wireless receivers to drive the canalphones and listening to a personalized mix in aural privacy. Without the higher SPLs of stage monitors, vocalists additionally benefit from reduced vocal fatigue, as they no longer have to strain their voices singing over the mix. See STUDIO MONITORING AND MIXING for a special discussion about vocalists and headphone monitoring.

Tips on Setting Up The Monitor Mix

A monitor mix for headphones and in-ear monitors has different requirements than one for loudspeakers. Headphones sound more detailed, offer true stereo (as opposed to the washed out image of stage monitors), and isolate the listener from ambient sounds. Consequently, a good headphone mix is more difficult to achieve. At the same time, since they can hear better and are listening to higher quality sound, performers are more likely to demand custom mixes. Here are some tips from Steven McCale and other engineers familiar with the art of mixing with ear monitors:

  • The mix must have everything that a musician would hear without headphones: drum overheads, left and right keyboard feeds, video sound.
  • Use audience microphones to lessen the isolation of headphones.
  • Always provide a stereo mix, for greater intelligibility and so that each musician has a spatial location.
  • For a more natural sound, add special effects such as reverb, harmonizers and delays to the mix.
  • Be prepared to provide for a large number of custom mixes.
  • Every ear monitor should have a limiter to prevent hearing damage – especially important in live performances, where spurious noise in a sound system could be amplified to deafening levels. (A bare-bones limiter sketch follows this list.)
  • Install gates on drums and compress bass guitars slightly for lively, controlled sound.
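To make the limiter recommendation concrete, here is a minimal peak limiter sketch operating on a mono feed held in a NumPy array scaled to ±1.0. The threshold and release time are illustrative assumptions; a real in-ear rig would rely on a dedicated hardware or DSP limiter with proper look-ahead and metering.

    import numpy as np

    def peak_limiter(x, threshold=0.5, release_samples=2205):
        """Instant-attack peak limiter with a smooth gain recovery between peaks."""
        gain = 1.0
        release_coeff = np.exp(-1.0 / release_samples)
        out = np.empty_like(x)
        for i, sample in enumerate(x):
            peak = abs(sample)
            if peak * gain > threshold:                 # would this sample exceed the ceiling?
                gain = threshold / peak                 # clamp immediately (instant attack)
            out[i] = sample * gain
            gain = 1.0 - (1.0 - gain) * release_coeff   # drift back toward unity gain
        return out

The gain drops instantly whenever a sample would exceed the threshold and then recovers slowly, which is the basic behavior any protection device in an ear-monitor chain needs.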

A setup designed for headphone monitoring is vital for creating a good mix. The average console may not have enough inputs or outputs. Headphone mixer amplifiers have multiple inputs for a main stereo mix, various subgroup mixes (such as drums, background vocals or keyboards) and possibly a separate effects loop. Individual level controls let musicians dial in their own custom mixes for each headphone output. Remote mixers, such as The Psychologist from Intelix, are the most convenient of all, and free up console outputs besides. As performers may be using headphones with different efficiencies and listening requirements, booster amplifiers can augment the individual outputs of the mixer. The better headphone distribution amplifiers have two inputs per output – one for the main mix and another for a custom mix – and musicians can select between them at the flick of a switch.

Enhancing Headphone Sound

Two major sources of dissatisfaction with headphone monitoring are the quality of the low-end response and the distorted spatialization of headphone soundfields (“in-each-ear” and “inside-the-head”), which many performers find uncomfortable. Regardless of frequency response, headphones do not convey the strong sensation of bass that loudspeakers do, which some musicians (like bass players) demand, because low frequency perception is more physical than aural. Bass notes are conducted through the bones of the body, and merely hearing them lacks impact. In that case, headphone monitoring is still an option if supplemented with vibration transducers such as “shakers” and subwoofers, which add a physical sensation to bass. See A Quick Guide To Headphone Accessories for more information about vibration transducers.

The distorted perspective of headphones can be mitigated by first processing the mix through an acoustic simulator such as a crossfeed filter. Where crossfeed processing is not sufficient, an auralization processor (virtualizer) applies more complex processing to achieve true 3-D spatialization. Virtualizers were once implemented with expensive computers and software, but are now available in consumer audio gear. They can be added as an outboard processor to an existing monitoring system. Acoustic simulators are sold as separate devices, as components of headphone amplifiers and surround sound decoders, and even as accessories with headphones. Many PC sound cards feature 3D sound outputs for headphones. Be careful to distinguish between acoustic simulation for headphones and for loudspeakers (acoustic simulation for loudspeakers generates surround sound from stereo loudspeakers). Some in-ear monitors, such as AKG’s IVM1, have a built-in virtualizer. For more information about acoustic simulators, see A Quick Guide To Headphone Accessories, An Acoustic Simulator For Headphone Amplifiers and Technologies for Surround Sound Presentation in Headphones.

STUDIO MONITORING AND MIXING

More Monitoring Options For Performers

In the studio, setting up the monitor mix for performers may be slightly less complex, since the performance is not live. Most of the principles of setting up a mix for a live performance still apply. In this less formal atmosphere, there is more flexibility in configuring the headphone system. If hearing conservation is not an issue, then open-air phones are more comfortable than closed-ear types. However, if played too loudly, open-air types are prone to leak or bleed sound into microphones, so should be offered to performers as a second choice. Remote mini-mixers, which could be a distraction onstage, are a blessing-in-disguise in studios and let musicians instantly customize their mix, thus freeing engineers to focus on other things.

Vocalists are most comfortable hearing their own voices. Where hearing conservation is not an issue, vocalists can monitor with open-air headphones (again, being careful to avoid sound bleeding into the mike). Closed-ear headphones are also workable, with one earcup off the ear to let in ambient sound (mute the channel to the floating earcup to minimize sound bleed). If a vocalist wears in-ear monitors or closed-ear headphones without compromising the acoustic isolation, avoid making the voice so prominent in the mix that it sounds close-miked and unnatural – which can then cause singers to restrict their sound. In particular, vocalists who use in-ear monitors (canal-type headphones) can hear their own voices very clearly due to the occlusion effect. A compressor in the mix can help in these situations. (Apparently, some engineers raise the level of the vocalist’s mix as a natural form of compression for vocalists.) If vocalists report that they cannot hear themselves in headphones, try reversing the phase on the microphone to see if the vocalist’s voice is in phase with the voice in the headphones.

Acceptance From Recording Engineers

Headphone monitoring is also gaining converts among recording engineers, many of whom have discovered the advantages of monitoring with headphones over loudspeakers. From the console operator’s point of view, the soundfield of headphones is more detailed, so that any problems in a mix are easier to spot. However, engineers will often draw a line between using headphones for tracking and for mixdown. Headphone mixes can sound terrible when played back over loudspeakers, due to the different characteristics of the soundfields such as frequency response, interchannel crosstalk and spatialization.

Whatever the reason (hearing conservation, budget, equipment, preference), there are success stories about mixdowns done through headphones. While it isn’t easy to correlate headphone sound with loudspeaker sound, it can be done. An understanding of psychoacoustics is a good beginning. Good mixdowns with headphones are a matter of practice (and a few tips don’t hurt either). Of course, the final result should always be checked on loudspeakers.

The Challenge of Mixing with Headphones

The close proximity of headphone transducers to the ears affects how the audio spectrum is perceived. The lack of physical sensation of deep bass in headphones was discussed earlier. Headphones also tend to be brighter than loudspeakers, because the air attenuates high frequencies from speakers before they reach the ears. Headphones direct all sounds straight to the eardrums, bypassing the acoustic shaping that occurs when sound interacts with the listener’s head. Many headphones are now “diffuse-field” equalized so that they sound flat from within the ear canal – although that equalization is based on an average head shape and may not be a good match for every listener. For more information on HRTFs and diffuse-field equalization, see A 3-D Audio Primer and A Quick Guide To Headphones.

Loudspeakers play in a real acoustic space. Headphones sound artificial because each audio channel is isolated to one ear. Sound waves from loudspeakers interact with each other (interchannel crosstalk), with wall reflections and with the listener’s head before they reach the ears. The resulting soundfield is a complex amalgam of phase-shifted amplitudes, which may amplify, cancel and/or delay select frequencies. It is impossible to determine through standard headphones how the phase and amplitude variations in one audio channel will affect another when played back over loudspeakers. Consequently, an otherwise smooth headphone mix can have a decidedly rough quality when heard through loudspeakers.

Acoustic simulators can improve the distorted perspective in headphones. Even the simplest of acoustic simulators, a crossfeed processor, can recreate interaural crosstalk in headphones for a more natural presentation. Beyond mere crosstalk is the whole issue of true spatial perception. Binaural recordists must monitor with headphones to hear spatial information, but the narrower soundfield of headphones can result in regular stereo mixes sounding almost monaural over loudspeakers. Until surround sound came into vogue, few audio engineers spoke of mixing for true spatial placement. Yet, with a good microphone configuration to capture localization cues, a stereo soundfield from quality loudspeakers can reproduce a sense of spatial depth and height as well as left-right width. The average headphone is not capable of re-creating these spatial artifacts in a stereo recording.

In terms of spatial editing, engineers have had a limited set of spatial options for stereo recordings: pans, delays, reverberation and other special effects. Headphones are perfectly good for auditioning spatial effects, unless the effects phase shift signals so that they sound different when heard over loudspeakers. And of course, standard headphones trap the soundfield in a straight line between the ears, so are of little value for directing placement of voices and instruments in a 3-D surround field.
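For completeness, here is a minimal sketch of two of the spatial tools mentioned above, a constant-power pan and a short inter-channel (Haas-style) delay, applied to a mono track held in a NumPy array. The pan law and the 15 ms delay are conventional textbook values, not figures taken from this article.

    import numpy as np

    def constant_power_pan(x, pan):
        """pan: -1.0 = hard left, 0.0 = center, +1.0 = hard right."""
        angle = (pan + 1.0) * np.pi / 4.0               # map pan to 0..pi/2
        return np.column_stack([x * np.cos(angle), x * np.sin(angle)])

    def haas_delay(x, fs, delay_ms=15.0, delayed_side="right"):
        """Place a mono source by delaying one channel a few milliseconds."""
        d = int(fs * delay_ms / 1000.0)
        direct = np.concatenate([x, np.zeros(d)])
        delayed = np.concatenate([np.zeros(d), x])
        pair = (direct, delayed) if delayed_side == "right" else (delayed, direct)
        return np.column_stack(pair)

Either effect can be auditioned on headphones; the caution in the paragraph above applies mainly to effects that shift phase between the channels.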

Engineering A Realistic Acoustic Environment In Headphones

While many of the same techniques for making headphones sound more natural to musicians are easily adapted to recording engineers, the importance of having a close correlation between headphone and loudspeaker sound demands careful selection of headphone equipment and application of techniques. First and foremost, if they don’t already own a pair, engineers should audition diffuse-field equalized headphones, which are designed to sound flat inside the ear canal. Diffuse-field equalization is a fairly common product feature nowadays (for example, many of the AKG and Sennheiser phones are diffuse-field equalized).

If more than one engineer is participating in a session, giving everyone the same brand and model of phones set at the same gain will help with consistency of perception (if not peace). Closed-ear phones and in-ear monitors have the clearest and most extended reproduction, while attenuating ambient noise. Also, phones with good acoustic isolation are better for monitoring low-level (85-90 dB SPL) mixes, which sound better on consumer audio systems. See A Quick Guide To Headphones for more information about diffuse-field equalization.

When monitoring with headphones, Fred Ginsberg of the Equipment Emporium recommends learning to set levels by ear instead of eye. The headphone level should be adjusted so that the 0 VU reference tone sounds as loud in one’s head as a loud telephone conversation – uncomfortable, but NOT painfully loud. Shouts and emphasized vocals should only briefly jump into the zone on a VU meter. LXH2 in his article Thoughts And Processes On Mixing With Headphones suggests the use of a tuned bass circuit when mixing low frequency content.

Whether or not the phones are diffuse-field equalized, binaural recordist Ron Cole suggests equalizing headphones with the biophonic curve as a guide, to compensate for ear canal resonances and other spectral differences between loudspeakers and headphones. Biophonic EQ (as well as any other signal processing mentioned below) is for listening purposes only, so the equalizer should be inserted just before the headphone amplifier. The biophonic curve is only a guide, and experimentation is encouraged. For more information about the biophonic curve, see Taking Audio In Another Direction.

Most headphones tend to image between the ears or in the back of the head. A binaural microphone system can help to create a realistic headphone sound field (see Thoughts And Processes On Mixing With Headphones). A simple technique for pulling the soundstage forward with supra-aural (on-the-ear) phones is to wear the earcups slightly lower and forward on the ears. Try out various positions to get the best localization and depth. The goal is to get the sound to enter the ears at an angle and engage more of the HRTFs of normal hearing. Unfortunately, this trick does not work as well with circumaural phones, which are designed to remain in a fixed position on the ears.

Acoustic simulators (crossfeed filters and virtualizers) electronically recreate the properties of a true acoustic space in headphones. The inability to hear interchannel phase effects on a recording is a major obstacle to using headphones for mixing. A simple crossfeed processor can mimic this aspect of acoustic space by introducing crosstalk between the channels, so that phase effects can be heard. In fact, a good crossfeed processor avoids overemphasizing phasey artifacts on recordings. At the same time, the processing smooths out the sonic image inside the head – no more stereo echoes bouncing off the ears or holes in the soundstage. Moreover, by reducing the exaggerated stereo effect of headphones, crossfeed can help produce a better stereo mix.
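A crossfeed processor is simple enough to sketch in a few lines. The version below, assuming a stereo mix in a NumPy array of shape (n, 2), low-passes each channel, attenuates it, delays it slightly to imitate the interaural time difference, and adds it to the opposite channel. The 700 Hz corner, the attenuation and the 300 microsecond delay are typical published crossfeed values, not the specification of any particular commercial unit.

    import numpy as np
    from scipy.signal import butter, lfilter

    def crossfeed(mix, fs, corner_hz=700.0, atten_db=-9.0, delay_us=300.0):
        """Feed each channel, low-passed, attenuated and delayed, into the other channel."""
        b, a = butter(1, corner_hz / (fs / 2), btype="low")
        gain = 10 ** (atten_db / 20.0)
        d = max(1, int(round(fs * delay_us / 1e6)))
        left, right = mix[:, 0], mix[:, 1]
        feed_to_left = np.roll(lfilter(b, a, right), d) * gain
        feed_to_right = np.roll(lfilter(b, a, left), d) * gain
        feed_to_left[:d] = 0.0                          # clear the samples wrapped by np.roll
        feed_to_right[:d] = 0.0
        out = np.column_stack([left + feed_to_left, right + feed_to_right])
        return out / max(np.max(np.abs(out)), 1.0)      # keep the result from clipping

With this much crosstalk in place, interchannel phase problems that would otherwise be inaudible on headphones begin to show up, much as they would over loudspeakers.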

For the most part, crossfeed simulators are electronic devices, but there are other options: headphone designs that provide an acoustical form of crossfeed (such as the AKG K1000s) and PC-based crossfeed applications. See A Quick Guide To Headphone Accessories and the HeadWize Projects Library for more information on crossfeed processors.

Note: Spatial enhancers are circuits that phase-invert the crossfeed to achieve a more spacious sound in headphones by adjusting the amount of ambient sound (such as reverberation) in a recording. Because these types of acoustic simulators dramatically alter the phase of the crossfeed, they are NOT suitable for checking interchannel phase interaction in recordings. In general, audition and experiment with acoustic simulators to become familiar with the characteristics of the sound fields that they create.

If an engineer insists on the utmost realism when mixing with headphones, then a headphone virtualizer (auralization processor) may be just the ticket. Virtualization takes acoustic simulation a huge leap beyond crossfeed to completely externalize the headphone soundfield outside the listener’s head. Virtualizers simulate a virtual loudspeaker array inside regular headphones. Virtual reality headsets would not be convincing without them. For example, a stereo virtualizer will simulate a soundfield of two in-front loudspeakers. A surround virtualizer recreates five (or more) virtual speakers around the listener’s head.

Virtualizers also let the listener adjust many of the acoustic characteristics of an audio signal to mimic a variety of acoustic spaces – ranging from a large, reverberant concert hall to a small nightclub (add a vibration transducer to enhance the physical sensation of low frequencies for greater realism). Some virtualizers may be calibrated for each listener’s head-related transfer functions to generate highly accurate spatial cues. There are also special headphones with motion sensors (or standard phones with an add-on motion sensor) to vary the perspective of the sound field as the listener’s head moves.
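At its core, a stereo virtualizer convolves each channel with a pair of head-related impulse responses (HRIRs), one per ear, for a virtual loudspeaker position, then sums the results at each ear. The sketch below shows only that core operation; the HRIR file names are hypothetical placeholders, both HRIRs are assumed to have the same length, and a real product would add room reflections, equalization and (as noted above) possibly head tracking.

    import numpy as np
    from scipy.signal import fftconvolve

    def virtualize(mix, hrir_left_spk, hrir_right_spk):
        """mix: (n, 2). Each HRIR: (m, 2) = that speaker's response at (left ear, right ear)."""
        left_ch, right_ch = mix[:, 0], mix[:, 1]
        left_ear = (fftconvolve(left_ch, hrir_left_spk[:, 0]) +
                    fftconvolve(right_ch, hrir_right_spk[:, 0]))
        right_ear = (fftconvolve(left_ch, hrir_left_spk[:, 1]) +
                     fftconvolve(right_ch, hrir_right_spk[:, 1]))
        return np.column_stack([left_ear, right_ear])

    # Hypothetical usage with a measured HRIR set (file names are placeholders):
    # hrir_l30 = np.load("hrir_minus30deg.npy")   # virtual speaker 30 degrees left
    # hrir_r30 = np.load("hrir_plus30deg.npy")    # virtual speaker 30 degrees right
    # binaural = virtualize(mix, hrir_l30, hrir_r30)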

Acoustic simulators are sold as separate devices, as a feature of headphone amplifiers and surround sound decoders, as headphone accessories, and more recently, as plugins for PC-based music players. Many PC sound cards offer 3D sound outputs for headphones, based on technologies from companies such as Aureal Corporation, SRS Labs and Creative Labs. (Note: Be careful to distinguish between acoustic simulation for headphones and for loudspeakers. Acoustic simulation for loudspeakers generates surround sound from stereo loudspeakers.) There are also advanced PC cards and software (from companies like Lake Technology and WaveArts, Inc.) that turn PCs into full-blown acoustic modelling workstations.

When purchasing an acoustic simulator and, in particular, a virtualizer, careful and extensive auditioning is a must, as the quality of the image depends heavily on how well the processor approximates an individual’s HRTFs. For example, head-movement tracking may be critical for 3-D hearing for one person, but not another. Check the environmental adjustments to make sure that the various simulations are realistic. All Dolby Headphone virtualizers are pre-configured to simulate a Dolby Reference Room 1, which acoustically models a small, well-damped room appropriate for both movies and music-only recordings. Additionally, DH virtualizers may simulate a Room 2 (a more acoustically live room particularly suited to music listening) and/or a Room 3 (a larger room, more like a concert hall or movie theater).

[Image: QSound iQfx 2.0 plugin for Real players]

Fortunately, the popularity of PC-based music players (especially MP3-type players) has spurred the creation of low-cost and no-cost acoustic simulation plugins from most of the major 3D technology companies. QSound, SRS Labs and Lake Technology are just a few of the companies that have written plugins for PC-based music players such as Winamp and RealPlayer. These plugins are an inexpensive (often free for evaluation and less than $30 to purchase) means of evaluating the various competing 3D sound processing algorithms. However, poor performance from a plugin by a particular vendor does not necessarily reflect on the performance of any hardware implementations of that same technology.

Prices have fallen on hardware simulators as well. New (and improved) consumer devices, such as the Sennheiser DSP360, are appearing with MSRPs of less than $100. AKG’s Hearo wireless headphones have virtualizer circuitry inside the wireless transmitter. The Hearo 999 Audiosphere is full-featured enough for studio use. For more information about acoustic simulation and virtualizers, see A Quick Guide To Headphones, A Quick Guide To Headphone Accessories, A 3-D Audio Primer and Technologies for Surround-Sound Presentation in Headphones.

Engineers involved in surround-sound mixing have an alternative to virtualizers: surround-sound headphones. The standard 4-channel headphone, run directly from a 4-channel amplifier, does not produce a realistic surround field, because it does not integrate the listener’s HRTFs. Research indicates that these phones could sound more like loudspeakers with interchannel crosstalk added, a small delay on the rear channels (around 30 to 50 ms) and shorter delays on the crosstalk feeds (5 to 10 ms). Another non-electronic solution for virtualization is Sennheiser’s Surrounder sound collar, which projects a sound field around the listener’s head. However, the Surrounder provides no acoustic isolation and is a different experience in feel and fit from headphones. For more information about surround headphones (including recent developments in 4-channel phone design), see A Quick Guide To Headphones and Technologies for Surround-Sound Presentation in Headphones.
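The delay scheme described in that research is easy to prototype in software. The sketch below assumes four equal-length channels (front left/right and rear left/right) as NumPy arrays, delays the rear pair by roughly 40 ms, and adds attenuated crosstalk feeds delayed by about 7 ms. The 0.3 crosstalk gain is an illustrative assumption; the delay ranges come from the text above.

    import numpy as np

    def delayed(x, fs, ms):
        """Delay a fixed-length channel by ms milliseconds, padding the start with silence."""
        d = min(int(fs * ms / 1000.0), len(x))
        out = np.zeros_like(x)
        out[d:] = x[:len(x) - d]
        return out

    def four_channel_phone_feed(fl, fr, rl, rr, fs,
                                rear_ms=40.0, xfeed_ms=7.0, xfeed_gain=0.3):
        """Return (front L, front R, rear L, rear R) driver feeds for a 4-channel phone."""
        rl_d, rr_d = delayed(rl, fs, rear_ms), delayed(rr, fs, rear_ms)
        front_left = fl + xfeed_gain * delayed(fr, fs, xfeed_ms)
        front_right = fr + xfeed_gain * delayed(fl, fs, xfeed_ms)
        rear_left = rl_d + xfeed_gain * delayed(rr_d, fs, xfeed_ms)
        rear_right = rr_d + xfeed_gain * delayed(rl_d, fs, xfeed_ms)
        return front_left, front_right, rear_left, rear_right

Each returned channel would feed the corresponding driver of a 4-channel headphone.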

Addendum

8/15/98: added section re: technique for improved in-front localization with supra-aural headphones.

7/10/99: updated section Engineering A Realistic Acoustic Environment In Headphones.

12/18/99: revised sections discussing acoustic simulation technologies.

5/1/00: added discussion of using Dolby Headphone from a mixing console.

8/4/00: added section on acoustic simulation plugins for PC-based music players.

References: Much of the research for this article came from a long (and laborious) review of audio newsgroups, where gabby engineers enthusiastically share their tips and techniques. There is no way to list them all, so I must thank them en masse.

__, "In-Ear Monitoring," Garwood Communications, c. 1997.
__, "How To Mix In-Ear Monitors From The FOH Console," Garwood Communications, c. 1997.
__, "How To Mix In-Ear Monitors From The Monitor Console," Garwood Communications, c. 1997.
__, "What The Pros Say About Garwood In-Ear Monitoring," Garwood Communications, c. 1997.
Frink, Mark, "Monitor Lessons," Mix, January 1998.
Ginsberg, Fred, "Headphone Levels," c. 1999.
McCale, Steven, "Earphone Monitoring," Mix, May 1996.
Santucci, Michael, "Personal Monitors: What You Should Know," Mix, May 1996 (republished at Sensaphonics site).
Santucci, Michael, "Musicians Can Protect Their Hearing," Sensaphonics Hearing Conservation, c. 1997.
Santucci, Michael, "Protecting The Professional Ear: Conservation Strategies And Devices," Sensaphonics Hearing Conservation, c. 1997.
Schulein, Robert, "Dolby Surround," Mix, November 1987.

c. 1998, 1999, 2000, 2001 Chu Moy.