CLASSICAL GUITAR: On a typical recording project with a solo guitarist, how many mics are you using on the guitar, and can you describe approximately where you place them in relation to the guitar and in the room?
JOHN TAYLOR: In terms of mic techniques, the main change I made at an early stage, in the mid-1980s, was to abandon the crossed-pair configuration—known as X-Y stereo—in favor of the spaced pair—called A-B stereo. An X-Y pair of highly directional mics placed close together and pointing towards the left and right, with an angle of around 90° between them, will give a clear spatial image of a group of musicians when played through a pair of loudspeakers.
But there is another way of creating a stereo image using a single pair of mics—one that is in some ways closer to the way our ears give us a sense of space and direction in real life. Although the ears have some directionality in their pickup, because of the shielding effect of the head and the peculiar shape of the pinna, the eardrum itself responds more like an omnidirectional mic capsule—that is, it senses the fluctuations in air pressure without “knowing” or “caring” where the sound is coming from. Much of the directional information reaching our brains comes in the form of differences in the timing of sounds arriving at the two ears: For example, a handclap coming from somewhere to the right will produce a disturbance of the air that travels as a sound wave and reaches the right ear just before the left ear. This timing information plays a big part in our localization of sounds, and it means, for example, that if you set up a pair of omnidirectional mics spaced about a head’s width apart, record a group of instruments or singers with this pair placed side-by-side in front of them, and listen back on headphones, you’ll get a very clear image in your head, not only of where everyone is, but also of the space where they are performing. Unfortunately, the image on loudspeakers tends to be less clear and stable, but it can still be pretty good, especially if the spacing between the mics is increased somewhat.
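The interaural timing cue Taylor describes is easy to quantify. Here is a minimal sketch of the arithmetic; the 343 m/s speed of sound and 18 cm ear spacing are round illustrative figures (not values from the interview), and the straight-line path model ignores the head's shadowing effect.

```python
import math

# Illustrative sketch: interaural time difference (ITD) for a distant
# sound source, using a simple straight-line path-length model.
SPEED_OF_SOUND = 343.0  # m/s, in air at about 20 degrees C
EAR_SPACING = 0.18      # m, roughly a head's width

def interaural_delay_ms(angle_deg):
    """Approximate delay between the two ears for a distant source at
    angle_deg off-center (0 = straight ahead, 90 = directly to one side)."""
    extra_path = EAR_SPACING * math.sin(math.radians(angle_deg))
    return 1000.0 * extra_path / SPEED_OF_SOUND

# A handclap from directly to the right arrives at the left ear
# roughly half a millisecond after the right ear.
print(round(interaural_delay_ms(90), 3))  # about 0.525 ms
```

Delays on this sub-millisecond scale are the cue a spaced omni pair preserves and an X-Y coincident pair deliberately discards.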
My typical way, for what it’s worth, is to use a single matched pair of high-quality omni mics, mounted side-by-side on a stereo bar, about 36 cm (14 inches) apart, and placed not directly in front of the guitar body—which tends to sound rather harsh—but a little higher off the floor, and slightly off to the side; usually to the right from the guitarist’s point of view, i.e. nearer to the bridge than to the soundhole or fingerboard. I take the bridge to be close to the “epicenter” of the sound, and therefore the distance of the mics from the bridge is the most critical thing to get right. The optimum distance can vary quite widely according to how reverberant or otherwise the acoustic [of the room] is, but in the church I most often use—in Weston, Hertfordshire, England—it’s usually around 165 cm (65 inches).
NORBERT KRAFT: At the beginning of each session, I drag along about eight different pairs of microphones in order to try to match a mic’s “personality” to the sound the player is making. I know a number of engineers who simply bring the same mics, trusting that they are “flat” and that they grab exactly the sound being produced. However, there is much more to consider. Each player—even on the same instrument—has a different tone production and projection, and then, mixed with the characteristics of the room, this becomes a tapestry of sounds. This can be a nightmare to the inexperienced, or a fantastic field of discovery for the engineer with good ears and a knowledge of how his microphones work. This part of the session always takes an hour or more, and relies heavily on experimentation and comparative listening tests until the right combination of mics and exact placement are found.
Sure, there are some “formulas,” but the really fine nuances that make the difference between a satisfyingly musical recording and one that is either nasty and too close, or woolly and too far away, can only be found this way. To give an idea: once I have chosen the best mic for the job, we are playing with distances that may vary at most one or two inches—usually less—in any direction: up, down, farther apart, or closer in. In my acoustic, which is quite “wet” [ambient], I am typically working at a distance of about 45–55 inches (115–140 cm) from the guitar, with my mics spaced about 18–24 inches (46–61 cm) apart.
RICARDO MARUI: I’ve always worked with two pairs of microphones. What has changed today is the types of microphones I work with. I use omnidirectional microphones and one stereo ribbon microphone, but even these have their deficiencies. Despite all advancements, such microphones are still better at capturing what is immediately in front of them than the sound of the space as a whole.
I see the guitar as a device that emits sound in two different regions: the area of the bridge, where all the weight of the sound is located, and the area between the neck and body of the guitar, where the sound becomes defined. I’m putting this very simply, here.
My placement of the microphones seeks to somehow combine the sound of these two regions of the instrument, resulting in a single sound that is at the same time present, defined, and weighty, while avoiding phase cancellation. This is very hard to explain, because it comes only with many years of constant practice. It is also difficult because microphone placement varies according to the instrument, the musician, and the acoustics of the space.
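The phase cancellation Marui guards against can be illustrated with a little arithmetic. In this sketch the distances are hypothetical, not his actual placements; it simply shows why a path-length difference between two mics picking up the same source carves notches into their combined sound (comb filtering).

```python
# Illustrative sketch: when two mics sit at different distances from
# the same source and their signals are summed, any frequency whose
# half-period equals the arrival-time difference is cancelled.
SPEED_OF_SOUND = 343.0  # m/s

def first_cancelled_frequency_hz(dist_a_m, dist_b_m):
    """Lowest frequency fully cancelled when the two mic signals are
    summed at equal level, given each mic's distance from the source."""
    delay_s = abs(dist_a_m - dist_b_m) / SPEED_OF_SOUND
    return 1.0 / (2.0 * delay_s)  # the delay equals half the period

# Hypothetical placement: one mic 1.00 m from the bridge, the other
# 1.17 m away. A 17 cm path difference puts the first notch near
# 1 kHz, right in the guitar's midrange.
print(round(first_cancelled_frequency_hz(1.00, 1.17)))
```

This is why the inch-by-inch adjustments the engineers describe matter: moving one mic a couple of inches shifts the whole notch pattern, audibly changing the blend.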
If you have no objection, could you mention the models of some of the mics you use?
MARUI: In places where I need to capture the sound more closely, avoiding ambient sound, or in live recordings with amplification, I usually use a Neumann KM 184, a cardioid condenser [directional] microphone with a small diaphragm, whose characteristics, in my opinion, are well suited to the sound of the classical guitar.
For CD recordings in studios or on location, my preferred combination is the DPA 2006 or 4006 and the Royer SF-24 [ribbon mic], with a Millennia HV-3D microphone preamplifier. In these settings, I’m always thinking about capturing the sound of the whole—the combined sounds of the instrument and the space.
KRAFT: I’ve almost exclusively been using hand-made microphones by the Dutch electronics genius Rens Heijnis, under the trade name of Sonodore. I first discovered his work nearly 20 years ago through the fabulous mixing desk and preamps he made me, and then his original RCM 402 true omni microphones. They were a revelation to my ears, and from then on my other name-brand mics—Neumann, DPA, AKG, etc.—have been gathering cobwebs. Over the years he has developed several different models, based on different capsules, but all with his fantastically pristine electronics. They are like a perfect “window” on the performance, but without the clinical sterility of a lot of claimed “flat-response” microphones. They are such a musical complement to the performance. A few years ago, after much prodding from me, he finally conceded to making a tube version of his large-diaphragm mic, and it’s a miracle! It has the openness and clarity, combined with the palpable richness, of the older tube mics from the 1960s—but without their inherent noise problems—and a stunning sense of “being there” that no other mic I have owned comes close to.
TAYLOR: I have no problem about mentioning specific brands or models, mainly because I’m obscure enough that it’s not going to make any difference to their sales, one way or the other! A pair of DPA 4006 omnis was my main weapon of choice for many years, and I still use them from time to time. For example, I find that they make Nigel North’s Baroque lute sound magnificently full and solid. However, in recent years I have been using a matched pair of Schoeps MK2 mics for most of my guitar recordings. These are similar to the DPA 4006 in that they are small-capsule omnidirectional mics designed for tonal accuracy—rather than hyping-up any particular frequency range—and they sound very realistic and transparent. But the Schoeps have a slightly different sonic character, with a delicate, “silvery” quality in the treble range.
Incidentally, I’m not a complete fanatic for minimal miking on everything. For larger groups I often use more than one main pair—including a crossed, but slightly spaced pair of somewhat directional mics, such as the Schoeps MK21—to enhance the stereo image, as well as spot mics if they are really needed for clarity and balance.
What is your typical recording chain? Preamp? EQ? Is everything these days recorded directly into a laptop running some sort of digital software, which I guess could range from Pro Tools down to GarageBand, with plenty in between?
TAYLOR: I use a Buzz Audio—which doesn’t, fortunately!—MA-2.2 dual mic pre as my first choice. It’s a hefty piece of kit for the seemingly modest job of amplifying the tiny mic signals, but it does sound open and clear, with a really good dynamic range. Digital conversion is via the excellent Prism Sound Orpheus, which communicates with a laptop PC using Sequoia as the recording, editing, and mastering workstation. I never use EQ on the way in, and only very sparingly, if at all, on the way out. The one piece of outboard that I use fairly often is the Bricasti M7 reverb, which can be very useful for adding a little extra touch of sustain in an acoustic that’s just on the dry side of ideal. But I only use it sparingly, and only at the mastering stage, never during the actual recording.
KRAFT: Needless to say, everything else in the recording chain has an effect, so my cables, as well as their lengths, preamp, and especially A/D [analog-to-digital] converters are selected for the highest sonic quality. My preamps of choice are from Sonodore, for their perfect balance of clarity with realistic warmth, and in the last few years I have gone to a converter from Merging Technologies: Horus. The sound from this unit is as “analog” as I have heard from any, and when needed, I also trust the mic preamps in it to carry a clean signal. My DAW [digital audio workstation] is from SADiE in the UK. I used Pro Tools in the early days, but SADiE is just that much better designed for classical music use. The editing program is phenomenally flexible and their tech support is incredibly reliable when I need it.
MARUI: My recording setup is totally digital, which means that, in the recording stage, my chain is: microphones, good cables, a superior-quality preamp like the Millennia—for classical-guitar recordings I never use valve [tube] preamps—and a good analog-to-digital converter interface. Recently, I’ve been using Metric Halo, which has yielded very interesting results.
The software for recording, editing, and mixing that I currently use is Pro Tools 12, but I’ve followed with a lot of interest the development of excellent software like Sequoia and Samplitude, which run on Windows. This is another myth that is falling, if it hasn’t already: that audio software must always run on a Mac.
I suppose this would vary from project to project and piece to piece, but how much editing do you typically do in post-production? I presume that recording in a chapel negates the need to add extra reverb on the back end?
MARUI: I’ve learned something important about editing over the years. Although I am a guitarist and generally know the repertoire being recorded very well, I prefer to work with good producers rather than trying to do their job as well, playing a double role.
Best results are achieved in recording when each person carries out his or her own function: The musician plays the instrument, the sound engineer directs all of his attention to technical elements, and the producer organizes the recording process and directs the musicians. The work of the producer is often overlooked in discussions of audio recording, but I consider it a fundamental part of the whole thing. Although it is still a common practice today, I’m firmly against the idea that a musician should take on the roles of both producer and artist in the recording studio.
An important point relative to editing is that, given the technical ease afforded by the digital world, I think many artists sometimes edit too much and, in this way, lose track of what is most important in a recording. Many times, a whole section with wonderful phrasing and an inspired interpretation is thrown away because a single note did not resonate as brightly as the rest.
With respect to mixing, I confess that I’m very economical with the plug-ins I use at this stage. I try to be as careful as I can in positioning the microphones and capturing the sound, in order to make my life easier during mixing. When the capture is done well, in an environment with good acoustics, I make only a few corrections in equalizing the sound, and an adjustment of reverb to match the sound of the room. I rarely use compressors, so I can preserve the musician’s choice of dynamics. When recording in a chapel, I generally position the microphones so that they capture the ambience of the space, and adding extra reverb during mixing becomes unnecessary.
TAYLOR: Yes, the amount of editing can vary enormously from one project to another, and not always for the obvious reason that some players are better—or at least better-prepared—than others. In some cases, the music itself is extremely fiddly to play, no matter how well it has been prepared. Or it can sometimes happen that an excellent player is constantly looking for something extra-special in the way of musical character, expression, daring virtuosity, or whatever it may be, and in these cases it can be exciting to push the limits of what’s possible in a recording, using multiple takes and finding a way of editing the best bits together so that the whole thing sounds like a wonderful, coherent performance. What I find less thrilling is “negative editing”—that is, spending hours and hours trying to eliminate every microscopic flaw in the playing, to make it all perfectly clean and tidy, while adding nothing of musical value. Although a certain amount of this tidying-up is necessary—it’s understandable that no one wants avoidable flaws to be included on a CD—there is also a danger that over-zealous editing can lead to “death by a thousand cuts,” where everything ends up sounding so clean and controlled that you can no longer believe you’re hearing a performance by an actual human being.
Still, there are plenty of good reasons for editing in almost every case, the main one being that its availability leaves players free to go all-out to realize their vision of how the music should go, without fear of calamity if there is a slip of the finger or momentary loss of concentration. Many times, I find that a single take contains some real gems, of perhaps a few measures that are unrepeatably lovely, but that other parts of the take are not good enough to use. With editing, none of those special moments need go to waste.
As I mentioned, I will sometimes add a tiny bit of extra reverb at the final stage, even if the recording has been made in a real “live” space such as a church—typically, if the real sound of the venue is fine as far as it goes, but would be nicer still if the notes lingered in the air just a fraction of a second longer. However, I’m always very cautious about adding reverb, as it’s fatally easy to overdo it. To my mind, many guitar recordings are ruined by the heavy-handed addition of a booming reverb that sounds impressive for a moment or two, but soon becomes an unwelcome distraction from the music and the true quality of the playing.
KRAFT: I’d estimate that a typical 65-minute CD has between 300 and 700 edits. That always comes as a shock to everyone. But it’s not because the player is making mistakes; rather, we are striving for the best possible musical result. For example, if in measure 45 there is a wonderful G-sharp that we have to use to make the phrase sing, then that will invite at least two more edits just to work in that note. Other takes may have had a fine G-sharp, but this one is so spectacular that we have to get it in there and make it match the surrounding material. So maybe I’m a masochist, but all my editing works toward this ideal in every moment of the music. In fact, this stems from the session itself, so I know from the beginning of the recording process what will affect the final edited version, weeks later.
Editing is by far the most time-consuming part of the work—where a session is two or three days, maybe 15–18 hours, the editing is 60–80 hours, and often more. I almost never add other artificial effects, EQ, reverb, etc., because these usually do not add, they take away. You cannot add something that is not there, but only compensate perhaps for flaws. I have on two occasions used a bit of limiter or compression, and in some cases I need to take away some background sound—traffic, birds—in takes that are truly special but had some external sound that was distracting. But for the most part, I work with the untampered live sound.
I hope this question does not sound rude or cheeky—that’s certainly not my intention—but is there anything an engineer can do to mitigate the often audible sniffs and snorts that are an essential part of the way many guitarists breathe during their performances?
MARUI: The breathing sounds don’t bother me all that much because, generally, good musicians breathe together with the music; that is, the breathing flows with the piece. There are, however, a few tricks to minimize this, like positioning the microphones so as to capture fewer breathing noises, and even, in some cases, using plug-ins like iZotope RX, which efficiently eliminates extraneous sounds without interfering with the music.
TAYLOR: Personally, I have quite a high tolerance of breathing noises. Unless they’re really obtrusive, they just seem to me another sign of life, and I’m always hoping that listeners will at least experience the illusion that they are hearing a real person playing in a real space when they put on a recording I have produced. However, I do sometimes attempt to cut out sniffs during rests—or, more precisely, replace them with an equal length of matching ambience without the sniff.
In some cases, it’s possible to use a noise reduction tool, such as the one called “spectral cleaning” on Sequoia [DAW], which tries to identify the frequencies present in an unwanted noise, and remove or reduce it without disturbing the musical sound. But this can be difficult in the case of breathing noises, which cover a wide range of frequencies and can last as long as several notes each. Incidentally, spectral cleaning can sometimes be quite effective in reducing string squeaks—the noise reduction tool focuses on the squeak frequencies while leaving the musical notes virtually unchanged.
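The principle behind such tools can be sketched very roughly. The following toy is not Sequoia's or RX's algorithm—real spectral cleaning is far more sophisticated and works in short overlapping time windows—but it shows the core idea of attenuating only the frequency bins a noise occupies, here on a synthetic "note plus squeak" signal with made-up parameters.

```python
import numpy as np

# Toy spectral cleaning: estimate the spectrum of an unwanted noise,
# then attenuate only the bins that noise dominates, leaving the
# musical content largely untouched.
def spectral_subtract(signal, noise_profile, reduction=0.9):
    """Attenuate frequency bins where the noise profile is strong."""
    spec = np.fft.rfft(signal)
    noise_mag = np.abs(np.fft.rfft(noise_profile, n=len(signal)))
    # Gain mask: pull down bins well above the noise profile's average.
    mask = np.where(noise_mag > noise_mag.mean() * 2, 1.0 - reduction, 1.0)
    return np.fft.irfft(spec * mask, n=len(signal))

# One second of a 440 Hz "note" plus a 6 kHz "squeak".
sr = 44100
t = np.arange(sr) / sr
note = np.sin(2 * np.pi * 440 * t)
squeak = 0.5 * np.sin(2 * np.pi * 6000 * t)
cleaned = spectral_subtract(note + squeak, squeak)
```

After the pass, the 6 kHz component is heavily attenuated while the 440 Hz note is essentially untouched—the same selectivity that makes the technique work better on narrow-band string squeaks than on broadband breathing noises.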
If all else fails, you can always fall back on a clothes peg applied to the player’s nose, or a gag over the mouth—or so I’m told. So far I’ve never resorted to such drastic measures myself.
(Special thanks to David Molina for his expert translations from Portuguese for Ricardo Marui’s answers!)