Mixing Audio: Concepts, Practices, and Tools, Third Edition
Roey Izhaki, 2018
Summary
Mixing Audio: Concepts, Practices, and Tools, 3rd Edition, by Roey Izhaki, is a comprehensive guide to the art of mixing audio, covering fundamental concepts and advanced techniques. The book emphasizes the importance of a mixing vision, crafting and evaluating mixes, and the integration of mixing tools and techniques. It includes a new chapter on mixing and the brain, along with updated information.
Full Transcript
Mixing Audio This third edition of Mixing Audio: Concepts, Practices, and Tools is a vital read for anyone wanting to succeed in the field of mixing. This book covers the entire mixing process —from fundamental concepts to advanced techniques. Packed full of photos, graphs, diagrams, and audio sam...
Mixing Audio

This third edition of Mixing Audio: Concepts, Practices, and Tools is a vital read for anyone wanting to succeed in the field of mixing. This book covers the entire mixing process—from fundamental concepts to advanced techniques. Packed full of photos, graphs, diagrams, and audio samples, it teaches the importance of a mixing vision, how to craft and evaluate your mix, and how to take it a step further. The book describes the theory, the tools used, and how these are put into practice while creating mixes. The companion website, featuring over 2,000 audio samples as well as 5 multitracks, is a perfect complement to the third edition.

The new edition includes:
A new "Mixing and the brain" chapter that provides a cognitive/psychological overview of many aspects related to and affecting mixing engineers (and listeners).
Updated figures and text reflecting recent software updates and trends.

Roey Izhaki holds a BA in Recording Arts and has been mixing since 1992. An audio engineering academic lecturer for 10 years, he has given mixing and audio seminars across Europe.

Mixing Audio: Concepts, Practices, and Tools
Third Edition
Roey Izhaki

Third edition published 2018 by Routledge, 711 Third Avenue, New York, NY 10017, and by Routledge, 2 Park Square, Milton Park, Abingdon, Oxon, OX14 4RN. Routledge is an imprint of the Taylor & Francis Group, an informa business.

© 2018 Roey Izhaki

The right of Roey Izhaki to be identified as author of this work has been asserted by him in accordance with sections 77 and 78 of the Copyright, Designs and Patents Act 1988. All rights reserved. No part of this book may be reprinted or reproduced or utilized in any form or by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying and recording, or in any information storage or retrieval system, without permission in writing from the publishers.

Trademark notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging in Publication Data
Names: Izhaki, Roey, author.
Title: Mixing audio : concepts, practices, and tools / Roey Izhaki.
Description: Third edition. | New York, NY : Routledge, 2017. | Includes bibliographical references.
Identifiers: LCCN 2016039166 | ISBN 9781138241381 (hardback) | ISBN 9781138859784 (pbk.) | ISBN 9781315716947 (ebk)
Subjects: LCSH: Sound—Recording and reproducing—Digital techniques. | Sound recordings—Production and direction.
Classification: LCC TK7881.4.I94 2017 | DDC 621.389/3—dc23
LC record available at https://lccn.loc.gov/2016039166

ISBN: 978-1-138-24138-1 (hbk)
ISBN: 978-1-138-85978-4 (pbk)
ISBN: 978-1-315-71694-7 (ebk)

Typeset in Univers by Florence Production Ltd, Stoodleigh, Devon, UK

Visit the companion website: www.MixingAudio.com

Contents

Symbols used
Introduction

Part I: Concepts and practices
1 Music and mixing
  Music: an extremely short introduction; The role and importance of the mix; The perfect mix
2 Some axioms and other gems
  Louder is better; Percussives weigh less; Importance; Natural vs. artificial
3 Learning to mix
  What makes a great mixing engineer?; The ability to work fast; Methods of learning; Mixing analysis; Reference tracks
4 The process of mixing
  Mixing and the production chain; The mix as a composite; Where to start; Deadlocks; The raw tracks factor; Milestones; Finalizing and stabilizing the mix
5 Related issues
  How long does it take?; Breaks; Using solos; Mono listening; Housekeeping; Mix edits; Mastering
6 Mixing and the brain
  Dual process theory; The power of the unconscious; Intuition; Thinking without thinking; Emotions; Change; Creativity
7 Mixing domains and objectives
  Mixing objectives; Definition; Interest; Frequency domain; Level domain; Stereo domain; Depth

Part II: Tools
8 Monitoring
  How did we get here?; Choosing monitors; The room factor; Positioning monitors; Headphone mixing
9 Meters
  Amplitude vs. level; Mechanical and bar meters; Peak meters; Average meters; Phase meters
10 Mixing consoles
  Buses; Processors vs. effects; Basic signal flow; The importance of signal flow diagrams; Groups; In-line consoles; The monitor section; Correct gain structure; The digital console
11 Software mixers
  Tracks and mixer strips; Routing; The internal architecture
12 Phase
  What is phase?; Problems; Tricks
13 Faders
  Types; Scales; Working with faders
14 Panning
  How stereo works; Pan controls; Types of track; Panning techniques; Beyond pan pots
15 Equalizers
  Applications; The frequency spectrum; Types and controls; Graphic equalizers; In practice; Equalizing various instruments
16 Introduction to dynamic range processors
  Dynamic range; Dynamics; Dynamic range processors in a nutshell
17 Compressors
  The course of history; The sound of compressors; Principle of operation and core controls; Additional controls; Controls in practice; Applications; Tricks; More on compressors
18 Limiters
19 Gates
  Controls; Applications; In practice; Tricks
20 Expanders
  Controls; In practice; Upward expanders
21 Duckers
  Operation and controls; Applications
22 Delays
  Delay basics; Types; In practice; Applications
23 Other modulation tools
  Vibrato; ADT; Flanging; Phasing; Tremolo
24 Reverbs
  What is reverb?; Applications; Types; Reverb programs; Reverb properties and parameters; Early reflections (ERs); Reverbs and stereo; Other reverb types; Reverbs in practice
25 Distortion
  Background; Distortion basics; Ways to generate distortion
26 Drum triggering
  Methods of drum triggering
27 Other tools
  MS; Pitch shifters and harmonizers; Exciters and enhancers; Transient designers
28 Automation
  Automation engines; The automation process; Automation alternatives; Control surfaces

Part III: Sample mixes
29 "Show Me" (rock 'n' roll)
  Drums; Bass; Rhythm guitar; Lead guitar; Vocal
30 "It's Temps Pt. II" (hip-hop/urban/grime)
  Beat; Bass; Other tracks; Vocals
31 "Donna Pomini" (techno)
  Ambiance reverb; Beat; Sound FX; Bass; Vocal; Other elements
32 "The Hustle" (DnB)
  Ambiance reverb; Drums; Motif elements; Pads; Horns and brass; Risers; Strings
33 "Hero" (rock)
  Drums; Bass; Rhythm guitar; Lead guitar; Vocals

Appendix A: The science of bouncing
Appendix B: Notes-to-frequencies chart
Appendix C: Delay time chart
Index

Symbols used

Audio samples (www.MixingAudio.com/audio): Tracks referenced within these boxes are included on the website, organized in folders by chapter. Please mind your monitoring level when playing these tracks.
Notes: These boxes contain tips or other ideas worth remembering.

Introduction

It's not often a new form of art is conceived; where or when the art of mixing was born is not easy to answer. We can look at the instrumentation of orchestral pieces as a very primitive form of mixing—different instruments that played simultaneously could mask one another; composers understood this and took it into account. In the early days of recording, before multitrack recorders came about, a producer would place musicians in a room so that the final recording would make sense in terms of levels and depth. Equalizers, compressors, and reverbs hadn't yet been invented; there was no such role as a mixing engineer either; but sonically combining various instruments in order to produce an appealing, coherent, and aesthetic sound was an ambition shared by many.

Like many other new forms of creative expression that emerged in the twentieth century, mixing was tied to technology. It was the appearance of the multitrack tape machine during the 1960s that signified the dawn of mixing as we know it today. Yes, there was a time when having the ability to record eight instruments separately was a dream come true. Multitracks allowed us to repeatedly play recorded material before committing sonic treatment to the mix. Equalizers, compressors, and reverbs soon became customary in studios; audio consoles grew in size to accommodate more tracks and facilities. We had more sonic control over individual tracks and over the final master. The art of mixing was flourishing. Music sounded better.

The 1990s significantly reshaped much of the way music is made, produced, recorded, mixed, and even distributed—computers triumphed. Real-time audio plugins were first introduced with the release of Pro Tools III as far back as 1994, but such a setup required a dedicated DSP card. It was Steinberg's 1996 release of Cubase VST that gave us the audio plugins we now take for granted—a piece of software that can perform real-time audio calculations using the computer's CPU. The term project studio was soon coined as computers became more affordable and capable, and the hiring of expensive studios was no longer a requisite for multitracking and mixing. However, the processing power of computers back then could still not compete with the quality and quantity of mixing devices found in a professional studio. Things have changed—running 10 quality reverbs simultaneously on a modern DAW has been a reality for some time. There are now more audio plugins in the market than hardware units, and the quality of these plugins is constantly improving. Professional studios will always, it seems, have an advantage over project studios, if only for their acoustic qualities.
However, DAWs offer outstanding value for money, constantly improving quality and widening possibilities.

So is everything rosy in the realm of mixing? Not quite. It is thanks to computers that mixing has moved from large and expensive studios into bedrooms. More people than ever are mixing music, but only a few can be labeled experts. Mixing used to be done by skilled engineers, who were familiar with their studio and the relatively small set of expensive devices it contained. Mixing was their occupation—and for many their raison d'être. By contrast, project studio owners generally do much more than just mixing—for many, it is just another stage in an independent production chain. So how can these people improve their mixing, specifically when time is often constrained? This is where this book comes in.

When the first word of this book was typed back in 2004, mixing literature was limited, cluttered, and often only scratched the surface. This book was originally conceived to stand as a much-needed comprehensive source. It is hard to believe that just over a decade later, we have the opposite problem—there is too much out there, and if literature isn't enough there are also blog posts, online forums, and video tutorials. As we humans are increasingly and involuntarily assuming the role of information filters in this vast jungle called the Web, a book such as this can spare many the foraging—what you need or wish to know in one place.

Being comprehensive doesn't come without a cost, though. This book is long; painfully long if you ask me. Perhaps of little comfort is that this book isn't quite a cover-to-cover type of read—feel free to stop reading now, look at the table of contents, and jump to the topic of most interest to you. Possibly not everything will be clear without reading some preceding chapters, but you should grasp the bulk of it. Regardless, many readers have testified that with so much to digest, it was only on the second or third reading that understanding sank in.

I would like, in this opening text, to expose the greatest misconception that exists about mixing: it is wrongly assumed by some that mixing is a purely technical service, and some even declare that mixing is simply a remedy for imperfect recordings. There is no doubt that mixing entails technical aspects: a problematic level balance, uncontrolled dynamics, and a deficient frequency response are just a few of the technical issues we encounter. Yet, with the right amount of effort, almost anybody can master the technical aspects of mixing—after compressing 100 vocal tracks, one should be getting the hang of it. Technical skills are advantageous but can be equally acquired by all. The true essence of mixing does not lie in these skills. Many mixes are technically great, but nothing more than that; equally, many mixes exhibit some technical flaws, but as a listening experience they are breathtaking. It is for their sheer creativity—not for their technical brilliance—that some mixes are highly acclaimed and their creators deemed sonic visionaries. The sonic qualities of music are inseparable from the music itself—the Motown sound, the Neve sound, the Wallace sound, and so forth.
The nontechnical side of mixing entails crafting the sonic aspects of music: shaping sounds, crystallizing soundscapes, establishing harmony between instruments, and building impact—all rely on the many creative decisions that we make; all are down to the talent and vision of each individual; all have a profound influence on how the music is perceived. It is in the equalization we dial, in the reverb we choose, in the attack we set on the compressor, to name but a few. There simply isn't one correct way of doing things—be it an acoustic guitar, a kick, or any other instrument, it can be mixed in 100 ways; all could be considered technically correct, but some would be more remarkable than others. A mix is a sonic portrait of the music. The same way different portraits of a person can each project a unique impression, different mixes can convey the essence of the music in extremely different ways. We are mixing engineers, but more importantly: we are sonic artists.

By the time you finish reading this book, you should have far more knowledge, a greater understanding, and improved auditory skills that will together enable you to craft better mixes. However, I hope that you keep this in mind:

Mixing is an art.

A friendly warning

It would not make sense for wine tasters to sip boiling oil, just as it would not make sense for mixing engineers to stick sharp needles into their eardrums. While I have yet to meet an engineer who fancies needles in his or her eardrums, very loud levels can be equally harmful. Unlike needle-sticking, the hearing damage caused by loud levels is often not immediate, whether involving short or long periods of exposure. Sparing the medical terminology, with years one might lose the ability to hear high frequencies, and the really unlucky could lose substantial hearing ability. In some circumstances, very loud levels can cause permanent damage to the eardrum and even deafness. Most audio engineers, such as myself, have had one or two level-accidents; the majority of us are fine. But hearing a continuous 7 kHz tone is no laughing matter, especially when it lasts for three days. The allowance, as they say in Italian, is forte ma non troppo—loud but not too much.

The National Institute for Occupational Safety and Health in the USA recommends that sound exposure to 85 dBSPL should not exceed eight hours per day, halving the time for each 3 dB increase. A quick calculation reveals that it is only safe to listen to 100 dBSPL for 15 minutes. A screaming child a meter away is roughly 85 dBSPL. A subway train one meter away produces roughly 100 dBSPL when cruising at normal speed.

On the website that accompanies this book, I have done my best to keep relatively consistent levels. Still, some samples had to be louder than others. Please mind your monitoring level when listening to these samples. Remember that too quiet can easily be made louder, but it might be too late to turn down levels once they are too loud. Why we like loud levels so much is explained in Chapter 2. But if we are all to keep enjoying music, all we have to do is be sensible about the levels at which we mix and listen to music. Levels, like alcohol, are best enjoyed responsibly.
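To make the arithmetic behind that recommendation concrete, here is a minimal sketch of the exposure rule quoted above (eight hours at 85 dBSPL, halved for every 3 dB increase). The Python function and its name are illustrative only; they come from neither the book nor NIOSH.

```python
def max_daily_exposure_hours(level_dbspl):
    """Permissible daily exposure under the halving rule quoted above:
    8 hours at 85 dBSPL, halved for every 3 dB increase."""
    return 8.0 / (2 ** ((level_dbspl - 85.0) / 3.0))

print(max_daily_exposure_hours(85))   # 8.0 hours
print(max_daily_exposure_hours(100))  # 0.25 hours, i.e. the 15 minutes mentioned above
```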
Part I: Concepts and practices

1 Music and mixing

Music: an extremely short introduction

You love music. All of us are mixing because music is one of our greatest passions, if not the greatest. Whether starting as a songwriter, bedroom producer, performer, or studio tea boy, we were all introduced to mixing through our love of music and the desire to take part in its creation.

Modern technology dictates, to some extent at least, how we go about our life: we watch more and read less, we message more and talk less, we look at our smartphones more and less at one another. As far as music is concerned, new technologies have provided new opportunities, increased reach, and improved quality. The invention of the wax cylinder, radio transmission, tapes, CDs, software plugins, iTunes, smartphones, and Spotify has made music more readily accessible, widely consumed, and easier to create. One of mankind's most influential inventions—the Internet—is perhaps today's greatest catalyst for music. Nowadays, a computer or a smartphone is all one needs to browse, listen to, and purchase music. Music is universal and all-encompassing. It is in our living rooms, in our cars, in malls, on our televisions, and in hairdressing salons.

There is a strong bond between music and mixing (other than the obvious connection that music is what's being mixed), and to understand it we should start by discussing the not-too-distant past. History teaches us that in the Western world, sacred music was very popular until the nineteenth century, with most compositions commissioned for religious purposes. Secular music has evolved throughout the years, but changed drastically with the arrival of Beethoven. At the time, Beethoven was daring and innovative, but it was the way that his music made people feel that changed the course of music so dramatically. Ernest Newman once wrote about Beethoven's symphonies:

The music unfolds itself with perfect freedom; but it is so heart-searching because we know all the time it runs along the quickest nerves of our life, our struggles & aspirations & sufferings & exaltations.[1]

We can easily identify with this when we think about modern music—there is no doubt it can have a huge impact on us. Following Beethoven, music became a love affair between two willing individuals—the artist and the listener—fueled by what is today an inseparable part of music: emotions.

[1] Allis, Michael (2004). Elgar, Lytton, and The Piano Quintet, Op. 84. Music & Letters, Vol. 85, No. 2, pp. 198–238. Oxford University Press. Originally a letter from Newman to Elgar, January 30, 1919.

Today, music rarely fails to trigger emotions—all but a few pieces of music have some sort of mental or physical effect on us. "Killing in the Name" by Rage Against the Machine can trigger a feeling of rage or rebellious anger. Others find it hard to remain stationary when they hear "Hey Ya!" by OutKast. Music can turn a bad morning into a good one. Music can also trigger sad or happy memories, and so the same good morning can turn into a more retrospective afternoon after hearing Albinoni's "Adagio" (which goes to show that it's not just emotive lyrics that affect us). As we shall soon see, our response to music mostly stems from our unconscious mind. Yet, we sometimes deliberately listen to music in order to incite a certain mood—some listen to ABBA as a warm-up for a night out, others to Iggy Pop. Motion-picture directors understand well how profoundly music can affect us and how it can be used to solicit certain emotional responses from the audience.
We all know what kind of music to expect when a couple fall in love or when the shark is about to attack; it would be a particular genre of comedy that used "YMCA" during a funeral scene. As mixing engineers, one of our prime functions, which is actually our responsibility, is to help deliver the emotional context of a musical piece. From the general mix plan to the smallest reverb nuances, the tools we use—and the way we use them—can all sharpen and even create power, aggression, softness, melancholy, psychedelia, and many other moods. Mostly, it would make little sense to distort the drums on a mellow love song, just as it would not be right to soften the beat of a hip-hop production. When approaching a new mix, we may ask ourselves: What is this song about? What emotions are involved? What message is the artist trying to convey? How can I support and enhance the song's vibe? How should the listener respond to this piece of music? As basic as this idea might seem, it is imperative to comprehend—it is emotions that gel the music and mix together, not technical excellence.

A mix can, and should, enhance the music: its mood, the emotions it conveys, and the response it should incite.

The role and importance of the mix

A basic definition of mixing is: a process in which multitrack material—whether recorded, sampled, or synthesized—is balanced, treated, and combined into a multichannel format (most commonly, two-channel stereo). But a less technical definition would be: a sonic presentation of emotions, creative ideas, performance, and musicianship.
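As a toy illustration of the "balanced, treated, and combined" part of that basic definition, the sketch below sums mono tracks into a two-channel stereo mix using a per-track gain and a simple constant-power pan. It is not the book's method, only a bare-bones numerical picture of what a mixdown is; the function name, the pan law, and the signal values are my own placeholder assumptions.

```python
import numpy as np

def mixdown(tracks, gains_db, pans):
    """Toy stereo mixdown: apply a fader gain and a constant-power pan to each
    mono track, then sum everything into a two-channel (stereo) array."""
    mix = np.zeros((len(tracks[0]), 2))
    for track, gain_db, pan in zip(tracks, gains_db, pans):
        g = 10 ** (gain_db / 20.0)              # fader gain from dB
        theta = (pan + 1) * np.pi / 4           # pan in [-1 (left), +1 (right)]
        mix[:, 0] += g * np.cos(theta) * track  # left channel
        mix[:, 1] += g * np.sin(theta) * track  # right channel
    return mix

# Hypothetical usage: three sine waves stand in for real recordings.
sr = 44100
t = np.arange(sr) / sr
kick = np.sin(2 * np.pi * 60 * t)
vocal = np.sin(2 * np.pi * 440 * t)
guitar = np.sin(2 * np.pi * 220 * t)
stereo = mixdown([kick, vocal, guitar], gains_db=[-3, 0, -6], pans=[0.0, 0.0, 0.7])
```

Real mixing obviously adds equalization, compression, reverb, and countless other treatments on top of this skeleton; the point is only that the end product is a balanced, treated, combined multichannel sum.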
Even for the layperson, sonic quality does matter. Take talking on a cellphone, for example—people find it annoying when background noise masks the other party. Intelligibility is the most elementary requirement when it comes to sonic quality, but it goes far beyond that. Some new cellphone models with integrated speakers are no better than playback systems from the 1950s. It is no wonder that people prefer listening to music via their kitchen's mini-system or the living room hi-fi. What would be the point of more expensive hi-fi systems if the sound quality were no better than a cellphone speaker?

Sonic quality is also a powerful selling point. It was a major contributor to the rise of the CD and the fall of compact cassettes. Novice classical music listeners often favor new recordings over older, monophonic ones, regardless of how acclaimed the performance on these early recordings is. Record companies issue digitally remastered versions of classic albums that allegedly sound better than the originals. The once-ubiquitous iPod owed much of its popularity to the MP3 format—no other lossy compression format has managed to produce audio files so small, yet of an acceptable sonic quality. The majority of people appreciate sonic quality more than they realize.

It is our responsibility as mixing engineers to craft the sonic aspects of the final mix. This involves how different instruments combine, but also how each sounds—the sum and its parts. Let us consider for a moment the differences between studio and live recordings. During a live concert, there are no second chances. You are unable to rectify sloppy performance or a buzz from a faulty DI box. Both the recording equipment and the environment are inferior compared with those found in most studios—it would be unreasonable to place Rihanna in front of a U87 and a pop shield during a live show. Also, when a live recording is mixed on location, a smaller and cheaper arsenal of mixing equipment is used. All of these constraints could result in different instruments suffering from masking, poor definition, erratic dynamics, and deficient frequency response, to name just a few possible problems. Audio terms aside, these can translate into a barely audible bass guitar, honky lead vocals that come and go, a kick that lacks power, and cymbals that lack spark. Altogether, these can make a live recording less appealing. A studio recording is not immune to such problems, but in most cases it provides much better raw material to work with, and, in turn, better mixes. With all this in mind, the true art of mixing is far more than just making things sound right...

Many people are familiar with Kurt Cobain, Dave Grohl, and Krist Novoselic as the band members of Nirvana, who back in 1991 changed the face of alternative rock with the release of Nevermind. The name Butch Vig might ring a bell for some, but the general public will be unlikely to have heard of Andy Wallace. The front cover of my Kill Bill DVD makes it extremely difficult to ignore Tarantino's writer and director credits. But it is seldom that an album cover credits the producer, let alone the mixing engineer. Arguably, the production of Dr. Dre can be just as important as the artists he produces, and perhaps Nevermind would have never been such an enormous success had it not been for Andy Wallace's mixes. Nevertheless, record labels generally see very little marketing potential in production personnel. Ironically, major record companies do part with large sums of cash in order to have a specific engineer mix an album because they all realize that:

The mix plays an enormous role in an album or track's success.

To understand why, one may wish to compare Butch Vig's and Andy Wallace's mixes for Nirvana's "Smells Like Teen Spirit" (both can easily be found online through streaming services). Both Vig and Wallace used the same raw tracks; yet their mixes are distinctly different. Vig's mix has a somewhat unbalanced frequency spectrum that involves masking and an absence of spark; a few mixing elements, such as the snare reverb, are easily discernible. Wallace's mix is burnished and balanced; it boasts high definition and perfect separation between instruments; the ambiance is present, but like many mixing elements it is subtle.

Perhaps the most important difference between the two mixes is that Vig's mix sounds more natural (more like a live performance), while Wallace's mix sounds more artificial. It is not equipment, time spent, or magic tricks that made these two mixes so dissimilar—it is simply the different sonic visions of Vig and Wallace. Vig has opted for the real and organic, whereas Wallace, a sonic alchemist who was perfecting his polishing skills at the time, combined every aspect of this powerful song into an extremely appealing masterpiece, albeit not a live-sounding one. Like many other listeners, Gary Gersh—Geffen Records' A&R—liked it better.

Straight after recording Nevermind, it was Vig who started mixing the album. A producer who has spent countless hours listening to the same songs while recording them can easily wear out and develop sonic biases. A tight schedule and some artistic disagreements with Cobain left everyone (including Vig) feeling that it would be wise to bring fresh ears in to mix the album.
From the bottom of the prospective engineers list, Cobain chose Wallace, mostly due to his mixing credits for Slayer. Despite Nirvana approving the mixes, following Nevermind's extraordinary success, Cobain complained that the overall sound of Nevermind was too slick—perhaps suggesting that Wallace's mixes were too listener-friendly for his somewhat anarchic and unrefined taste. Artistic disagreements are something engineers come across often, especially if they ignore the artist's musical values. Yet, some suggested that Cobain's retroactive complaint was only a mis-targeted reaction to the massive success and sudden fame the album brought. Not only did Nevermind leave its mark on music history; it also left a mark on mixing history—its sonic legacy, a part of what is regarded as the Wallace sound, is still heavily imitated today. As testament to Wallace's skill, Nevermind has aged incredibly well and still impresses despite enormous advances in mixing technology.

Seldom do we have the opportunity to compare different mixes of the same song. The 10th anniversary edition of The Holy Bible by the Manic Street Preachers allows us to compare an entire album. The package contains two versions of the album—the UK release was mixed by Mark Freegard and the US one by Tom Lord Alge. There is some similarity here to the Vig vs. Wallace case, where Freegard's mixes are cruder and drier compared with the livelier, brighter, and more defined mixes of Alge. On the included DVD, the band comments on the differences between the mixes, saying that for most tracks Alge's mixes better represented their artistic vision. Arguably, neither version features exceptional mixes (most likely due to poor recording quality in a cheap facility), but the analytical comparison between the two is worthwhile.

The two examples above teach us how a good mix can sharpen the emotional message of a musical piece, make it more appealing to the listener, and boost commercial success. Conversely, a bad mix can negatively affect a potentially great production and significantly impair its chance of success. This is not only relevant for commercial releases. The price and quality of today's DAWs enable unsigned artists and bedroom producers—with enough talent and vision—to craft mixes that are of an equal standard to commercial mixes. For quite some time now, A&Rs have been receiving demos of a respectable mix quality. Just as a studio manager might filter through a pile of CVs and eliminate candidates based on poor presentation, an A&R might dismiss a demo for its poor mix.

Mixing engineers know what a dramatic effect mixing can have on the final product. With the right amount of effort, even the poorest recording can be made appealing. Yet, there are a few things we cannot do; for example: correct a truly bad performance, compensate for a very poor production, or alter musical ideas. If the piece does not have potential to begin with, it will fail to impress the listener, no matter how noteworthy the mix is.

A mix is as good as the song.

The perfect mix

It doesn't take much experience before the novice mixer can begin to recognize problems in a mix. For instance, we quickly learn to identify vocals that are too quiet or a deficient frequency response. We will soon see that, once a mix is problem-free, there are still many things we can do in order to make it better. The key question is: What is better? At this point, I recommend an exercise called the excerpt set (Figure 1.1)—an essential mixing experiment.
It takes around half an hour to prepare, but provides a vital mixing lesson. The excerpt set is very similar to a DJ set, except each track plays for around 20 seconds and you do not have to beat-match. Simply pull around 20 albums from your music library, pick a single track from each, and import it into your audio sequencer. Then trim a random excerpt of 20 seconds from each track and arrange the excerpts consecutively. It is important to balance the perceived level of all excerpts, and cross-fade them. Now listen to your set, beginning to end, and notice the differences between the mixes. You are very likely to identify great differences between all the mixes. You might also learn that mixes you thought were good are not as good when played before or after another mix. While listening, try to note mixes that you think overpower others. This exercise will help develop a heightened awareness of what a good mix is and why.
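For those who would rather prepare the excerpt set outside an audio sequencer, the steps above (trim roughly 20-second excerpts, roughly match their levels, cross-fade, and join them) can be scripted. The sketch below is one possible approach using the pydub library; the file names, excerpt position, and target level are placeholder assumptions, not part of the exercise as the book describes it.

```python
from pydub import AudioSegment

files = ["track01.wav", "track02.wav", "track03.wav"]  # placeholder: ~20 tracks from your library
EXCERPT_MS = 20_000    # roughly 20-second excerpts
CROSSFADE_MS = 500     # short cross-fade between consecutive excerpts
TARGET_DBFS = -18.0    # rough loudness target used to balance the excerpts

excerpt_set = None
for path in files:
    track = AudioSegment.from_file(path)
    start = max(0, (len(track) - EXCERPT_MS) // 2)            # take an excerpt (here: the middle)
    excerpt = track[start:start + EXCERPT_MS]
    excerpt = excerpt.apply_gain(TARGET_DBFS - excerpt.dBFS)  # crude level match
    if excerpt_set is None:
        excerpt_set = excerpt
    else:
        excerpt_set = excerpt_set.append(excerpt, crossfade=CROSSFADE_MS)

excerpt_set.export("excerpt_set.wav", format="wav")
```

Average dBFS is only a crude proxy for perceived level, so a final balance check by ear is still needed.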
Most of us do not have a permanent sonic standard stored in our brains, so a mix is only better or worse than the previously played mix. The very same mix can sound dull compared with one mix but bright compared with another. (With experience, we develop the ability to critically assess mixes without the need for a reference, although usually only in a familiar listening environment.) In addition, our auditory system has a very quick settle-in time, and it becomes accustomed to different sonic qualities so long as these remain constant for a while. In essence, all our senses work that way—a black-and-white scene in a color movie is more noticeable than the lack of color on a black-and-white TV. The reason why the excerpt set is such an excellent tool for revealing differences is that it does not give the brain a chance to settle in to a particular style. When mixes are played in quick succession, we can more easily perceive the sonic differences between them.

Different engineers have different ideas and mix in different environments, and therefore produce different mixes. Our ears are able to tolerate radical differences as long as mixes are not heard in quick succession. It is hard to find two albums that share an identical sound because different genres are mixed differently—jazz, heavy metal, and trance will rarely share the same mixing philosophy; different songs involve different emotions and therefore call for different soundscapes; and the quality and nature of the raw tracks vary between projects. But we shouldn't forget that each mixing engineer is an artist in their own right, and each has different visions and ideas about what's best. Asking what is a perfect mix is like asking who is the best writer that ever lived, or who was the greatest basketball player of all time—it is all down to subjective opinion.

Figure 1.1 Excerpt set. This sequence of 20-second excerpts from various productions is used as an important comparison tool between mixes.

Mixing engineers will often adjust their style depending on the project. One example is Rich Costey, who mixed Muse's Absolution, imbuing it with a very polished feel. He later produced Franz Ferdinand's You Could Have It So Much Better, taking a much rawer mixing approach with a distinctly retro feel. His mixes on Glasvegas's debut album are anthemic and sharp-sounding, featuring dominant reverbs typical of mixes from the 1980s. Humbug by Arctic Monkeys feels dark and beefy and features a contemporary sound with retro touches. Each of these different mixing approaches works a charm for its respective album.

2 Some axioms and other gems

Louder is better

In 1933, two researchers at Bell Labs—Harvey Fletcher and W.A. Munson—conducted one of the most significant experiments in psychoacoustics. Their experiment was based on a series of tests taken by a group of listeners. Each test involved playing a test frequency followed by a reference tone of 1 kHz. The listener simply had to choose which of the two was louder. Successive tests involved either a different test frequency or different levels. Essentially, what Fletcher and Munson tried to conclude is how much louder or softer different frequencies had to be in order to be perceived as loud as 1 kHz. They compiled their results and devised a chart known as the Fletcher–Munson curves. A chart based on the original Fletcher–Munson study is shown in Figure 2.1. I am presenting it upside down, as it bears a resemblance to the familiar frequency-response graphs that we see on some equalizers, with peaks at the top. A similar experiment was conducted two decades later by Robinson and Dadson (resulting in the Robinson–Dadson contours), and today we use the ISO 226 standard (which is still subject to occasional revisions). The formal name for the outcome of these studies is equal-loudness contours.

Each curve in Figure 2.1 is known as a phon curve, labeled by the level of the 1 kHz reference. To give an example of how to read this chart, we can follow the 20-phon curve and see that, if 1 kHz is played at 20 dBSPL, 100 Hz would need to be played at 50 dBSPL in order to appear equally loud (a 30 dB difference, which is by no means marginal). The graph also teaches us that our frequency perception has a bump around 3.5 kHz—this is due to the resonant frequency of our ear canal. Interestingly, this is pretty much the center frequency of a baby's cry. One important thing that the equal-loudness contours teach us is that we are more sensitive to mid-frequencies—an outcome of the lows and highs roll-off that can be seen on the various curves. But more importantly, it is evident that at louder levels our frequency perception becomes more even—the 0-phon curve in Figure 2.1 is the least flat of all the curves; the 100-phon curve is the most even. Another way to look at this is that the louder music is played, the louder the lows and highs are perceived. In extremely general terms, we associate lows with power and highs with definition, clarity, and spark. So it is only natural that loud levels make music more appealing—louder is perceived as better.

Figure 2.1 The Fletcher–Munson curves (shown here upside down, plotting level in dBSPL against frequency in Hz). Note that on the level axis, soft levels are at the top, loud at the bottom.

This phenomenon explains the ever-rising level syndrome that many experience while mixing: once levels go up, it is no fun bringing them down. The more experienced among us develop the discipline to defeat this syndrome by keeping levels constant.

The louder music is played, the more lows and highs we perceive compared with mids.

The latest ISO 226 contours are slightly different from those shown in Figure 2.1; they show an additional bump around 12 kHz and a steeper low-frequency roll-off, which also occurs on the louder phon curves. The fact that our frequency perception alters in relation to levels is a fundamental mixing issue.
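To put the 30 dB figure from the 20-phon example into perspective, the short calculation below converts a level difference in dB to a sound-pressure ratio. The conversion itself (a factor of ten for every 20 dB) is standard acoustics rather than anything specific to this book.

```python
def db_to_pressure_ratio(db):
    """Convert a level difference in dB SPL to a sound-pressure ratio."""
    return 10 ** (db / 20)

# 100 Hz needing to be 30 dB above 1 kHz to sound equally loud (20-phon curve)
# corresponds to roughly a 32-times higher sound pressure.
print(round(db_to_pressure_ratio(30), 1))  # 31.6
```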
How are we supposed to craft a balanced mix if the frequency content varies with level? At what level should we mix? And what will happen when the listener plays the track at different levels? The answer is: we check our mix at different levels, and try to make it as level-proof as possible. We know what to expect when we listen at softer levels—less highs and lows. It is possible to equalize the different instruments so that even when the highs and lows are softened, the overall balance between instruments hardly changes. For example, if the kick's presence is based solely on low frequencies, it will be heard less at quiet levels, if at all. If we ensure that the kick is also present on the high-mids, it will be heard much better at quiet levels. Many believe that the mids, which vary little with level, are the key to a balanced mix, and if the lows and highs are crafted as an extension to the mids, a mix will exhibit more stable balance at different levels. Also, many agree that if a mix sounds good at low levels, it is likely to sound good when played loud; the opposite is not always true. Another pointer is that we can sometimes guess the rough level at which the mix is likely to be played (e.g., dance music is likely to be played louder than ambient), and so we can use that level as the main reference while mixing (main reference means occasionally—mixing at nightclub levels throughout is next to certain to damage your ears).

Two common adages:
The mids are the key to a balanced mix at varying levels.
A mix that sounds good at quiet levels is likely to sound good at loud levels.

There is another reason why louder is perceived as better. When listening at soft levels, we hear more of the direct sound coming from the speakers and less of the sound reflected from the walls (i.e., the room response). Sound energy is absorbed, mostly as it encounters a surface. The small amount of energy our speakers emit at quiet levels is absorbed by walls to a degree that only a fraction of it reflects back to our ears. At louder levels, more energy is reflected and we start hearing more of the room response. As a consequence, the louder music is played, the more we hear the reflections coming from around us, which provides us with the appealing sensation that the music surrounds us. There is an experiment you can do to demonstrate this effect, which might be more apparent with eyes shut—play a mix at quiet levels through speakers and try to define the spatial boundary of the sound image. Most people will imagine a line, or a very short rectangle between the two speakers. As the music is made louder, the sound image grows, and at some point the two-dimensional rectangle turns into a vague surrounding sense.

When making individual instruments louder in the mix, we perceive them better. The core reason for this is masking—the ability of one sound to cover up another. More specifically, the frequency ranges of one instrument mask those of another. One of the principal rules of masking is that louder sounds overpower quieter sounds. The higher the level of an instrument in the mix, the more it will tend to mask other instruments, and the more clearly it will be perceived. When focusing on a particular instrument, it is tempting to bring it up to hear it better, and once up it is likely to sound better. This can be misleading and result in a suboptimal balance.

Percussives weigh less

It is important to distinguish the different natures of the instruments we are mixing.
An important mix resource is space; when different instruments are combined, they compete for that space (mostly due to masking). Percussive instruments come and go—a kick, for example, has little to no sound between various hits. Percussives fight for space in successive, time-limited periods. On the other hand, sustain instruments play over longer periods and thus constantly fight for space. To give one extreme example, think of a rich pad produced using sawtooths (the most harmonically rich orderly waveform) that involves unison (an effect that spreads copies across the stereo image), and played in a legato fashion (long notes). Such a pad would fill both the frequency spectrum and the stereo panorama in a way that is most likely to mask many other elements in the mix. In a practical sense, sustained instruments require somewhat more attention. Whether we are setting levels, panning, or equalizing them, our actions will have an effect over a longer period. Raising the level of a dense pad is likely to cause more masking problems than raising the level of a kick. If the kick masks the pad, it would only do so for short periods—perhaps not such a big deal. But if the pad masks the kick, it would do so constantly—a big deal indeed.

Importance

Imagine yourself on a Seinfeld set. In the scene being shot, Jerry and Kramer stand in a long line of people at a box office, engaged in conversation. As the stars of the show, the two would have been the focus of the production effort. The makeup artist, for example, probably spent quite some time with them, perhaps little time with the extras standing next to them, and most likely no time with any other extras standing farther away in the line. In the camera shot, Jerry and Kramer are seen clearly in the center and extras are out of focus. The importance of the stars will also have been evident in the work of the gaffer, the grips, the boom operator, or any other crew member, even the chef. Equally, different mix elements have varying importance within the mix.

The importance of each instrument depends on many factors, like the nature of the production being mixed. In hip-hop, for example, the beat and vocals are generally the most important elements. In jazz, the snare is more important than the kick. Spatial effects are an important part of ambient music. A prominent kick is central to club music, but of far less importance in most folk music. Many more examples can be given. We also have to consider the nature of each instrument and its role in the overall musical context. Vocals, for example, are often of prime importance, but the actual lyrics also play a crucial role. The lyrics of Frank Sinatra's "My Way" are vital to the song's impact, and mixing a vocal part as such calls for more emphasis. Arguably, the lyrics to "Give It Away" by Red Hot Chili Peppers are of little importance to the overall song climate.

Importance affects how we mix different elements, be it the levels, frequencies, panning, or depth we are working on. We will shortly look at how the order in which we mix different instruments and sections may also be affected. Identifying importance can make the mixing process all the more effective as it minimizes the likelihood of delving into unnecessary or less important tasks—for example, spending a fair amount of time on treating pads that only play for a short period of time at relatively low level.
Those of us who mix under time constraints have to prioritize our tasks. In extreme circumstances, you might have as little as one hour to mix the drums, just half an hour for the vocals, and so on.

A useful question: How important is it?

Natural vs. artificial

A specific event that took place back in 1947 changed the course of music production forever. Patti Page, then an unknown singer, arrived at a studio to record a song called "Confess." The studio was set up in the standard way for that era, with all the performers in the same room, waiting to cut the song live. But there was a problem—"Confess" was a duet in which two voices overlap, but for reasons unknown no second vocalist showed up. Jack Rael, Page's manager, came up with the unthinkable: Patti could sing the second voice as well, provided the engineer could find a way to overdub her voice. Legend has it that at that point, the engineer cried in horror: in real life, no person can sing two voices at the very same time. It's ridiculous. Unnatural! But to the A&R guy from Mercury Records, this seemed like a great gimmick that could secure a hit. To achieve this, the engineer did something that had never been done before—cloning the track from one machine to another while adding the second voice on top. What then seemed so bizarre is today an integral part of music production.

For our purposes, a "natural" sound is one that emanates from an instrument that is played in our presence. If there are any deficiencies with the raw recordings (which capture the natural sound), various mixing tools can be employed to make instruments sound "more natural." A mix is considered more natural if it presents a realistic sound stage (among other natural characteristics). If natural is our goal, it would make no sense to position the kick up front and the rest of the drum kit behind it. However, natural is not always best—natural can also be seen as very ordinary. Early on in photography, it occurred to people that shadows, despite being such a natural part of our daily life, impair visuals. Most advertisements have had tone and color enhancements in order to make them look "better than life." The same goes for studio recording. It is not uncommon today to place the kick in front of the drum kit, despite the fact that this creates a very unnatural spatial arrangement.

One of the principal decisions we make when we begin a mix is whether we want things to sound natural or artificial. This applies on both the mix and instrument levels. Some mixes call for a more natural approach. Jazz enthusiasts, for example, expect a natural sound stage and natural-sounding instruments, although in recent years more and more jazz mixes involve an unnatural approach—for instance, compressed drums with an emphasized kick and snare. This fresh, contemporary sound has attracted a new audience (and even some connoisseurs), and facilitated a wider market for record companies to exploit. Popular music nowadays tends to be all but natural—the use of heavy compression, distortions, aggressive filtering, artificial reverbs, delays, distorted spatial images, and the like is routine. These paradigms, while not natural, increase the potential for creativity and profoundly affect the overall sound. Mixes are sonic illusions. The same way that color enhancement improves visuals, our mixing tools allow us to craft illusions that sound better or just different from real life. People who buy live albums expect a natural sound.
Those who buy studio albums expect, to some extent, a sonic illusion, even if they don't always realize that. Some inexperienced engineers are hesitant to process since they consider the raw recording a natural touchstone. Often they are cautious about even gentle processing, considering it to be harmful. Listening to a commercial track that was mixed with an artificial approach will reveal just how extreme mixing treatments can be. Take vocals, for example: their body might be removed, they might be compressed so that there are no dynamic variations, or they might be overtly distorted. We have to remember that radical mixing is generally unperceived by those without a trained ear—the majority of listeners, that is. Here are three sentences my mother has never said and will probably never say:

"Listen to her voice. It's over-compressed."
"That guitar is missing body."
"The snare is too loud."

The common listener does not think or speak using these terms. For them, it is either exciting or boring; they either feel it or don't; and most importantly, they either like it or they don't. This leaves a lot of room for wild and adventurous mixing treatments—we can filter the hell out of a guitar's bottom end; people will not notice. We can make a snare sound like a Bruce Lee punch; people will not notice. Just to prove a point here, the verse kick on Nirvana's "Smells Like Teen Spirit" reminds me more of a bouncing basketball than any bass drum I have ever heard playing in front of me. People do not notice.

3 Learning to mix

An analogy can be made between the process of learning a new language and that of learning to mix. At the beginning, nothing seems to make sense. With language, you are unable to understand simple sentences or even separate the words within a sentence. Similarly, if you play a mix to most people they will not be able to hear a reverb or compression as they haven't focused on these sonic aspects before, let alone used reverbs or compressors. After learning some individual words and how to use them, you find yourself able to identify them in a sentence; in the same way, you start learning how to use compressors and reverbs and then learn to recognize them in mixes. Pronouncing a new word can be challenging, since it is not easy to notice the subtle pronunciation differences in a new language, but after hearing and repeating a word 20 times you get it right; likewise, after compressing 20 vocal tracks, you will start to identify degrees of compression and quickly evaluate what compression is most suitable. Then you will begin to learn grammar so that you can begin to connect words together and construct coherent sentences, much like all your mixing techniques help you to craft a mix as a whole. Finally, since conversation involves more than one sentence, the richer your vocabulary is and the stronger your grammar, the more sentences you are able to properly construct. In mixing, the more techniques and tools you learn and the more mixes you craft, the better your mixing becomes.

Practice makes perfect.

What makes a great mixing engineer?

World-class mixing engineers might earn more for a single album than many people earn in a year. Some mixing engineers also receive points—a percentage of album sale revenue. On both sides of the Atlantic, an accomplished mixing engineer can enjoy a six-digit annual revenue. These individuals are being remunerated for their knowledge, experience, and skill.
Record labels reward them for that, and in exchange enjoy greater sales. It is clear why mixing is often done by a specialized person. And it is such a vast area that it is no wonder some people devote themselves entirely to it—the amount of knowledge and practice required to make a great mixing engineer is enormous. Primarily, the creative part of mixing revolves around the three steps shown in Figure 3.1. The ability to progress through these steps can lead to an outstanding mix. But a great mixing engineer will need a notch more than that, especially if hired. These steps are explained, along with the requisite qualities that make a great mixing engineer, in the following sections.

Figure 3.1 The three steps of creative mixing: Vision (How do I want it to sound?), Action (What equipment should I use? How should I use the equipment?), and Evaluation (Does it sound right? Does it sound like I want it to? What is wrong with it?).

Mixing vision

There are different methods of composing. One involves using an instrument, say a piano, and then, either by means of trial and error or using music theory, coming up with a chord structure and melody lines. Another approach involves imagining or thinking of a specific chord or melody and then playing it. The latter process of "visualizing" and then playing or writing is favored by many composers and songwriters. The same two approaches apply to mixing as well. If we take the equalization process of a snare, for example, the first approach involves sweeping through the frequencies, then choosing whatever frequency appeals to us most. The second approach involves first imagining the desired sound and then approaching the EQ in order to attain it. Put another way, the first approach might involve thinking, "OK, let's try to boost this frequency and see what happens," while the second might sound more like, "I can imagine the snare having less body and sounding more crisp." Just as some composers can imagine the music before they hear it, a mixing engineer can imagine sounds before attaining them—a big part of mixing vision.

Mixing vision is primarily concerned with the fundamental question: How do I want it to sound? The answer could be soft, powerful, clean, etc. But mixing vision cannot be defined by words alone—it is a sonic visualization, which later manifests through the process of mixing. The selection of tools that we can use to alter and embellish sound is massive—equalizing, compressing, gating, distorting, adding reverb or chorus are just a few. So what type of treatment should we use? There are infinite options available to us within each category—the frequency, gain, and Q controls on a parametric equalizer provide billions of possible combinations. So why should we choose a specific combination and not another? Surely equalizing something in a way that makes it sound right does not mean that a different equalization would not make it sound better. A mixing vision provides the answer to these questions: "because this is how I imagined it; this is how I wanted it to sound."

A novice engineer might lack imagination. The process of mixing for him or her is a trial-and-error affair between acting and evaluating (Figure 3.2). But how can one critically evaluate something without a clear idea of what one wants in the first place? Having no mixing vision can make mixing a very frustrating hit-and-miss process.
Figure 3.2 The novice approach, without a mixing vision: Action (What equipment should I use? How should I use the equipment?) and Evaluation (Does it sound right? What is wrong with it?).

Having a mixing vision can make all the difference between the novice and the professional mixing engineer. While the novice shapes the sounds by trial and error, the professional imagines sounds and then achieves them.

The skill to evaluate sounds

The ability to craft a good mix is based on repeated evaluations. One basic question, often asked at the beginning of the mixing process, is "What's wrong with it?" Possible answers might be, "the highs on the cymbals are harsh," "the frequency spectrum of the mix is too heavy on the mids," or "the drums are not powerful enough." From the endless treatment possibilities we have, focusing on rectifying the wrongs provides a good starting point. It can also prevent the novice from doing things that aren't actually necessary (for example, equalizing something that didn't really require equalization) and thereby save precious studio time. At times, it might be hard to tell what is wrong with the mix, in which case our mixing vision provides the basis for our actions. After applying a specific treatment, the novice might ask, "Does it sound right?" while the veteran might also ask, "Does it sound the way I want it to?" Clearly, the veteran has an advantage here since this question is less abstract.

Mastering the tools, and knowledge of other common tools

Whether with or without a clear mixing vision, we perform many actions in order to alter sounds. When choosing a reverb for vocals, the novice might tirelessly go through all the available presets on a reverb emulator. There can be upward of 50 of these, and so the entire process can take some time. The veteran, on the other hand, will probably quickly access a specific emulator and choose a familiar preset; a bit of tweaking and the task is done. It takes very little time. Experienced mixing engineers know, or can very quickly find out, which tool will do the best job in a specific situation; they can quickly answer the question: "What equipment should I use?"

Professional mixing engineers do not always work in their native environment. They will sometimes work in different studios and, even though they might take their favorite gear with them, a big part of the mix will be done using in-house equipment. Therefore, professional mixing engineers have to be familiar with the common tools found in a commercial environment. Mastering the tools at one's disposal does not only mean having the ability to pick the right tool for a specific task, but also having the expertise to employ the equipment in the best way ("How should I use the equipment?"). Knowing whether to choose high-shelving or high-pass characteristics on an equalizer, or knowing that a specific compressor will work well on drums when more than one ratio button is pressed, are just a couple of examples.

It is also worth discussing the quantity of tools we have at our disposal. Nowadays, DAW users have a wider selection than those mixing using hardware. Not only are plugins cheaper, but they can be used across various tracks simultaneously, whereas a specific hardware processor cannot. In an analog studio, a mixing engineer might have around three favorite compressors to choose from when processing vocals; DAW users might have a choice of 10. Learning each of these compressors—understanding each of them—takes time; just reading the manual is time-consuming.
Having many tools can mean that they are not realizing their potential because there is no time to learn and properly experiment with them all. Mixing is a simple process that only requires a pair of trained ears and a few quality tools. Less can be more. Jack of all trades, master of none.

Theoretical knowledge

Four questions:

When clipping shows on the master track in an audio sequencer, is it the master fader or all of the channel faders that should be brought down?
To achieve more realistic results, should one or many reverb emulators be used?
Why and when should stereo linking be engaged on a compressor?
When should dither be applied?

To say that every mixing engineer knows the answers to these questions would be naïve. So would saying that one cannot craft an outstanding mix without strong theoretical knowledge. There are more than a few highly successful engineers who would be unable to provide answers to many theoretical questions relating to their field. But knowing the answers to these questions is definitely an advantage. Knowledge is always a blessing and, in such a competitive field, can make all the difference. Out of two equally talented engineers with different levels of knowledge, it would not be difficult to choose who to work with.

To acquire knowledge, some might undertake an educational program while others might learn little by little on the job; but, either way, all mixing enthusiasts need to be compulsive learners—if the ratio on a compressor is set to 1:1, a novice will spend hours trying to figure out why no other control has an effect. Learning the difference between shelving and a pass filter is handy. The effect that dither has on the final mix quality is worth knowing. It would seem unreasonable for a mastering engineer not to know when to apply dither, but mixing engineers should know too.

It is better to know what you can do, and how to do it, than to understand what you have done.

Interpersonal skills

Studio producers need an enormous capacity to interact and deal with many people, with different abilities, moods, and degrees of dedication. Mixing engineers tend to work on their own and only occasionally mix in front of the client—whether that be the artist, A&R, or the producer. So, although to a lesser extent than a studio producer, like any job that involves interaction with people, mixing also requires good interpersonal skills.

When the band comes to listen to the mix, it should not come as a surprise that each band member will insist that his or her instrument is not loud enough. (In their defence, they are used to their instrument appearing louder to them, whether onstage or through the cans in the live room.) Even the old tricks of limiting the mix or blasting the full-range speakers will not always appease them. On many occasions, artists and A&R remark on the work of mixing engineers with the same rationale as accountants commenting on the work of graphic designers they have hired. While the feedback from fresh ears can sometimes be surprisingly constructive, at times the clients' comments are either technically or artistically naïve or inappropriate. Things can easily become personal—mixing engineers, like the artists they mix, can become extremely protective about their work. Interpersonal skills can help avoid or resolve artistic disagreements, and assist with calmly expressing an opinion.
But if the artist does not like some aspect of the mix, even if it's technically earth-shattering, however much the mixing engineer might disagree, it is the mixing engineer who must compromise. The client-is-always-right law is the same in mixing—after all, a displeased client is a lost client. For artists, each single or album goes on their CV forever and can be life-defining; do you really want them to be unhappy with your work?

The ability to work fast

Learning something new can be tricky and testing—all guitar players experience some frustration before they can change chords quickly enough or produce a clean sound. It is maddening working on a single verse for a whole day and still being unhappy with the mix. But, as experience accumulates, it takes less time to choose tools and utilize them to achieve the desired sound. Also, our mixing visions become sharper and we can crystallize them more quickly. Altogether, each task takes less time, which leaves more time to elevate the mix or experiment. Needless to say, the ability to work fast is essential for hired mixing engineers, who work under busy schedules and strict deadlines.

Methods of learning

Reading about mixing

Literature is great. Books, magazine articles, and Internet forums can be the source of some extremely valuable theory, concepts, ideas, and tips. But reading about mixing will not make a great mixing engineer, in the same way as reading a cookery book will not make a great chef. Reading about mixing gives us a better chance to understand core concepts and operate our tools, but the one thing it does not do is improve our sonic skills.

Reading manuals is also important, although unfortunately many people choose to neglect it. The basic aim of a manual is to teach us how to use our equipment, and sometimes also how to use it correctly or how to use it better. In their manuals, many manufacturers will present some practical advice on their products and sometimes on mixing in general. Sometimes the controls of a certain tool are not straightforward and it might take an eternity to understand what they do without reading the manual. Read the manual.
In practice, these amazing mixes are not down to secret techniques, but to an extensive understanding of basic techniques and experience using them. Secret techniques often only add a degree of polish or the individual's idiosyncratic sonic stamp.

Doing it

Without a shadow of a doubt, the best way to learn mixing is simply by doing it. Most of the critical skills and qualities of a great mixing engineer can be acquired through the practice of the art. While mixing, we learn to evaluate sounds and devices, use our equipment in the best way, work faster, and articulate our mixing vision quicker. Combined with good theoretical background and enough application, there is very little to stop anyone from becoming a competent mixing engineer. There is a direct link between mixing-miles and the final quality of the mix. The best way to learn mixing is to mix.

Mixing analysis

Sometimes, learning the techniques of an art makes it hard to perceive the art as a whole. For example, while watching a movie, film students will analyze camera movements, lighting, edits, lip-sync, or acting skills. It can be hard for those students to stop analyzing and just enjoy movies like they did when they were fascinated kids. However, many mixing engineers find it easy to switch in and out of a mixing analysis state—even after many years of mixing, they still find it possible to listen to a musical piece without calculating how long the reverb is, where the trumpet is panned to, or questioning the sound of the kick. Others simply cannot help it.

Although it is far less enjoyable to analyze the technical aspects of a movie while watching it, this critical awareness can help make film students more conscientious filmmakers. Sit, watch, and learn how the masters did it—simple. The same approach works for mixing. Every single mix out there, whether good or bad, is a lesson in mixing. Learning is just a matter of pressing play and actively listening to what has been done. Although mixing analysis cannot always reveal how things were done, it can reveal much of what was done. Your music collection contains hundreds of mixing lessons.

There are endless things to listen for when analyzing others' mixes, and these can cover any and every aspect of the mix. Here are just a few questions you might ask yourself while listening:

How loud are the instruments in relation to one another?
How are the instruments panned?
How do the different instruments appear in the frequency spectrum?
How far apart are the instruments in the depth field?
How much compression was applied to the different instruments?
Can any automation be detected?
How long are the reverbs?
How defined are the instruments?
How do different mix aspects change as the song advances?

A quick demonstration seems appropriate here. The following points provide a partial mixing analysis for the first 30 seconds of Nirvana's "Smells Like Teen Spirit," the album version:

The tail of the reverb on the crunchy guitar is audible straight after the first chord (0:01).
There is extraneous guitar noise coming from the right channel just before the drums are introduced (0:05).
The crunchy guitar dives in level when the drums are introduced (0:07).
Along with the power guitars (0:09–0:25), the kick on the downbeats is louder than all other hits. (It appears to be the actual performance, but it can also be achieved artificially during mixdown.)
When listening in mono, the power guitars lose some highs (0:09–0:25).
The snare reverb changes twice (a particular reverb before 0:09, then no audible reverb until 0:25, then another reverb).
During the verse, all the kicks have the same timbre (suggesting drum triggers).
There is kick reverb during the verse.
It is possible to hear a left/right delay on the hi-hats—especially during open/close hits. (This could be the outcome of a spaced microphone technique, but can also occur during mixdown.)
The drums are panned audience-view.

The excerpt set (from Chapter 1) can be a true asset when it comes to mixing analysis, as the quick changes from one mix to another make many aspects more noticeable. Not every aspect of the mix is easily discernible: some are subliminal and are felt rather than heard. To be sure, the more time and practice we put into mixing analysis, the more we discover.

In addition to what we can hear from the plain mix, it is also possible to use different tools in order to reveal extra information. Muting one channel of the mix can disclose additional stereo information (e.g., a mono reverb panned to one extreme). Using a pass filter can help in understanding how things have been equalized. To reveal various stereo effects, one can listen in mono while phase-reversing one channel (this results in a mono version of the difference between the left and right, which tends to make reverbs and room ambiance very obvious).

Reference tracks

Mixing analysis is great, but it is impossible to learn hundreds of mixes thoroughly, and it can be impractical to carry them around just in case we need to refer to them. It is better to focus on a few select mixes, learn them inside out, analyze them scrupulously, and have them readily accessible. Some mixing engineers carry a few reference tracks (mostly their own past mixes) so they can refer to them. The novice might refer to his or her reference tracks on a frequent basis. When mixing at home or in their studio, some have a specific folder on the hard drive with their select mixes. In addition to reference tracks, including the excerpt set can be great since it enables a quick comparison between many different mixes. It is also possible to include a few raw tracks, which can later be used to evaluate different tools.

Our choice of reference tracks might not be suitable for every mix. If we are working on a mix that includes strings, and none of our reference tracks involve strings, it would be wise to look for a good mix that does. Likewise, if our reference tracks are all heavy metal and we happen to work on a chill-out production, it would be sensible to refer to some more appropriate mixes.

Usage of reference tracks

Reference tracks can be employed for different purposes:

As a source for imitation—painting students often go to a museum to copy a familiar painting. While doing so, they learn the finest techniques of famous painters. Imitating another's techniques is part of the learning process. Likewise, there is nothing amiss in imitating proven mixing techniques—if you like the sound of the kick in a specific mix, why not imitate that sound in your mix? There is no reason why you can't replicate the technique of a specific track that you particularly like. When we are short of a mixing vision, we can replace it with the sonic image of an existing mix, or try to imitate it, or just some aspects of it. Trying to imitate the sound of a known mix is actually a great mixing exercise. However, caution must be exercised, for several reasons.
First, productions can be so diverse, whether in their emotional message, style, arrangement, quality, or the nature of the raw material, that what sounds good in another mix might not sound so good in yours. Second, setting a specific sound as an objective can be limiting and mean that nothing better will be achieved. Third, it is hard to obtain the same sounds with different recordings—when the ingredients are different, it's hard to make the dish taste similar. Finally, and most importantly, imitation is innovation's greatest enemy—there is little creativity involved in imitation. In fact, it might restrain the development of creative mixing skills.

As a source of inspiration—while imitating a mix requires a constant comparison between the reference track and our own mix, reference tracks can be played before mixing to inspire us as to the direction in which the mix should go and what qualities it should incorporate. For the novice, such a practice can kick-start some mixing vision and set certain sonic objectives.

As an escape from a creative dead end—sometimes we reach a point where we are clearly unhappy with our mix, but frustratingly cannot tell quite what is wrong with it. We might be simply out of ideas or lacking vision. Learning the difference between our mix and a specific reference mix can trigger new ideas, or suggest problems in our mix.

As a reference for a finished mix—when we finish mixing, we can compare our mix to a specific reference track. Listening to how the professionals do it can help us generate ideas for improvement. The frequency response or relative levels of the two mixes are just two possible aspects that we might compare.

To calibrate our ears to different listening environments—working anywhere but in our usual listening environment reduces our ability to evaluate what we hear. Just before we start to listen critically in an unfamiliar environment, whether mixing or just evaluating our own mixes, playing a mix we know well can help calibrate our ears to unfamiliar monitors, the acoustics, or even a different position within the same room.

To evaluate monitor models before purchase—studio monitor retailers usually play customers a popular track that has an impressive mix, at loud levels. Chances are that the monitors will impress the listener that way. Listening to a mix that you are very familiar with can improve judgment.

It is worth remembering that if a reference track has been mastered, it is very likely to contain tighter dynamics, usually in the form of more allied relative levels and heavier compression. Also, mastered tracks are typically louder, whereas the overall loudness of a track is not a concern during the mixing stage. In some albums, frequency treatment takes place in order to match the overall sound to that of the worst track. These points are worth bearing in mind when comparing a reference track to a mix-in-progress—a mastered reference track is an altered version of a mix, usually for the better.

How to choose a reference track

Choosing some of our own past mixes for reference purposes is always a good idea. Having worked on these mixes, we are familiar with the finer details and, retrospectively, their faults. Ideally, reference materials should be a combination of both unmastered and commercial tracks. Here are a few of the qualities that reference tracks should have:

A good mix—while subjective, your opinion of what is a good mix is central.
It is important to choose a mix you like, not a production you like—despite Elvis Presley's greatness, the sonic quality of his original albums is nowhere near today's standards.

A contemporary mix—mixing has evolved. A good mix from the 1980s is likely to have more profound reverbs than the mix of a similar production from the 1990s. Part of the game is keeping up with the changing trends.

Genre related—clearly, it makes little sense to choose a reference track of a genre that is fundamentally different from the genres you will be working on.

A dynamic production—choosing a dynamic production, which has a dynamic arrangement and mix, can be like having three songs in one track. There is more to learn from such a production.

Reference tracks should not be:

A characteristic mix—the mixing style of some bands, The Strokes, for example, is rather unique. A mix that has a distinct character will only serve those distinct productions and bands.

Too busy—it is usually easier to discern mixing aspects in sparse productions.

Too simple—the more there is to learn from a mix, the better. An arrangement made of a singer and her acoustic guitar might sound great, but will not teach you how to mix drums.

4 The process of mixing

Mixing and the production chain

There are differences between the production processes of recorded music and sequenced music, and these differences affect the mixing process.

Recorded music

Figure 4.1 Common production chain for recorded music: songwriting, arranging, recording, editing, mixing, and mastering.

Figure 4.1 illustrates the common production chain for recorded music. Producers may give input at each stage, but they are mostly concerned with the arrangement and recording stages. Each stage has an impact on the subsequent stage, but each of the stages can be carried out by different people.

Mixing is largely dependent on both the arrangement and recording stages. For example, an arrangement might involve only one percussion instrument, say a shaker. If panned center in a busy mix, it is most likely to be masked by other instruments. But panning it to one side can create an imbalanced stereo image. It might be easier for the mixing engineer to have a second percussion instrument, say a tambourine, so the two can be panned left and right. A wrongly placed microphone during the recording stage can result in a lack of body for the acoustic guitar. Recreating this missing body during mixdown is a challenge. Some recording decisions are, to be sure, mixing decisions. For example, the choice of stereo-miking technique for drum overheads determines the localization and depth of the various drums in the final mix. Altering these aspects during mixdown takes effort.

Mixing engineers, when a separate entity in the production chain, commonly face arrangement or recording issues such as those mentioned above. There is such a strong link between the arrangement, recordings, and mix that it is actually unreasonable for a producer or a recording engineer to have no mixing experience whatsoever. A good producer anticipates the mix. There is an enormous advantage to having a single person helping with the arrangement, observing the recording process, and mixing the production. This ensures that the mix is borne in mind throughout the production process.

There is some contradiction between the nature of the recording and mixing stages.
The recording stage is mostly concerned with the capturing of each instrument so that the sound is as good as it possibly can be (although, to a varying degree, instruments are recorded so their sound fits existing sounds). During the mixing stage, different instruments have to be combined, and their individual sounds might not work perfectly well in the context of a mix. For example, the kick and bass might sound unbelievably good when each is played in isolation, but combined they might mask one another. Filtering the bass might make it thinner, but it will work better in the mix context. Mixing often involves altering recordings to fit into the mix—no matter how well instruments were recorded.

Sequenced music

The production process of sequenced music (Figure 4.2) is very different in nature to that of recorded music. In a way, it is a mixture of songwriting, arranging, and mixing—producing for short. This affects mixing in two principal ways. First, today's DAWs, on which most sequenced music is produced, make it easy to mix as you go. The mix is an integral part of the project file, unlike a console mix that is stored separately from the multitrack. Second, producers commonly select samples or new sounds while the mix is playing along; unconsciously, they choose sounds based on how well they fit into the existing mix. A specific bass preset might be dismissed if it lacks definition in the mix, and a lead synth might be chosen based on the reverb that it brings with it. Some harmonies and melodies might be transposed so they blend better into the mix. The overall outcome of this is that sequenced music arrives at the mixing stage partly mixed.

Figure 4.2 Common production chain for sequenced music: production, mixing, and mastering.

As natural and positive as this practice may seem, it causes a few mixing problems that are common to sequenced music. To begin with, synthesizer manufacturers and sample-library publishers often add reverb (or delay) to presets in order to make them sound bigger. These reverbs are permanently imprinted into the multitrack submission and have restricted depth, stereo images, and frequency spectrums that might not integrate well with the mix. Generally speaking, dry, synthesized sounds and mono samples offer more possibilities during mixdown. In addition, producers sometimes get attached to a specific mixing treatment they have applied, such as the limiting of a snare drum, and leave these treatments intact. Very often, the processing is done using inferior plugins, in a relatively short time, and with very little attention to how the processing affects the overall mix. Flat dynamics due to over-compression or ear-piercing highs are just two issues that might have to be rectified during the separate mixing stage. Sequenced music often arrives at the mixing stage partly mixed—which could be more of a hindrance and less of a help.

Recording

They say that all you need to get killer drum sounds is a good drum kit in a good room, fresh skins, a good drummer, good microphones, good preamps, some good EQs, nice gates, nicer compressors, and a couple of good reverbs. Remove one of these elements and you will probably find it harder to achieve that killer sound; remove three and you may never achieve it. The quality of the recorded material has an enormous influence on the mixing stage. A famous saying is "garbage in; garbage out." Flawed recordings can be rectified to a certain extent during mixing, but there are limitations.
Good recordings leave the final mix quality to the talent of the mixing engineer, and offer greater creative opportunities. Nevertheless, experienced mixing engineers can testify to how drastically the process of mixing can improve poor recordings, and how even low-budget recordings can be turned into an impressive mix. Much of this is thanks to the time, talent, and passion of the mixing engineer, and sometimes involves a few mixing "cheats," such as the triggering of drum samples and re-amping of guitars. Garbage in; garbage out. Still, a lot can be improved during mixdown.

Arrangement

The arrangement (or instrumentation) largely determines which instruments play, when, and how. Mixing-wise, the most relevant factor of the arrangement is its density. A sparse arrangement (Figure 4.3a) will call for a mix that fills various gaps in the frequency, stereo, and time domains. An example of this would be an arrangement based solely on an acoustic guitar and one vocal track. The mixing engineer's role in such a case is to create something out of very little. At the other extreme is a busy arrangement (Figure 4.3b), where the challenge is to create a space in the mix for each instrument. It is harder to lay emphasis on a specific instrument, or emphasize fine details, in a busy mix. Technically speaking, masking is the cause.

Figure 4.3 Sparse vs. dense arrangement.

Both Andy Wallace and Nigel Godrich faced sparse arrangements consisting of a guitar and vocal only in sections of "Polly" by Nirvana and "Exit Music" by Radiohead. Each tackled it in a different way—Wallace chose a plain, intimate mix, with fairly dry vocal and a subtle stereo enhancement for the guitar. Godrich chose to use very dominant reverbs on both the guitar and vocal. It is interesting to note that Wallace chose the latter, reverberant approach on his inspiring mix for "Hallelujah" by Jeff Buckley—an almost seven-minute song with an electric guitar and a single vocal track.

It is not uncommon for the final multitrack to include extra instrumentation along with takes that were recorded as try-outs, or in order to give some choices during mixdown. It is possible, for example, to receive eight power-guitar overdubs for just one song. This is done with the belief that layering eight takes of the same performance will result in an enormous sound. Enormousness aside, properly mixing just two of the eight tracks can sometimes sound much better. There are also opposite situations, where the arrangement is so minimalist that it is very hard to produce a rich, dynamic mix. In such cases, nothing should stop the mixing engineer from adding instruments to the mix—as long as time, talent, and ability allow this, and the client approves the additions. It is acceptable to remove or add to the arrangement during mixdown.

It is worth remembering that the core process of mixing involves both alteration and addition of sounds—a reverb, for example, is an additional sound that occupies space in the frequency, stereo, and time domains. It would therefore be perfectly valid to say that a mix can add to the arrangement. Some producers take this well into account by "leaving a place for the mix"—the famous vocal echo on Pink Floyd's "Us and Them" is a good example of this. One production philosophy is to keep the arrangements simple, so that greatness can be achieved at the mixing stage. The mix can add sonic elements to the arrangement.
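Masking has come up twice in this chapter: the kick and bass fighting each other, and dense arrangements burying fine detail. As a small illustration of the usual first-aid treatment, thinning one of the clashing parts with a gentle high-pass filter, here is a hedged sketch; the 60 Hz corner, the slope, the file names, and the soundfile/scipy packages are all assumptions for illustration only.

```python
# Minimal sketch of the first-aid treatment for the masking discussed above
# (e.g. the kick/bass example earlier in this chapter): thin the clashing part
# with a gentle high-pass filter. Corner frequency, slope, and file names are
# hypothetical values chosen purely for illustration.
import soundfile as sf
from scipy.signal import butter, sosfilt

bass, fs = sf.read("bass_di.wav")                      # hypothetical bass track
sos = butter(N=2, Wn=60, btype="highpass", fs=fs, output="sos")
bass_thinned = sosfilt(sos, bass, axis=0)              # roll off content below ~60 Hz
sf.write("bass_di_hpf.wav", bass_thinned, fs)          # audition this against the kick
```

Whether 60 Hz is right, or whether the kick rather than the bass should be filtered, is exactly the kind of call the mix context, not the soloed track, should decide.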
Editing

Generally, on projects that are not purely sequenced, editing is the final stage before mixing. Editing is subdivided into two types: selective and corrective. Selective editing is primarily concerned with choosing the right takes, and the practice of comping—combining multiple takes into a composite master take. Corrective editing is done to repair a bad performance.

Anyone who has ever engineered or produced in a studio knows that session and professional musicians are a true asset. But as technology moves forward, enabling more sophisticated performance corrections, mediocre performance is becoming more acceptable—why should we spend money on vocal tuition and studio time when a plugin can make the singer sound in tune? (More than a few audio engineers believe that the general public's perception of pitch has sharpened in recent years due to the excessive use of pitch correction.) Drum correction has also become common practice. On big projects, a dedicated editor might work with the producer to do this job. Unfortunately, though, sometimes it is the mixing engineer who is expected to do such a job (although mostly this is done for an additional editing fee).

A lot of corrective editing can be done mechanically. Most drums can be quantized to metronomic precision, and vocals can be made perfectly in tune. Although many pop albums feature such extreme edits, many advocate a more humanized approach that calls for little more than an acceptable performance (perhaps ironically, sequenced music is often humanized to give it feel and swing). Some argue that over-correcting is against all genuine musical values. It is also worth remembering that corrective editing always involves some quality penalty. In addition, audio engineers are much more sensitive to subtle details than most listeners. To give an example, the chorus vocals on Beyoncé's "Crazy in Love" are late and offbeat, but many listeners don't notice it.

The mix as a composite

Do individual elements constitute the mix, or does the mix consist of individual elements? Those who believe that individual elements constitute the mix might give more attention to how the individual elements sound, but those who think that the mix consists of individual elements care about how the sound of individual elements contributes to the overall mix. It is worth remembering that the mix—as a whole—is the final product. This is not to say that the sound of individual elements is not important, but the overall mix takes priority.

A few examples would be appropriate here. It is very common to apply a high-pass filter on a vocal in order to remove muddiness and increase its definition. This type of treatment, which is done to various degrees, can sometimes make the vocals sound utterly unnatural, especially when soloed. However, this unnatural sound often works extremely well in mix context. Another example: vocals can be compressed while soloed, but the compression can only be per