Sound Syllabus Film-Animation-MPS - PDF

Rochester Institute of Technology

2024

David Sluberski


Summary

This syllabus by David Sluberski for the SOFA Sound Courses details course objectives in film sound, sound design, and editing techniques with ProTools software for film and animation students at RIT. It covers grading, tutorials, and important information on required equipment such as SanDisk USB drives.

Full Transcript

# SOUND SYLLABUS for FILM-ANIMATION-MPS

**SOFA Sound Courses** Copyright: David Sluberski - 2024-25 [[email protected]](mailto:[email protected]) 585-880-4349 Room MMM-3105 (MSS-3120 Suite) MAGIC SPELL STUDIOS

John Ebert, [[email protected]](mailto:[email protected]), will be teaching a section on Basic Sound. This syllabus and other information for this course apply to all sections. Feel free to reach out to either of us, no matter what section you are in, to get the correct information. *Anything stated or demonstrated in class may be on the tests!*

**Course objective:** The ability to correctly record, edit, and produce a soundtrack to technical specifications while applying sonic integrity and the ability to "listen forward" to create an artistic soundscape. Ear training will be developed to recognize desirable and/or problematic sounds and techniques. This will include training on the Sound Devices MixPre 6 recorders, various microphones and techniques, Avid's ProTools software, and iZotope's Insight monitoring software for metering proper audio levels, so all films can meet specifications for RIT screenings and delivery in other distribution formats.

PLEASE MAKE A ROUTINE TIME TO GO OVER MY POSTS AND WHATEVER IS SENT FOR CLASS. As a general rule, on Monday of each week, an email will be sent from me via MyCourses outlining the week's expectations. You are responsible for reading this and for having your RIT email account set up to receive these messages. No other email accounts are allowed.

**GRADING** I will have tests, and you will respond weekly in class discussions and submit a weekly sound report for the project. There may be changes from week to week. To succeed in this creative world, you must adapt every day. I will send Vimeo and other links to view and hear audio-related material I feel is essential.
**YOU need to pay attention in class, take notes and manage your time in order to succeed!**

All students need to purchase two high-speed SanDisk USB thumb drives: one to off-load files from the Sound Devices MixPre 6 recorder, and one to back up all Mac production project files. The model is the SanDisk Ultra 64GB USB 3.0 flash drive, about $8.00 each. No other model is accepted. [https://www.bhphotovideo.com/c/product/1003351-REG/sandisk\_sdcz48\_064g\_a46\_64gb\_ultrausb\_3\_0\_flash.html](https://www.bhphotovideo.com/c/product/1003351-REG/sandisk_sdcz48_064g_a46_64gb_ultrausb_3_0_flash.html) **This is mandatory! No other drive you own can be used!**

You will format one drive for Mac, with read and write permissions set for everyone, for your projects. The second drive will off-load files from the MixPre 6 and be formatted as exFAT. We will show you how in class. These must be purchased by week 1. No exceptions! They should only be used for this class. Everyone is responsible for backing up all projects within a group; you should all have the same copy.

We will be working with Avid ProTools on macOS. A Mac-formatted, not partitioned, drive will work best. For these devices to work in the HD Lab, they must have all permission settings set to "everyone." They can't be dual-format, only Mac format; otherwise, your project will crash the computer, and you will lose your work.

**Communication is the key!** I am always available to listen to any concerns you may have, including listening to your projects, learning ProTools, and understanding technical equipment. *It is your responsibility to contact me and make an appointment in a timely fashion.*

**Grading:**

* Two tests: 50% of your grade
* Project: 35%
* Weekly check-in report: 10%
* ProTools tutorial: 5%

Class participation, attendance, and showing up on time will be weighted into your final grade. See the Student Agreement for attendance policies in MyCourses.
This needs to be digitally signed and submitted via MyCourses by the first week of class.

**Basic Sound class week by week!** These are target dates and may change. All classes include video tutorials as part of the material viewed in class, with a follow-up Q&A. These are all listed in MyCourses and can and should be used for future reference, especially when preparing for the tests. See page 24 from past classes for more details.

* **WEEK 1:** Introductions about the course, each other, the TAs, and the professors. Friday Lab: we will do a class ADR session from trailers used in Advanced Sound. Everyone is encouraged to be an actor and try this out.
* **WEEK 2:** Pages 1-6; we will try to schedule the HD lab for the ProTools tutorial.
* **WEEK 3:** ProTools in the HD lab. Everyone brings headphones or earbuds (no Bluetooth).
* **WEEK 4:** ProTools tutorial due; sent via Dropbox on MyCourses. Script due.
* **WEEK 5:** Video tutorials and prep for the test. Training on the MixPre 6 kits. Once completed, all students should reserve kits immediately and plan a group timeline and strategy for the project.
* **WEEK 6:** Test 1 in class; about 20 questions. 25% of final grade.
* **WEEK 7:** Continuation of syllabus: EQ, dynamics, monitoring.
* **WEEK 8:** Stereo session due. Everyone will demonstrate their ProTools session in class.
* **WEEK 9:** Spring break (no break in the Fall semester; we will adapt).
* **WEEK 10:** Dialogue all recorded and finished.
* **WEEK 12:** Final class projects due (35%); take-home test (25%) sent out via email.
* **WEEK 13:** Take-home test due via MyCourses. This will be stated in class. 25% of final grade.
* **WEEK 14:** Playback of projects.
* **WEEK 15:** TBD.

Dave's number one rule! **TRUST YOUR EARS and TAKE CARE OF THEM!** Students! Ear safety is important.
Since the Walkman, and now with personal listening devices, your generation and the ones before you have been playing music too loud for long periods of time.

**Presbycusis (also spelled presbyacusis), or age-related hearing loss**, is the cumulative effect of aging on hearing. It is a progressive and irreversible bilateral symmetrical age-related sensorineural hearing loss resulting from degeneration of the cochlea or associated structures of the inner ear or auditory nerves. The hearing loss is most marked at higher frequencies. Hearing loss that accumulates with age but is caused by factors other than normal aging (**nosocusis and sociocusis**) is not presbycusis, although differentiating the individual effects of distinct causes of hearing loss can be difficult.

The cause of presbycusis is a combination of genetics, cumulative environmental exposures, and pathophysiological changes related to aging. At present there are no known preventive measures; treatment is by hearing aid or surgical implant. Presbycusis is the most common cause of hearing loss, affecting one out of three persons by age 65 and one out of two by age 75. Presbycusis is the second most common illness, next to arthritis, in aged people. It usually occurs after age 50, but deterioration in hearing has been found to start very early, from about the age of 18 years. One early consequence is that even young adults may lose the ability to hear very high-frequency tones above 15 or 16 kHz. Despite this, age-related hearing loss may only become noticeable later in life.

The effects of age can be exacerbated by exposure to environmental noise, whether at work or in leisure time (shooting, music, etc.). This is **noise-induced hearing loss (NIHL)** and is distinct from presbycusis. A second exacerbating factor is exposure to ototoxic drugs and chemicals.

**Audiology Benefits for RIT Employees & Dependents** Did you know?
All RIT faculty, staff, and their dependents (6 years and older) are eligible for all audiology services offered on campus by CSS. Audiology services are free for employees of RIT. [https://www.rit.edu/ntid/css/audiology](https://www.rit.edu/ntid/css/audiology)

**FACTOID!** A reliable set of monitors in a controlled room (speakers or headphones) is the only way to record and mix audio properly! You will need to use headphones to listen to my online tutorials.

PICTURE LOCK IS DEFINED AS NO TIMING CHANGES! NO FURTHER EDITS INVOLVING TIME CHANGES WILL TAKE PLACE!

FACT: The key to proper mixing & recording is what I define as the "Trinity of Mixing." Dave's "Trinity of Mixing" steps:

1. Viewing the dialog track (-16 to -10 dBFS) or music track (-6 dBFS highest peak) played back at the proper metered levels.
2. Setting the control room volume to match those properly metered levels.
3. Understanding the effects of the Fletcher-Munson curves in a known listening environment. In other words, know how the ear hears.

ALL LISTENING SESSIONS SHOULD BE MONITORED THIS WAY, EVEN IF USING HEADPHONES IN THE STUDIO or ON LOCATION! **(1st part of Dave's Trinity of Mixing rule)**

**DECIBEL** The decibel is used as a measurement for sound pressure level (SPL). 0 dB refers to the threshold of hearing. MIXES IN STUDIOS FOR FILM ARE GENERALLY ABOUT 85-87 dB SPL.

**Example:**

* 10 dB recording studio
* 30 dB quiet office
* 60 dB average conversation
* 90 dB traffic
* 135 dB threshold of pain
* 160 dB jet engine close up (almost death)

**METER dB** A VU or LCD meter (in any audio program) is a VOLTMETER and is in NO WAY a reference to sound pressure (SPL). Sound pressure level is measured by a calibrated microphone to determine how loud the sound is. This will be demonstrated in class.

Line level: 0 VU = +4 dBm (decibels referenced to one milliwatt) = 1.23 volts, which is, for the most part, the analog industry standard.
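The +4 dBm line-level figure above can be checked with a little decibel arithmetic. A minimal sketch, assuming the standard 0.775 V reference (the voltage that dissipates 1 mW into a 600-ohm load); the function names are illustrative, not from any audio library:

```python
import math

# Decibels for voltage ratios: dB = 20 * log10(V / V_ref).
# For professional line level the reference is 0.775 V
# (1 milliwatt into a 600-ohm load).
V_REF = 0.775

def db_to_volts(db):
    """Convert a level in dB (re 0.775 V) to volts."""
    return V_REF * 10 ** (db / 20)

def volts_to_db(volts):
    """Convert volts to dB re 0.775 V."""
    return 20 * math.log10(volts / V_REF)

print(round(db_to_volts(4), 2))   # +4 dB line level -> 1.23 volts
print(round(volts_to_db(0.775)))  # the reference itself -> 0 dB
```

The 1.23 V value quoted in the syllabus falls straight out of this formula.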
The microphone level is much lower and needs to be amplified into an acceptable line-level range (a topic discussed later in the syllabus). There is also a line level referred to as -10. This is used for consumer and semi-professional equipment, typically with an RCA or 1/8" plug or jack, and computer audio card interfaces.

**BALLISTICS AND READINGS** All VU meters (the old-style analog ones are almost gone in today's environment) differ in how they work. Meters are devices that indicate the relative level of the signal (voltage). They all have different ballistics, or reaction times; some are fast and some are slow.

A term used in the digital world is 0 dBFS (decibels full scale). It is the absolute peak, the end of the meter and of the bits in the digital world. Anything over 0 dBFS is distorted and uses all the bits. At RIT, we use iZotope software for measuring levels within ProTools. There is also the Loudness Radar tool by TC Electronic in Adobe Premiere. Below, in red, are targeted levels measured in dBFS. RIT has moved to cinema/broadcast terms such as Dialogue Normalization and Target Integrated Loudness. All audio levels need to be measured and monitored this way in order to screen at RIT!

There is free software at [https://youlean.co/youlean-loudness-meter/](https://youlean.co/youlean-loudness-meter/) that does a pretty good job of measuring loudness. It may be helpful on your personal computer, but we require the final sound project and films to be measured with the iZotope and TC Electronic tools for accuracy.

**Audio Leveling Policy** For any student film to screen in Wegmans Theater, the audio must be either 1. mixed in MAGIC Sound Mix (2100) and monitored at Fader Level 5.5 (80.0 dBC), or 2.
mastered to the following SOFA Theatrical Mixing Standard: `Maximum Peak: -3 dBFS` `Target Integrated Loudness: -27 LKFS/LUFS (+/-2)`

**LU** Below are older references, but they are similar to the above and will be discussed in class: Final audio mixes: voice should average -16 to -10 dBFS. All other material should never go above -6 dBFS. (Look at the last few pages of these documents for screening standards.)

**(2nd part of Dave's Trinity of Mixing rule)** The playback of the monitors/speakers should simulate theater playback, starting at 80 dB SPL when the dialog is averaging -16 to -10 dBFS. You will have to approximate this when using headphones, but the same principle applies.

**SOUND** Molecules of air moving around an object. This vibration meets our ear, creating what we call sound.

FREQUENCY: If we say a tone has a frequency of 440 Hertz (Hz), it means it is vibrating back and forth 440 times per second.

FREQUENCY RANGE: The human ear can perceive from 20 Hz to 20 kHz. This range is also used in describing the response of equipment.

DYNAMIC RANGE: This describes the span of sound from the quietest (pianissimo, very quiet) to the loudest (fortissimo, very loud).

FREQUENCY RESPONSE and DYNAMIC RANGE OF EQUIPMENT: These terms are how most equipment is measured and/or referred to for evaluation, e.g., "It has a flat response from 10 Hz to 22 kHz and a dynamic range of 130 dB," or "It is +/- 3 dB from 100 Hz to 12 kHz."

**LOUDNESS** The human ear does not hear all sounds equally. **(3rd part of Dave's Trinity of Mixing rule)**

FLETCHER-MUNSON: The Fletcher-Munson curves describe the way the ear perceives sound at various frequencies and dynamics. Simply stated, as the loudness changes, the ear's response is altered. Listening at one level (higher SPL) will sound different at another (lower SPL) level. The ear is most sensitive at 3000 - 4000 Hz (3 kHz - 4 kHz). This is the basic response of the telephone (cell phones are all over the map, but close).
It was no accident that phones sound the way they do; all that was needed was the intelligibility range. For instance: for the ear to perceive a tone of 100 Hz (low bass) and a tone of 1 kHz (mid voice or guitar) equally at 50 phons (see the Fletcher-Munson curves), the 100 Hz tone would have to be raised about 17 dB. The difference when listening at 90 dB would only be about 5 dB. The monitoring or listening level used to produce the mix or recording is absolutely critical to the final product! It makes all the difference! The screenings of all your works played back in the theater, and in all movie theaters, are based on this principle!

THE HOME STEREO: Some stereos have a loudness button; almost all older ones (pre-1990s) did. This is used to compensate during low-level listening. It is generally a boost of 100 Hz and 10 kHz. This helps the sound at low levels appear identical to how it would sound played back loud without adjustment. This follows Fletcher-Munson. Newer electronics have controls that do this but identify it with different terms. Home systems now have an extensive menu for selecting different listening modes, from Stereo to ATMOS; it can be quite confusing.

Below is a graph of the Fletcher-Munson curves, also known as the equal loudness curves. What should be pointed out is: at 90 dB SPL, the ear hears all frequencies almost equally. When the playback is significantly lower, the ear doesn't hear low frequencies very well. Our ability to record and produce good sound depends on us hearing all frequencies well. Good theaters are calibrated to accommodate this concept, and this is why it's so important to understand and use my "Trinity of Mixing" principles.

[Graph of the Fletcher-Munson equal-loudness curves: vertical axis, intensity level in dB (-20 to 140); horizontal axis, frequency in Hz (20 to 20k); a series of curved lines indicates perceived loudness across frequencies.]
REFLECTION - REFRACTION - DIFFRACTION

REFLECTION: When a pressure wave reaches a surface barrier, it is reflected back into the room.

REFRACTION: The bending of a waveform as it passes from one medium to another, or as it experiences some change (i.e., temperature) within the medium. From a cool surface, sound can rise; from a warm surface, sound can fall.

DIFFRACTION: The change in direction of sound brought about by an obstacle in the path of the direct sound.

ABSORPTION COEFFICIENT: All materials and objects have different absorption coefficients at different frequencies, which affect reflection, refraction, and diffraction. This is the fundamental problem, or solution, for recording AND monitoring audio. Sound blankets on set can dramatically improve dialog by reducing reflections and reverb-type noise. This principle can also be used for voice recordings when a studio is unavailable. When doing a site survey or scouting for locations, everyone should take into account the noises in that area. Factors such as traffic, time of year, etc., can change by the hour.

PHASE AND COHERENCE An audio person's worst nightmare! a.k.a. time delay. Two sine waves 180 degrees apart, when combined, will cancel each other out!

[Four waveform graphs: two titled "Separate Signals," one titled "In phase Mixed signals," and one titled "Out of phase," showing sine waves at various phases. Also: Dave's hand-drawn sine wave marked at 0, 90, 180, 270, and 360 degrees.]

The hand-drawn figure shows a sine wave's amplitude and polarity (bad freehand...). A wave starts at 0, peaks at 90 (positive polarity), crosses zero again at 180, reaches its negative peak at 270 (negative polarity), and returns to 0 at 360 (a complete cycle).
A second wave starts at 180, reaches 90 where the original is at 270, and the two crisscross continually. When both waves are combined, they cancel each other out and become a straight line. This is the worst-case scenario. Phase problems, or time delay, show up in many ways; this will be demonstrated in class at a later time.

This can also happen with an electronic audio signal. If two identical audio signals are combined and one of them has the positive and negative conductors switched, they will cancel each other out. This is known as a polarity reversal, and the signals are said to be electronically out of phase. Phase is referred to in both the acoustic and electronic worlds.

ELECTRONIC PHASE AND CABLE WIRING Cables used for microphones and between professional audio equipment/components are called balanced cables. This cable has two conductors twisted together, one positive and one negative, enclosed in a braided sheath referred to as the shield. The second type is known as an unbalanced cable. This has only one conductor for the positive signal, and the shield is used for the negative signal.

The advantages of balanced cables far outweigh those of unbalanced cables. In most cases, long runs are needed for microphones and equipment. Cables and equipment are subject to a host of induction-related problems from radio frequency interference (RFI), electromagnetic induction (EMI), and others. An unwanted hum or noise picked up by both conductors equally will be canceled out at the next stage of the circuit. The balanced line, known as a twisted pair, works just like the previous sine-wave example: the wanted signal rides the two conductors 180 degrees apart, while hum is induced identically on both, so when the two legs are combined differentially the hum cancels itself out. Balanced cable connections are generally known as low impedance, and single-conductor unbalanced connections as high impedance. The same noise on a single-conductor line will simply be amplified at the next stage.
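Both cancellation effects above can be demonstrated numerically. A minimal sketch; the sample rate, frequencies, and amplitudes are arbitrary choices for illustration, not values from the syllabus:

```python
import math

RATE = 48_000          # sample rate in Hz (illustrative choice)
N = 480                # 10 ms of samples

def sine(freq, phase_deg=0.0, amp=1.0):
    """Generate a short sine wave as a list of samples."""
    return [amp * math.sin(2 * math.pi * freq * n / RATE + math.radians(phase_deg))
            for n in range(N)]

# 1) Two identical 440 Hz tones, 180 degrees apart: total cancellation.
a = sine(440)
b = sine(440, phase_deg=180)
mixed = [x + y for x, y in zip(a, b)]
print(max(abs(s) for s in mixed))     # effectively 0 (a straight line)

# 2) Balanced (twisted-pair) line: the signal rides the pair in opposite
#    polarity, while induced 60 Hz hum appears identically on both conductors.
signal = sine(440, amp=0.5)
hum = sine(60, amp=0.2)
hot  = [s + h for s, h in zip(signal, hum)]    # +signal + hum
cold = [-s + h for s, h in zip(signal, hum)]   # -signal + hum

# The differential input stage subtracts one conductor from the other:
# the common-mode hum cancels and the wanted signal doubles.
recovered = [h - c for h, c in zip(hot, cold)]  # = 2 * signal, hum-free
```

The same subtraction is what the "next stage of the circuit" performs on a real balanced input.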
Generally, unbalanced cables are only used for short distances (under 15 ft). Cell phones create a lot of RF noise problems on set, even in vibrate mode; in most cases, they should be turned off during takes. The audio team hears everything on a production, even when it is not rolling. They are on headphones much of the time, listening for problems.

Dave's professional tip: While on location or in post-production, if you are seen using your phone or getting distracted by it during important moments, you may not be hired again. Clients and others around you notice what you do.

EXAMPLES Balanced cables: 3-pin type (XLR) for microphones and professional audio devices; 1/4" TRS (tip/ring/sleeve, or stereo 1/4" plugs) for professional and semi-pro audio gear. Unbalanced cables: RCA or hi-fi (phono) connectors (typical home stereo stuff), 1/4" guitar or instrument cords, low-budget microphones, and 1/8" cables typical of iPod devices and internal computer cards.

Headphone cords are unbalanced but carry a much higher voltage: they carry VOLTAGE to speakers. Depending on the amplification, a proper gauge cable is needed. Headphone cables are generally terminated with a 1/4" or 1/8" TRS connector: tip = left, ring = right, and the shield goes to the negative terminal of each speaker.

Noise issues like buzz and hum? Another area to consider is the design of the audio gear itself. Good-quality equipment is well shielded and well built. Cheap computer cards and mic pre-amps can create problems. Speakers also need to be shielded if placed near VGA monitors, as they too create an EMI field. Higher-priced audio gear has better shielding, design, and components.

CELL PHONES: Certain cell phone frequencies and RF radiation bleed into many cables and microphones, especially wireless equipment. During sessions and location shoots, I recommend all cell phones be turned off, not set to vibrate. Do a search on the FCC and the white-space issues with RF spectrum allocation.
Things are not very good for the wireless world of audio, and they are getting worse.

MICROPHONES There are many styles of microphones. Some are better suited for certain applications, but there is no set rule as to which one to use. Many models have great reputations, but experimentation and years of evaluation are why microphone choice is a personal preference. At RIT SOFA, we will mainly use shotgun and lavalier microphones, which are standard for live-action and field-production styles, including animation dialog. We also have a large-diaphragm condenser in the Narration Voice Booth for a closer, bigger-sounding voice, such as is required for narration. The MSS-2100 Sound Mix voice booth also has high-end microphones.

Microphone: A transducer that changes acoustical energy into electrical energy.

Types of microphones:

Dynamic: These generally come in two flavors: moving coil and ribbon. The difference between the two is construction. Both pick up vibrations and are housed in an electromagnetic environment. This produces a voltage, or signal, which represents the sound. Characteristics: rugged, medium sensitivity, lacking in high-frequency response. Can handle higher SPL (sound pressure level).

Condenser: Also known as a capacitor microphone. A capacitor is an electrical component consisting of two electrodes (plates), separated by a small distance, capable of storing an electrical charge. This microphone has a diaphragm; as the diaphragm vibrates, the changing spacing between it and the stationary backing plate varies the capacitance. A signal voltage is derived in conjunction with a pre-amplifier built into the microphone body. This pre-amp is needed because the output is very low. Of course, the pre-amp requires a power supply, which furnishes the polarizing voltage to the condenser/diaphragm as well as to the transistors or tubes within the microphone. This voltage is commonly referred to as phantom voltage.
Each manufacturer requires a different voltage, but it is commonly 48 volts DC (direct current). This voltage travels along with the audio in the microphone cable. Phantom voltage can be supplied by recording consoles or by a stand-alone unit; batteries can also supply it. There is also T-Power (12 volts), but this is an old standard. The RIT Cage still has microphones in the kits that use it, and there are adapters for it. Always look at the microphone to understand what is needed, and make sure you know which one you are using.

Electret condenser: This microphone is similar to the above except that the condenser/diaphragm is permanently polarized, and only the pre-amp needs the voltage. Most lavalier microphones are electrets. Characteristics: great frequency response, very sensitive, not rugged, and can fail in high-humidity conditions.

MICROPHONE PICKUP or POLAR PATTERNS All microphones have varying pickup patterns, known as polar patterns. Frequency response is related to polar pattern and is commonly referred to as ON- AND OFF-AXIS FREQUENCY RESPONSE.

Proximity effect: The exaggeration of low-frequency response as a function of distance. This is the same concept used for tonal change in dialog or EFX to create the perception of distance; generally, equalization can make this happen.

CARDIOID: Generally known as unidirectional. This pickup pattern is very directional and is used in noisy environments where a specific sound is needed while surrounding sounds are ignored. The pickup is generally 90 degrees, where the frequency response is optimal. There are variations: hypercardioid and supercardioid. The best frequency response is straight on axis. CHARACTERISTICS: High off-axis coloration. Proximity effect. EXAMPLES: Vocal mics used in live concerts and on instruments; shotgun mics used in film and news gathering.

OMNI: The pickup is considered 360 degrees; it hears sound in all directions.
CHARACTERISTICS: Full frequency response in all directions. Low proximity effect. EXAMPLES: High-quality recording mics; news gathering (hand-held style).

FIGURE EIGHT or BI-DIRECTIONAL: This pattern literally looks like a figure eight. It picks up from the front and back (180 degrees apart) and rejects sound from the sides. CHARACTERISTICS: Very similar to two cardioids back-to-back. EXAMPLES: High-quality recording mics; a popular ribbon-style microphone. This pattern is part of mid-side miking and is what we have on our stereo shotgun mics. ONLY "STEREO SHOTGUN" MICROPHONES HAVE A FIGURE-8 PATTERN, and that is the second microphone capsule; the main capsule is cardioid! These are in most of the MixPre 6 kits. As of 2019, the MixPre 6 kits have a mono shotgun primarily for dialog on sets and include the stereo shotgun, as mentioned. Several of the mono shotguns are 12T-powered and need the adapter (48V to 12T) to work. Look at the Cage online to find information about each kit.

MORE THAN ONE? WHY TWO? With a single ear, a listener can determine pitch, timbre, loudness, and everything else except direction, or localization. Adding another ear provides the cues for localization within a few degrees on a horizontal plane. This is generally referred to as PSYCHOACOUSTICS. The same principle can be applied to the use of microphones.

MONOPHONIC: A single source.

DUAL MONO: A single source, equal or identical on both channels (NOT STEREO).

BINAURAL: Two sources, referenced to the characteristics of the human head and ears. We do not use binaural recording at RIT/SOFA. The term is now being used for spatial recording, which is technically not correct: spatial recordings are played back via headphones, and technical designs create the illusion of sound all around, which is where the term binaural is being used in the wrong manner, or adapted to a new one.

STEREOPHONIC: Similar to binaural but with more spatial effect. The use of two or more microphones.
Always document the type of recording in writing and with a verbal slate in the field or studio: this helps everyone. Example: a single voice or reference tone recorded on both channels of a camera or device is DUAL MONO. A recording from, let's say, an onboard stereo camera mic used for B-roll is STEREO (depending on settings): the two channels aren't identical. They give us localization information, so they can't be the same.

STEREOPHONIC RECORDING TECHNIQUES There have been several approaches to recording stereo while preserving natural balance and perspective. The main approach we will take is the coincident technique, which is commonly used. This is the use of two microphones, or a matched pair, with both capsules close together. This system reduces or avoids time delays and phase shifts between the microphones.

COINCIDENT MICROPHONE RECORDING TECHNIQUES: This area will be discussed and demonstrated in class with specific examples.

The X-Y configuration: Two cardioid microphones with capsules overlapped (coincident) at 90 degrees to each other, forming a right angle. The front plane of this pattern faces the source to be picked up. One capsule is left and the other is right.

The "ORTF" configuration: Proposed by the French national broadcaster (ORTF). This uses two cardioid microphones 17 cm apart at an angle of 110 degrees. As above, one is left and the other is right.

"M-S" (middle-side, or MS): This approach uses Mic #1 as the primary microphone for pickup of an ensemble or sound source. Microphone #2 is a figure-eight pattern at 90 degrees to the source, which picks up the sides. In simple terms: the main mic can be of varying patterns, depending upon the information wanted, and the figure-eight picks up the left-right information. When these are combined, or "matrixed," the result is excellent stereo. How is it done?
The basic premise is `M+S (mid+side) + M-S (mid-side) = stereo.` The plus and minus are derived from splitting the figure-eight and reversing the polarity of one split. M+S = left and M+(-S) = right.

FORMULA:

`M + S = LEFT channel`
`M + (-S) = RIGHT channel`
`L + R = 2M (all S is cancelled out), i.e., perfect mono, also dual mono`

1. The main advantage of this technique is its mono compatibility. When the left and right are summed to mono, the sides cancel each other out ((+S) + (-S) = 0), leaving the main mic as the source. People at home watching television in mono (on the big 3" speaker or any such device) will not lose the basic audio intended. It is doubtful anyone is listening on a mono TV anymore, but TV went stereo in 1987, and it was years before most consumers and broadcasters moved to true stereo. The high-definition transition in 2002-03 really changed all of this.

An easy way to accomplish this with a mixer is by the use of three inputs: input #1 is the MIDDLE mic and is panned center. One signal of the split figure-eight goes into input #2 and is panned left. The other signal of the split figure-eight goes into input #3, the phase-reversal switch is used to change its polarity, and it is panned right.

To do this in the field with only two channels or inputs on a recorder, simply assign the main or mid mic to one channel and the single (unsplit) figure-eight to the second channel. This requires a special matrixing headphone amp while recording, because the signals as they are would sound very weird. Our Sound Devices MixPre 6 kits can do this in the headphones.

2. By matrixing back in the studio, you have control of the width of the stereo. It can also be matrixed in the field to two channels and recorded that way; it's all a matter of preference. This is the most complicated setup, but it's the most versatile and rewarding for film and television.
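The M-S formula above can be verified sample by sample. A minimal sketch; the mid and side sample values are made up purely for illustration:

```python
# M-S decode: left = M + S, right = M + (-S).
# The mid and side sample values below are made up for illustration.
mid  = [0.50, 0.25, -0.30, 0.10]   # main (M) capsule
side = [0.20, -0.05, 0.10, 0.00]   # figure-eight (S) capsule

left  = [m + s for m, s in zip(mid, side)]
right = [m - s for m, s in zip(mid, side)]

# Mono fold-down: L + R = 2M. The side signal cancels completely,
# which is the mono-compatibility advantage of M-S.
mono = [l + r for l, r in zip(left, right)]
print(mono)   # 2 * mid, up to floating-point rounding

# Stereo width control in post: scale S before decoding.
width = 0.5   # 0 = mono, 1 = full width (an arbitrary choice)
narrow_left  = [m + width * s for m, s in zip(mid, side)]
narrow_right = [m - width * s for m, s in zip(mid, side)]
```

Scaling `side` before the decode is what "control of the width of the stereo" in the studio amounts to.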
You will be required to know how this works (conceptually) and explain it in detail on the test, but we will use the stereo shotgun microphones with the Sound Devices MixPre 6 recorders. They have three selections; the operator must check these and understand them before recording.

3. The stereo shotgun allows you to record mono dialog, stereo ambience, or stereo music with one microphone instead of several. This is really useful in documentary work. The ability to go from a mid mic for dialog only, to adding the side (in post) to create stereo for the background, can be invaluable. Many audio engineers, including myself, record in M-S mode: we record mid on channel 1 and side on channel 2, but listen in MS mode on the recording device. It does the matrixing for the headphones, so we can hear what the decoded stereo will sound like while optimizing the process in post-production.

ProTools matrix instructions:

1. Drag or import both track 1 (MID) and track 2 (SIDE) from the MixPre 6 recorder onto TWO MONO AUDIO tracks.
2. Duplicate audio track 2 (under the TRACK pull-down: Duplicate).
3. The new audio track 3 (now called audio track 2dup) will need to be inverted using the AudioSuite plug-in called Invert. Check the setting to process all individual tracks. This will be demonstrated in class.
4. Pan track 2 LEFT and pan track 3 RIGHT.

HEADROOM & MONITORING

ALIGNMENT TONE: A tone used as a reference for level calibration: -20 dBFS. Using tone helps align all the equipment in the audio chain. It's also a good way to trace and troubleshoot signal flow. By sending tone from a console to the recorders, the operator is reassured that all metering, at the console and at the destinations, reflects the actual audio being passed. Having tone on a tape or other media provides a suggested known level from which to judge the audio peaks and the average audio modulation.
Recording tone on magnetic tape while monitoring playback let us know what was actually being put on the tape, not just what was going into the tape machine (this was in the old days of magnetic tape recording); there is a difference, depending on the tape stock and the equipment alignment. This is still important in the digital world. ALL ALIGNMENT TONES SHOULD BE -20 dBFS. NO EXCEPTIONS! Tone is also a known reference when calibrating playback in studios or control rooms.

Creating an alignment tone track in ProTools is easy:

Step 1: Create a NEW STEREO TRACK! NOT TWO MONO TRACKS!
Step 2: Highlight a region on this new track (40 seconds), then go to AudioSuite > Other > Signal Generator and generate a sine wave. This will play back at exactly -20 dBFS.

HEADROOM: This is the absolute highest point that audio can reach before distortion. When audio hits the distortion level, this is also referred to as clipping or overmodulation. Headroom usually has a fixed point. Typical professional audio equipment has a headroom range of up to 24 dB (the same point as 0 dBFS). Remember that:

`0 VU = +4 dBm, or 1.23 volts (line level with a consistent tone)`
`Reference tone @ -20 dBFS`
`24 dB headroom (technical equipment rating)`
`24 dB - 4 dB (the 0 VU / +4 dBm offset) = 20 dB difference of level`

This 20 dB difference is the amount of headroom as defined by the reference above. Having a reference tone of 1 kHz @ -20 dBFS leaves 20 dB of headroom and avoids clipping or distortion. By putting a reference tone at -20 dBFS, the amount of audio level available before distortion is 20 dB, just like the equation. When you hit absolute zero on any device, you're done, and that's bad! This is the 0 dBFS point.

IN MASTERING CDs AND DVDs, THE ENGINEERS TRY TO HIT THE HIGHEST POINT TO MAKE IT LOUD. THAT'S WHY WAVEFORMS RIPPED FROM CDs ONTO A TIMELINE LOOK BLOWN OUT, OR OVERALL VERY LOUD -- THEY ARE! I IMMEDIATELY REDUCE SUCH A FILE BY ABOUT 7-10 dB FOR MIXING PURPOSES when importing it into a session. YOU'LL UNDERSTAND WHEN YOU MIX!
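The -20 dBFS reference corresponds to a simple amplitude on a digital full-scale meter. A sketch of generating such a tone and measuring its peak; the sample rate and duration are arbitrary illustrative choices, not ProTools settings:

```python
import math

RATE = 48_000                     # sample rate in Hz (illustrative choice)
FREQ = 1_000                      # 1 kHz alignment tone
LEVEL_DBFS = -20.0                # target reference level

# dBFS -> linear peak amplitude: amp = 10 ** (dBFS / 20).
amp = 10 ** (LEVEL_DBFS / 20)     # -20 dBFS -> 0.1 of full scale

# One second of tone; full scale spans -1.0 .. +1.0.
tone = [amp * math.sin(2 * math.pi * FREQ * n / RATE) for n in range(RATE)]

# Measure the peak back in dBFS, the way a full-scale meter would.
peak = max(abs(s) for s in tone)
peak_dbfs = 20 * math.log10(peak)
print(round(peak_dbfs, 1))        # -20.0
```

The 20 dB of headroom in the equation above is exactly the distance from this 0.1 peak up to full scale at 1.0.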
THIS SHOULD BE DONE ON ALL EDITING PLATFORMS FOR USE WITH VIDEO.

TRANSIENTS: These are spikes, or peaks, of audio, which usually slip by the metering. Transients are very fast and complex waveforms. Distortion can occur in audio equipment that cannot handle transients. This is the difference between professional audio gear and the low-budget stuff. All audio gear