Audiovisual Translation Issues: Ch 9 PDF
Summary
This document offers an introduction to the topic of audiovisual translation, highlighting the methods of dubbing, subtitling, and voice-over. The text explores the processes involved, discussing both advantages and disadvantages of each method. It also touches on the field's importance in global content access.
Full Transcript
※ Introduction

Audiovisual translation is one of several overlapping umbrella terms that include ‘media translation’, ‘multimedia translation’, ‘multimodal translation’ and ‘screen translation’. These different terms all set out to cover the interlingual transfer of verbal language when it is transmitted and accessed both visually and acoustically, usually, but not necessarily, through some kind of electronic device. Theatrical plays and opera, for example, are clearly audiovisual, yet, until recently, audiences required no technological devices to access their translations; actors and singers simply acted and sang the translated versions. After the introduction of the first talking pictures in the 1920s, a solution needed to be found to allow films to circulate despite language barriers. How to translate film dialogues and make movie-going accessible to speakers of all languages was to become a major concern for both North American and European film directors. Today, of course, screens are no longer restricted to cinema theatres alone.

※ AVT Modalities

The two most widespread modalities adopted for translating products for the screen are dubbing and subtitling. Dubbing is a process which uses the acoustic channel for translational purposes, while subtitling is visual and involves a written translation that is superimposed on to the screen. Another, less common, acoustic form of screen translation is voice-over.

1. Dubbing

Dubbing is a process which entails ‘the replacement of the original speech by a voice track which attempts to follow as closely as possible the timing, phrasing and lip-movements of the original dialogue’ (Luyken et al. 1991: 31). The goal of dubbing is to make the target dialogues look as if they are being uttered by the original actors so that viewers’ enjoyment of foreign products will be enhanced.

1.1 THE DUBBING PROCESS

There are traditionally four basic steps involved in the process of dubbing a film from start to finish. First, the script is translated; second, it is adapted both to sound natural in the target language and to fit in with the lip movements of the actors on screen; third, the new, translated script is recorded by actors; and finally, it is mixed into the original recording. As well as rendering talk natural, care is taken to ensure that the dialogue fits in with visual features on screen such as lip movement, facial expressions and so on. Furthermore, the new dialogue also needs to take the emotive content of each utterance into account. However, with the awareness that a thorough understanding of the source text is a crucial asset for a translator, it is becoming ever more common for the two processes (the translation itself and the adaptation) to merge and be carried out by a single translator who is proficient in both languages (Chaume 2006). Furthermore, the dubbing director may intervene in the translation of the dialogues wherever he or she wishes. In practice, a single person often carries out more than one of the four steps in the process. For example, the same person may double up as both dubbing director and dubbing translator, or an actor may also double up as dubbing director (Chiaro 2005). Finally, once recording has been completed, the dubbed tracks are mixed with the international track and musical score so as to create a balanced effect.
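To make the final mixing step more concrete, the short sketch below overlays a recorded target-language dialogue track on the international (music and effects) track. It is only a minimal illustration: it assumes the third-party pydub audio library, the file names are hypothetical, and real dubbing studios work with far more sophisticated mixing tools.

```python
# Minimal sketch of the final mixing step: overlaying a dubbed voice track
# on the international (music and effects) track. Assumes the third-party
# pydub library; the file names are hypothetical placeholders.
from pydub import AudioSegment

# Load the international track (music, effects, ambience) and the newly
# recorded target-language dialogue track.
international_track = AudioSegment.from_file("international_track.wav")
dubbed_dialogue = AudioSegment.from_file("dubbed_dialogue.wav")

# Lower the background slightly so the dialogue sits on top of it,
# then overlay the dialogue from the start of the reel.
balanced_mix = international_track.apply_gain(-3).overlay(dubbed_dialogue, position=0)

# Export the balanced mix as the new dubbed soundtrack.
balanced_mix.export("dubbed_mix.wav", format="wav")
```

Whatever the tool, the underlying operation is the same: the untouched music-and-effects material is preserved and the new dialogue is layered on top of it at the appropriate points.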
Digital technology has modernized dubbing by allowing actors to record their parts independently, eliminating the need for complex shifts and the rewinding of tapes. Unlike the traditional ‘artisan’ approach that required actors to work together, software now seamlessly edits separate recordings into a complete film, making the process more flexible and cost-effective. As well as simplifying technical and organizational aspects of the dubbing process, new technology is also able to modify lip sync and voice quality. Software is now available that can automatically modify footage so that an actor mouths words that he or she did not actually speak in the original; in other words, the original sequence can be modified to sync the actors’ lip movements to the new soundtrack. Other programmes allow a dubbed voice to be readily assimilated to that of the original actor, irrespective of the source language, by recording first a sample of the original voice and then the dubbed dialogues. The software matches the first recording with the second, giving the impression that the original actor is speaking the target language with his or her characteristic voice quality and intonation patterns.

※ Advantages of Dubbing

1. Wider Audience Reach: Dubbing enables films to reach larger audiences by making them fully accessible in the target language, potentially increasing sales.
2. Cultural Familiarity: Viewers can watch films in their native language, making content more relatable and immersive.
3. Greater Filmic Uniformity: Dubbing retains the full visual experience without the need for text on the screen, maintaining visual and emotional coherence.
4. No Text Reduction: Unlike subtitling, dubbing does not require condensing dialogue, allowing audiences to experience the full narrative without omissions.
5. Automatic Consumption: Dubbing is easier for viewers to consume passively since they do not have to read subtitles, making it go ‘unnoticed’ by those accustomed to it.

※ Disadvantages of Dubbing

1. High Cost and Complexity: Dubbing involves numerous professionals, including translators, voice actors and sound engineers, making it time-consuming and expensive.
2. Loss of Original Soundtrack: Dubbing replaces the original audio, which some audiences feel detracts from the authenticity and emotional impact of the original voices.
3. Artificial Effect: Critics argue that dubbing can seem ‘fake’ or ‘phoney’, since it creates a mismatch between the original actors’ expressions and the dubbed voices.
4. Imperfections in Lip Sync: Dubbing often struggles to achieve perfect lip sync, especially in close-up shots, although this issue is less noticeable to audiences in dubbing-prevalent regions.
5. Competition with Subtitling: Subtitled products, especially on DVD and streaming platforms, offer a faster and cheaper alternative, challenging the traditional dubbing industry and its unique craftsmanship.

2. Subtitling

Subtitling can be defined as ‘the rendering in a different language of verbal messages in filmic media, in the shape of one or more lines of written text presented on the screen in sync with the original written message’ (Gottlieb 2001b: 87, emphasis added).

2.1 THE SUBTITLING PROCESS

Subtitling consists of incorporating on the screen a written text which is a condensed version in the target language of what can be heard on screen. Depending on the mode of projection, subtitles can either be printed on the film itself (‘open’ subtitles), selected by the viewer from a DVD or teletext menu (‘closed’ subtitles) or projected on to the screen, although the latter mode is largely restricted to film festivals where subtitles are displayed in real time.
The written, subtitled text has to be shorter than the audio, simply because the viewer needs the necessary time to read the captions while at the same time remaining unaware that he or she is actually reading. According to Antonini (2005: 213), the words contained in the original dialogues tend to be reduced by between 40 and 75 per cent in order to give viewers the chance of reading the subtitles while watching the film at the same time. Especially where a screen product (SP) is thick with dialogue, the subtitling translator is forced to reduce and condense the original so that viewers have the chance to read, watch and, hopefully, enjoy the film. Antonini (213–14) identifies three principal operations that the translator must carry out in order to obtain effective subtitles: elimination, rendering and condensation. Elimination consists of cutting out elements that do not modify the meaning of the original dialogue but only the form (e.g. hesitations, false starts, redundancies, etc.), as well as removing any information that can be understood from the visuals (e.g. a nod or shake of the head). Rendering refers to dealing with (in most cases eliminating) features such as slang, dialect and taboo language, while condensation indicates the simplification and fragmentation of the original syntax so as to promote comfortable reading.

Just like dubbing, the subtitling process may also involve several operators. The first stage in subtitling is known as spotting or cueing and involves marking the transcript or the dialogue list according to where subtitles should start and stop. Traditionally, this stage in the process is carried out by a technician, who calculates the length of the subtitles according to the cueing times of each frame. With the aid of the dialogue list annotated for cueing, the translator will then take over and carry out the actual translation. In addition, it is not unusual for a third operator to be employed to perfect the final subtitles, checking language but also technical aspects, such as ensuring that subtitles are in sync with changes of frame. However, as with the dubbing process, thanks to technology it has become quite normal nowadays for a single operator to carry out all three steps of the entire procedure. Nevertheless, while subtitling translators working with SP for the cinema tend to create a new transcript from the original transcript in writing alone (i.e. their end product will be in written form), those working for DVD and TV are likely to work from computer-based workstations that allow them to receive all the necessary information, including the time-coded transcription or dialogue list, from which they devise, cue, check and even edit the subtitles. In other words, they will work directly on to electronic files and produce a complete product.

Traditionally, subtitles consist of one or two lines of 30 to 40 characters (including spaces) that are displayed at the bottom of the picture, either centred or left-aligned (Gottlieb 2001b). However, films for the big screen tend to have longer lines with more characters compared to TV screens because of movie audiences’ greater concentration, and DVDs also have longer lines, presumably because viewers can rewind and re-read anything they may not have read (Díaz Cintas and Remael 2007: 24).
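The spotting conventions just described can be pictured with a small sketch. It is a hedged illustration rather than a description of any commercial subtitling workstation: the cue times and dialogue are invented, the output follows the common SRT file layout, and the 40-character limit simply reflects the upper end of the range cited above.

```python
# Minimal sketch of spotting/cueing: each cue pairs in/out times with at most
# two lines of translated text, and a simple check enforces the 30-40 character
# convention cited above. Cue data and the output file name are hypothetical.
from dataclasses import dataclass

MAX_LINES = 2
MAX_CHARS_PER_LINE = 40  # upper end of the 30-40 character convention

@dataclass
class Cue:
    index: int
    start: str   # "HH:MM:SS,mmm", as in the SRT format
    end: str
    lines: list

    def check(self):
        if len(self.lines) > MAX_LINES:
            raise ValueError(f"Cue {self.index}: more than {MAX_LINES} lines")
        for line in self.lines:
            if len(line) > MAX_CHARS_PER_LINE:
                raise ValueError(f"Cue {self.index}: line exceeds {MAX_CHARS_PER_LINE} characters")

    def to_srt(self) -> str:
        return f"{self.index}\n{self.start} --> {self.end}\n" + "\n".join(self.lines) + "\n"

# Hypothetical cues taken from an annotated dialogue list.
cues = [
    Cue(1, "00:00:01,000", "00:00:04,500", ["Where have you been all night?"]),
    Cue(2, "00:00:05,000", "00:00:09,000", ["I told you already.", "Working late at the office."]),
]

for cue in cues:
    cue.check()

with open("film.srt", "w", encoding="utf-8") as f:
    f.write("\n".join(cue.to_srt() for cue in cues))
```

In the DVD and television workflows described above, the cue times would come from the time-coded dialogue list rather than being typed by hand, but the underlying pairing of in and out times with one or two short lines is the same.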
According to Díaz Cintas, such character-based restrictions are bound to disappear in the future as many subtitling programmes work with pixels that are able to manage space according to the shape and size of letters. The exposure time for each subtitle should be long enough to permit comfortable reading: three to five seconds for one line and four to six for two lines (de Linde and Kay 1999: 7); a short sketch after the lists below illustrates these timing guidelines. Subtitles cannot remain on screen too long because the original dialogue continues and this would lead to further reduction in the following ‘sub’. Studies also show that, if they are left on the screen too long, viewers tend to re-read them, which does not appear to lead to better comprehension (de Linde and Kay 1999). However, at present subtitles adhere to what Gottlieb has defined as the ‘one-size-fits-all’ rule of thumb (1994: 118), based on the assumption that slower readers who are not familiar with the source language set the pace. This has led to the established length/timing conventions. Yet different languages use varying amounts of verbal content to express the same meaning. For example, the average German word is longer than the average English word, but subtitling conventions are the same for all. As indicated above, subtitles can also be either open, meaning that they cannot be turned off or controlled by the viewer (e.g. at cinemas), or closed, meaning that they are optional and accessed by the user (e.g. subtitles for the hard of hearing, subtitles on pay-TV channels and DVDs).

※ Advantages of Subtitling

1. Preserves Original Soundtrack: Subtitles keep the original dialogue and soundtrack, allowing audiences to hear the actors’ voices and cultural nuances.
2. Language Learning Tool: Watching with subtitles supports foreign language learning by allowing viewers to listen and read simultaneously.
3. Less Distortion of Source Language: Subtitles do not alter the original audio, reducing the risk of distorting the source language or meaning.
4. Cost-Effective: Subtitling is usually less expensive and faster than dubbing, making it a preferred option for global distribution.
5. User-Friendly Improvements: Modern subtitling has become more readable, with improved layouts, bold fonts and grammatical segmentation to enhance the viewer experience.
6. Enhanced Accessibility: Subtitles make content accessible to viewers with hearing impairments and serve as an additional layer of context.

※ Disadvantages of Subtitling

1. Divided Attention: Viewers need to read while watching, which can be distracting and may reduce immersion in the film.
2. Limited Translation Options: Since the original dialogue remains audible, translators have limited flexibility, especially when dealing with censored or sensitive content.
3. Reduction of Text: Subtitling requires condensing dialogue, which can lead to loss of detail and nuance in complex scripts.
4. Visual Clutter: Text on the screen can interfere with visual elements, potentially distracting from important visual cues.
5. Stronger Impact of Taboo Language: Written taboo language in subtitles can have a stronger impact than spoken language, often leading to more censoring or simplification of offensive terms.
6. Lack of Real Writing Structure: Subtitles are a form of writing but must omit standard language features (e.g. hesitations, slang) to fit time and space constraints, limiting their expressiveness.
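Returning to the timing conventions mentioned before the two lists, the sketch below checks whether a subtitle’s exposure time falls within the recommended window and estimates a minimum display time from an assumed reading speed. The reading speed of twelve characters per second is an illustrative assumption, not a figure taken from the chapter.

```python
# Minimal sketch of the exposure-time conventions cited above: roughly three to
# five seconds for a one-line subtitle and four to six seconds for two lines.
# The 'one-size-fits-all' reading speed below is an assumed figure for
# illustration only.

RECOMMENDED_SECONDS = {1: (3.0, 5.0), 2: (4.0, 6.0)}
ASSUMED_READING_SPEED_CPS = 12  # characters per second; illustrative assumption

def exposure_ok(num_lines: int, duration: float) -> bool:
    """Check whether a subtitle's on-screen time falls in the recommended window."""
    low, high = RECOMMENDED_SECONDS[num_lines]
    return low <= duration <= high

def minimum_duration(text: str) -> float:
    """Estimate the shortest comfortable display time from the assumed reading speed."""
    return len(text) / ASSUMED_READING_SPEED_CPS

subtitle = "I told you already. Working late."
print(exposure_ok(num_lines=1, duration=4.2))   # True: within the 3-5 second window
print(round(minimum_duration(subtitle), 1))     # about 2.8 seconds for 33 characters
```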
VOICE-OVER

Voice-over can be defined as a technique in which a disembodied voice can be heard over the original soundtrack, which remains audible but indecipherable to audiences. To date, this modality of screen translation has been very much overlooked and under-researched by academics. Voice-over consists of a narrator who begins speaking in the target language following the initial utterance in the original and subsequently remains slightly out of step with the underlying soundtrack for the entire recording. Even if audiences are familiar with the source language, the underlying speech cannot be clearly perceived apart from the initial and final utterances of the original narrator and the insertion of the odd sound bite. A sound bite is a very short piece of footage of the original soundtrack which is not covered by the new target language audio. This modality is generally linked to the sober narrative style adopted in traditional historical and wildlife documentaries as well as news broadcasts. However, it would be wrong to believe that voice-over is limited to these particular genres and to factual products alone. In Italy, for example, advertisements and shopping channels make frequent use of voice-over, although with an intonation which is less sober than that adopted for traditional documentaries. People acting as testimonials for products advertised are voiced over ‘theatrically’, as are celebrity chef programmes (e.g. Jamie at Home) and eyewitnesses in several historical documentaries (e.g. the History Channel’s Decoding The Past, A&E Television Networks, 1995).

LOCALIZATION FOR VIDEO GAMES

Video games can be defined as ‘computer-based entertainment software, using any electronic platform..., involving one or multiple players in a physical or networked environment’ (Frasca 2001: 4). Video games incorporate human voices, and thus these products tend to be both dubbed and subtitled. However, language translation and software engineering go hand in hand in the localization of these products for individual markets, and, unlike for other SP, translation is considered an integral part of the localization process of each product. Game publishers are usually also responsible for localizing their products, a process in which both functional and linguistic testing are part of quality assurance (Chandler 2005). Furthermore, translators are involved in each stage of projects. Of course, the negative side is that translators work with ‘unstable work models’ that are continually changing (O’Hagan and Mangiron 2006). O’Hagan and Mangiron highlight a number of similarities and differences between video game localization and audiovisual translation. Firstly, while most SP are dubbed or subtitled from English into other languages, video games are mainly dubbed and subtitled into English from Japanese. The dubbing process for video games is similar to that of other SP; subtitling, however, differs. Most subtitled games make use of intralingual subtitles, and players are able to control them, by pausing for example, as when watching a DVD. Furthermore, in order to keep up with the rapid speed of a video game, subtitles appear at a faster speed than at the cinema or on TV. Above all, however, the aim of video games is to provide entertainment and to be enjoyed. It is thus paramount that translators bear in mind the importance of the ‘look and feel’ of the original. Although this involves taking into account culture-specific features and especially humorous effects, it also means that the translator should be familiar with the game genre itself and the specific type of register it employs. In fact, translators are usually given total freedom to adapt both subtitles and dubbing, drawing as much as possible on local features such as jokes and references to popular culture, so as to come up with a product that is as enjoyable as possible for each locale. This kind of translation is often termed ‘transcreation’.
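Because translation is built into the engineering of the product, game text is normally externalized so that each locale can substitute its own strings, including transcreated jokes and pop-culture references. The sketch below shows one simple way such a string table might look; the keys, locales and lines are hypothetical, and real engines use their own resource formats and tooling.

```python
# Minimal sketch of an externalized string table for game localization.
# Keys, locales and strings are hypothetical; real engines use their own
# resource formats.
STRINGS = {
    "en": {
        "quest.accept": "Deal. I'm in.",
        "quest.decline": "Not a chance, pal.",
    },
    "it": {
        "quest.accept": "Affare fatto. Ci sto.",
        "quest.decline": "Neanche per sogno, amico.",
    },
}

def localized(key: str, locale: str, fallback: str = "en") -> str:
    """Look up a line for the player's locale, falling back to the source locale."""
    return STRINGS.get(locale, {}).get(key, STRINGS[fallback][key])

print(localized("quest.decline", "it"))  # the Italian line
print(localized("quest.accept", "de"))   # no German table: falls back to English
```

The freedom to transcreate described above operates at the level of these target strings: a joke or a pop-culture reference can be replaced wholesale for a given locale, provided the key and its place in the game logic stay the same.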
REAL-TIME SUBTITLING AND RESPEAKING

Real-time subtitling is ‘real time transcription using speaker-dependent speech recognition of the voice of a trained narrating interpreter in order to provide near simultaneous subtitles with a minimum of errors’ (Lambourne 2006). Originally developed to provide intralingual subtitles for the deaf and hard of hearing, real-time subtitling is also widely used for interlingual subtitling in many countries worldwide (see Sheng-Jie Chen 2006 for an overview). Whether inter- or intra-lingual, real-time subtitles are traditionally produced by a speaker/interpreter who reads, reduces and, in the case of interlingual subtitles, translates the speech flow in the original language while a stenographer creates the subtitles. Korte (2006) reports that Dutch television companies have been regularly adopting real-time subtitles since the late 1990s, not only for international affairs, state weddings, funerals and the like, but also for live programmes in a foreign language. However, more recently the practice of respeaking has been rapidly gaining ground. Thanks to speech recognition software able to transform oral speech into written subtitles with a reasonable degree of accuracy, the respeaker remains the only human operator in the entire process: the respeaker reduces the source message, the software recognizes his or her voice and automatically converts it into written subtitles. Technical shortcomings remain, but there is reason to believe that future advances will eliminate them.
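As a rough outline of the respeaking flow described above, the sketch below passes a respoken chunk through a recognition step and packages the result as a timed subtitle. The speech_to_text function is a stand-in for whatever recognition engine is actually used, and all names and values are hypothetical.

```python
# Minimal sketch of the respeaking flow: the respeaker condenses the source
# speech, a recognition engine turns the respoken audio into text, and the
# text is packaged as a timed subtitle. speech_to_text() is a stand-in for a
# real recognition engine; all names and values are hypothetical.
import time

def speech_to_text(audio_chunk: bytes) -> str:
    """Stand-in for a real speech recognition engine: returns a canned result."""
    return "The minister confirmed the agreement will be signed on Friday."

def respeak_to_subtitle(audio_chunk: bytes, max_chars: int = 80) -> dict:
    """Turn one respoken chunk into a timed subtitle entry."""
    text = speech_to_text(audio_chunk)
    if len(text) > max_chars:
        # The real reduction is done by the respeaker before speaking;
        # this trim is only a safety net for the sketch.
        text = text[:max_chars].rsplit(" ", 1)[0] + "..."
    return {"timestamp": time.time(), "text": text}

print(respeak_to_subtitle(b"respoken audio bytes"))
```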