Audiovisual Translation, Translators, and Technology: 2023 PDF

Summary

This is an academic article analyzing audiovisual translation (AVT) and how technology is reshaping the industry, including automation and human–machine convergence. The authors trace the evolution of translation technology and its impact on the AVT profession, and also discuss the use of cloud-based environments by translators.

Full Transcript

Granell, X. & Chaume, F. (2023). Audiovisual translation, translators, and technology: From automation pipe-dream to human–machine convergence. Linguistica Antverpiensia, New Series: Themes in Translation Studies, 22, 20–40.

Audiovisual translation, translators, and technology: From automation pipe dream to human–machine convergence

Ximo Granell, Universitat Jaume I, [email protected], https://orcid.org/0000-0002-9522-006X
Frederic Chaume, Universitat Jaume I, [email protected], https://orcid.org/0000-0002-4843-5228

Abstract

Audiovisual translation (AVT), broadly understood as a synonym for media content localization, and not only as a particular practice of linguistic transfer, is undergoing a revolution that was unthinkable only a few years ago – even in those territories where viewers are less accustomed to localized content. Digitalization and technological changes, which have had such an impact on the way audiovisual texts – whether original, localized, or adapted – are produced, distributed, edited, consumed, and shared, have also had a substantial impact on the AVT profession. This article explores the ways in which technology has been evolving as an aid to translators: from being merely a clerical aid for transcribing digital texts to automating tasks and integrating machine translation into human translation processes. This it does by providing a range of tools to assist translators in their work processes, progressively migrating both tools and processes to cloud-based environments. The focus is then on AVT, and more particularly on dubbing, where digitalization has shaped the consumer market and posed several challenges to language technology developments and AVT professional practices. Academia has also paid attention to such developments and has increasingly dealt with a number of matters affecting both practice and training to cater to the needs of current media markets. A final word is devoted to proposing a literacy-based framework for the training of translators that embraces technology so as to incorporate automation as an additional aid and which redefines the audiovisual translator's workstation.

Keywords: digitalization; machine translation; MT; audiovisual translation; AVT; translation technology

1. Translators and technology: a back-and-forth story of editing, aiding, and automating

The use of technology for translation purposes has been through several stages in modern times: first taking the form of a mere clerical aid for transcribing digital texts and enabling utopian machine translation systems aimed at fully automating translation in the second half of the 20th century. These advancements then led to today's integrated and evolved language technology systems that cater for the undeniable reality of a digital world and the inevitable acceptance of technology by all the parties involved (i.e., users, industry, and translators). To better understand today's situation, we first consider the parallel evolution of technology developments, translators' positioning as regards their use of technology and the automation of their work, and the market requirements that have driven industry. The development of computers for translating human languages has been widely researched and reported on by scholars over time.
In the beginning, research interests were mostly focused on those technological developments that followed the success of machine translation (MT). This was the time when the use of computers for translation purposes was first considered following the primary proposal of Weaver in the 1950s (see Hutchins' review of 2000 for a detailed account of this early stage). The focus of such systems until the end of the previous century was on their technical capability to automate translation fully (Hutchins, 1996, 2001a, 2001b; Kay, 1980, 1997; Melby, 1982, 1992, 1998; Slocum, 1988). Translators' subsequent adoption of specialist tools such as computer-assisted translation (CAT) tools did not receive much attention from the academic community until the end of the 1990s; nor was it thoroughly investigated until the first decade of the 21st century (Granell, 2015, p. 22). Since the beginning of this century, technology has had a determinant impact on translation professionals because they have had to catch up with the increased demand for the translation services provided by the digital world. Large suppliers of multilingual services, small translation companies, and freelance translators have increasingly been required to use translation memory (TM) and terminology management solutions – the main features of CAT tools (Doherty, 2016; Jiménez-Crespo, 2020; O'Brien & Conlan, 2018; O'Hagan, 2013).

Technological developments in the translation sector have always provoked much discussion among translators at professional conferences and seminars, and also via online discussion groups and in the networks of professional associations. Such conversations have, at times, been emotionally charged, primarily because of the threat to job security which some translators fear computer-assisted aids pose to the translation profession (Fenner, 2000; Fulford & Granell-Zafra, 2005; Shields, 1999) and to professional associations (Audiovisual Translators Europe (AVTE), 2021). Another factor has been the ethical concerns of some translators about adopting practices involving automation and post-editing due to their impact on translators' working practices (Álvarez Vidal et al., 2020; Cadwell et al., 2017; Fulford, 2002; Guerberof Arenas, 2013; Moorkens & O'Brien, 2015; Sakamoto, 2019). These reservations have even triggered a belligerent reluctance towards post-editing among some professional associations, such as the Spanish and French professional associations of audiovisual translators (ATRAE and ATAA, respectively) and the International Association of Professional Translators and Interpreters (IAPTI). These associations have issued statements on their social media censuring the use of post-edited MT (ATAA, 2021; IAPTI, 2021).

In 1992, Hutchins and Somers claimed that the ultimate target of the MT industry had always been achieving "Fully Automatic High Quality Translation" (FAHQT) without human interaction, as opposed to (fully) "Traditional Human Translation" (p. 148). After more recent developments, both targets seem unrealistic today for meeting market needs, because of a lack of quality associated with FAHQT or a lack of productivity in the use of traditional translation methods.
Consequently, since the beginning of this century, the focus has broadened to finding technological solutions to aid translators' practice – that is, the broad range of CAT tools – or to finding technological solutions and processes that will improve unsatisfactory MT results through human aid – that is, pre- and post-editing MT. Academic and professional discussion has usually revolved around the concept of the translator's workstation, a term used by Martin Kay in his report for Xerox in the eighties, The Proper Place of Men and Machines in Language Translation (1980). We return to this concept subsequently to explain the adoption of technology by translators; but we first need to understand how translation automation has evolved over time.

Early MT systems aimed to automate the ways of translating texts from one language to another. These systems were defined as software for automatic translation, where input units are full sentences of one natural language and the output units are corresponding full sentences of another language, without the intervention of any human translator (excluding pre-editing or post-editing) (Slocum, 1988). Since the very first efforts in 1954, based on large bilingual dictionaries and sets of rules that allowed systems to determine the syntactic order of the output (broadly known as Rule-based Machine Translation, or RBMT), the aim was to produce fully automatic high-quality translation, to use Bar-Hillel's (1964) term. The translations produced over a decade by such systems were still disappointing, requiring human translators to be present to revise (post-edit) the output extensively, which led to the purposefully created Automatic Language Processing Advisory Committee's (ALPAC) conclusion that MT was slower, less accurate, and twice as expensive as human translation and that there was no immediate or predictable prospect of useful machine translation (ALPAC, 1966). After acknowledging the limitations of MT systems at that time and drastically cutting investment in MT research (Lehmann & Stachowitz, 1971) – and despite further research effort during the 1980s in Canada and Europe – there was a shift from the rule-based systems of fully automated machine translation (FAMT) towards the research and development of computer tools that aid translators, such as automatic dictionaries and CAT tools. The new MT developments of the early 1990s were based on statistical methods or analogies, or analogy-based machine translation (ABMT), also termed example-based machine translation (EBMT). These tools used compared translation corpora, and no syntactic or semantic rules, in the analysis of texts or in the selection of lexical equivalents. These MT systems were considered to be a fast real-time solution that catered to a rising crowd of Internet users, in spite of their non-publishable-quality results. The upshot was the introduction of MT online services with the launch of Babel Fish (Gaspari, 2004; Gaspari & Hutchins, 2007; Yang & Lange, 1998). FAMT then focused on statistical machine translation (SMT) methods to improve its results and spread its areas of application, mostly using controlled language to pre-process MT input and post-editing MT output, but also to enhance CAT tools (Melby, 2012, p. 11).
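The kind of TM–MT combination described here can be pictured with a minimal, hypothetical sketch in Python: a source segment is first looked up against stored translation units by fuzzy similarity, and only sent to an MT engine when no sufficiently close match exists. The TM entries, the machine_translate placeholder, and the 0.75 threshold below are illustrative assumptions, not features of any particular CAT tool cited in this article.

```python
import difflib

# Toy in-memory translation memory: source segments mapped to stored translations.
TM = {
    "The film is dubbed into Spanish.": "La película está doblada al español.",
    "Subtitles must not exceed two lines.": "Los subtítulos no deben superar las dos líneas.",
}

def machine_translate(segment: str) -> str:
    # Placeholder for a call to any MT engine (rule-based, statistical, or neural).
    return f"[MT draft] {segment}"

def retrieve_or_translate(segment: str, threshold: float = 0.75) -> tuple[str, str]:
    """Return (translation, origin): a fuzzy TM match if close enough, else MT output."""
    best_source, best_score = None, 0.0
    for source in TM:
        score = difflib.SequenceMatcher(None, segment.lower(), source.lower()).ratio()
        if score > best_score:
            best_source, best_score = source, score
    if best_source and best_score >= threshold:
        return TM[best_source], f"TM fuzzy match ({best_score:.0%})"
    return machine_translate(segment), "MT fallback"

if __name__ == "__main__":
    print(retrieve_or_translate("The film was dubbed into Spanish."))
    print(retrieve_or_translate("Audio description is recorded separately."))
```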
The parallel development of CAT tools was heavily fueled by the localization industry and was mainly based on TM and terminology management systems. At the same time, language technology developments encompassed and combined both TM and MT systems in order to try to have the best of both worlds and boost the automatic segment retrieval of TM by using MT methods (Kanavos & Kartsaklis, 2010; Lagoudaki, 2008; Melby, 2007; O'Brien & Conlan, 2018).

During the past decade, technological developments in MT have regained their momentum. This has been due mostly to the introduction of hybrid MT systems and the rise of neural machine translation (NMT). Regarding quality and a more automated translation process, MT systems have become a promising bet both for industry and as part of MT-enhanced CAT tools (Castilho et al., 2017; Melby, 2019; Rothwell et al., 2023; Zaretskaya et al., 2015). NMT draws on an artificial neural network. This is a system that resembles brain neurons and their multiple interconnections and uses deep machine learning to dig into huge amounts of data and to predict suitable translated text autonomously (Koehn, 2020; Pérez-Ortiz et al., 2022). NMT is still highly dependent on human post-editing for achieving high-quality results. It is also a task that has increasingly been carried out by professional translators as part of their translation projects in many fields (Vieira et al., 2019). In fact, post-editing has become so popular in the language industries that the ISO 18587:2017 standard (Translation services – Post-editing of machine translation output – Requirements) was both drafted and published (International Organization for Standardization [ISO], 2017).

We now focus on translators and return to the idea of a translator's workstation that broadly includes the computer software and hardware used by translators (Hutchins, 1998; Melby, 1992). Here, the academic and professional discussion has traditionally paid attention to the tasks and perceptions associated with translation processes. These are mainly linguistic and information search-and-retrieval processes: document production, terminology management, storing and retrieving translated segments, and translation automation, according to the traditional three phases of translation (i.e., pre-translation, translation, and post-translation) (Melby, 1998), the types of application at each translation process and sub-process (Austermühl, 2001), and the degree of technology adoption within translators' workflow management (Granell, 2015, p. 67). In addition to the wider or narrower integration of translation automation as part of translators' processes, the technology-supported activities and tools associated with the translation activity have also been considered an integral part of the translator's workstation (Fulford & Granell-Zafra, 2004, 2005; Granell, 2015; Locke, 2005).

The next section of this article turns its attention to a field in which creativity plays a key role in translation processes and which has traditionally been more reluctant to adopt CAT tools, let alone MT and post-editing: this field is audiovisual translation (AVT).
Matamala built upon the concept of the translator's workstation to depict the technology environment of AVT professionals and proposed a classification of the range of tools and resources available to them (2005). Special attention has also been paid to tools for automating tasks in the diverse modalities of AVT (Martí Ferriol, 2009), tools for subtitling (Díaz-Cintas & Remael, 2007), and the automatic calculation of subtitle reading speeds (González-Iglesias, 2012; Martí Ferriol, 2012). More recently, Granell & Martí Ferriol revisited the concept of the translator's workstation to accommodate translation technology in the working processes of AVT professionals (2016) and, more specifically, the dubbing modality, following Chaume's tasks in dubbing workflows (2012, p. 37).
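As a minimal illustration of one of the simplest checks mentioned above, the automatic calculation of subtitle reading speeds, the sketch below computes characters per second for a single subtitle event. The CPS measure, whether line breaks or spaces are counted, and the 17 CPS limit used here are illustrative conventions only, not the parameters of the tools cited.

```python
from datetime import timedelta

def parse_timecode(tc: str) -> timedelta:
    """Parse an SRT-style timecode such as '00:01:02,500' into a timedelta."""
    hours, minutes, rest = tc.split(":")
    seconds, millis = rest.split(",")
    return timedelta(hours=int(hours), minutes=int(minutes),
                     seconds=int(seconds), milliseconds=int(millis))

def reading_speed_cps(text: str, start: str, end: str) -> float:
    """Characters per second for one subtitle event (line breaks excluded)."""
    duration = (parse_timecode(end) - parse_timecode(start)).total_seconds()
    char_count = len(text.replace("\n", ""))
    return char_count / duration

if __name__ == "__main__":
    cps = reading_speed_cps("She never said\nshe was leaving tonight.",
                            "00:01:02,000", "00:01:04,500")
    print(f"{cps:.1f} CPS", "OK" if cps <= 17 else "too fast")  # 17 CPS is an illustrative limit
```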
2. Audiovisual translation needs today: a booming video-on-demand market

AVT, broadly understood as a synonym for media content localization and not only as a particular practice of linguistic transfer, is undergoing a revolution that was unthinkable only a few years ago – more especially in those territories where viewers are less accustomed to localized content (Spiteri Miggiani, 2022, p. 10). Digitalization and technological changes, which have had such an impact on the way audiovisual texts – whether original, localized, or adapted – are produced, distributed, edited, consumed, and shared, have also had a substantial effect on the AVT profession. In recent years there have been three notable changes that have radically transformed the industrial process and the professional conventions of the different AVT modes (Chaume & de los Reyes Lozano, 2021, p. 1).

First, the increasingly frequent use of MT, speech-to-speech translation (S2ST), speech-to-text translation (STT), and automatic speech recognition (ASR) technologies, together with the (not-so-new) incorporation of translation memories, has made translation tasks easier and cheaper. These translation tools have significantly increased the number of post-editing tasks, especially in subtitling.

Second, the irruption into the market of large video-on-demand platforms and streaming services has clearly changed the rules of the AVT game. These changes have affected not only labor relations involving translators, but also the new ways of producing and recording dubbing takes and producing subtitles, and also of consuming them (Torralba Miralles et al., 2019). Among them it is worth mentioning, above all, Netflix; but there are also HBO, Hulu, Disney+, Amazon Prime Video, and Apple TV+. These platforms are also known as 'OTT services' (over-the-top media services, an expression which implies that a content provider is going over the top of existing Internet services); they have burst onto the audiovisual market and have radically changed video audiences' consumption habits. This is the case because subscribers are no longer subject to schedules and the type of programming decided on by the different television channels; instead, we choose what we want to watch and when we want to watch it. In 2021, Netflix alone experienced 120% growth in dubbing, with 5 million minutes dubbed – a staggering figure compared to 7 million minutes of subtitled content (Marking, 2022). The volume is such that the sector has come to denounce the shortage of talent for localization, which mainly affects translators (Deck, 2021; Green, 2018; Media & Entertainment Services Alliance [MESA], 2022; Spiteri Miggiani, 2022), but also dubbing directors and broadcasters, more especially in the language combinations in which the sector has historically lacked training.

The correlation between viewer consumption and the role of technology integrated into translators' workflows can easily be understood: streaming platforms are a product of technological advances; professional translators are becoming accustomed to new materials and workflows made possible by technology (cloud dubbing, to mention just one); and viewers consume massive amounts of AVT on all kinds of device, thanks to technology, but also thanks to the integration of all the processes in the translation workflow, which at the same time leads to the production of a larger amount of localized audiovisual content in much less time. But, more specifically, thanks to the potential of technological advancements, we are no longer obliged to consume a dubbed or subtitled foreign product, as was the case until a few years ago in many countries. Instead, we can watch it in its original version, with or without subtitles, or dubbed and subtitled at the same time, or only dubbed, or dubbed and subtitled in different languages. And not only that: we can also watch it subtitled for the deaf (or for people willing to learn a foreign language, or for migrants, or for those in environments with ambient noise, etc.), or audio-described for the blind, or for users who simply prefer to consume an audio-described audiovisual product for pleasure or because they can perform another task at the same time. And, as if that were not enough, in addition to all these combinations, depending on the geographical area a viewer is in, they can have access to dubbing and subtitling in different languages. This means that they can choose other languages in which to consume a video or even watch it dubbed in one language and subtitled in another. Digitalization, and also MT with post-editing, are enabling these changes in user consumption and in translators' day-to-day routines.

And, finally, the third change has been the practice of dubbing and subtitling in the cloud, which has allowed the translator to interact with all the agents in the industrial process and with the client in real time (Chaume & de los Reyes Lozano, 2021, p. 2). However, it is important to mention, too, that cloud subtitling has also enabled agencies to monitor and control performance, which could come at a cost to translators. If one thing seems clear at the beginning of this third decade of the 21st century, it is that this new audiovisual content localization market will not be managed in the same way as it has been until now. This is mainly a consequence of the exponential increase in demand for dubbing, subtitling, and accessibility worldwide. In the specific case of dubbing, an increasing number of territories where previously subtitled foreign-language audiovisual products were consumed are now demanding that they be dubbed into their local languages. A paradigmatic example can be found in Netflix series and films dubbed into English (Hayes, 2021; Sánchez-Mompeán, 2021; Spiteri Miggiani, 2021): the success that these dubbings have reaped in territories where dubbing was not previously consumed has changed the rules of the game (Bylykbashi, 2019). The new countries and territories where OTT services have been introduced, and where they have gradually been winning the battle against digital, cable, and satellite TV, are now also demanding more localized foreign products, especially a higher volume of dubbing (Chaume & de los Reyes Lozano, 2021, p. 2; The Economist, 2019).
3. AVT professionals today: matching needs and markets in a digital world

Within the framework of new technologies applied to AVT, CAT tools are starting to become popular among professionals and have a potential that is already taking off. In an industry where working with Microsoft Office was the rule (Word and Excel, mainly), translation memories and glossaries – now already integrated into cloud platforms (Bolaños-García-Escribano & Díaz-Cintas, 2020; Díaz-Cintas & Massidda, 2019) – can speed up the subtitling and dubbing processes. This is especially the case with audiovisual formats such as corporate videos and other specialized content materials such as documentaries and educational videos with high percentages of linguistic repetition and the use of restricted registers (Díaz-Cintas & Massidda, 2019, p. 263). In the case of fiction formats and dubbing, this type of tool helps to ensure cohesion in projects where the plot develops over several chapters: specific lexicon, character names, repeated phrases, idiolects, fictional languages, etc. all have to be taken into account. These are just some of the recurring terminology and consistency challenges that feature in TV series, film sagas, and video games, in addition to crossovers, spin-offs, prequels, and sequels, and all kinds of transmedia projects (Chaume & de los Reyes Lozano, 2021, p. 6).

The cloud has also been a major game-changer in this respect (Bolaños-García-Escribano & Díaz-Cintas, 2020). COVID-19 sped up the adoption of cloud subtitling, cloud dubbing, and remote voice recording faster than expected. When lockdown struck the entertainment industry, it became quite clear that traditional dubbing was ill prepared to react. Professionals, teachers, and trainees are currently experiencing the cloud revolution (Bywood, 2020; Díaz-Cintas & Massidda, 2019). Cloud dubbing is a different way of approaching the dubbing process – at least as we know it in traditional dubbing countries (Chaume, 2012). The aim of using cloud environments is to work outside the confines of a physical studio setting: professionals work from different locations in the production and recording of a single dubbed version. Moreover, the companies that put this mode of working into practice are committed to developing the dubbing process entirely online, relying on cloud computing systems. The major advantage is that this type of technology allows remote access to all types of software, file storage, and data processing via the Internet. From an industrial point of view, this is an extreme form of teleworking, one in which all the phases are delocalized: from translation to final mixing, including adaptation, voice casting, and recording, everything is centralized in a single application to which the team involved can connect at any time (Chaume & de los Reyes Lozano, 2021, p. 9). This delocalization means that the agents do not have to be in the same place simultaneously.
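The centralized, delocalized workflow just described can be pictured with a small, purely hypothetical data model: a cloud project whose phases (translation, adaptation, recording, mixing) are tracked as tasks that remote agents update in one shared store. The phase names, statuses, and roles below are illustrative assumptions, not the data model of any commercial platform named in this article.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    IN_PROGRESS = "in progress"
    DONE = "done"

@dataclass
class Task:
    phase: str          # e.g., "translation", "adaptation", "recording", "mixing"
    assignee: str       # remote agent responsible for the phase
    status: Status = Status.PENDING

@dataclass
class CloudDubbingProject:
    title: str
    episode: str
    tasks: list[Task] = field(default_factory=list)

    def update(self, phase: str, status: Status) -> None:
        """Any connected agent can update the shared state of a phase."""
        for task in self.tasks:
            if task.phase == phase:
                task.status = status

    def ready_for_mixing(self) -> bool:
        """Mixing can start only once every upstream phase is done."""
        return all(t.status is Status.DONE for t in self.tasks if t.phase != "mixing")

project = CloudDubbingProject(
    title="Example Series", episode="S01E01",
    tasks=[Task("translation", "translator A"), Task("adaptation", "adapter B"),
           Task("recording", "dubbing actor C"), Task("mixing", "sound engineer D")],
)
project.update("translation", Status.DONE)
print(project.ready_for_mixing())  # False: adaptation and recording are still pending
```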
This was already the norm for translation and adaptation tasks, which were traditionally outsourced and carried out remotely; what cloud dubbing does is to extend this mode of working to art direction and recording. In this way, actors and dubbing directors can work from anywhere in the world and recording sessions are conducted at any time to suit their respective schedules, without their having to depend on the time availability of the dubbing studio, which, in turn, saves time and effort by avoiding a commute. This feature offers the possibility of expanding the portfolio of professionals who otherwise would not have access to the profession because they do not reside in a specific recording location. To this end, cloud dubbing companies, while relying on local dubbing studios, also encourage the use (after approval by the company) of home recording environments that have the appropriate materials and equipment and that meet the level of audio quality required by each client. The applications monitor recording tasks automatically, which facilitates the process by avoiding tedious and repetitive tasks such as having to re-record an entire take due, for instance, to a mispronunciation or a missing line of dialogue. An example is the multiple video versions that are created for each episode; here, the interconnected scripts tool in a cloud dubbing ecosystem can be used to identify any discrepancies between versions. This check automates a cumbersome process: discrepancies between versions are detected automatically and immediately, resulting in fewer retakes and project delays.

Regarding appearance, the starting point of all the cloud dubbing applications consulted resembles the French bande rythmo programs in their modern, virtual version – although the detection part has been toned down. The interface therefore displays a window with the images of the audiovisual work and a timeline. It also includes two columns, one with the original-language dialogues segmented into boxes of small excerpts called "events" by some companies, and another with the same empty boxes, into which the translations have to be typed. VoiceQ Cloud Manager (by VoiceQ) and ZooDubs (by Zoo Digital), together with iDub (by Iyuno) and OneDub (by DeLuxe), are project management tools that allow all the agents in a project to transfer and download any type of file needed for dubbing. They also enable them to access other colleagues' messages and update any type of information needed for all dubbing tasks. In subtitling, Ooona, Transperfect or Haymillian, among others, have also developed cloud platforms.
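As a rough illustration of the kind of cross-version check described above, the sketch below compares two hypothetical versions of an episode's dialogue list and reports events that differ. It is a toy stand-in under those assumptions, not the interconnected-scripts feature of any platform named here.

```python
def find_discrepancies(version_a: dict[str, str], version_b: dict[str, str]) -> list[str]:
    """Compare two dialogue lists keyed by event timecode and report differences."""
    issues = []
    for event in sorted(set(version_a) | set(version_b)):
        line_a, line_b = version_a.get(event), version_b.get(event)
        if line_a is None:
            issues.append(f"{event}: only in version B -> {line_b!r}")
        elif line_b is None:
            issues.append(f"{event}: only in version A -> {line_a!r}")
        elif line_a != line_b:
            issues.append(f"{event}: text differs -> {line_a!r} vs {line_b!r}")
    return issues

# Toy dialogue lists for two video versions of the same episode (timecode -> line).
v1 = {"00:01:02": "Where were you last night?", "00:01:05": "Out. With friends."}
v2 = {"00:01:02": "Where were you last night?", "00:01:05": "Out with friends.",
      "00:01:09": "Which friends?"}

for issue in find_discrepancies(v1, v2):
    print(issue)
```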
In ZooDubs (see a detailed account and screenshots of their academic programme at https://www.zoodigital.com/technology/zoodubs/), to mention just one, isochrony is built into the interface: as the translation is typed into the corresponding event box of the target-text column, on the right side of the screen, the program reduces the size of the letters as new characters and words are introduced. When the size of the letters in the translated sentence is roughly equal to that of the original-language dialogue, the dubbing actor will be able to fit the translation into the available time at an adequate tempo, and isochrony is therefore guaranteed. In summary, as Spiteri Miggiani (2022) expresses it, "[w]orking on such platforms also comes with challenges, as many established translators must unlearn old ways and adjust to translating in a confined space, with input segmented into text bites. They also need to get used to the translated text moving beneath the visuals, as opposed to the more traditional blank, static Word file" (p. 10).
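A toy approximation of that length-matching behaviour, assuming character count as a stand-in for rendered letter size, might look like the following. The 1.0 target ratio and 10% tolerance are illustrative choices, not values documented for any cited tool.

```python
def isochrony_check(original: str, translation: str, tolerance: float = 0.10):
    """Compare dialogue lengths and suggest a display scale for the target event box.

    Character count is used as a rough proxy for the rendered width of the line,
    mimicking the shrinking-text feedback described for cloud dubbing interfaces.
    """
    ratio = len(translation) / max(len(original), 1)
    scale = min(1.0, 1.0 / ratio) if ratio > 0 else 1.0   # shrink text when it overflows
    fits = abs(ratio - 1.0) <= tolerance
    return ratio, scale, fits

original = "No tenías que haber venido hasta aquí."
translation = "You really didn't have to come all the way out here."
ratio, scale, fits = isochrony_check(original, translation)
print(f"length ratio {ratio:.2f}, display scale {scale:.2f}, fits: {fits}")
```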
4. Dubbing, machines, and human beings: a pipedream or a likely future?

Given this new scenario, it seems clear that AVT, and particularly dubbing production, will no longer be the linear process it has been up to now, based in a single territory or city. To meet the growing demand for localized audiovisual products, more and more companies have decided to move all or part of their workflow to the cloud in order to increase the pace of production and, at the same time, reduce their ecological footprint and strengthen teleworking. However, this also means that they are tempted to reduce costs by accessing professionals in countries that command lower fees. Meanwhile, innovations underway in artificial intelligence (AI) will soon shake up the translators' world and radically change the approach to AVT (Spiteri Miggiani, 2022, p. 10). In the area of dubbing, AI technologies can now manipulate the on-screen actors' mouth flaps to match the non-adapted translated text (Flawless, n.d.; Yang et al., 2020). This means that a striking shift is now occurring in the editing of dubbed video content: an image-to-word adaptation technique is replacing the word-to-image adaptation strategy that has typically and traditionally characterized the translation and adaptation process in dubbing. Let us then consider all these steps one by one.

Object-based sound technology replacing channel-based sound technology

On the one hand, so-called object-based sound technology has recently replaced channel-based sound technology, which allows a higher degree of personalization and a much more immersive experience when watching a clip. This immersive technology consists of attaching a simple description to each sound, which describes where it is at any moment in time. This description, or metadata, can be updated regularly, perhaps in as many frames of a movie as we want. By changing the location tags over time in a film, any sound or collection of sounds can be moved around in a room in any way one wishes. This has tremendous potential as an accessibility service, because audio content can be beamed to a specific person using sound bars. Specific objects can be sent to other audio devices – mobile phones, hearing aids, or headphones, for example – for audiodescription purposes.

Synthetic voices rapidly gaining ground

Text-to-speech technologies, on the other hand, are mainly used to create synthetic voices, which are currently being used for documentaries, institutional videos, corporate videos, educational videos and, last but not least, audiodescription. Synthetic voices are rapidly gaining ground in the media content localization industry, especially for non-fiction products. Some companies have also achieved high quality in the creation of synthetic voices, to the extent that it is difficult to tell the difference from the original voice in a target language. The start-up company Papercup won a two-year translation and dubbing contract with Bloomberg, signaling greater interest in synthetic voices (Bloomberg Media, 2022). Since Bloomberg usually broadcasts non-fiction programs, lip-syncing is not carried out, making it clear to viewers that the synthetic voice is a translation. However, some attempts are being made to introduce synthetic voices in lip-sync dubbing, too, as we shall see below.

Respeaking using automatic speech recognition

In respeaking, an interpreter or an audiovisual translator dictates and summarizes what is being said in live events or on screen, and software is used to produce subtitles or captions from the interpreter's revoiced sentences. Although the procedure may change and will depend on whether it is intralingual or interlingual, and also on whether it is a live or a pre-recorded event, the respeaker usually listens to a clip or a sound source and speaks into a microphone, summarizing what they hear and adding punctuation marks as necessary. Speech-to-text technologies, in particular automatic speech recognition (ASR), are mainly used for transcription purposes, especially to create transcriptions as recorded scripts, after the post-production stage, and also for respeaking purposes. This technology permits human beings to use voice commands to interact with a computer interface.

Viseme detection on screen

Another important development in the field of dubbing has been centred on detecting visemes on screen, initially for scriptwriting. A viseme is any of several speech sounds that look the same (Fisher, 1968), that is, phoneme groups whose articulatory configuration is the same from a visual viewpoint: p-b-m, or f-v, among others (Taylor et al., 2012). Speech-to-speech technology (S2ST) software processes mouth articulation movements and classifies them into visemes. For each viseme, it produces possible words or word strings. Initially envisaged for English, it is potentially applicable to other languages, especially for dubbing purposes.
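A toy sketch of the phoneme-to-viseme grouping just described follows. The mapping covers only the two groups named in the text (p-b-m and f-v) plus a catch-all class, so the table is deliberately incomplete and purely illustrative.

```python
# Minimal phoneme-to-viseme table: phonemes that look alike on the lips share a viseme.
# Only the groups cited in the article (p-b-m and f-v) are included; real inventories are larger.
PHONEME_TO_VISEME = {
    "p": "bilabial", "b": "bilabial", "m": "bilabial",
    "f": "labiodental", "v": "labiodental",
}

def viseme_sequence(phonemes: list[str]) -> list[str]:
    """Collapse a phoneme sequence into the viseme classes a viewer could distinguish."""
    return [PHONEME_TO_VISEME.get(p, "other") for p in phonemes]

# 'bat' and 'mat' start with different phonemes but the same viseme,
# which is why they are visually indistinguishable on screen.
print(viseme_sequence(["b", "a", "t"]))  # ['bilabial', 'other', 'other']
print(viseme_sequence(["m", "a", "t"]))  # ['bilabial', 'other', 'other']
```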
Speech-to-speech technology: the future of automatic dubbing

However, these efforts have quickly been replaced by attempts to manipulate the real images, instead of pretending that the on-screen characters' mouths are speaking another language. This is why S2ST now seems to be the future of automatic dubbing (Federico et al., 2020). The first step was termed "real-time face re-enactment" (Thies et al., 2020). The aim of this technology is to animate the facial expressions and mouth movements of the source face on screen in real time by using a target-language actor – who can also be a dubbing actor – and to re-render the manipulated output video in a photo-realistic fashion. The new facial expressions of a target actor have to be captured with a webcam, and a deformation transfer is performed between the source and the target faces. This technology therefore requires a target-language actor, who must utter the translation so that the webcam captures their mouth articulatory movements. These will then be transferred to the on-screen actors' mouths. This research, however, has excluded sound and voices.

Latest trend and frontier: fully automatic dubbing

The latest trend and the frontier of machine dubbing is now fully automatic dubbing (Federico et al., 2020). Not only do these technologies provide the audio dub via MT, but they also transform the lip and jaw movements of the person on screen to match the target text. No dubbing actors and actresses are needed. Companies such as Dubverse, NeuralGarage, and Flawless are currently developing software that uses AI to create perfectly lip-synced footage in multiple languages. The software digitalizes the on-screen characters' mouths and creates new mouth flaps according to the translation in a target language; it replicates the mouth flaps that the on-screen actors articulated in other moments of the film or simply creates them from the real mouth movements of these actors and actresses. This way, what a foreign-language audience sees is the mouths of the on-screen actors and actresses moving in different ways, that is, in the ways the target language requires a mouth to articulate the target-language dialogues. Voices and voice modulations are also imitated.
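The fully automatic workflow sketched in this subsection can be summarized, at a very high level, as a chain of stages. The function names below are placeholders for whatever ASR, MT, speech-synthesis, and lip-sync components a given system plugs in; they do not correspond to any specific product named above.

```python
from typing import Callable

def automatic_dubbing_pipeline(
    audio: bytes,
    video: bytes,
    transcribe: Callable[[bytes], str],            # ASR: source speech -> source text
    translate: Callable[[str], str],               # MT: source text -> target text
    synthesize: Callable[[str], bytes],            # TTS / voice cloning: target text -> target speech
    resync_lips: Callable[[bytes, bytes], bytes],  # generative lip sync: video + new audio -> video
) -> bytes:
    """Chain the stages of a fully automatic dubbing workflow (placeholder components)."""
    source_text = transcribe(audio)
    target_text = translate(source_text)           # in practice, post-editing may happen here
    dubbed_audio = synthesize(target_text)
    return resync_lips(video, dubbed_audio)

# Example wiring with trivial stand-ins, just to show the data flow:
dubbed = automatic_dubbing_pipeline(
    audio=b"...", video=b"...",
    transcribe=lambda a: "Where were you last night?",
    translate=lambda t: "¿Dónde estuviste anoche?",
    synthesize=lambda t: b"synthetic-voice-audio",
    resync_lips=lambda v, a: v + a,
)
```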
5. Fostering metaliteracy to cater to current markets: embracing technology in audiovisual translation

All the changes and technological movements mentioned earlier appear to necessitate an urgent upgrade of translation programs so that they incorporate further training in MT and AVT. According to Spiteri Miggiani (2022), "[t]his would address the shortage of translators in this field. At the same time, it would highlight academia's responsiveness to constantly evolving industry demands, a necessary approach in such a profession-oriented area of study and research" (p. 10). In higher education, there has been a growing emphasis on integrating technology training into AVT training, from using the available tools to support specific dubbing tasks, such as Windows Movie Maker, Dubbing 2, DubIt, Divace Lite, and Divace Solo (Martí Ferriol, 2009, p. 628), to online autonomous learning platforms such as AVT-LP (Audiovisual Translation Learning Platform) (Igareda & Matamala, 2011). More recent approaches to teaching and learning in this field are already incorporating MT as part of the training of professional translators (Kenny, 2022). However, the concern still exists regarding the complete automation of translation processes because of its negative effects on professional working conditions (O'Hagan, 2019; Vieira, 2018). Some higher education programs have already begun providing specific training in MT and post-editing (Cid-Leal et al., 2019; Guerberof Arenas & Moorkens, 2019; O'Brien & Salis, 2002; Rico Pérez, 2017; Venkatesan, 2018) or have undertaken innovations on a smaller scale (González Pastor, 2021; González Pastor & Rico, 2021; Mejías-Climent & de los Reyes Lozano, 2021; Moorkens, 2018).

The role of computers in human–computer interaction and the connection between information and knowledge in the digital age have also been reconsidered from an information literacy (IL) perspective. Whereas traditional IL focused on seeking, accessing, selecting, and reusing text-based information, it is now recognized that IL needs to encompass the digital, visual, interactive, and cultural domains too. This involves developing competencies with tools, resources, the research process, emerging technologies, critical thinking, and an understanding of the publishing industry and the social structures that produce information products (Marcum, 2002). Mackey and Jacobson (2014) have referred to this holistic and updated conceptualization of IL as "metaliteracy". This conceptualization goes beyond reading and writing (literacy) and search and retrieval (information literacy); instead, it promotes the collaborative production and sharing of information (metaliteracy) (2014, p. 6). International organizations such as UNESCO have also emphasized the importance of developing multiple literacies in the digital world to empower people and ensure equitable access to information and knowledge, and of promoting free, independent, and pluralistic media and information systems (n.d.).

In the area of multilingual communication, "[t]ranslation and interpreting is a complex cognitive and expert communication activity that requires constant information management" (Sales Salvador, 2022, p. 3). Translators are therefore a professional community closely intertwined with information, where IL and metaliteracy have been recognized as key elements to take on board if one wishes to keep up with the demands of the digital age (Granell, 2015; Pinto & Sales, 2007; Sales Salvador, 2022). Given the needs and challenges of today's multimodal and digital world discussed earlier, AVT professionals should be empowered and trained to learn how to learn and adapt themselves to the ever-evolving digital and technological landscape (Bowker & Buitrago, 2019). But approaching information literacy as a training paradigm involves more than simply identifying a set of skills; it is a complex and dynamic practice shaped by the workplace context (Lloyd, 2010, p. 28) and the community of practice in which it occurs (Tuominen et al., 2005, p. 341). It determines how information is generated, accessed, processed, and used; and this can vary considerably across different fields of knowledge and communities of practice (Elmborg, 2006; Goad, 2002; Pinto et al., 2014; Sales & Pinto, 2011). In multimodal working environments, literacy becomes an even greater challenge to overcome, as information literacy is a multifaceted construct.
Therefore, a comprehensive and updated approach such as metaliteracy is necessary to face critically the ever-increasing challenges of an evolving digital society (Forte et al., 2020; Hamey, 2015; Mackey & Jacobson, 2011, 2014). Sales Salvador (2022) provides a detailed account of the way in which metaliteracy can be applied to the training of translators in the early stages of their higher education process. The article describes various activities for each module in a first-year course on "Information literacy for translators and interpreters" offered in the Translation and Interpreting degree program at Universitat Jaume I. It highlights the natural alignment between the program's objectives in preparing future translators and interpreters for their professional lives, on the one hand, and the critical thinking and ethical objectives grounded in a metaliteracy-based course, on the other. And whereas initiatives such as this course can be found at the initial levels of translators' and interpreters' training, the progressive demand for more specialized and complex tasks at higher levels and in specialized translation courses, such as AVT, increases the need for further critical thinking and metacognitive learning. Such an approach would enhance the professional practice of future specialists and help them to face the challenges of an expansive and interactive information environment.

As highlighted earlier, AVT creates the need to adapt to a fast-paced technology setting, both at the producer and at the consumer ends of the spectrum of audiovisual products. For instance, if we focus on the learning objectives (LO) of AVT training that are centred on professional practice, market demands, and developments in technology, students are expected to acquire various competences, including:

1. Understanding the theory and practice of AVT across a range of modalities (dubbing, subtitling, audiodescription, subtitling for the deaf and the hard-of-hearing, and video game localization, among others). [LO1]

2. Increasing familiarity with the available electronic resources for AVT and the software for video and audio editing, dubbing, and subtitling. Understanding how these tools can be adapted to their professional work in different settings, both now and in the future. [LO2]

3. Identifying and critically evaluating sources of information on language, cinema, and culture, and making efficient use of them. [LO3]

4. Developing an understanding of audiovisual texts from intersemiotic, interlinguistic, and intralinguistic perspectives. Being able to comprehend oral and written scripts in a wide range of audiovisual products, narratives, and genres; and either producing appropriate translated versions (interlinguistically) or re-elaborating them (both inter- and intralinguistically) in different registers or for diverse audiences, including those with accessibility needs. [LO4]

Although AVT is not in itself an information literacy course, there is a strong relationship between the technology and the overall learning objectives, reflecting an expanded and comprehensive metaliteracy in practice.
Some key points in relation to the learning objectives mentioned, and the way metaliteracy can contribute to them, are:

LO1 Comprehensive understanding of information and media: Metaliteracy goes beyond traditional information literacy by incorporating digital, visual, interactive, and cultural domains. This would enable audiovisual translators to develop a deep understanding of different forms of information and media, including audio, video, and multimedia content. It would also equip them with the skills to critically analyze and interpret various modes of communication, allowing for more effective translation of the audiovisual context.

LO2 Adaptability to technological advancements: Metaliteracy acknowledges the evolving nature of the digital landscape and promotes adaptability to technological advancements. In the context of AVT, where technology plays a crucial role, metaliterate professionals would be better equipped to navigate and leverage emerging tools and technologies. They would be able to integrate MT and other automated processes effectively into their workflow while also understanding the limitations and potential challenges associated with them.

LO3 Critical evaluation and selection of resources: Metaliteracy fosters critical thinking and the ability to evaluate and select resources. In AVT, where translators often rely on a wide range of reference materials and online resources, metaliterate professionals would be able to assess the reliability, accuracy, and relevance of these resources. They would be able to make informed decisions about the suitability of specific tools, software, or online platforms to their translation tasks.

LO4 Media and translation literacy: Metaliteracy encourages the development of the skills needed to interpret and analyze different semiotic systems, including verbal, visual, and auditory elements. Metaliterate audiovisual translators would effectively analyze and comprehend the interplay between the written and visual components of different languages and cultures, such as dialogue, subtitles, captions, images, gestures, and cultural references. They would be able to understand how these elements contribute to the overall meaning and intention of the audiovisual text, which would facilitate accurate and nuanced translations for diverse target audiences.

Furthermore, metaliteracy would enhance future professional translators' capability to interact in collaborative working settings. It would help audiovisual translators develop the skills to effectively collaborate, share resources, and contribute to the collective knowledge in their field. Similarly, metaliteracy could also help to deal with the ethical considerations that may arise in professional practice, because it encompasses ethical objectives related to the responsible use of information and media. In AVT, where issues such as copyright, intellectual property, and cultural sensitivity are significant, metaliterate professionals would be equipped to navigate these ethical challenges. They would understand the importance of respecting copyright, cultural norms, and diverse perspectives while providing accurate and culturally appropriate translations.
In summary, by incorporating metaliteracy into the training of professional audiovisual translators, educational programs could equip translators with the skills and perspectives necessary to excel in AVT in all its professional, intersemiotic, interlinguistic, and intralinguistic dimensions. Metaliterate audiovisual translators would effectively analyze, understand, and adapt audiovisual texts in the process of creating appropriate translations that consider the diverse nature of the target audience and the specific requirements of different genres and registers. And they would be able to deal with technology turns and adopt innovations while maintaining control over translation processes, as preserved in the core idea of the translator's workstation.

6. The proper place of human beings and machines in audiovisual translation: from an amanuensis to a proper workstation, through uneasy post-editing

Kay (1997, p. 13) advocated an incremental approach to the problem of the way machines should be used in language translation. In this approach, machines would gradually assume certain functions in the overall translation process, leaving to translators only what they know they can do reliably. His concept of a (then idealistic) Translator's Amanuensis proposed using programs to edit and generate bilingual texts. These would initially take the form of source-language words or longer text strings copied to a target window to create a first draft of the translation, and eventually a bitext with source and target text correspondences at various levels of definition (i.e., words, terms, and full segments). This approach would incorporate translation aids, such as integrated dictionary lookups, and subsequent CAT tools and MT systems to support human translators at different levels of automation, a perspective shared by Melby's translator's workstation (1982).

This unstoppable advance of technology adoption would continue from Kay's and Melby's early days of technology as translation aids until today's human–machine collaboration. It would gradually incorporate tools (e.g., Hutchins' compendium of translation tools, 1998), process-oriented approaches (e.g., Austermühl, 2001), and frameworks for managing multilingual information (e.g., Granell's framework for translators' activities, 2015), while maintaining human agency as a key element. Recent scholars have proposed further elaborations of Kay's Amanuensis, envisaging the integration of recent developments in technology into professional translation settings, such as data-mining or effective voice recognition for translation (as in Alonso & Vieira's "expert level" of their Translator's Amanuensis 2020, 2017), or the combination of sub-segment translation memory, adaptive neural MT, automated content enrichment, improved terminology management, and AI-driven project management (as in Lommel's concept of "Augmented Translation", 2018). Decades later, Kay's predictions are regarded not as utopian visions but as insightful forecasts, as they have either been realized or served as foundations for further MT developments (Melby, 2019).
Customizable word processors have become essential text-editing tools; terminology management systems have evolved into integrated dictionaries; translation memories have revolutionized bitext production, storage, and retrieval; and CAT tools have integrated MT technology. Together they have served to realize Kay's vision of "machine translation in a new form" serving translators. This means that translators need to be ready to embrace technological advancements, and this is why metaliteracy can empower them to thrive in this new era.

A final word about Kay's predictions. He could not have predicted the cloud-based technology turn, but that was only because the Internet as we know it today was still a couple of decades ahead. However, he did predict that significant contributions to linguistics would most likely be "more and more in the spirit of Artificial Intelligence" (Kay, 1997, p. 4), suggesting that AI could be the next step in language technology innovation. Perhaps the next technology turn?

References

Automatic Language Processing Advisory Committee (ALPAC). (1966). Language and machines: Computers in translation and linguistics.
Álvarez Vidal, S., Oliver, A., & Badia, T. (2020). Post-editing for professional translators: Cheer or fear? Tradumàtica, 18, 49–69. https://doi.org/10.5565/REV/TRADUMATICA.275
Alonso, E., & Vieira, L. N. (2017). The Translator's Amanuensis 2020. The Journal of Specialised Translation, 28, 345–361.
ATAA. (2021). Les mirages de la post-édition. https://beta.ataa.fr/blog/article/les-mirages-de-la-post-edition
Audiovisual Translators Europe (AVTE). (2021). Machine translation manifesto. https://avteurope.eu/avte-machine-translation-manifesto/
Austermühl, F. (2001). Electronic tools for translators. St. Jerome.
Bar-Hillel, Y. (1964). Language and information: Selected essays on their theory and application. Addison-Wesley.
Bloomberg Media. (2022). Bloomberg Media partners with speech AI dubbing start-up Papercup to localize its award-winning news for Spanish-speaking countries. https://www.bloombergmedia.com/press/bloomberg-media-partners-with-speech-ai-dubbing-start-up-papercup-to-localize-its-award-winning-news-for-spanish-speaking-countries/
Bolaños-García-Escribano, A., & Díaz-Cintas, J. (2020). The cloud turn in audiovisual translation. In Ł. Bogucki & M. Deckert (Eds.), The Palgrave handbook of audiovisual translation and media accessibility (pp. 519–544). Palgrave Macmillan. https://doi.org/10.1007/978-3-030-42105-2_26
Bowker, L., & Buitrago, J. (2019). Machine translation and global research: Towards improved machine translation literacy in the scholarly community. Emerald Group. https://doi.org/10.1108/9781787567214
Bylykbashi, K. (2019). The big business of dubbing. Television Business International. https://tbivision.com/2019/04/04/the-big-business-of-dubbing/
Bywood, L. (2020). Technology and audiovisual translation. In Ł. Bogucki & M. Deckert (Eds.), The Palgrave handbook of audiovisual translation and media accessibility (pp. 503–517). Palgrave Macmillan. https://doi.org/10.1007/978-3-030-42105-2_25
Cadwell, P., O'Brien, S., & Teixeira, C. S. C. (2017). Resistance and accommodation: Factors for the (non-)adoption of machine translation among professional translators. Perspectives, 26(3), 301–321. https://doi.org/10.1080/0907676X.2017.1337210
Castilho, S., Moorkens, J., Gaspari, F., Calixto, I., Tinsley, J., & Way, A. (2017). Is neural machine translation the new state of the art? The Prague Bulletin of Mathematical Linguistics, 108(1), 109–120. https://doi.org/10.1515/pralin-2017-0013
Chaume, F. (2012). Audiovisual translation: Dubbing. St. Jerome.
Chaume, F., & de los Reyes Lozano, J. (2021). El doblaje en la nube: La última revolución en la localización de contenidos audiovisuales. In B. Reverter Oliver, J. J. Martínez Sierra, D. González Pastor, & J. F. Carrero Martín (Eds.), Modalidades de traducción audiovisual: Completando el espectro (pp. 1–15). Editorial Comares.
Cid-Leal, P., Espín-García, M. C., & Presas, M. (2019). Machine translation and post-editing: Profiles and competences in translator training programmes. MonTI: Monografías de Traducción e Interpretación, 11, 187–214. https://doi.org/10.6035/MONTI.2019.11.7
Deck, A. (2021). The global streaming boom is creating a severe translator shortage. Rest of World. https://restofworld.org/2021/lost-in-translation-the-global-streaming-boom-is-creating-a-translator-shortage/
Díaz-Cintas, J., & Massidda, S. (2019). Technological advances in audiovisual translation. In M. O'Hagan (Ed.), The Routledge handbook of translation and technology (pp. 255–270). Routledge. https://doi.org/10.4324/9781315311258
Díaz-Cintas, J., & Remael, A. (2007). Audiovisual translation: Subtitling. St. Jerome.
Doherty, S. (2016). The impact of translation technologies on the process and product of translation. International Journal of Communication, 10, 947–969.
Elmborg, J. (2006). Critical information literacy: Implications for instructional practice. Journal of Academic Librarianship, 32(2), 192–199. https://doi.org/10.1016/j.acalib.2005.12.004
Federico, M., Enyedi, R., Barra-Chicote, R., Giri, R., Isik, U., Krishnaswamy, A., & Sawaf, H. (2020). From speech-to-speech translation to automatic dubbing. Proceedings of the 17th International Conference on Spoken Language Translation, 257–264. https://doi.org/10.18653/V1/2020.IWSLT-1.31
Fenner, A. (2000). The choices facing translators. Institute of Translation and Interpreting Bulletin, April 2000, 9.
Fisher, C. (1968). Confusions among visually perceived consonants. Journal of Speech and Hearing Research, 11(4), 796–804.
Flawless. (n.d.). https://www.flawlessai.com/
Forte, M., Jacobson, T., Mackey, T., O'Keeffe, E., Stone, K., & Sales, D. (trad.) (2020). Metas y objetivos de aprendizaje de la meta-alfabetización. Metaliteracy.org. https://metaliteracy.org/learning-objectives/goals-and-learning-objectives-translated/metas-y-objetivos-de-aprendizaje-de-la-meta-alfabetizacion/
Fulford, H. (2002). Freelance translators and machine translation: An investigation of perceptions, uptake, experience and training needs. Proceedings of the 6th EAMT Workshop: Teaching Machine Translation, 117–122.
Fulford, H., & Granell-Zafra, J. (2004). The freelance translator's workstation: An empirical investigation. Proceedings of the 9th EAMT Workshop: Broadening Horizons of Machine Translation and Its Applications, 53–61.
Fulford, H., & Granell-Zafra, J. (2005). Translation and technology: A study of UK freelance translators. JoSTrans, The Journal of Specialised Translation, 4, 2–17.
Gaspari, F., & Hutchins, J. (2007). Online and free! Ten years of online machine translation: Origins, developments, current use and future prospects. Proceedings of Machine Translation Summit XI: Papers.
Gaspari, F. (2004). Online MT services and real users' needs: An empirical usability evaluation. In R. E. Frederking & K. B. Taylor (Eds.), Machine translation: From real users to research (pp. 74–85). Springer. https://doi.org/10.1007/978-3-540-30194-3_9
Goad, T. W. (2002). Information literacy and workplace performance. Greenwood.
González-Iglesias, J. D. (2012). Desarrollo de una herramienta de análisis de los parámetros técnicos de los subtítulos y estudio diacrónico de series estadounidenses de televisión en DVD [Doctoral dissertation]. Universidad de Salamanca.
González Pastor, D. (2021). Introducing machine translation in the translation classroom: A survey on students' attitudes and perceptions. Tradumàtica: Tecnologies de la traducció, 19, 47–65. https://doi.org/10.5565/rev/tradumatica.273
González Pastor, D., & Rico, C. (2021). POSEDITrad: La traducción automática y la posedición para la formación de traductores e intérpretes. Revista Digital de Investigación en Docencia Universitaria, 15(1). https://doi.org/10.19083/ridu.2021.1213
Granell, X. (2015). Multilingual information management: Information, technology and translators. Elsevier/Chandos. https://doi.org/10.1016/C2014-0-01998-3
Granell, X., & Martí Ferriol, J. L. (2016). Tecnologías de la información y la comunicación para el doblaje. In B. Cerezo Merchán, F. Chaume, X. Granell, J. L. Martí Ferriol, J. J. Martínez Sierra, A. Marzà, & G. Torralba Miralles (Eds.), La traducción para el doblaje en España: Mapa de convenciones (pp. 123–142). Publicacions de la Universitat Jaume I.
Green, S. (2018, March 15). How digital demand is disrupting dubbing. M&E Journal. https://www.mesaonline.org/2018/03/15/journal-digital-demand-disrupting-dubbing/
Guerberof Arenas, A. (2013). What do professional translators think about post-editing? The Journal of Specialised Translation, 19, 75–95.
Guerberof Arenas, A., & Moorkens, J. (2019). Machine translation and post-editing training as part of a master's programme. The Journal of Specialised Translation, 31, 217–238.
Hamey, Y. (2015). Metaliteracy: Reinventing information literacy to empower learners. The Australian Library Journal, 64(2), 156–156. https://doi.org/10.1080/00049670.2015.1040358
Hayes, L. (2021). Netflix disrupting dubbing. Journal of Audiovisual Translation, 4(1), 1–26. https://doi.org/10.47476/JAT.V4I1.2021.148
Hutchins, J. W. (1996). Computer-based translation systems and tools. ELRA Newsletter, 1(4).
Hutchins, J. W. (1998). Translation technology and the translator. Machine Translation Review, 7, 7–14.
Hutchins, J. W. (2000). Early years in machine translation. John Benjamins. https://doi.org/10.1075/sihols.97
Hutchins, J. W. (2001a). Machine translation and human translation: In competition or in complementation? International Journal of Translation, 13(1–2), 5–20.
Hutchins, J. W. (2001b). Machine translation over fifty years. Histoire Epistémologie Langage, 23(1), 7–31. https://doi.org/10.3406/hel.2001.2815
Hutchins, J. W., & Somers, H. L. (1992). An introduction to machine translation. Academic Press.
Igareda, P., & Matamala, A. (2011). Developing a learning platform for AVT: Challenges and solutions. JoSTrans, The Journal of Specialised Translation, 16, 145–162. https://doi.org/10.17533/udea.ikala.8654
International Association of Professional Translators and Interpreters. (2021). ATRAE states its view on post-editing. https://www.iapti.org/iaptiarticle/atrae-state-its-view-on-post-editing/
International Organization for Standardization. (2017). ISO 18587:2017 – Translation services – Post-editing of machine translation output – Requirements. https://www.iso.org/standard/62970.html
Jiménez-Crespo, M. A. (2020). The "technological turn" in translation studies. Translation Spaces, 9(2), 314–341. https://doi.org/10.1075/TS.19012.JIM
Kanavos, P., & Kartsaklis, D. (2010). Integrating machine translation with translation memory: A practical approach. Proceedings of the Second Joint EM+/CNGL Workshop: Bringing MT to the User: Research on Integrating MT in the Translation Industry, 11–20.
Kay, M. (1980). The proper place of men and machines in language translation. Research report CSL-80-11.
Kay, M. (1997). The proper place of men and machines in language translation. Machine Translation, 12(1), 3–23. https://doi.org/10.1023/A:1007911416676
Kenny, D. (2022). Machine translation for everyone: Empowering users in the age of artificial intelligence. Language Science Press. https://doi.org/10.5281/zenodo.6653406
Koehn, P. (2020). Neural machine translation. Cambridge University Press. https://doi.org/10.1017/9781108608480
Lagoudaki, E. (2008). The value of machine translation for the professional translator. Proceedings of the 8th Conference of the Association for Machine Translation in the Americas: Student Research Workshop, 262–269.
Lehmann, W. P., & Stachowitz, R. (1971). Feasibility study on fully automatic high quality translation (pp. 1–50). University of Texas.
Lloyd, A. (2010). Information literacy landscapes: Information literacy in education, workplace and everyday contexts. Elsevier. https://doi.org/10.1533/9781780630298
Locke, N. A. (2005). In-house or freelance?: A translator's view. MultiLingual Computing & Technology, 16(1), 19–21.
Lommel, A. R. (2018). Augmented translation: A new approach to combining human and machine capabilities. Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 2: User Track), 5–12.
Mackey, T. P., & Jacobson, T. E. (2011). Reframing information literacy as a metaliteracy. College and Research Libraries, 72(1), 62–78. https://doi.org/10.5860/crl-76r1
Mackey, T. P., & Jacobson, T. E. (2014). Metaliteracy: Reinventing information literacy to empower learners. Facet.
Marcum, J. W. (2002). Rethinking information literacy. The Library Quarterly, 72(1). https://doi.org/10.1086/603335
Marking, M. (2022). Netflix COO reveals scale of dubbing and subtitling operations. Slator. https://slator.com/netflix-coo-reveals-scale-of-dubbing-subtitling-operations/
Martí Ferriol, J. L. (2009). Herramientas informáticas disponibles para la automatización de la traducción audiovisual ("revoicing"). Meta: Journal des Traducteurs / Translators' Journal, 54(3), 622–630. https://doi.org/10.7202/038319ar
Martí Ferriol, J. L. (2012). Nueva aproximación al cálculo de velocidades de lectura de subtítulos. Trans: Revista de Traductología, 16, 39–48. https://doi.org/10.24310/TRANS.2012.v0i16.3210
Matamala, A. (2005). La estación de trabajo del traductor audiovisual: Herramientas y recursos. Cadernos de Tradução, 2(16), 251–268.
Mejías-Climent, L., & de los Reyes Lozano, J. (2021). Traducción automática y posedición en el aula de doblaje: Resultados de una experiencia docente. Hikma, 20(2), 203–227. https://doi.org/10.21071/hikma.v20i2.13383
Melby, A. K. (1982). Multi-level translation aids in a distributed system. In J. Horecký (Ed.), COLING '82: Proceedings of the 9th conference on computational linguistics – Volume 1 (pp. 215–220). North Holland. https://doi.org/10.3115/991813.991847
Melby, A. K. (1992). The translator workstation. In J. Newton (Ed.), Computers in translation: A practical appraisal (pp. 147–165). Routledge.
Melby, A. K. (1998). Eight types of translation technology. American Translators Association, 4–9.
Melby, A. K. (2007). MT+TM+QA: The future is ours. Tradumàtica: Traducció i Tecnologies de la Informació i la Comunicació, 4, 1–7.
Melby, A. K. (2012). Terminology in the age of multilingual corpora. The Journal of Specialised Translation, 18, 7–29.
Melby, A. K. (2019). Future of machine translation: Musings on Weaver's memo. In M. O'Hagan (Ed.), The Routledge handbook of translation and technology (pp. 419–436). Routledge. https://doi.org/10.4324/9781315311258-25
Media & Entertainment Services Alliance. (2022). The talent crunch: Does it exist and can it be addressed? Content Workflow Management Forum 2022. https://www.mesaonline.org/conferences/content-workflow-management-forum-2022/
Moorkens, J. (2018). What to expect from neural machine translation: A practical in-class translation evaluation exercise. The Interpreter and Translator Trainer, 12(4), 375–387. https://doi.org/10.1080/1750399X.2018.1501639
Moorkens, J., & O'Brien, S. (2015). Post-editing evaluations: Trade-offs between novice and professional participants. Proceedings of the 18th Annual Conference of the European Association for Machine Translation, 75–81.
O'Brien, S., & Salis, B. (2002). Teaching post-editing: A proposal for course content. Proceedings of the 6th EAMT Workshop: Teaching Machine Translation, 99–106.
O'Brien, S., & Conlan, O. (2018). Moving towards personalising translation technology. In H. V. Dam, M. N. Brøgger, & K. K. Zethsen (Eds.), Moving boundaries in translation studies (pp. 81–97). Routledge. https://doi.org/10.4324/9781315121871-6
O'Hagan, M. (2013). The impact of new technologies on translation studies: A technological turn? In C. Millán & F. Bartrina (Eds.), The Routledge handbook of translation studies (pp. 521–536). Routledge. https://doi.org/10.4324/9780203102893.ch37
O'Hagan, M. (Ed.). (2019). The Routledge handbook of translation and technology. Routledge. https://doi.org/10.4324/9781315311258
Pérez-Ortiz, J., Forcada, M., & Sánchez-Martínez, F. (2022). How neural machine translation works. In D. Kenny (Ed.), Machine translation for everyone: Empowering users in the age of artificial intelligence (pp. 141–164). Language Science Press. https://doi.org/10.5281/zenodo.6760020
Pinto, M., García-Marco, J., Granell, X., & Sales, D. (2014). Assessing information competences of translation and interpreting trainees: A study of proficiency at Spanish universities using the InfoliTrans test. Aslib Journal of Information Management, 66(1), 77–95. https://doi.org/10.1108/AJIM-05-2013-0047
Pinto, M., & Sales, D. (2007). A research case study for user-centred information literacy instruction: Information behaviour of translation trainees. Journal of Information Science, 33(5), 531–550. https://doi.org/10.1177/0165551506076404
Rico Pérez, C. (2017). La formación de traductores en traducción automática. Tradumàtica: Tecnologies de la traducció, 15, 75–96. https://doi.org/10.5565/rev/tradumatica.200
Rothwell, A., Moorkens, J., Fernández-Parra, M., Drugan, J., & Austermuehl, F. (2023). Translation tools and technologies. Routledge. https://doi.org/10.4324/9781003160793
Sakamoto, A. (2019). Why do many translators resist post-editing?: A sociological analysis using Bourdieu's concepts. The Journal of Specialised Translation, 31, 201–216.
Sales, D., & Pinto, M. (2011). The professional translator and information literacy: Perceptions and needs. Journal of Librarianship and Information Science, 43(4), 246–260. https://doi.org/10.1177/0961000611418816
Sales Salvador, D. (2022). Threading metaliteracy into translation and interpreting undergraduates' information literacy training: A reflective active learning approach. Anales de Documentación, 25, 1–4. https://doi.org/10.6018/ANALESDOC.504691
Sánchez-Mompeán, S. (2021). Netflix likes it dubbed: Taking on the challenge of dubbing into English. Language & Communication, 80, 180–190. https://doi.org/10.1016/J.LANGCOM.2021.07.001
Shields, M. (1999). Slaves to the computer. Institute of Translation and Interpreting Bulletin, October 1999, 4–5.
Slocum, J. (1988). Machine translation systems. Cambridge University Press.
Spiteri Miggiani, G. (2021). English-language dubbing: Challenges and quality standards of an emerging localisation trend. The Journal of Specialised Translation, 36a, 2–25.
Spiteri Miggiani, G. (2022). The dubbing metamorphosis: Where do we go from here? EST Newsletter, 60, 10.
Taylor, S. L., Mahler, M., Theobald, B.-J., & Matthews, I. (2012). Dynamic units of visual speech. In J. Lee & P. Kry (Eds.), Eurographics/ACM SIGGRAPH Symposium on Computer Animation (pp. 275–284). https://doi.org/10.2312/SCA/SCA12/275-284
The Economist. (2019). Invasion of the voice snatchers. 115–117.
Thies, J., Zollhöfer, M., Stamminger, M., Theobalt, C., & Nießner, M. (2020). Face2Face: Real-time face capture and reenactment of RGB videos. Communications of the ACM, 62(1), 96–104. https://doi.org/10.48550/arxiv.2007.14808
Torralba Miralles, G., Tamayo Masero, A., Mejías Climent, L., Martínez Sierra, J. J., Martí Ferriol, J. L., Granell, X., de los Reyes Lozano, J., de Higes Andino, I., Chaume, F., & Cerezo Merchán, B. (2019). La traducción para la subtitulación en España: Mapa de convenciones. Publicacions de la Universitat Jaume I.
Tuominen, K., Savolainen, R., & Talja, S. (2005). Information literacy as a sociotechnical practice. The Library Quarterly, 75(3), 329–345. https://doi.org/10.1086/497311
UNESCO. (n.d.). Media and Information Literacy. https://iite.unesco.org/mil/
Venkatesan, H. (2018). Teaching translation in the age of neural machine translation. APLX 2017 at Taipei Tech – Transformation and Development: Language, Culture, Pedagogy and Translation, 39–54.
Vieira, L. N. (2018). Automation anxiety and translators. Translation Studies, 13(1), 1–21. https://doi.org/10.1080/14781700.2018.1543613
Vieira, L., Alonso, E., & Bywood, L. (2019). Introduction: Post-editing in practice – process, product and networks. The Journal of Specialised Translation, 31, 2–13.
Weaver, W. (1955). Translation. In W. N. Locke & A. D. Booth (Eds.), Machine translation of languages (pp. 15–23). The Technology Press of MIT.
Yang, J., & Lange, E. D. (1998). SYSTRAN on AltaVista: A user study on real-time machine translation on the internet. In D. Farwell, L. Gerber, & E. Hovy (Eds.), Machine translation and the information soup: Third conference of the Association for Machine Translation in the Americas (pp. 275–285). Springer. https://doi.org/10.1007/3-540-49478-2_25
Yang, Y., Shillingford, B., Assael, Y., Wang, M., Liu, W., Chen, Y., Zhang, Y., Sezener, E., Cobo, L. C., Denil, M., Aytar, Y., & de Freitas, N. (2020). Large-scale multilingual audio visual dubbing. https://arxiv.org/pdf/2011.03530.pdf
Zaretskaya, A., Pastor, G. C., & Seghiri, M. (2015). Integration of machine translation in CAT tools: State of the art, evaluation and user attitudes. SKASE Journal of Translation and Interpretation, 8(1), 76–89.
