CHAPTER SEVEN: The Role of the Translator

Document Details

Uploaded by TrendyMulberryTree9740

Malekan Payame Noor University

Summary

This chapter discusses the roles of translators in cultural and political agendas using the works of Venuti and Berman. It analyses the concepts of domestication and foreignization in translation and the differing perspectives of theorists in the field. The document examines the ethical and discursive levels of translation strategies.

Full Transcript


CHAPTER SEVEN ((The Role of the Translator))

1. The cultural and political agenda of translation

Like the other cultural theorists, Venuti insists that the scope of translation studies needs to be broadened to take account of the value-driven nature of the sociocultural framework. Thus he contests Toury's scientific descriptive model with its aim of producing value-free norms and laws of translation:

The claim of value-free translation research is spurious... far from presenting comprehensive and incisive accounts of translation, descriptive translation studies is itself ideological, scientistic in assuming a naive empiricism, conservative in reinforcing the academic status quo, and anti-intellectual in blocking the introduction of materials from other fields and disciplines that would expose its limitations. (Venuti 2018)

It should be noted, however, that Venuti's criticism is not universally shared, and is itself contentious.

1.1. Venuti and the invisibility of the translator

The Translator's Invisibility draws on Venuti's own experience as a translator of experimental Italian poetry and fiction. Invisibility is a term he uses to describe the translator's situation and activity in contemporary British and American cultures. Venuti sees this invisibility as typically being produced:

 By the way translators themselves tend to translate fluently into English, to produce an idiomatic and readable TT, thus creating an illusion of transparency.
 By the way the translated texts are typically read in the target culture:

A translated text, whether prose or poetry, fiction or nonfiction, is judged acceptable by most publishers,
reviewers and readers when it reads fluently, when the absence of any linguistic or stylistic peculiarities makes it seem transparent, giving the appearance that it reflects the foreign writer's personality or intention or the essential meaning of the foreign text: the appearance, in other words, that the translation is not in fact a translation, but the original. (Venuti)

1.2. Domestication and foreignization

Venuti discusses invisibility hand in hand with two types of translation: domestication and foreignization. These practices concern both the choice of text to translate and the translation method. Venuti traces their roots back to Schleiermacher. Just as the postcolonialists are alert to the cultural effects of the differential in power relations between colony and ex-colony, so Venuti bemoans (= criticizes) the phenomenon of domestication, since it involves an ethnocentric reduction of the foreign text to receiving cultural values. This entails translating in a transparent, fluent, invisible style in order to minimize the foreignness of the TT. Venuti allies it with Schleiermacher's description of the translation which 'leaves the reader in peace, as much as possible, and moves the author toward him'. Domestication further covers adherence to domestic literary canons by carefully selecting the texts that are likely to lend themselves to such a translation strategy. On the other hand, foreignization entails choosing a foreign text and developing a translation method along lines which are excluded by dominant cultural values in the target language. It is the preferred choice of Schleiermacher, whose description is of a translation strategy where 'the translator leaves the writer in peace, as much as possible, and moves the reader toward [the writer]'.
Venuti follows this, and considers foreignizing practices to be a highly desirable, strategic cultural intervention which seeks to 'send the reader abroad' by making the receiving culture aware of the linguistic and cultural difference inherent in the foreign text. This is to be achieved by a non-fluent, estranging or heterogeneous translation style designed to make visible the presence of the translator and to highlight the foreign identity of the ST. This is a way, Venuti says, to counter the unequal and 'violently' domesticating cultural values of the English-language world.

Note: Venuti links foreignization to 'minoritizing' translation.

Importantly, domestication and foreignization are considered to be not binary opposites but part of a continuum, and they relate to ethical choices made by the translator in order to expand the receiving culture's range:

The terms 'domestication' and 'foreignization' indicate fundamentally ethical attitudes towards a foreign text and culture, ethical effects produced by the choice of a text for translation and by the strategy devised to translate it, whereas terms like 'fluency' and 'resistancy' indicate fundamentally discursive features of translation strategies in relation to the reader's cognitive processing. (Venuti)

This relationship, operating on different levels, might be depicted as follows:

Ethical level: domestication (conforming to TL culture values) ↔ foreignization (making visible the foreign)
Discursive level: fluency ('transparent' reading assimilated to TL norms) ↔ resistancy (resistant reading, challenging TL norms)

Domestication and foreignization: ethical and discursive levels

Although Venuti advocates foreignizing translation in this book, he is also aware of some of its contradictions:

 No translation can be entirely foreignizing since, as he states, all translation 'is an interpretation' that fundamentally domesticates the source text.
 Foreignization is a subjective and relative term that still involves a degree of domestication.
 Foreignization depends on the dominant values of the receiving culture, because it becomes visible precisely when it departs from those values.

However, Venuti stoutly defends foreignizing translations. They are 'equally partial' [as are domesticating translations] in their interpretation of the foreign text, but they tend to flaunt their partiality instead of concealing it. In addition, Venuti emphasizes the culturally variable and historically contingent nature of domestication and foreignization.

2. Antoine Berman: the negative analytic of translation

Questions of how much a translation assimilates a foreign text and how far it signals difference had already attracted the attention of the noted French theorist Antoine Berman. Berman describes translation as an épreuve ('experience'/'trial') in two senses:

 for the target culture in experiencing the strangeness of the foreign text and word;
 for the foreign text in being uprooted from its original language context.

Berman deplores (= strongly disapproves of) the general tendency to negate the foreign in translation by the translation strategy of naturalization, which would equate with Venuti's domestication. 'The properly ethical aim of the translating act', says Berman, is 'receiving the Foreign as Foreign', which would seem to have influenced Venuti's foreignizing translation strategy at the time. However, Berman considers that there is generally a 'system of textual deformation' in TTs that prevents the foreign from coming through. His examination of the forms of deformation is termed the negative analytic:

The negative analytic is primarily concerned with ethnocentric, annexationist translations and hypertextual translations (pastiche, imitation, adaptation, free writing), where the play of deforming forces is freely exercised.
(Berman)

Berman, who translated Latin American fiction and German philosophy, sees every translator as being inevitably and inherently exposed to these ethnocentric forces, which determine the desire to translate as well as the form of the TT. He feels that it is only by psychoanalytic analysis of the translator's work, and by making the translator aware of the forces at work, that such tendencies can be neutralized. His main attention is centered on the translation of fiction:

The principal problem of translating the novel is to respect its shapeless polylogic and avoid an arbitrary homogenization. (Berman)

By this, Berman is referring to the linguistic variety and creativity of the novel and the way translation tends to reduce variation. He identifies twelve deforming tendencies:

1. Rationalization: This mainly entails the modification of syntactic structures, including punctuation and sentence structure and order. An example would be translations of Dostoevsky which remove some of the repetition and simplify complex sentence structures. Berman also refers to the abstractness of rationalization and the tendency to generalization.

2. Clarification: This includes explicitation, which aims to render clear what does not wish to be clear in the original.

3. Expansion: Like other theorists (e.g. Vinay and Darbelnet), Berman says that TTs tend to be longer than STs. This is due to 'empty' explicitation that unshapes its rhythm, to overtranslation and to flattening. These additions only serve to reduce the clarity of the work's voice.

4. Ennoblement: This refers to the tendency on the part of certain translators to improve on the original by rewriting it in a more elegant style. The result, according to Berman, is an annihilation of the oral rhetoric and formless polylogic of the ST. Equally destructive is the opposite: a TT that is too popular in its use of colloquialisms.

5. Qualitative impoverishment: This is the replacement of words and expressions with TT equivalents that lack their sonorous richness or, correspondingly, their signifying or 'iconic' features. By iconic, Berman means terms whose form and sound are in some way associated with their sense. An example he gives is the word butterfly and its corresponding terms in other languages.

6. Quantitative impoverishment: This is loss of lexical variation in translation. Berman gives the example of a Spanish ST that uses three different synonyms for face (semblante, rostro and cara); rendering them all as face would involve loss.

7. The destruction of rhythms: Although more common in poetry, rhythm is still important to the novel, and it can be destroyed by deformation of word order and punctuation.

8. The destruction of underlying networks of signification: The translator needs to be aware of the network of words that is formed throughout the text. Individually, these words may not be significant, but they add an underlying uniformity and sense to the text.

9. The destruction of linguistic patternings: While the ST may be systematic in its sentence constructions and patternings, translation tends to be asystematic. The translator often adopts a range of techniques, such as rationalization, clarification and expansion, all of which standardize the TT. This is actually a form of incoherence, since standardization destroys the linguistic patterns and variations of the original.

10. The destruction of vernacular networks or their exoticization: This relates especially to local speech and language patterns which play an important role in establishing the setting of a novel. Examples would include the use of diminutives in Spanish, Portuguese, German and Russian, or of Australian English terms and cultural items (outback, bush, dingo, wombat).
There is severe loss if these are erased, yet the traditional solution of exoticizing some of these terms by, for example, placing them in italics, isolates them from the co-text. Alternatively, seeking a TL vernacular or slang equivalent to the SL is a 'ridiculous exoticization of the foreign'.

11. The destruction of expressions and idioms: Berman considers the replacement of an idiom or proverb by its TL equivalent to be an ethnocentrism: 'to play with "equivalence" is to attack the discourse of the foreign work', he says.

12. The effacement of the superimposition of languages: By this, Berman means the way translation tends to erase traces of the different forms of language that co-exist in the ST. Berman considers this to be the 'central problem' in the translation of novels.

Counterbalancing the 'universals' of this negative analytic is Berman's positive analytic, his proposal for the type of translation required to render the foreign in the TT. This he calls literal translation (here literal means 'attached to the letter' of the work), which is markedly different from, and more specific than, the conventional use of the term literal translation; his use of literal and letter and his reference to the signifying process point to a Saussurean perspective and to a positive transformation of the TL. How exactly this is to be done, however, depends on the creativity and innovation of the translator in his search for truth.

Note: Berman's work is important in linking philosophical ideas to translation strategies, with many examples drawn from existing translations. His discussion of the ethics of translation, as witnessed in linguistic deformation of TTs, is of special relevance and is a notable counterpoint to earlier writing on literary translation.

3. The sociology and historiography of translation

Since the turn of the millennium, the study of translators and the social nature of translation has taken center stage in translation studies research.
This includes a dramatic increase in works on translation historiography, but most strikingly encompasses the simultaneous development of a sociology of translation. Many studies have drawn on the work of the French ethnographer and sociologist Pierre Bourdieu and his concepts of:

 Field of social activity, which is the site of a power struggle between participants or agents; for us, this field is translation, and the participants potentially include the author, commissioner, publisher, editor, translator and reader. At the same time, there is the field of power, which partially conditions the field of cultural production and the field of translation.

 Habitus, which is the broad social, identitary and cognitive make-up or 'disposition' of the individual, heavily influenced by family and education. Habitus is an integral part of the individual translator's history, education and experiences. It is particularly linked to field and to cultural capital, and has been central to recent sociological work in translation studies.

 The different types of capital which an individual may acquire or be given. These comprise the more tangible economic capital (money and other material assets) and the more intangible social capital (such as networks of contacts), cultural capital (education, knowledge) and symbolic capital (status).

 Illusio, which may be understood as the cultural limits of awareness.

 Heteronomy and autonomy, which refer to how dependent the respective field is on the field of power: a literary field characterized by heteronomy is focused on publishing best-sellers, whereas an autonomous literary field is characterized by translations from non-hegemonic languages and potentially avant-garde or marginalized literature.
 Doxa, a term that indicates the dominant ideological tenets of the target culture.

 Naming, which refers to the definition of the field. If we want to study the translations published in a single target language, like Italian, we first focus on the Italian literary field. In Bourdieu's terms, we need to situate the Italian literary field within a broader context, which he defines as the field of power. The field of power sets the conditions under which the literary field operates: the amount of autonomy and heteronomy that Italian writers and agents have in terms of the literary market.

Note: When studying why one language is translated from more than another in a given target culture, it is helpful to draw on Bourdieu's notion of symbolic capital.

Note: Bourdieu's work has been adopted by some scholars as a less deterministic alternative to the polysystem framework, especially as a means of theorizing the role of the translator, which seemed worryingly absent from earlier theories. An early but still seminal article in this vein is by the late Daniel Simeoni, who stresses that the study of the translatorial habitus complements and improves on Toury's norm-based descriptive translation studies by focusing on how the translator's own behavior and agency contribute to the establishment of norms. In his study of the modern-day translator, Simeoni rather depressingly concludes that translation is a poorly structured activity, where most translating agents exert their activity in fields where their degree of control is nil or negligible, and that their habitus is generally one of 'voluntary servitude'.

Munday explains three sociology-related concepts/theories that you should know:

 Translation practice, which refers to how the translator and other agents act as they carry out their tasks in the translation process or event, and what the interrelation is between these agents (what Pym terms causation).
 Bruno Latour's actor-network theory, which is helpful in analyzing the roles of each agent, participant or mediator in the network and providing solid bases for testing interpretative hypotheses relating to the nature of the translation process. In translation studies, the theory has been applied to the translation of poetry, amongst others.

 Niklas Luhmann's social systems: in contrast to Latour, Luhmann views society as a complex of closed functional systems that operate beyond the immediate influence of humans.

4. The power network of the translation industry

In presenting their 'Outline for a sociology of translation', Heilbron and Sapiro assert the elements that must be covered by this approach: firstly, the structure of the field of international cultural exchanges; secondly, the type of constraints, political and economic, that influence these exchanges; and thirdly, the agents of intermediation and the processes of importing and receiving in the recipient country.

As far as the economics is concerned, the translator's lot may be miserable. Venuti has described and lamented how the literary translator works from contract to contract, usually for a modest flat fee, with publishers (rather than translators) initiating most translations and generally seeking to minimize the translation cost. Publishers, as Venuti shows, are very often reluctant to grant copyright or a share of the royalties to the translator. Venuti deplores this as another form of repression exercised by the publishing industry, but it is a repression that is far from uncommon because of the weakness of the translator's role in the network. Fawcett describes this complex network as amounting to a 'power play', with the final product considerably shaped by editors and copy-editors.
This most often results in a domesticating translation. Interviews with publishers confirm that it is often the case that the editor is not fluent in the foreign language and that the main concern is that the translation should 'read well' in the TL. In some cases, the power play may result in the ST author's omission from the translation process altogether.

5. The reception and reviewing of translations

The link between the workings of the publishing industry and the reception of a given translation is clearly made by Meg Brown. She stresses the role of reviews in informing the public about recently published books and in preparing the readership for the work. Brown adopts ideas from reception theory, including examining the way a work conforms to, challenges or disappoints the readers' aesthetic 'horizon of expectation'. This is a term employed by Jauss to refer to readers' general expectations (of the style, form, content, etc.) of the genre or series to which the new work belongs.

One way of examining the reception is by looking at the reviews of a work, since they represent a body of reactions to the author and the text. Reviews are also a useful source of information concerning that culture's view of translation itself. In a study examining reviews, Venuti found that readers mostly prefer fluent translations written in modern, general, standard English that is 'natural' and 'idiomatic'. Venuti considers such a concentration on fluency, and the lack of discussion of translation, as prime indicators of the relegation of the translator's role to the point of invisibility.

5.1. Paratexts

The term paratext refers to devices appended to the text. Gérard Genette considers two kinds of paratextual elements:

 Peritexts: they appear in the same location as the text and are provided by the author or publisher. Examples given by Genette are titles, subtitles, pseudonyms, forewords, dedications, prefaces, epilogues and framing elements such as the cover and blurb.
 Epitexts: an epitext is any paratextual element not materially appended to the text within the same volume but circulating, as it were, freely, in a virtually limitless physical and social space. Examples are:
a) marketing and promotional material, which may be provided by the publisher;
b) correspondence on the text by the author; and
c) reviews and academic and critical discourse on the author and text which are written by others.

If we adopt the analytical approach of reception theory, we can analyze reviews synchronically or diachronically.

 An example of a synchronic analysis would be an examination of a range of reviews of a single work.
 An example of a diachronic analysis would be an examination of reviews of an author's books, or of reviews in a particular newspaper, over a longer time period.

CHAPTER ONE ((Definition and History of Interpreting))

1. Interpreting defined

Within the conceptual structure of Translation, interpreting can be distinguished from other types of translational activity most succinctly by its immediacy: in principle, interpreting is performed 'here and now' for the benefit of people who want to engage in communication across barriers of language and culture. In contrast to common usage as reflected in most dictionaries, interpreting need not necessarily be equated with oral translation or, more precisely, with the oral rendering of spoken messages. Doing so would exclude interpreting in signed (rather than spoken) languages from our purview. Otto Kade, a self-taught interpreter and translation scholar at the University of Leipzig, defines interpreting as a form of Translation in which a first and final rendition in another language is produced on the basis of a one-time presentation of an utterance in a source language.
This definition is based on two criteria:

 the source-language text is presented only once and thus cannot be reviewed or replayed, and
 the target-language text is produced under time pressure, with little chance for correction and revision.

2. A Brief History of Interpreting

Interpreting is an ancient human practice which clearly predates written translation, since it was presumably practiced before texts were actually written. As an official or professional activity, interpreting has been practiced throughout history; however, interpreters are rarely specifically named or mentioned in historical documents. They became much more visible between the two World Wars and during the Nuremberg trials after World War II. The etymology of the word interpreting can be traced back to the Assyro-Babylonian root targumanu, as far back as 1900 BCE. This is also the origin of the Arabic term tarjoman and, via an etymological branching, of the autonomous English term for interpreter, dragoman.

Academic research into interpreting is slightly younger than its counterpart in translation and dates back to the 1950s. But it was not until the early 1990s that interpreting was perceived as an academic field of study. The history of research into interpreting can be broken down into four periods:

 The early writings: The first period covers the early writings of the 1950s and early 1960s by some interpreters and interpreting teachers. These writings were mainly accounts of intuitive and personal experiences with practical, didactic and professional aims. However, although fascinating, they were personal memoirs, more like historical documents than research into what exactly is going on when an interpreter is at work.

 The experimental period: During the experimental period, which covers the 1960s and early 1970s, interpreting developed a relationship with psychology and psycholinguistics.
Some scholars conducted a few experimental studies on psychological and psycholinguistic aspects of simultaneous interpreting and examined the effect on performance of variables such as source language, speed of delivery, ear-voice span (or EVS, a technical term used in simultaneous interpreting referring to the interval between the moment a piece of information is perceived in the source speech and the moment it is reformulated in the target speech), noise, pauses in speech delivery, etc.

 The practitioners' period: During the practitioners' period, which started in the late 1960s and continued into the 1970s and early 1980s, interpreting teachers began to develop an interest in research. The first doctoral dissertation on interpreting was defended, and subsequently numerous papers and MA theses were written by practicing interpreters.

 The renewal period: During the renewal period, which began in the mid-1980s, a new generation of practitioners questioned the idealized view of interpreting and called for a more scientific study of interpreting as well as an interdisciplinary approach to the subject. There are more empirical studies drawing on ideas from other disciplines, in particular cognitive psychology and linguistics.

While there has been a dramatic increase in the number of publications on interpreting, its emergence as a discipline owed much to developments in the field of Translation Studies. The naming and mapping of James S. Holmes paved the way for the foundation of Interpreting Studies. He viewed interpreting as a subcategory of the medium-restricted form of translation, classifying it as human oral translation. The need for such a field was perceived in the early 1990s by translation scholars such as Salevsky, who first used the term Interpreting Studies in a major international publication. Salevsky adopted an analogous map for interpreting studies.

((Interpreting Modes and Settings))

1. Interpreting modes

Interpreting differs from translation in that it involves oral input in the source language and oral output in the target language, rather than written input and output. This, however, only scratches the surface; there is more to it than meets the eye. Interpreting is a sophisticated cognitive task which, according to Seleskovitch (1975), consists of at least three major components:

1. listening or comprehension,
2. reformulation or deverbalization, and
3. production or oral rendering.

This means that, basically, an interpreter listens to the ST to comprehend the message, gets rid of the SL form (words, phrases, structures, etc.), that is, deverbalizes the message, and orally produces the TT, which is the reformulated message in the TL. This is a very broad picture of what an interpreter's job involves. However, what makes the study of interpreting difficult, or even sometimes elusive, is the complexity of these three components, the operation of these mental tasks, and the cognitive effort required for each of them as well as for their coordination.

1.1. Simultaneous interpreting (SI)

Simultaneous interpreting was first used on a large scale at the Nuremberg Trials; it then developed beyond the political sphere into the fields of economics, sports, finance, manufacturing industries, transport, etc. It finally came to replace consecutive interpreting, which had previously been the prevailing mode in many domains. In simultaneous interpreting, the interpreter continuously receives and comprehends new input while simultaneously deverbalizing it and producing the output in the target language. Thus, the simultaneous interpreter has to handle several tasks at the same time, and this requires coordination of different cognitive efforts. The interpreter sits inside a booth, which has to meet certain requirements (e.g.
being sound-proof, having a good view of the speaker, etc.), and wears a headset comprising headphones through which he listens to the speaker, and a microphone into which he utters his rendering of the ST. As the interpreter interprets the speech into the microphone, the audience can listen to the interpretation through the headphones they are equipped with.

Henderson maintains that SI involves three elements:

a) Listening to another person, the element which comes first both logically and chronologically; this is the raw material the interpreter gathers and from which he devises his output.

b) The interpreter's business is not words but ideas or message elements. Only in the most elementary cases can simultaneous interpretation be conceived of as a simple transposition of source-language utterances. The interpreter is continually involved in evaluating, filtering and editing (information, not words) in order to make sense of the incoming message and to ensure that his output, too, makes sense.

c) The active form of spontaneous speech. While the message the interpreter handles comes from an outside source, the interpreter is attending to two different activities at the same time: he must pay attention to the incoming message and also give conscious and critical attention to his own speech output.

Note: The high level of mental demand, interestingly dubbed 'mental gymnastics', imposed on the interpreter as a result of the linguistic, communicative and cognitive operations involved makes it almost impossible for a single interpreter to carry on interpreting non-stop for a long time. Thus, it is common practice for more than one interpreter (at least two) to sit in the booth and take turns to interpret segments of the speech. Normally the segments interpreted at a go by one interpreter are not longer than thirty minutes.
Therefore, at any given point in time, there is one active interpreter (the one who is interpreting) and at least one passive interpreter, who remains in the booth, preparing him/herself for the next segment and/or helping the active interpreter if the need arises.

1.2. Consecutive interpreting (CI)

Consecutive interpreting, as one of the two basic types of interpreting used today, consists in translating a 10- to 15-minute stretch of speech after it has been delivered by the speaker. In the early 1950s, consecutive was the main interpreting method. Unlike in simultaneous interpreting, here the interpreter sits or stands near the speaker(s). Given the length of the stretch of speech to be interpreted at a go (normally 10-15 minutes, but this may vary according to the speaker, the conditions, etc.), the interpreter needs to make use of a technique called note-taking. There are exceptions, however, such as A. Kaminker, who never took any notes, even for speeches of 20 minutes and more, and yet was claimed to be word perfect. Choi (1999) defines CI as:

the method in which the interpreter listens to the speaker, takes note of the contents, and when the speaker pauses, directly conveys the speech in the first person as if making the speech him/herself.

Though not as challenging as SI, CI, too, seems to be a demanding task requiring robust training. This can be seen in Arumi Ribas's observation:

Consecutive interpreting entails a large number of almost concurrent cognitive, psychomotor and affective processes, all of which pose major challenges for the interpreter, who has to deal with them simultaneously. The interpreter is constantly confronted with unexpected situations that must be dealt with while he/she is already working at the limits of his/her available processing capacity. It is, therefore, crucial that interpreter training should be as effective as possible.
Note-taking seems to be an inseparable, indispensable component of CI, as the demand put on the interpreter's memory can be huge. That is why interpreters generally aid their memory by means of a particular note-taking technique which helps them reconstruct the whole speech in the TL without altering its general structure. Kim (2006) also holds that the consecutive interpreter extracts the proper information from notes and from memory, and produces a linguistically correct and culturally appropriate target text, monitoring his/her interpretation at the same time. CI is normally used in negotiations, press conferences, official events, question and answer sessions, and where simultaneous interpreting equipment (such as an interpreting booth) is not available. Pöchhacker makes a distinction between classic consecutive and short consecutive interpreting:
1. Classic consecutive refers to consecutive interpreting with the use of systematic note-taking.
2. Short consecutive refers to consecutive interpreting without notes.
1.3. Sight translation (STR)
The term sight translation refers to the oral translation of a written text. Here the source text is written and the target text is oral. Lambert defines sight translation as: The transposition of a message written in one language into a message delivered orally in another language. Thus sight translation stands in between written translation and interpreting: it shares the characteristics of its source text with written translation, while its target text resembles that of interpreting. Lambert argues that due to the time stress factor as well as the oral nature of the production, sight translation seems to have more in common with interpreting than with written translation. Yet, it should be borne in mind that because the input is received visually, the mechanisms of message processing are different in sight translation as compared to SI and CI.
As the source text is constantly present before the interpreter's eyes, listening comprehension abilities, which are crucially important in both SI and CI, are totally irrelevant in sight translation. Another distinction is that, because the interpreter's access to the source text is not temporary, his memory is not as vigorously involved in sight translation as it is in SI and CI. Although the constant presence of the visual input eliminates listening efforts and relieves memory efforts in sight translation, the downside is that it may cause interference in TT production, especially if the two languages at hand are drastically different in their syntactic structure and distribution of information over the sentence components. Empirical support for this can be found in Agrifoglio's (2004) study, in which she compared the performance of some interpreters in three different modes of interpreting (SI, CI, and STR). She found that expression problems in sight translation outnumbered those in SI and CI. Most of these TL mistakes stemmed from the syntactic differences between SL and TL. Sight translation may be required in different settings: interpreting written documents in a trade meeting, interpreting legal papers in a court, etc.
Note: One of the outstanding features of sight translation is that, unlike most other forms of interpreting, it is produced at the interpreter's own pace. Lambert makes a distinction between two types of sight translation:
1. Rehearsed sight translation, where the interpreter is given some time for preparation before the actual task of interpreting starts;
2. Unrehearsed sight translation, where there is little or no time for preparation and the interpreter has to sight-translate online.
Weber contends that in sight translation the following capabilities are of prime significance:
1. analyzing the source text rapidly,
2. converting information from the source language into the target language rapidly without falling into the trap of word-for-word rendering, and
3. being good at public speaking.
1.3.1. Skills involved in sight translation
Sight translation engages three types of skills, in general:
 Reading skills:
─ Analyzing: Analyze the content of each text and practice picking out the subject and verb to determine the idea. Skimming and scanning are analytical techniques that use rapid eye movement and keywords to move quickly through text for slightly different purposes. Skimming is reading rapidly in order to get a general overview of the material. Scanning is reading rapidly in order to find specific facts. Skim and scan the text to get the necessary information in the shortest time possible.
─ Chunking: Chunking is simply breaking a usually long sentence or paragraph into its smallest semantic parts. You need to first identify sentences and embedded sentences and then chunk them into manageable segments in STR.
─ Parsing: Parsing is the restructuring of sentences into the sentence structure of the target language.
─ Establishing a hierarchy of importance: Perform chunking with transcripts of court proceedings, for example, and try to establish a hierarchy of importance among the chunks.
 Production skills:
─ Completing phrases/clauses: Try to complete partial information that is in the form of incomplete phrases and clauses. As you use this technique, note the errors you make and be aware of how susceptible we are to reaching false conclusions based on partial information.
─ Paraphrasing: Read a text out loud and rephrase it as you go along, making sure you do not change the meaning.
─ Expanding: Read a text out loud and expand it (i.e., say the same thing in more words) as you go along, again making sure not to change the meaning.
─ Condensing: Read a text aloud and condense it (i.e., say the same thing in fewer words) as you go along, retaining the meaning.
─ Manipulating the register: Read a text aloud and alter the register or language level as you go along. Be careful not to stray from the original meaning.
 Performance skills:
─ Reading aloud: Stand in front of a mirror and read passages aloud from any book, newspaper, or magazine. A legal textbook, code book, or other legal text is useful for familiarizing yourself with legal language. Record or videotape yourself and analyze the outcome critically. Pay attention to your voice, pitch, tone, hesitations, sighs, projection, enunciation, and posture.
─ Controlling emotions: Practice controlling your emotions while reading aloud texts with high emotional content, such as fear, anger, humor, etc. Make sure you convey the author's intended emotions and not your personal reaction to the subject matter.
─ Public speaking: Practice speaking before a group of people at every opportunity. People you know will constitute a less threatening audience and will allow you to ease your way into public speaking and build your confidence. Court interpreting is an ongoing exercise in public speaking.
1.4. Simultaneous interpreting with text
One of the sub-types of SI, particularly common at very formal speeches and events, for instance when heads of state are making their speeches at the annual meeting of the UN General Assembly, is when the speaker reads out a speech which has already been written down. This speech is also made available to the interpreter beforehand; the interpreter listens to the oral delivery of a text whose written version he is looking at, and at the same time translates it into the target language.
The interpreter is spared the effort of listening as the only channel of reception of the content (there is a visual channel to assist the auditory one), as well as the short-term memory effort of temporarily storing bits of information to be rendered into the target language. Also, the possibility of problems arising from the speaker's non-standard, peculiar pronunciation or from sound equipment disruptions is ruled out. All this seems to suggest that 'simultaneous interpreting with text' should be less burdensome than its 'text-less' counterpart. Nonetheless, this is oftentimes not the case; speakers do not necessarily follow the text to the letter and at times decide to make changes to it while presenting the speech. This means that the interpreter cannot and must not rely too much on the text, losing sight of what the speaker is actually saying on the stage. A still bigger problem is that speaking 'off the cuff' requires certain cognitive efforts which slow down the speed of presentation, whereas reading out from paper relieves the speaker of the cognitive struggle to form the speech on-line, and thus the speech is presented at a much faster speed. In the former case, the interpreter is naturally provided with the pauses and delays s/he needs in order to tackle the time problem, whereas in the latter case, s/he is faced with the magnified problem of keeping pace with a speaker who is talking faster than normal.
1.5. Liaison interpreting
Liaison interpreting, also known as bilateral interpreting, is one of the oldest types of interpreting. It involves communication between two parties that speak different languages. The interpreter here has to enjoy a good command of both languages because s/he has to work in both directions, from their A language to their B language and vice versa. It may sound very similar to consecutive interpreting, and indeed it does to a great extent resemble CI and may even be considered a sub-type of consecutive interpreting.
However, it is different from CI in that it does not normally require taking notes (or at least extensive notes) as an essential part of it, and in that it is performed bidirectionally.
Note: This type of interpreting does not require any special equipment and is used in informal meetings, commercial negotiations, and for community interpreting as well.
1.6. Whispering interpreting
Whispering or whispered interpreting is different from other modes of interpreting in that it is used in situations where only one person or very few people need an interpretation of the speech. In this mode, also known in the literature by the French name chuchotage, the interpreter sits or stands next to the person needing the interpretation and whispers it into their ear. As the explanation shows, this type of interpreting can be used in a very limited range of situations and for short periods of time. This is in part due to the inconvenience caused for both the interpreter and the audience, especially if there is more than one interpreter working in the same room.
Note: Chuchotage can be considered a sub-type of simultaneous interpreting, since the delivery of the original speech and the interpreting happen almost concurrently.
1.7. Escort interpreting
In this specific type of interpreting, the interpreter accompanies a person or a group of people (hence the name escort interpreting) who are paying a visit to an event such as a trade exhibition.
Note: This type of interpreting involves a combination of chuchotage and liaison interpreting.
1.8. Remote interpreting
In what is broadly referred to as remote interpreting, the interpreter is not in the same room as the speaker or listener. This could mean that the interpreter is in a booth or separate place on the premises, with hardwired connections.
The oldest form of remote interpreting, proposed as early as the 1950s, is telephone interpreting (over-the-phone interpreting), which became more widely used only in the 1980s and 1990s, particularly in intra-social settings (healthcare, police, etc.).
1.8.1. Telephone interpreting
Telephone interpreting is usually performed with standard telecommunications equipment in the bilateral consecutive mode over the phone. It is widely used in a business context, for medical examinations, and even in some courts in America. If a factory manager in the United States needs a component that is manufactured in Japan, he contacts a telephone interpreting service and asks for an English-Japanese interpreter. The interpreter interprets everything that is said. Freelance telephone interpreters are paid a retainer to be at the end of a phone line. Depending on their conditions of employment, they may be paid by the minute, or every five minutes, for actual interpreting time.
Note: The advantage of telephone interpreting is that it is available from anywhere, round the clock, in a large number of languages. It is obviously ideal for emergency situations and for first contacts. Advances in voice recognition processes mean that machine interpreting may become available over the phone in the future.
Note: A particular technology designed for the deaf and hard-of-hearing, known as video relay service (VRS), allows deaf users of sign language to communicate over the phone, the call being mediated by a video interpreter.
1.9. Simul-consec interpreting
Living in a time when technology is developing on a daily basis, we are used to facing new technologies which inevitably challenge some of our old conceptions. One such technology is the digital pen, which has been designed to assist interpreters (consecutive interpreters, in fact) to improve their performance by reducing the burden of taking detailed notes.
This has brought about a new hybridized mode of interpreting, labelled by different researchers as 'consec-simul with notes', 'digitally remastered consecutive', 'technology-assisted consecutive', 'digital recorder assisted consecutive', 'digital voice recorder assisted CI', or 'SimConsec'. The reason why it can be considered a hybrid mode is that it has features of both of the traditional modes of interpreting, i.e., consecutive and simultaneous. The basic idea is that a consecutive interpreter listens to the speaker while using the digital pen to make notes (not, of course, as extensively as in the traditional mode) and to record the speech segment. Then, when the speaker pauses and it is time for the interpreter to start rendering their interpretation, rather than merely depending on their notes and memory, the interpreter uses the play-back option on the pen and listens to the recorded segment of the speech while simultaneously rendering it into the target language.
2. Interpreting settings
If we approach the phenomenon of interpreting from a historical perspective, the most obvious criterion for categorization and labeling is the social context of interaction, or setting, in which the activity is carried out. In its distant origins, interpreting took place when (members of) different linguistic and cultural communities entered into contact for some particular purpose. Apart from such contacts between social entities in various inter-social settings, mediated communication is also conceivable within heterolingual societies, in which case we can speak of interpreting in intra-social settings.
 Inter-social settings
As the name suggests, inter-social interpreting takes place in international settings between diplomats, politicians, scientists, business representatives, etc., where the communicating parties are typically on an equal footing as representatives of a nation, party, company, or other organization.
The typical form of the inter-social setting is conference interpreting. International conference interpreting is used in the multilateral sphere, as in conferences attended by delegates and representatives of various nations and institutions.
 Intra-social settings
The intra-social setting, which is best represented by court interpreting and community interpreting (also referred to as public service interpreting, mainly in the UK), takes place within an institution of a given society, typically between a service provider or institutional authority and individuals speaking on their own behalf. Therefore, communication in an intra-social setting is characterized by an unequal distribution of knowledge and power, as in the case of a police interrogation, or a witness's testimony in court. In intra-social interpreting, since a bilingual interpreter is assumed to be mediating between two (monolingual) clients, it is referred to as bilateral interpreting or dialogue interpreting. Therefore, in community-based dialogue interpreting the format of interaction is typically dialogic, as opposed to international conference interpreting, where the format of interaction is typically monologic.
Note: An interpreting type whose linkage to the intra-social sphere is less obvious is media interpreting or broadcast interpreting (often focused on TV interpreting), which is essentially designed to make foreign-language broadcasting content accessible to media users within the socio-cultural community. Since spoken-language media interpreting, often from English, usually involves personalities and content from the international sphere, media interpreting appears as rather a hybrid form on the inter- to intra-social continuum.
3. Other modes of interpreting
─ The kind of interpreting designed for the purpose of trading and exchanging goods, of doing business, is called business interpreting.
─ Where the representatives of different linguistic and cultural communities come together with the aim of establishing and cultivating political relations, we speak of diplomatic interpreting.
─ Other types of interpreting include military interpreting, educational interpreting, healthcare interpreting (medical interpreting, hospital interpreting), legal interpreting (including, among others, police and asylum settings), and professional interpreting (as opposed to non-professional interpreting or natural interpreting, that is, interpreting done by bilinguals without special training for the task).
─ Advances in technology have led to what is known as automatic interpreting systems, which work on the basis of machine translation software and technologies for speech recognition and synthesis. While such machine interpreting is unlikely to deliver fully automatic high-quality interpreting in the near future, advances in mobile and cloud computing have led to impressive progress in the development of speech-to-speech translation for certain applications and domains.
─ The types of interpreting discussed so far fall under the general category of spoken-language interpreting, which is distinguished from sign language interpreting, popularly known also as interpreting for the deaf. Since deaf and hearing-impaired people may actually rely on a variety of linguistic codes in the visual rather than the acoustic medium, it is more accurate to speak of signed language interpreting (or visual language interpreting). Interpreting into a signed language is sometimes referred to, loosely, as signing (voice-to-sign interpreting or sign-to-sign interpreting), as opposed to voicing or voice-over interpreting (sign-to-voice interpreting). A special modality is used in communication with deafblind persons, who monitor a signed message, including fingerspelling, by resting their hands on the signer's hands (tactile interpreting).
((Gile's Effort Model of Consecutive Interpreting (CI)))
One of the best models proposed to date, which captures the components of the interpreting process very well, is Gile's effort model of interpreting. Basically, he proposed his model for explanatory purposes, i.e., as a way to understand the underlying reasons for interpreters' performance errors, omissions, etc. when they were not expected. Drawing on cognitive psychology, he first discusses the concepts of attention, automatic operations and non-automatic operations, from which he moves on to introduce the three main components, or efforts, of the model: the Listening and Analysis Effort, the Production Effort, and the Memory Effort. To these is added a fourth component, namely the Coordination Effort, which is the mental effort needed to coordinate the core efforts. This model was initially developed for simultaneous interpreting, but was then modified to suit consecutive interpreting as well. Gile's effort model of consecutive interpreting consists of two phases:
I. Gile's Effort Model for Phase 1 of CI (also known as the listening/comprehension phase). This phase, as you can see in the Figure below, involves the following efforts, respectively:
1. Listening and analysis effort
2. Note-taking effort
3. Coordination effort
4. Short-term memory effort
Consecutive Interpreting (phase 1) = Listening and Analysis Effort + Note-taking Effort + Short-term Memory Effort + Coordination Effort
Figure: Gile's Effort Model for Phase 1 of CI
Note: The mathematical signs used here are not used in their pure mathematical sense. For example, the equals sign (=) does not really mean 'equals' but rather 'involves'.
As the Figure shows, the task of consecutive interpreting, in its first phase, involves a number of efforts, the first of which is the listening and analysis effort. This is the amount of mental effort needed to analyze the linguistic input in order to comprehend it.
The second is the note-taking effort, the mental effort allocated to the production of notes which will be used in phase two of CI. The short-term memory effort is the processing capacity devoted to storing the incoming information until it is either written down in the form of notes or sent to long-term memory for later recollection. These can be considered the core efforts involved. In addition to these, there is the coordination effort as well.
II. Gile's Effort Model for Phase 2 of CI (also known as the production/reformulation phase): This phase, as you can see in the Figure below, involves the following efforts, respectively:
o Remembering effort
o Note-reading effort
o Production effort
o Coordination effort
Consecutive Interpreting (phase 2) = Remembering Effort + Note-reading Effort + Production Effort + Coordination Effort
Figure: Gile's Effort Model for Phase 2 of CI
In phase 2, the remembering effort is associated with all the cognitive operations required in recalling information from long-term memory, while the note-reading effort involves the operations necessary for using the notes taken during phase one to reconstruct the original speech. The production effort is the mental effort devoted to the actual formulation of the message in the target language, and the coordination effort makes it possible for all these processes to occur smoothly and harmoniously.
Note: A remarkable difference between phase 1 and phase 2 is that phase 1 is paced by the speaker. So the interpreter has to adapt to the pace of the speaker, which means that the coordination effort gains more significance, i.e., the proper, well-timed distribution of mental resources to different tasks becomes vital. In phase 2, however, it is the interpreter who sets the pace. Therefore, the pressure on the coordination effort is considerably reduced. This is the distinction between the two phases in terms of 'processing capacity' requirements.
The Figure below shows the processing capacity requirements for the first phase of CI.
TR = LR + NR + MR + CR
Figure: Processing Capacity Requirements for Phase 1 of CI
In this equation, TR stands for the total processing capacity requirements for consecutive interpreting, LR for the processing capacity requirements of the Listening and Analysis Effort, NR for the processing capacity requirements of the Note-taking Effort, MR for the processing capacity requirements of the Short-term Memory Effort, and finally CR for the processing capacity requirements of the Coordination Effort. It is rightly assumed in cognitive psychology that the processing capacity an individual has at a certain point in time is limited. Therefore, a consecutive interpreter has a limited amount of mental resources available to allocate to the cognitive tasks involved: listening and analysis, note-taking, short-term memory, and coordination. This makes it easy to see that if the total processing capacity required exceeds the total processing capacity available to the interpreter, failure is bound to occur. There is, however, more to it than meets the eye, for this is not the only condition to be met. There are cases in which the total capacity required is smaller than the total capacity available, and yet problems do arise. This can be attributed to inappropriate allocation of the available processing capacity between efforts. For example, if the interpreter allocates too much attention to the production of notes relating to a previously heard segment of the ST, she may not be able to direct enough attention to comprehension of the incoming segment of the message, and thus a portion of the message will go uninterpreted or misinterpreted.
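Gile's two conditions on capacity, that total demand must fit within total capacity, and that each individual effort must receive at least what it requires, can be illustrated with a toy sketch. All the numbers below are invented for illustration; Gile's model itself assigns no concrete values.

```python
# Toy illustration of Gile's processing-capacity conditions: effort
# requirements (R) versus the capacity actually allocated (A).
# All figures are invented for illustration.

def phase1_ok(required, allocated):
    """Interpreting runs smoothly only if total demand fits within total
    capacity AND every effort gets at least the capacity it requires."""
    total_required = sum(required.values())
    total_available = sum(allocated.values())
    if total_required > total_available:
        return False
    return all(allocated[e] >= required[e] for e in required)

required = {"listening": 4, "note_taking": 3, "memory": 2, "coordination": 1}

# Balanced allocation: every effort is covered, so interpreting succeeds.
balanced = {"listening": 4, "note_taking": 3, "memory": 2, "coordination": 1}
print(phase1_ok(required, balanced))   # True

# Same total capacity, but too much goes to note-taking, starving
# listening comprehension -> failure despite sufficient total capacity.
skewed = {"listening": 3, "note_taking": 4, "memory": 2, "coordination": 1}
print(phase1_ok(required, skewed))     # False
```

The second call mirrors the note-taking example in the text: the totals balance, yet misallocation alone causes part of the message to go uninterpreted.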
Consequently, not only is it necessary that the total processing capacity required be less than the total processing capacity available, but it is also a must that the processing capacity required for each of the efforts be less than the processing capacity allocated to it. When all these conditions are met, the interpretation can go on smoothly. The Figure below shows these necessary conditions.
(1) TR ≤ TA (2) LR ≤ LA (3) NR ≤ NA (4) MR ≤ MA (5) CR ≤ CA
Figure: Necessary Conditions for Phase 1 of CI
CHAPTER FOUR ((Note-taking))
1. Essential elements
Note-taking is one of the effective techniques in consecutive interpreting that interpreters should have at their disposal.
1.1. What to note?
 The essence: It is the gist of what is communicated.
 Fulcra: It refers to different kinds of links, such as causality, consequence, etc., and the relation of the ideas to one another in time.
 Transcodable terms: They are words that must be repeated rather than deverbalized and interpreted.
 Numbers: Numbers include statistics, dates, duration, etc.
 Proper nouns: They include names of people, places and things.
 Technical terms: Terms which are specific to the context of the speech.
 Lists: Lists of words which overload the memory.
 The first sentence: The first sentence of each new idea should be noted with particular care - not verbatim, but with care.
 The last sentence: The last sentence of the speech should be noted with particular care.
 Striking usage: It refers to words and expressions that stand out; the speaker has probably used them deliberately and wants them to appear in the interpretation.
2. Deciding on the language of note-taking
To beat the time pressure, notes can be taken in the source, target, or a combination of both languages, whichever comes first to the interpreter.
It is assumed that when notes are taken in the source language, there is still no interpreting taking place, and thus the load of the job is postponed to when the interpreter produces his/her rendering. However, when note-taking is done in the target language, one part of the job is already accomplished and production of the final rendering will be facilitated.
3. Noting vertically
Vertical notation, from the top to the bottom of the page rather than from left to right, is the distinguishing characteristic of Rozan's system, one that you will find in almost all interpreters' notes.
4. Noting diagonally
Diagonal notation is a kind of vertical noting in which notes read from top-left to bottom-right. Each subsequent element to be noted is written below and to the right of the previous one. (This is, of course, the case with left-to-right languages. In the case of right-to-left languages, it is exactly the other way around.) Start writing the main point from the left. Indent secondary and supporting details. Further indent major subgroups.
CHAPTER ONE ((Human and Machine Translation))
1. What makes translation difficult?
 Inter-linguistic differences or non-isomorphism between languages
 Discontinuous dependencies, where two words that belong together are separated by one or more intervening words, as in the following example: Send your certificate of motor insurance back
 Idioms, i.e., phrases whose meaning cannot be inferred on the basis of their constituent parts. Idioms, in other words, are non-compositional. A good example is 'old hat'.
2. Translation memory
In the 1990s, translators working in the growing software localization industry found themselves translating texts that were either extremely repetitive in themselves or that repeated verbatim whole sections of earlier versions of a document. This was the case, for example, with software manuals that had to be updated any time there was a new release of the software.
Rather than translate each sentence from scratch, as if it had never been translated before, they invented a tool that would store previous translations in a so-called translation memory, so that they could be reused. The tool, known as a translation memory tool, would take in a new source text, divide it into segments (sentences or other sensible units like headings or cells in tables), and then compare each of these segments with the source-language segments already stored in memory. If an exact match or a very similar segment was found, then the corresponding target-language segment would be offered to the translator for re-use, with or without editing. As translators worked their way through a new translation assignment, they would get hits from the translation memory, accept, reject or edit the existing translation, and update the memory as they went along, adding their own translations for the source-language segments for which no matches existed. Over time, the translation memories grew extremely large. Some companies who were early adopters of the technology built up translation memories containing hundreds of thousands and then millions of translation units, that is, source-language segments aligned with their target-language segments. Private translation enterprises also accumulated large translation memories, which came to be regarded as valuable linguistic assets that could help control translation costs and enhance competitiveness. International organizations such as the institutions of the European Union adopted the technology and built up huge multilingual translation memories, which they in turn made freely available to computer scientists in the knowledge that they could support research agendas in natural language processing.
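The segment-matching workflow described above can be sketched in a few lines of Python. The memory contents, the language pair and the similarity threshold are invented for illustration, and `difflib` stands in for the far more sophisticated fuzzy-matching algorithms commercial tools use.

```python
from difflib import SequenceMatcher

# A toy translation memory: source segments aligned with target segments.
# The English-German pairs are invented for illustration.
memory = {
    "Click the Save button.": "Klicken Sie auf die Schaltfläche Speichern.",
    "Restart the application.": "Starten Sie die Anwendung neu.",
}

def lookup(segment, threshold=0.75):
    """Return the best (source, target, score) hit above the threshold,
    or None if the memory offers nothing useful for this segment."""
    best = None
    for src, tgt in memory.items():
        score = SequenceMatcher(None, segment, src).ratio()
        if score >= threshold and (best is None or score > best[2]):
            best = (src, tgt, score)
    return best

print(lookup("Click the Save button."))    # exact hit, score 1.0
print(lookup("Click the Print button."))   # fuzzy hit, offered for editing
print(lookup("Completely new sentence."))  # None -> translate from scratch
```

The three calls correspond to the three situations in the text: an exact match reused as-is, a fuzzy match offered to the translator for editing, and a no-match segment that the translator translates and then adds to the memory.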
While translation memory was originally conceived as a way of improving, among other things, the productivity of human translators, it also eventually supported efforts to increase automation in the translation industry: on the one hand, translation memory tools enabled translation data to be created in great quantities and in a format that could be easily used in machine translation development; on the other hand, the tools used to manage them provided an editing environment in which machine translation outputs could later be presented to human translators for editing alongside human translations retrieved from conventional translation memory. Translation memories can be seen as a special type of parallel corpus, that is, a collection of source texts aligned at sentence level with their target texts.
3. What is machine translation?
Machine translation can be briefly defined as translation performed by a computer program, like Google Translate. Machine translation was one of the first non-numerical applications of the digital computers that emerged in the aftermath of the Second World War. Despite the undoubted usefulness of machine translation, it comes with some health warnings. First, just like human translators, machine translation systems can make mistakes. Errors might range from the amusing but trivial to the extremely serious. Machine translation also raises a surprising number of moral and legal issues. Machine translations can be used for assimilation and dissemination purposes: If you are simply using machine translation to get the gist of a text, to understand the basic contents of a web page, then we can say you are using machine translation for assimilation. Such uses generally involve low-stakes, private use of the translated text in question, with little risk of reputational or other damage.
If, however, you want to use machine translation for dissemination, for example to publish your blog in a second language, or to advertise your business, then it is wise to understand the risks involved and even to take measures to mitigate them. The ability to do so is a component of what is now known as machine translation literacy.
4. Artificial intelligence, machine learning and machine translation
Contemporary machine translation is frequently mentioned alongside a number of other related concepts, including artificial intelligence, machine learning, artificial neural networks and deep learning.
 Artificial intelligence (AI) is the most general category. It is often defined as the branch of computer science that aims to create machines, or more specifically computer programs, that can solve problems of the kind that would normally require human intelligence. The machines in question don't necessarily have to think like humans; rather, they need to act like an intelligent human would. They might be designed to solve fairly narrowly defined problems, like recognizing faces. Such goals are the stuff of narrow AI, also known, somewhat unkindly, as weak AI. So-called strong AI is a more aspirational undertaking. It would involve either general AI, in which machines would have human-like intelligence, be self-aware, and be able to learn and plan for the future, or superintelligence, which would involve intelligence that exceeds the abilities of any human. It is fair to say that translation, as practiced by professional, human translators, requires the kind of intelligence that strong AI aspires to, but that such intelligence still remains beyond the capacity of machine translation systems.
4.1. Rule-based machine translation
One way to tackle the challenges of AI is to attempt to give a computer program all the knowledge it would need to solve a particular problem, along with rules that specify how it can manipulate this knowledge.
In the case of machine translation, for example, you can give the program a list of all the words in each of the source and the target languages, along with rules on how they can combine to create well-formed structures. You can then specify how the words and structures of one language can map onto the words and structures of the other language, and give the machine some step-by-step instructions (an algorithm) on how to use all this information to create translated sentences. This approach, known as rule-based machine translation (RBMT), dominated machine translation up until the early part of this century. When free online machine translation first became available in 1997, for example, it was based on RBMT.

RBMT was beset by a number of problems, however. It was very expensive to develop, requiring highly skilled linguists to write the rules for each language pair. Like other knowledge-based approaches to AI, it suffered from knowledge bottlenecks: it was simply impossible in many cases to anticipate all the knowledge necessary to make RBMT systems work as desired. This applies both to knowledge about language and knowledge about the wider world, so-called real-world knowledge.

4.2. Data-driven machine translation

This is where machine learning comes in. Machine learning is based on the premise that rather than telling a machine, or, more precisely, a computer program, everything it needs to know from the outset, it is better to let the machine acquire its own knowledge. The machine does so by observing how the problem it is intended to solve has been solved in the past. We have already seen how translation problems and their solutions can be captured at segment level in the translation units stored in translation memories and other parallel corpora. These translation units constitute the training data from which contemporary machine translation systems learn. This is why such systems are usually categorized as data-driven.
And learning from data is what distinguishes machine learning from other types of AI. Data-driven machine translation is divided into two types: statistical machine translation and neural machine translation, each of which is addressed below.

4.3. Statistical Machine Translation

Statistical machine translation (SMT) systems basically build two types of statistical models based on the training data. The first model, known as the translation model, is a bilingual one in which words and so-called phrases found in the source-language side of the training data appear in a table alongside their translations as identified in the target-language side of the training data, and each source-target pairing is given a probability score. The ensuing structure is known as a phrase table.

Note: The term "phrase" is something of a misnomer here, however, as the strings in question don't necessarily correspond to phrases as commonly understood in linguistics. Rather, they are n-grams, that is, strings of one, two, three or n words that appear contiguously in the training data. In the previous sentence, "appear contiguously" is a bigram, for example, and "appear contiguously in" is a trigram.

The second model, known as the language model, is a monolingual model (or combination of models) of the target language. Again, it is based on n-grams. A trigram target language model, for example, would give the probability of seeing a particular word in the target language, given that you had already seen the two words in front of it.

In SMT systems, the translation model is supposed to capture knowledge about how individual words and n-grams are likely to be translated into the target language, while the language model tells you what is likely to occur in the target language in the first place. What is really important from the current perspective is that linguists don't have to handcraft these models. Rather, they are learned directly from the data by the machine in a training phase.
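The n-gram counting that underlies such language models can be sketched in a few lines of Python. This is a toy illustration with made-up data, not a production SMT component: it simply counts bigrams and trigrams in a tiny corpus and estimates the probability of a word given the two words before it.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return all contiguous n-grams in a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Toy monolingual training data for a trigram language model.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

trigram_counts = Counter(ngrams(corpus, 3))
bigram_counts = Counter(ngrams(corpus, 2))

def trigram_prob(w1, w2, w3):
    """Estimate P(w3 | w1 w2): how likely w3 is, given the two preceding words."""
    if bigram_counts[(w1, w2)] == 0:
        return 0.0
    return trigram_counts[(w1, w2, w3)] / bigram_counts[(w1, w2)]

print(trigram_prob("sat", "on", "the"))  # every "sat on" here is followed by "the"
print(trigram_prob("on", "the", "mat"))  # "on the" is followed by "mat" half the time
```

Real SMT systems estimate these probabilities over millions of sentences and combine them with a translation model during decoding, but the counting principle is the same.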
In a second phase, called tuning, system developers work out the weight that should be assigned to each model to get the best output. Once the system is trained and tuned, it is ready to translate previously unseen source sentences. Translation (as opposed to training) is called decoding in SMT. It generally involves generating many thousands of hypothetical translations for the input sentence, and calculating which one is the most probable, given the particular source sentence, the models the system has learned, and the weights assigned to them.

SMT was state-of-the-art in machine translation for at least a decade up to 2015. It represented a huge advance compared to the RBMT systems that preceded it, but suffered from a number of deficiencies, most of them due to the fact that relatively short n-grams were used to build models and that n-grams in the same sentence were translated almost as if they were independent of each other. SMT performed particularly poorly on agglutinative and highly inflected languages. Other problems included word drop, where a system simply failed to translate a word, and inconsistency, where the same source-language word was translated two different ways, sometimes in the same sentence.

By 2015, SMT was already being displaced by a competing approach to data-driven machine translation, the above-mentioned neural approach. Within a matter of two years the transition to neural machine translation was complete.

4.4. Neural Machine Translation

SMT had its heyday between 2004 and 2014. Most major users and suppliers of machine translation, including Google Translate (from 2007) and the European Commission (from 2010), were using the technology. Until 2015, that is. That year a neural machine translation (NMT) system developed at Stanford University beat a number of SMT systems by a wide margin and on what was considered a difficult language pair. The Stanford success heralded the beginning of what Bentivogli et al. call "the new NMT era."
The excitement was palpable among researchers and especially in the press. Grand claims were made about the new technology, for example, that it was as good as professional, human translation and had thus reached human parity. It was also claimed, with some justification, that NMT could learn "idiomatic expressions and metaphors", and "rather than do a literal translation, find the cultural equivalent in another language". But while there is some truth in such claims, they should not be over-interpreted. An NMT system might indeed produce an idiomatic translation, but this is generally because the data it has learned from contain hundreds or maybe thousands of examples of that very translation. An NMT system (in this case Google Translate) does not know it is being idiomatic, or using a cultural equivalent, when it correctly translates an idiom. Rather, it is outputting what it has learned from data.

But why is NMT so much better than SMT, if it is simply learning from data? Is that not what SMT was already doing? The answer lies in the kind of representations that NMT systems use and in the kind of models they learn.

4.4.1. Models in NMT

Let's start with models. A computer model is an abstract, mathematical representation of some real-life event, system or phenomenon. One use of such a model is to predict an answer to a previously unseen problem. A computational model of translation, for example, should be able to predict a target-language sentence given a previously unseen source-language sentence. We have already seen that SMT systems use probabilistic models of translation and the target language that are encapsulated in phrase tables and n-gram probabilities. NMT systems, in contrast, use models that are inspired, even if only loosely, by the human brain. They use artificial neural networks, in which thousands of individual units, or artificial neurons, are linked to thousands of other artificial neurons (let's just call them neurons from now on).
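The basic computation performed by one such unit can be sketched as a weighted sum of its inputs passed through an activation function. This is a deliberately simplified illustration with invented numbers; real NMT networks contain vastly more units and more elaborate architectures.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs, squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation, output in (0, 1)

# Stimuli from three upstream neurons, with (hypothetical) learned connection weights.
activation = neuron([0.5, 0.1, 0.9], [0.4, -0.2, 0.7], bias=0.1)
print(round(activation, 3))  # ≈ 0.713
```

Training, as described below, amounts to repeatedly adjusting the weights and biases of millions of such units so that the network's predictions move closer to the correct translations in the training data.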
In such a network, each neuron is activated depending on the stimuli received from other neurons, and the strength or weight of the connections between neurons. As Forcada explains, the activation states of individual neurons do not make much sense by themselves. It is, instead, the activation states of large sets of connected neurons that can be understood as representing individual words and their relationships with other words. The trick in training an NMT system is to learn precisely those weights that will result in the best performing model of translation, that is, the model whose activation states allow it to predict the best translations.

So how is this done? As in all machine learning, the system learns from data. A neural model of translation is built step by step by exposing a learning algorithm to vast quantities of parallel data. In successive passes, the algorithm learns weights and keeps adjusting those weights, so that the predictions of the model it builds get closer and closer to a desired correct answer.

It suffices to say here that data-driven machine translation is typical of machine learning in that it involves technologies that are developed to solve problems to which humans already know the answer and to which, in fact, humans have already supplied at least one, if not several, correct answers. Such correct answers may be present in the training data or they may be arrived at through generalization from the training data. When a machine translation system is tested to see whether it is improving during training, or to compare it to another system once training has finished, we also test by giving it a problem to which we already know the answer. Typically, we ask it to predict the translation of several sentences it has never seen before but for which we already have good (human) translations that we set aside specifically for this purpose. When an NMT system has been trained to our satisfaction,
it can be put into use in a real translation scenario. We no longer talk about "testing" the system, and instead talk about using it. When an NMT system is in actual use, most people say that the system is "translating". As with SMT, computer scientists also use the term decoding for the moment when an NMT system produces an output in the target language.

4.4.2. Representing words in NMT

In NMT, this type of representation is used: the vector, which is a fixed-size list of numbers. The word apple could be represented by a vector like [1.2, 0.2, 8.0, 6.1, 0], for example. Vectors are quite good at representing relationships between words. Vectors have other interesting properties that make them particularly attractive to computer scientists. You can add a vector to another vector, for example, or multiply them, and so on.

The vector-based representations of words that the machine learns are called word embeddings. The reason why embeddings for related words end up looking similar to each other is that they are built up on the basis of where particular words are found in the training data. Word embeddings are not built in one go, but rather in successive layers. An artificial neural network that has multiple layers sandwiched between its external layers is known as a deep neural network. Deep learning, in turn, is simply the branch of machine learning that uses multiple layers to build representations. In a deep neural network the external layers correspond to inputs and outputs of the network and are visible to the human analyst. The intermediary, or hidden, layers have traditionally been less open to scrutiny, however, giving deep learning a reputation for opacity, and encouraging some commentators to misleadingly use the word "magic" to describe the internal workings of deep neural networks.

5. The advantages and disadvantages of neural machine translation

NMT is generally considered the best performing type of machine translation invented so far.
It performs better than SMT, for example, because it can build up very rich representations of words as they appear in a given source text, taking the full source sentence into account, rather than mere n-grams. When it produces translations, an NMT system considers both these rich representations and the emerging target sentence at the same time. Because NMT handles full sentences, it is better at dealing with tricky linguistic features like discontinuous dependencies, and it handles all sorts of agreement phenomena better than SMT.

But while contemporary NMT systems certainly handle full sentences, until recently they did not look beyond the current sentence. This meant that they could not use information from a previous sentence to work out what a pronoun like "it" refers to in the current sentence. This restriction to sentence-level processing can cause lots of other problems that only become apparent when users translate full texts rather than isolated sentences. The problem is currently being tackled by researchers working in the area of document-level machine translation, however.

NMT can also output words that don't actually exist in the target language. Far more seriously, NMT output can be fluent but inaccurate. And when a translation looks and sounds good, one might neglect to check that it is compatible with the source text. Like other technologies trained on large quantities of existing text, it can also amplify biases encountered in the training data.

NMT systems take much longer and much more computing power to train than their predecessors, and use up vast quantities of energy in the process. They usually require dedicated, expensive hardware in the form of graphical processing units. They also need massive quantities of training data, which are not available for every language pair.
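The vector representations described in 4.4.2 can be compared with simple arithmetic; a common measure is cosine similarity, which scores how closely two vectors point in the same direction. The toy three-dimensional vectors below are invented for illustration; real embeddings have hundreds of dimensions.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy embeddings: related words are given similar vectors for illustration.
apple = [1.2, 0.3, 4.1]
pear = [1.0, 0.4, 3.9]
democracy = [4.0, 2.5, 0.2]

# "apple" should be more similar to "pear" than to "democracy".
print(cosine_similarity(apple, pear) > cosine_similarity(apple, democracy))  # True
```

In a trained system, of course, such similarities are not assigned by hand but emerge from the distribution of words in the training data, as explained above.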
Note: Improvements in the technology have also led some people to question the wisdom of learning foreign languages: if a machine can translate anything anyone else says or writes in a foreign language into your language, why go to all the trouble of learning their language? Such arguments are based on a very limited understanding of the benefits of second or foreign language learning, however, and ignore the fact that machine translation is viable for only a small number of the world's languages. They also tend to see machine translation as being in competition with language learning, rather than possibly being an aid in the process.

6. Four last things you need to know about machine translation

Many readers are likely to use only free, online machine translation and so will encounter only generic engines built for the language pair that interests them. But even these readers should be interested to learn that:

 different systems may output different translations;
 different engines in the same system may output different translations;
 a single system may output different translations for the same input depending on the co-text;
 a single system's outputs may change over time.

7. Conclusions

In one way, NMT is just the latest in a line of technologies designed to automate translation, albeit one that has risen to prominence remarkably quickly. Its success could lead to policy makers and ordinary citizens questioning the value of learning foreign languages or training human translators. But such positions would ignore the fact that NMT still relies on human translations, or at least translations validated by humans, as training data. And because NMT, like other types of machine translation, is not infallible, its outputs still need to be evaluated and sometimes improved by people who can understand both source and target texts.
There is also a pressing need for machine translation literacy among even casual users of the technology, so that they do not suffer unnecessarily because of ignorance of how the technology works. Given the right conditions, NMT can be a vital pillar in the promotion and maintenance of multilingualism, alongside language learning and continued translation done or overseen by humans.

CHAPTER TWO ((Selecting and preparing texts for machine translation: Pre-editing and writing for a global audience))

Neural machine translation (NMT) is providing more and more fluent translations with fewer errors than previous technologies. Consequently, NMT is becoming a real tool for speeding up translation in many language pairs. However, obtaining the best raw MT output possible in each of the target languages and making texts suitable for each of the target audiences depends not only on the quality of the MT system but also on the appropriateness of the source text.

This chapter deals with the concept of pre-editing, the editing of source texts to make them more suitable for both machine translation and a global target audience. Put simply, pre-editing involves rewriting or editing parts of source texts in a way that is supposed to ensure better quality outputs when those texts are translated by machine. It may involve applying a formal set of rules, sometimes called controlled language rules, which stipulate the specific words or structures that are allowed in a text, and prohibit others. Alternatively, it can involve applying a short list of simple fixes to a text, to correct wrong spellings, or impose standard punctuation, for example. Another way to ensure that a text is translatable is to write it that way in the first place.
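A "short list of simple fixes" of the kind just mentioned can be sketched as a small normalization pass over the source text. The specific rules below (two toy spelling corrections and two punctuation norms) are invented for illustration; real controlled-language rule sets are much richer and language-specific.

```python
import re

# Hypothetical spelling fixes of the kind a pre-editing pass might apply.
SPELLING_FIXES = {"recieve": "receive", "teh": "the"}

def pre_edit(text):
    """Apply toy pre-editing fixes: spelling, stray whitespace and punctuation."""
    for wrong, right in SPELLING_FIXES.items():
        text = re.sub(rf"\b{wrong}\b", right, text)
    text = re.sub(r"\s+", " ", text).strip()    # collapse stray whitespace
    text = re.sub(r"\s+([.,;:])", r"\1", text)  # no space before punctuation
    return text

print(pre_edit("Please  recieve teh file ."))  # Please receive the file.
```

Cleaning the source text in this way removes tokens an MT system may never have seen in training, before the text is ever submitted for translation.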
Writers whose work will ultimately be translated into multiple languages are thus often asked to write with a global audience in mind. As well as applying principles of "clear writing", they are asked, for example, to avoid references that may not be easily understood in cultures other than their own. This applies also to writers whose work will be read in the original language by international readers who are not native speakers of that language.

1. Pre-editing and NMT

In the past, when rule-based systems produced obvious, and often systematic, errors in adequacy and fluency, pre-editing was often necessary to get the best out of MT. Even after the transition to statistical MT (SMT), researchers still found pre-editing to be useful. Some, however, believe that pre-editing is not an effective strategy with NMT systems, because these systems make fewer errors.

2. Genre and domain-based advice

Based on their own experience with MT, many language service providers recommend restricting the use of MT, and by extension NMT, to the translation of:

 Certain types of technical documentation: These usually involve already standardized texts, in which terminology use is already strict in itself, the style is direct and simple, and the use of linguistic resources closely resembles the application of controlled language rules. The conceptual framework that underpins such technical documentation may also be identical in both the source and target locales.

 Low-risk internal documentation: These are texts which have very low visibility and where the consequences of less-than-optimal translation are not serious. They may even be limited to use within the user's or client's company. A priori, considerations such as naturalness or fluency in the target language are less relevant than would otherwise be the case (although NMT generally produces quite fluent output anyway), but companies may still wish to control lexical selection and lexical variability.
 Low-risk external documentation: This refers to texts that are consulted only occasionally or sporadically, or texts that are used as a help database or similar, and that are often not produced by the client, but by the community of users of its service or product. In many such cases, the MT provider may explicitly deny liability for any losses caused by faulty translations.

Note: MT is not usually recommended for texts of a more visible nature whose purpose is not just to inform or give instructions but also to be appellative, that is, to arouse a particular interest in the reader, for example, in a certain brand, or to elicit a certain behavior. In other words, the more informative a text is, the more it limits itself to the literalness of its message, the less implicit information it contains and the less it appeals to references linked to the reader's culture or social reality, the greater the expected success of MT.

3. Writing for a global audience

Sometimes the objective of pre-editing is not a matter of avoiding errors in the translated text, but rather of ensuring that the translation, beyond conveying a meaning consistent with that of the source text, also achieves the same or a similar effect on the reader of the target text as the source text did on its reader, to the extent that this is possible. It is a question of making a text available to a global audience and attempting to have the same effect on readers in each target language.

Whether the text is to be translated with an MT system or not, from a communication perspective, for years it has been considered advisable to have the translation already in mind during the drafting phase of the source text. In fact, the preparation of documentation for translation forms part of the training of technical writers. NMT allows users to obtain translations with fewer and fewer errors of fluency or adequacy. It enables translations to be completed very quickly.
Moreover, it seems to achieve excellent results when translating many different text genres. But a text written grammatically in the target language and without translation errors may still not be an appropriate translation. Pre-editing makes it possible to ensure the appropriateness of the translation with a global audience in mind. Currently, this phase is seldom used in the translation industry. In the past, some global companies using SMT or RBMT pre-edited their original texts to avoid recurring translation errors in their own systems. With NMT, pre-editing may become widespread in the industry as part of a strategy that not only avoids translation errors but also contributes to making the raw MT output appropriate to the contexts of use of the target translation.

4. Pre-editing guidelines

Pre-editing is based on applying a series of specified strategies to improve MT results when preparing content for a global audience or in controlled domains. Pre-editing helps to ensure clear communication in controlled domains targeting global audiences. In this context, the predominant textual type is informational, where there is no creative or aesthetic use of language but a literal and unambiguous use with the intention of either informing or instructing the text's recipient. The following are the most common guidelines used in communication for a global audience, and are the basis for pre-editing strategies. The aim of most of these guidelines is to increase MT effectiveness in producing grammatically correct translations that reproduce the source text "message", and also to obtain translations that are appropriate to the communicative situation of the receiver according to the text function and the context in which it is used.
These guidelines can be grouped into three different categories:
- Lexical choice
- Structure and style
- Referential elements

Whatever the case, the success of pre-editing will be determined by two considerations:

 The function of the (source and target) text: the greater the predominance of the informative or instructive function over the phatic or aesthetic functions, the more sense it makes to pre-edit the original text.

 The kind of errors in the raw MT output that the chosen MT system produces and that should be avoided or minimized by pre-editing the source text.

Pre-editing has two objectives:

 To prepare the original text so that the most error-free possible raw MT output can be obtained, and also

 To prepare the original text so that its translation through MT is suitable for a global audience.

The pre-editing guidelines presented in this section respond to these two objectives.

4.1. Lexical guidelines

The way each word or unit of meaning is processed in NMT is determined by its context and vice versa. A lexical choice in a text is linked to the range of texts and contexts in which the same choice is used. Let's take the case of a source text to be translated by MT and, consequently, to be published in several target languages in the shortest time possible. An appropriate choice of words in the source text can contribute not only to avoiding translation errors, but also to complying more effectively with the linguistic uses in accordance with the function of the text and the reason for its publication. The table below contains typical guidelines related to the lexicon.

Table: Typical lexical pre-editing guidelines

 Avoid lexical shifts in register: Avoid words that can change the style of the text or the way it addresses the receiver. This facilitates understanding the text and normalizes the way the receiver is addressed.

 Avoid uncommon abbreviations: Only use commonly found abbreviations. Avoid abbreviated or reduced forms that cannot be easily translated from their immediate context.

 Avoid unnecessary words: Avoid words that are unnecessary for transmitting the information required. Using more words than needed means that the NMT system handles more word combinations and has more opportunities to propose an inappropriate or erroneous translation.

 Be consistent: Use terminology in a consistent and coherent way. Avoid introducing unnecessary word variation (that is, avoid synonymy).

4.2. Structure and style

The way a text is formulated in general, and its individual sentences in particular, are as important in terms of comprehensibility as the lexicon used. The order in which ideas are interrelated, at the sentence level, throughout a text, or even intertextually, contributes to the reader's comprehension and interpretation. In the case of NMT, the options adopted in the source text activate or inhibit translation options. An unnecessarily complex and ambiguous text structure that allows objectively different interpretations increases the possibility of the NMT system proposing correct translations of microstructural elements (terminology, phrases or syntactic units) which, when joined together in the same text, generate texts that are internally incoherent or suggest a different meaning to the source text.

Table (a) below gives pre-editing guidelines regarding the style and structure of the text. Most of them are not only aimed at optimizing the use of NMT systems, but also at the success of the translated text in terms of comprehensibility and meaning. Most of the guidelines listed in Table (a) are aimed at producing a simple text that can be easily assimilated by the reader of the source text. In the case of NMT engines trained with data sets already translated under these criteria, source text pre-editing helps to obtain the best raw MT output possible.
Note, however, that if an engine is trained on "in-domain" data, that is, using a specialized and homogeneous dataset, based on texts of a particular genre and related to a particular field of activity, then the best possible pre-editing, if needed, will involve introducing edits that match the characteristics of that genre and domain.

In addition to this general advice, in many cases it is also necessary to take into account guidelines that are specific to the source or target language. This might mean avoiding formulations that are particularly ambiguous, not only for the MT system, but also for the reader. If we take English, for instance, avoiding ambiguous expressions means, for example, avoiding invisible plurals. A noun phrase such as "the file structure" could refer to both "the structure of files" and "the structure of a particular file". Although this ambiguity is resolved as the reader moves through the text, the wording of the noun phrase itself is not clear enough to provide an unambiguous translation.

Another example of ambiguous structures in many languages, not only in English, is often the way in which negation is expressed. Sentences such as "No smoking seats are available." are notorious for giving rise to different interpretations and, consequently, incorrect translations.

Verb tense forms are another aspect that may be simplified for the sake of intelligibility for the reader and error-free translations. Although the translation of the different verb tense forms and moods does not necessarily pose a problem for MT, an inappropriate use of verb tenses in the target language, despite resulting in well-formed sentences, can lead to comprehension errors in the target text. Typical guidance related to verb forms is given in Table (b).

Table (a): Aspects related to structure and style in pre-editing

 Short and simple sentences: Avoid unnecessarily complex sentences that introduce ambiguity. This makes it easier to understand the source text and avoids confusion. Anaphoric or cataphoric references may not be correctly handled by the NMT system and may lead to omissions or mistranslations. Avoid syntactic ambiguities subject to interpretation.

 Complete sentences: Avoid eliding or splitting information. The compensation mechanisms for the non-typical structure of sentences do not necessarily work in the target language. For instance, a sentence written in passive form which does not make the agent explicit can lead to misunderstanding in target texts. The same can happen when one of the sentence complements is presented as a list of options (e.g., in a bulleted list). In such cases, the sentence complement is broken down into separate phrases which the NMT system may process incorrectly. Remember that NMT systems only use the sentence as a translation unit, i.e., the text between punctuation marks such as full stops or paragraph breaks.

 Use the same syntactic structure in sentences in a list or that appear in the same context (e.g., section headings, direct instructions). This kind of consistency usually makes it easier to understand the text, both during the source publishing phase and post-publishing corrections.

 Active voice: Where appropriate, use mainly the active voice or other structures that make "participants" in an action explicit (taking into account t
