The Effect of Machine Translation on the Performance of Arabic-English QA System

EACL 2006 Workshop on Multilingual Question Answering - MLQA06

The Effect of Machine Translation on the Performance of Arabic-English QA System

Azzah Al-Maskari and Mark Sanderson, Dept. of Information Studies, University of Sheffield, Sheffield, S10 2TN, UK ([email protected], [email protected])

Abstract

The aim of this paper is to investigate how much the effectiveness of a Question Answering (QA) system was affected by the performance of Machine Translation (MT) based question translation. Nearly 200 questions were selected from TREC QA tracks and run through a question answering system. It was able to answer 42.6% of the questions correctly in a monolingual run. These questions were then translated manually from English into Arabic and back into English using an MT system, and then re-applied to the QA system, which was able to answer 10.2% of the translated questions. An analysis of what sort of translation error affected which questions was conducted, concluding that factoid-type questions are less prone to translation error than others.

1 Introduction

Increased availability of on-line text in languages other than English and increased multi-national collaboration have motivated research in Cross-Language Information Retrieval (CLIR). The goal of CLIR is to help searchers find relevant documents when their query terms are chosen from a language different from the language in which the documents are written. Multilinguality has also been recognized as an important issue for the future of QA (Burger et al. 2001); the multilingual QA task was introduced for the first time in the Cross-Language Evaluation Forum CLEF-2003.

According to the Global Reach web site (2004), shown in Figure 1, it can be estimated that an English speaker has access to around 23 times more digital documents than an Arabic speaker. One can conclude from the information shown in the figure that cross-language retrieval is potentially very useful when the required information is not available in the user's language.

Figure 1: Online language population (March 2004): English 39.6%, Chinese 14.7%, Spanish 9.6%, Japanese 9%, German 7.4%, French 4.5%, Korean 4.2%, Italian 4.1%, Arabic 3.3%, Dutch 1.9%, Portuguese 1.7%.

The goal of a QA system is to find answers to questions in a large collection of documents. Because the overall accuracy of a QA system is directly affected by its ability to correctly analyze the questions it receives as input, a Cross-Language Question Answering (CLQA) system will be sensitive to any errors introduced during question translation. Many researchers criticize the MT-based CLIR approach. Their criticism mostly stems from the fact that current MT translation quality is poor; in addition, MT systems are expensive to develop and their application degrades retrieval efficiency due to the cost of the linguistic analysis.

This paper investigates the extent to which MT error affects QA accuracy. It is divided as follows: section 2 describes relevant previous work on cross-language retrieval; section 3 explains the experimental approach, including the procedure and systems employed, and discusses the results obtained; section 4 draws conclusions and outlines future research on what improvements need to be made to MT systems.
2 Related Research

CLIR is an active area, and extensive research has been conducted on CLIR and on the effect of MT on the retrieval effectiveness of QA systems. Lin and Mitamura (2004) point out that the quality of translation is fully dependent upon the MT system employed.

Perret (2004) proposed a question answering system designed to search French documents in response to French queries. He used automatic translation resources to translate the original queries from Dutch, German, Italian, Portuguese, Spanish, English and Bulgarian, and reports that performance in the monolingual task was 24.5%, dropping to 17% in the bilingual task. A similar experiment was conducted by Plamondon and Foster (2003) on TREC questions, which measured a drop of 44%; in another experiment using Babelfish, the performance dropped even more, by 53%. They believe that CLEF questions were easier to process because they did not include definition questions, which are harder to translate. Furthermore, Plamondon and Foster (2004) compare the cross-language version of their Quantum QA system with the monolingual English version on CLEF questions and note that the performance of the cross-language system (French questions and English documents) was 28% lower than that of the monolingual system using IBM1 translation. Tanev et al. (2004) note that the DIOGENE system, which relies on MultiWordNet, performs 15% better in the monolingual task (Italian-Italian) than in the cross-language task (Italian-English). In Magnini et al.'s (2004) report for the year 2003, the average performance of correct answers on monolingual tasks was 41%, against 25% on the bilingual tasks; for the year 2004, the average accuracy was 23.7% on the monolingual tasks and 14.7% on the bilingual tasks.

As elucidated above, much research has been conducted to evaluate the effectiveness of QA systems in a cross-language setting by employing MT systems to translate the queries from the source language to the target language. However, most of it has focused on European language pairs. To our knowledge, only one past example of research has investigated the performance of a cross-language Arabic-English QA system, Rosso et al. (2005). The QA system used by Rosso et al. (2005) is based on a system reported in Del Castillo (2004). Their experiment was carried out using the question corpus of the CLEF-2003 competition. They used questions in English and compared the answers with those obtained after translating back into English an Arabic question corpus which had been manually translated. For the Arabic-English translation process, an automatic machine translator, the TARJIM Arabic-English machine translation system, was used. Rosso et al. reported a decrease in QA accuracy of more than 30% caused by the translation process.

Work in the Rosso paper was limited to a single QA and MT system, and it did not analyze types of errors or how those errors affected different types of QA questions. Therefore, it was decided to conduct further research on MT systems and their effect on the performance of QA systems. This paper presents an extension of the previously mentioned study, but with a more diverse range of TREC data, a different QA system and a different MT system.

3 Experimental Approach

To run this experiment, 199 questions were randomly compiled from the TREC QA track, namely from TREC-8, TREC-9, TREC-11, TREC-2003 and TREC-2004, to be run through AnswerFinder; the results are discussed in section 3.1. The selected 199 English TREC questions were translated into Arabic by one of the authors (who is an Arabic speaker) and then fed into Systran to translate them back into English. The analysis of the translation is discussed in detail in section 3.2. A sketch of this round-trip evaluation procedure is given below.
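The round-trip evaluation described above can be pictured as a short loop: answer the original English questions, answer the back-translated versions, and compare the two accuracies. The sketch below is a minimal illustration in Python under that reading of the procedure; translate_ar_to_en and answer_question are hypothetical placeholders standing in for the Systran translation step and the AnswerFinder system, neither of which exposes the interface shown here.

```python
# Minimal sketch (not the authors' code) of the round-trip MT/QA evaluation
# described in section 3: answer the original English questions, answer the
# machine-back-translated versions, and compare the two accuracy figures.

def translate_ar_to_en(arabic_question: str) -> str:
    """Placeholder for the Arabic-to-English MT step (Systran in the paper)."""
    raise NotImplementedError("plug in whatever MT system is available")

def answer_question(english_question: str) -> str:
    """Placeholder for the QA system (AnswerFinder in the paper)."""
    raise NotImplementedError("plug in whatever QA system is available")

def round_trip_accuracy(questions, manual_arabic, is_correct):
    """questions: original English TREC questions;
    manual_arabic: their manual Arabic translations, in the same order;
    is_correct: judge comparing a returned answer with the answer key."""
    n = len(questions)
    mono = sum(is_correct(q, answer_question(q)) for q in questions)
    back_translated = [translate_ar_to_en(a) for a in manual_arabic]
    cross = sum(is_correct(q, answer_question(t))
                for q, t in zip(questions, back_translated))
    return mono / n, cross / n   # e.g. 42.6% vs 10.2% reported in the paper
```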
3.1 Performance of AnswerFinder

The 199 questions run over AnswerFinder were divided as follows: 92 factoid questions, 51 definition questions and 56 list questions. The answers were manually assessed following an assessment scheme similar to the answer categories in iCLEF 2004:

• Correct: the answer string is valid and supported by the snippets.
• Non-exact: the answer string is missing some information, but the full answer is found in the snippets.
• Wrong: the answer string and the snippets are missing important information, or both the answer string and the snippets are wrong compared with the answer key.
• No answer: the system does not return any answer at all.

Table 1 provides an overall view: the system answered 42.6% of the questions correctly and 25.8% wrongly, gave no answer for 23.9%, and answered 8.1% non-exactly. Table 2 shows AnswerFinder's ability to answer each type of question separately.

Answer Type    Percentage
Correct          42.6%
Non-exact         8.1%
Wrong            25.8%
No answer        23.9%

Table 1. Overall view of AnswerFinder monolingual accuracy

             Factoid   Definition   List
Correct         63          6        15
Not exact        1          6         9
Wrong           22         15        13
No answer        6         23        18

Table 2. AnswerFinder monolingual accuracy by question type

3.2 Analysis of Translation

Below is a discussion of Systran's translation accuracy and the problems that occurred during translation of the TREC QA track questions.

Type of Translation Error    Percentage
Wrong Transliteration          45.7%
Wrong Sense                    31%
Wrong Word Order               25%
Wrong Pronoun                  13.5%

Table 3. Types of Translation Errors

Wrong Transliteration. Wrong transliteration is the most common error encountered during translation. Transliteration is the process of replacing words in the source language with their phonetic equivalent in the target language. Al-Onaizan and Knight (2002) state that transliterating names from Arabic into English is a non-trivial task due to the differences between the two sound and writing systems. Also, there is no one-to-one correspondence between Arabic sounds and English sounds. For example, P and B are both mapped to the single Arabic letter "ب", and the Arabic letters "ح" and "ه" are both mapped to English H (a small illustration follows Table 4 below).

Original text          Who is Aga Khan?
Arabic version         من يكون اجا خان؟
Translation (wrong)    From [EEjaa] betrayed?

Table 4. Incorrect use of translation when transliteration should have been used
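The many-to-one letter correspondences mentioned above are enough to show why a name rarely survives the round trip. The toy table below, in Python, uses only the pairs quoted in the text (P and B both map to the Arabic letter ب; ح and ه both map to H); it is purely illustrative and is not the mapping used by Systran or by Al-Onaizan and Knight.

```python
# Toy illustration of transliteration ambiguity: several source letters
# collapse onto one target letter, so a round trip cannot recover the
# original spelling reliably.

EN_TO_AR = {
    "P": "ب",   # Arabic has no /p/, so P falls back to the same letter as B
    "B": "ب",
}

AR_TO_EN = {
    "ب": "B",   # an original P therefore comes back as B
    "ح": "H",
    "ه": "H",   # both of these Arabic letters surface as English H
}

def round_trip(name: str) -> str:
    """Transliterate letter by letter into Arabic and back (toy mapping only)."""
    arabic = [EN_TO_AR.get(ch.upper(), ch) for ch in name]
    return "".join(AR_TO_EN.get(ch, ch) for ch in arabic)

print(round_trip("PB"))   # prints "BB": the P/B distinction is lost
```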
Recommended publications
  • Translation and the Internet: Evaluating the Quality of Free Online Machine Translators
Quaderns. Rev. trad. 17, 2010, 197-209
Translation and the Internet: Evaluating the Quality of Free Online Machine Translators
Stephen Hampshire, Universitat Autònoma de Barcelona, Facultat de Traducció i d'Interpretació, 08193 Bellaterra (Barcelona), Spain. [email protected]
Carmen Porta Salvia, Universitat de Barcelona, Facultat de Belles Arts. [email protected]

Abstract: The late 1990s saw the advent of free online machine translators such as Babelfish, Google Translate and Transtext. Professional opinion regarding the quality of the translations provided by them oscillates wildly, from «laughably bad» (Ali, 2007) to «a tremendous success» (Yang and Lange, 1998). While the literature on commercial machine translators is vast, there are only a handful of studies, mostly in blog format, that evaluate and rank free online machine translators. This paper offers a review of the most significant contributions in that field with an emphasis on two key issues: (i) the need for a ranking system; (ii) the results of a ranking system devised by the authors of this paper. Our small-scale evaluation of the performance of ten free machine translators (FMTs) in «league table» format shows what a user can expect from an individual FMT in terms of translation quality. Our rankings are a first tentative step towards allowing the user to make an informed choice as to the most appropriate FMT for his/her source text and thus produce higher FMT target text quality.
Key words: free online machine translators, evaluation, internet, ranking system.

Resum (translated from Catalan): The last decade of the twentieth century saw the introduction of free online machine translators (TOG) such as Babelfish, Google Translate and Transtext.
  • A Comparison of Free Online Machine Language Translators, by Mahesh Vanjani and Milam Aiken
Journal of Management Science and Business Intelligence, 2020, 5-1, July 2020, pages 26-31
doi: 10.5281/zenodo.3961085  http://www.ibii-us.org/Journals/JMSBI/  ISBN 2472-9264 (Online), 2472-9256 (Print)

A Comparison of Free Online Machine Language Translators
Mahesh Vanjani, Jesse H. Jones School of Business, Texas Southern University, 3100 Cleburne Street, Houston, TX 77004 (Email: [email protected])
Milam Aiken, School of Business Administration, University of Mississippi, University, MS 38677
Received on 5/15/2020; revised on 7/25/2020; published on 7/26/2020

Abstract: Automatic Language Translators, also referred to as machine translation software, automate the process of language translation without the intervention of humans. While several automated language translators are available online at no cost, there are large variations in their capabilities. This article reviews prior tests of some of these systems and provides a new and current comprehensive evaluation of the following eight: Google Translate, Bing Translator, Systran, PROMT, Babylon, WorldLingo, Yandex, and Reverso. This research could be helpful for users attempting to explore and decide which automated language translator best suits their needs.
Keywords: Automated Language Translator, Multilingual, Communication, Machine Translation

1 Introduction
Automatic Language Translators, also referred to as machine translation software, automate the process of language translation without the intervention of humans. [...] published results of prior tests and experiments, we present a new and current comprehensive evaluation of the following eight electronic automatic language translators: Google Translate, Bing Translator, Systran, PROMT, Babylon, WorldLingo, Yandex, and Reverso. This research could be helpful for users attempting to explore and decide which automated language translator best suits their needs.
  • An Evaluation of the Accuracy of Online Translation Systems
Communications of the IIMA, Volume 9, Issue 4, Article 6, 2009
An Evaluation of the Accuracy of Online Translation Systems
Milam Aiken, University of Mississippi, USA ([email protected]); Kaushik Ghosh, University of Mississippi, USA ([email protected]); John Wee, University of Mississippi, USA ([email protected]); Mahesh Vanjani, Texas Southern University, USA ([email protected])
Recommended Citation: Aiken, Milam; Ghosh, Kaushik; Wee, John; and Vanjani, Mahesh (2009) "An Evaluation of the Accuracy of Online Translation Systems," Communications of the IIMA: Vol. 9, Iss. 4, Article 6. Available at: https://scholarworks.lib.csusb.edu/ciima/vol9/iss4/6

Abstract: Until fairly recently, translation among a large variety of natural languages has been difficult and costly. Now, several free, Web-based machine translation (MT) services can provide support, but relatively little research has been conducted on their accuracies. A study of four of these services using German-to-English and Spanish-to-English translations showed that Google Translate appeared to be superior. Further study using this system alone showed that while translations were not always perfect, their understandability was quite high.
  • Trados Studio Apps/Plugins for Machine Translation Latest Update: 7 April, 2018
Trados Studio apps/plugins for machine translation
Latest update: 7 April, 2018.

For a long time now, I've been intrigued by the very large number of apps/plugins in the Studio appstore which give access – free or paid – to various types of machine translation services and facilities. Since I have lately found that the use of MT may give surprisingly good results at least for En > Sv (as well as completely useless ones), I was curious to know more about all these various options. Below is a brief overview of what I found trying to explore them to the best of my ability.

Some general observations: Several of the plugins offer to build MT engines based on your own TMs (with some of them, that's the only option) and, in some cases, also termbase data. This requires fairly big TMs – typically at least 80,000 TUs – for reasonable usefulness. This type of service of course also means full confidentiality (you are its only user). However, normally such an engine suffers the same problem as other MT engines: it is not updated while you translate; you have to rebuild it with the new TM data, and that will take some time. Newer engines, however, give the possibility of adding fragments as you go (SDL's Adaptive MT does this automatically, but only for some language combinations), but for full use of all new TUs, the engine must be rebuilt. Some plugins require you to sign up for an account, free or paid, but details about that are not always provided with the plugin/app.
  • Stefanie Geldbach: Lexicon Exchange in MT
Stefanie Geldbach
Lexicon Exchange in MT: The Long Way to Standardization

Abstract: This paper discusses the question to what extent lexicon exchange in MT has been standardized during the last years. The introductory section is followed by a brief description of OLIF2, a format specifically designed for the exchange of terminological and lexicographical data (Section 2). Section 3 contains an overview of the import/export functionalities of five MT systems (Promt Expert 7.0, Systran 5.0 Professional Premium, Translate pro 8.0, LexShop 2.2, OpenLogos). This evaluation shows that despite the standardization efforts of the last years, the exchange of lexicographical data between MT systems is still not a straightforward task.

1 Introduction
The creation and maintenance of MT lexicons is time-consuming and cost-intensive. Therefore, the development of standardized exchange formats has received considerable attention over the last years.

[...] MT systems typically follow a lemma-oriented approach for the representation of homonymy, which means that different semantic readings of one word are collapsed into one entry. The entry for Maus in the German monolexicon of LexShop 2.2 (see Section 3.4) illustrates this approach. This entry contains (among others) the following feature-value pairs:

CAN "Maus"
CAT NST
ALO "Maus"
TYN (ANI C-POT)

The feature TYN (type of noun), which indicates the semantic type of the given noun, has two values, ANI (animal) and C-POT (concrete-potent), representing two different concepts, i.e. the small rodent and the peripheral device. Term bases usually are concept-oriented, which means that different semantic readings of [...]
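The difference between the lemma-oriented MT entry and a concept-oriented term base entry can be pictured as two small data shapes. The Python sketch below only restates the feature-value pairs quoted above from LexShop 2.2; it is an illustrative rendering, not an OLIF2 or LexShop file format.

```python
# Lemma-oriented MT lexicon entry: one record for "Maus", with both semantic
# readings collapsed into the TYN (type of noun) feature, as quoted above.
maus_lemma_entry = {
    "CAN": "Maus",
    "CAT": "NST",
    "ALO": "Maus",
    "TYN": ("ANI", "C-POT"),  # ANI = animal (the rodent), C-POT = concrete-potent (the device)
}

# Concept-oriented term base: one entry per concept instead of one per lemma.
maus_concept_entries = [
    {"term": "Maus", "concept": "small rodent"},
    {"term": "Maus", "concept": "peripheral device"},
]
```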
  • Online Translation Pricing Issues
Online Translation Pricing Issues
Claire Larsonneur, University Paris 8. Claire.larsonneur@univ-paris8.fr; ORCID: 0000-0002-5129-5844

Abstract: Digital technologies such as translation platforms, crowdsourcing and neural machine translation disrupt the economics of translation. Benchmarking the pricing policies of nine global language services firms uncovers a shift towards online business models that contribute to reshaping the traditional volume-based, content-oriented model of translation towards a range of linguistic services focused on user experience.
Keywords: online translation, economics, pricing, freemium, language services, user experience, content.
  • English/Arabic/English Machine Translation: A Historical Perspective, by Muhammad Raji Zughoul and Awatef Miz'il Abu-Alshaar
Meta: Journal des traducteurs / Translators' Journal
English/Arabic/English Machine Translation: A Historical Perspective
Muhammad Raji Zughoul and Awatef Miz'il Abu-Alshaar
Special issue: Le prisme de l'histoire / The History Lens, Volume 50, number 3, August 2005
URI: https://id.erudit.org/iderudit/011612ar   DOI: https://doi.org/10.7202/011612ar
Publisher: Les Presses de l'Université de Montréal. ISSN: 0026-0452 (print), 1492-1421 (digital)

Abstract (from the French résumé): This article examines the history and development of machine translation (MT) applications for Arabic, in the context of the history of MT in general. It begins by describing the early days of MT in the United States and its decline once funding dried up, then its revival driven by globalisation, the development of information technology and the growing need to overcome language barriers, and finally the rapid progress achieved thanks to computing. The article also reviews the main approaches to MT from a historical perspective. The case of Arabic is treated in this light, taking into account the work carried out by Western research institutes and a few Western private companies. Particular emphasis is placed on the research of the Arab company Sakr, founded as early as 1982, which has developed several natural language processing products for Arabic. These various Arabic-English-Arabic MT products and related applications are presented within a historical framework.
  • Exploring Machine Translation on the Web Ana Guerberof Arenas
Exploring Machine Translation on the Web
Ana Guerberof Arenas
PhD programme in Translation and Intercultural Studies, Universitat Rovira i Virgili, Tarragona, Spain
[email protected]

Abstract: This article briefly explores machine translation on the web, its history and current research. It also briefly examines four free on-line machine translation engines in terms of language combinations offered, text length accepted and document formats supported, as well as the quality of their raw MT output.
Keywords: machine translation, free on-line service, web, internet, MT

1. Introduction
Machine Translation (MT) and the web have had a long and enduring relationship that continues to grow and consolidate. Users frequently use free machine translation on the web to understand texts coming from another language (often referred to as assimilation) or to publish a text in another language (dissemination). Moreover, they also use MT on the web to communicate instantly with other users in different languages through instant messaging. MT providers use the readily available bilingual data on the web to boost the performance of their engines in different language combinations. And finally, many language and translation teachers and students are using MT on the web to help their language learning as well as their translation and post-editing strategies. In this article we will look at free MT services offered on the web and explore their performance through a fast comparison of four different engines. The purpose of this article is not to set up a scientific experiment but to give general information on this topic.

2. Brief history
In the 1980s the MT provider Systran implemented a service in France to be used on the postal service Minitel that allowed basic paid translation of short strings in a few language combinations.
  • A Comparison of Free Online Machine Language Translators, by Mahesh Vanjani
Journal of Management Science and Business Intelligence, 2020, 5-1, July 2020, pages 26-31
doi: 10.5281/zenodo.3960835  http://www.ibii-us.org/Journals/JMSBI/  ISBN 2472-9264 (Online), 2472-9256 (Print)

A Comparison of Free Online Machine Language Translators
Mahesh Vanjani, Jesse H. Jones School of Business, Texas Southern University, 3100 Cleburne Street, Houston, TX 77004. Email: [email protected]
Received on 5/15/2020; revised on 7/25/2020; published on 7/26/2020

Abstract: Automatic Language Translators, also referred to as machine translation software, automate the process of language translation without the intervention of humans. While several automated language translators are available online at no cost, there are large variations in their capabilities. This article reviews prior tests of some of these systems and provides a new and current comprehensive evaluation of the following eight: Google Translate, Bing Translator, Systran, PROMT, Babylon, WorldLingo, Yandex, and Reverso. This research could be helpful for users attempting to explore and decide which automated language translator best suits their needs.
Keywords: Automated Language Translator, Multilingual, Communication, Machine Translation

1 Introduction
Automatic Language Translators, also referred to as machine translation software, automate the process of language translation without the intervention of humans. Text from the source language is translated to text in the target language. The most basic automatic language translators strictly rely on word-for-word substitution. [...] language translators: Google Translate, Bing Translator, Systran, PROMT, Babylon, WorldLingo, Yandex, and Reverso. This research could be helpful for users attempting to explore and decide which automated language translator best suits their needs.

2 Prior Comparisons using Human Review of Text
  • SYSTRAN-7-User-Guide.Pdf
SYSTRAN Desktop 7 User Guide
Table of Contents
Chapter 1: SYSTRAN Desktop 7 Overview (page 1)
What's New? (page 3)
SYSTRAN Desktop 7 Products Comparison (page 4)
Special Terms Used in this Guide (page 7)
About Language Translation Software (page 8)
SYSTRAN Support (page 8)
Symbols (page 8)
Tips (page 8)
Notes (page 8)
Cautions (page 8)
Typographic Conventions (page 8)
Menu, Command, and Button Names (page 8)
Filenames and Items You Type
  • 2003 Q1 Revenue Release
www.systransoft.com
Leading Provider of Information and Translation Technologies

2003 Q1 Revenue Release

April 23, 2003 -- SYSTRAN (Bloomberg: SYST NM, Reuters: SYTN.LN, Code Euroclear Paris: 7729) today announced consolidated revenue for the first quarter ended March 31, 2003, representing a 51,3 percent increase over revenue for the same quarter last year.

In K€                            2003   As % of total   2002   As % of total   Variation 2003/2002
Software Publishing             1 188       47 %         981       59 %            + 21,1 %
  Home & Small Business (HSB)     171        7 %          59        4 %            + 189,8 %
  Corporate                       314       12 %         432       26 %            (27,3 %)
  Resellers                       400       16 %         244       15 %            + 63,9 %
  Online sales                    303       12 %         245       15 %            + 23,7 %
Professional Services           1 333       53 %         685       41 %            + 94,6 %
  Corporate                       661       26 %         238       14 %            + 177,7 %
  Administrations                 179        7 %         351       21 %            (49,0 %)
  Co-Funded                       493       20 %          96        6 %            + 413,5 %
Consolidated Revenue            2 521      100 %       1 666      100 %            + 51,3 %

Revenue for the first quarter of 2003 amounts to 2,5 M€ compared with 1,5 M€ for the first quarter of 2002 on the basis of the current scope of consolidation, reflecting an increase of 66,7 % in sales.

Increase of consolidated revenue of 51,3 %
Software Publishing revenue rose 21,1 % compared with the first quarter of 2002, due to the increase of sales by download and the Resellers' activity. Professional Services are expanding and revenue is up 94,6 % compared with the first quarter of 2002, due to orders received in the second semester of 2002.

Corporate sales continue to grow
Corporate sales (licenses and professional services) increased by 28,6 % to 0,9 M€ during the first quarter of 2003, compared with 0,7 M€ for the same reporting period last year.
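The variation column follows directly from the quarterly figures. The short Python check below recomputes it from the 2003 and 2002 amounts in K€ reported above; it is only a verification sketch, not part of the release.

```python
# Recompute the year-on-year variation column from the reported K€ figures.
revenue = {  # line item: (Q1 2003, Q1 2002), in K€
    "Software Publishing":   (1188,  981),
    "Professional Services": (1333,  685),
    "Consolidated Revenue":  (2521, 1666),
}

for item, (q1_2003, q1_2002) in revenue.items():
    variation = (q1_2003 / q1_2002 - 1) * 100
    print(f"{item}: {variation:+.1f} %")   # e.g. Consolidated Revenue: +51.3 %
```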
  • From Bible to Babelfish: the Evolution of Translation
The Babel fish is small, yellow, and leechlike, and probably the oddest thing in the Universe. It feeds on brainwave energy received not from its own carrier but from those around it. It absorbs all unconscious mental frequencies from this brainwave energy to nourish itself with. It then excretes into the mind of its carrier a telepathic matrix formed by combining the conscious thought frequencies with nerve signals picked up from the speech center of the brain which has supplied them. The practical upshot of all this is that if you stick a Babel fish in your ear you can instantly understand anything said to you in any form of language.
-Douglas Adams, The Hitchhiker's Guide to the Galaxy

Introduction
Translation, according to scholar, author, and Italian translator Lawrence Venuti, is "the rewriting of an original text" in another language (Venuti, Invisibility 1). The methods of rewriting that text – whether to translate literally to preserve the exact words, or to translate the sense of the text to preserve the ideas of the original author, to preserve cultural origins and foreignness of the text or to completely domesticate it – have been matters of a heated and occasionally deadly debate that began with Cicero in 55 B.C. and continues to the present day. Some theorists, such as Boethius in the sixth century, argue in favor of an excessively literal translation method; others, such as Cicero, Martin Luther, and Ezra Pound, support a freer method of translation. Regarding literary translation, some, such as Gregory Rabassa and Margaret Peden, wish to preserve the style and vocabulary of the original author in the translation, while some, like John Dryden, try to improve upon the text, adding their own signature style and ideas to the translated version.