Parallel Corpora, Terminology Extraction Und Machine Translation

Total Pages: 16

File Type: PDF, Size: 1020 KB

Parallel Corpora, Terminology Extraction und Machine Translation

Zurich Open Repository and Archive, University of Zurich, Main Library, Strickhofstrasse 39, CH-8057 Zurich, www.zora.uzh.ch
Year: 2018

Parallel Corpora, Terminology Extraction and Machine Translation
Volk, Martin

Abstract: In this paper we first give an overview of parallel corpus annotation, alignment and retrieval. We present standard annotation methods such as Part-of-Speech tagging, lemmatization and dependency parsing, but we also introduce language-specific methods, e.g. for dealing with split verbs or truncated compounds in German. We argue for careful sentence and word alignment for parallel corpora. And we explain how word alignment is the basis for a wide range of applications from translation variant ranking to terminology extraction. We conclude with a discussion of the latest developments in Machine Translation.

Posted at the Zurich Open Repository and Archive, University of Zurich. ZORA URL: https://doi.org/10.5167/uzh-150769. Conference or Workshop Item, Published Version.

Originally published at: Volk, Martin (2018). Parallel Corpora, Terminology Extraction and Machine Translation. In: 16. DTT-Symposion. Terminologie und Text(e), Mannheim, 22 March 2018 - 24 March 2018, 3-14.

Parallel Corpora, Terminology Extraction und Machine Translation
Martin Volk
University of Zurich, Institute of Computational Linguistics
[email protected]

Abstract

In this paper we first give an overview of parallel corpus annotation, alignment and retrieval. We present standard annotation methods such as Part-of-Speech tagging, lemmatization and dependency parsing, but we also introduce language-specific methods, e.g. for dealing with split verbs or truncated compounds in German. We argue for careful sentence and word alignment for parallel corpora. And we explain how word alignment is the basis for a wide range of applications from translation variant ranking to terminology extraction. We conclude with a discussion of the latest developments in Machine Translation.

1 Introduction

In recent years an increasing number of large parallel corpora have become available for research in natural language processing. The best known is Europarl with the proceedings of the European parliament (Koehn, 2005; Graën et al., 2014) with around 50 million tokens in the languages of the European Union. Other well known multilingual and multiparallel corpora are JRC Acquis (Steinberger et al., 2006) with the EU law collection, OpenSubtitles (Lison and Tiedemann, 2016), United Nations documents (Ziemski et al., 2016), and collections of patent applications (Junczys-Dowmunt et al., 2016b).

Switzerland is a country with four official languages (French, German, Italian and Rumansh) and because of the many international companies and organizations in Switzerland English is becoming ever more popular. Therefore there is a constant need for translations between these languages and this is a natural basis for a plethora of parallel corpora. We have taken advantage of this situation and collected and annotated various Swiss parallel corpora.

Our research group specializes in building parallel corpora for special domains which span over time: We have digitized parallel texts from the Swiss Alpine Club in French and German from 1957 until today (Göhring and Volk, 2011), banking texts in English, French, German and Italian from 1895 up to the present (Volk et al., 2016a), and the announcements of the Swiss federal government (DE: Bundesblatt, FR: Feuille fédérale, IT: Foglio federale) since 1849.

In the current paper we will focus on the latest methods in the automatic annotation and alignment of parallel corpora. We will argue that word alignment across languages improves annotation. We focus on parallel corpora for linguistic and translation studies, but we believe that parallel corpus search systems are also interesting for language learners for viewing translation variants in context and for terminologists who want to extract terms or verify domain-specific language usage.

The paper is structured as follows. In section 2 we describe corpus annotation methods. Section 3 is devoted to alignment techniques and their benefits for corpus annotation such as word sense disambiguation with practical applications in lemma disambiguation and named entity recognition. In section 4 we give usage examples of our parallel corpora including translation discovery and translation error detection. Section 5 describes the latest developments in using parallel corpora for machine translation.

[Figure 1: A Multilingwis query with hits in six Europarl languages]

2 Corpus Annotations

Corpus building starts with corpus collection, cleaning and tokenization. The latter is often language-specific and therefore multilingual corpora require language identification. Typically identification is done on the sentence level. For each sentence we compute the language in order to be able to use the appropriate processing tools during corpus annotation. Using, for instance, a Part-of-Speech tagger that was trained for one language, for the annotation of a sentence in another language will give erratic results. Therefore language identification on the sentence level is of paramount importance for all texts with mixed languages.
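A minimal sketch of this per-sentence routing step is given below. It assumes the off-the-shelf langid package; the paper does not prescribe a particular language identifier, and the sample sentences are invented.

```python
# Sketch: route each sentence to the annotation pipeline of its own language.
# Assumes the open-source langid package (pip install langid); this is an
# illustrative choice, not the tool used in the paper.
import langid

# Restrict the identifier to the languages expected in a Swiss parallel corpus.
langid.set_languages(["de", "fr", "it", "en"])

def split_by_language(sentences):
    """Group sentences by detected language so that each group can be
    PoS-tagged and lemmatized with language-specific models."""
    groups = {}
    for sent in sentences:
        lang, _score = langid.classify(sent)
        groups.setdefault(lang, []).append(sent)
    return groups

if __name__ == "__main__":
    sample = [
        "Der Bundesrat hat die Vorlage verabschiedet.",
        "Le Conseil fédéral a adopté le projet.",
        "The Federal Council adopted the bill.",
    ]
    for lang, sents in split_by_language(sample).items():
        print(lang, sents)
```

Grouping by detected language lets each group be handed to the tagger and lemmatizer trained for that language, which avoids the erratic results mentioned above.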
2.1 General Corpus Annotation

Part-of-Speech tagging is standard procedure when the corpora are meant for linguistic research. There are a number of PoS taggers available with parameter files for many large languages of Europe. Most of the time the parameter files are the result of training the taggers on newspaper texts. This means that the taggers work best on newspaper texts and gradually worse the more the corpus material differs from newspapers.

Often PoS tagging also provides lemmas. For example, the TreeTagger outputs lemmas for word form - lemma pairs that it has seen in its training corpus. For all other word forms the corpus builder may provide a tagger lexicon with additional pairs. These pairs lack the probabilities that tagger training derives from a manually annotated corpus, but for word forms with little or no PoS ambiguity the extension of the tagger lexicon is still useful. So, any additional lemma information from other corpora or from dictionaries or morphological analyzers is valuable.

Recent parallel corpora have been annotated with more annotation layers: Named entity recognition (NER) is a popular method for a first step towards semantic annotation. Typically it involves the recognition of person names, location names and organization names, which are the central classes when processing newspaper texts. Special text types may require other name classes (e.g. event names) or more fine-grained distinctions. For example, in our parallel corpus of Alpine texts we sub-classify toponyms into the name classes of mountains, glaciers, lakes, valleys, cabins, and cities. Toponyms are essential for the mountaineering reports and therefore very frequent.

Shallow NER includes only the recognition and classification of names. A deeper analysis includes co-reference resolution (Ebling et al., 2011) and entity linking (sometimes called grounding). Monolingual co-reference resolution will deal with mention variants like Grand Combin = Combin, while multilingual co-references will catch translation variants as e.g. DE: Matterhorn = FR: Combin = IT: Combino. Monolingual co-references might include anaphora resolution and will thus allow for investigating coherence phenomena in texts.

Lately, dependency parsing has become available for many languages (e.g. Maltparser, Spacy, Stanford, ...). These parsers allow for the efficient analysis of large corpora with a labeled attachment score of 80-90% (McDonald and Nivre, 2011) and higher values for unlabeled attachment. Even though parsing is far from perfect, the automatically assigned syntax information opens a whole new chapter for corpus studies. For example, searches for verb-object relations no longer need to speculate on co-occurrence in some arbitrary range, but can be conditioned on parsing evidence. In this way, we find candidates for support verb constructions like to take into consideration or for verb sub-categorizations for particular prepositional objects like to wait for.

2.2 Language-specific Corpus Annotation

In addition to these general [...] to strike, to notice) (Volk et al., 2016b).

(1) Selber fällt mir der kleine Fehler aber kaum auf.
    EN: However I do not notice the little mistake.

Our re-attachment algorithm is based on PoS tags and re-attaches the separated prefix to the most recent preceding finite verb form when this results in a valid German prefix verb (from a manually curated list of about 8000 such verbs). It works with 96.8% precision when evaluated against manually re-attached prefixes in the TüBa-D/Z treebank.
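The re-attachment heuristic can be sketched roughly as follows. This is not the authors' implementation: the STTS tags it relies on (PTKVZ for separated prefixes, VVFIN/VAFIN/VMFIN for finite verbs) are standard, but the tiny verb set and the tagged example below merely stand in for the curated list of about 8000 prefix verbs mentioned above.

```python
# Sketch of PoS-based prefix re-attachment for German separable verbs,
# following the idea described above. The curated list of valid prefix verbs
# is stubbed with a tiny example set.
VALID_PREFIX_VERBS = {"auffallen", "aufgeben", "anfangen"}   # stand-in for ~8000 entries
FINITE_VERB_TAGS = {"VVFIN", "VAFIN", "VMFIN"}               # STTS finite verb tags

def reattach_prefixes(tagged_sentence):
    """tagged_sentence: list of (token, pos, lemma) triples.
    Returns the triples with re-attached lemmas for separated prefix verbs."""
    tokens = list(tagged_sentence)
    for i, (tok, pos, _lemma) in enumerate(tokens):
        if pos != "PTKVZ":                     # separated prefix, e.g. "auf"
            continue
        # Scan backwards for the most recent preceding finite verb form.
        for j in range(i - 1, -1, -1):
            v_tok, v_pos, v_lemma = tokens[j]
            if v_pos in FINITE_VERB_TAGS:
                candidate = tok.lower() + v_lemma          # e.g. "auf" + "fallen"
                if candidate in VALID_PREFIX_VERBS:        # only attach valid prefix verbs
                    tokens[j] = (v_tok, v_pos, candidate)
                break
    return tokens

# Example (1) from the text: "Selber fällt mir der kleine Fehler aber kaum auf."
sentence = [
    ("Selber", "ADV", "selber"), ("fällt", "VVFIN", "fallen"), ("mir", "PPER", "ich"),
    ("der", "ART", "die"), ("kleine", "ADJA", "klein"), ("Fehler", "NN", "Fehler"),
    ("aber", "ADV", "aber"), ("kaum", "ADV", "kaum"), ("auf", "PTKVZ", "auf"),
    (".", "$.", "."),
]
print(reattach_prefixes(sentence))   # lemma of "fällt" becomes "auffallen"
```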
2.3 Exploiting Parallel Corpora for Annotation

Traditionally most corpus annotation is done monolingually. This means that PoS tagging, lemmatization and parsing of e.g. a German corpus is done irrespective of a parallel text in English or any other language. However the parallel text may help to disambiguate and thus to improve the annotation precision on many levels. Most obviously the parallel corpus may help to determine the correct word sense in a given sentence. For example, the word Mönch in our Alpine corpus may refer to a prominent mountain in central Switzerland or to a monk (= male person in a monastery). If the corresponding sentence in the English or French translation also contains the word Mönch, then it is clear that
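As a toy illustration of this disambiguation idea, the sketch below checks whether the aligned English sentence keeps the German word unchanged (a strong hint that it is the mountain name) or translates it (the common noun reading). The sentence pairs are invented for illustration.

```python
# Toy sketch of the sense disambiguation idea from section 2.3: if the aligned
# English sentence keeps the word "Mönch", it is likely the mountain name;
# if it translates it (e.g. as "monk"), it is the common noun.
def moench_sense(de_sentence, en_sentence):
    if "Mönch" in en_sentence:
        return "TOPONYM (mountain)"
    if "monk" in en_sentence.lower():
        return "COMMON NOUN (monk)"
    return "UNKNOWN"

pairs = [
    ("Wir bestiegen den Mönch bei bestem Wetter.",
     "We climbed the Mönch in perfect weather."),
    ("Ein Mönch öffnete uns das Klostertor.",
     "A monk opened the monastery gate for us."),
]
for de, en in pairs:
    print(moench_sense(de, en), "<-", de)
```

In practice such a check would be applied to word-aligned sentence pairs rather than raw strings, which is exactly the kind of application of word alignment the paper argues for.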
Recommended publications
  • Integrating Optical Character Recognition and Machine Translation of Historical Documents
    Integrating Optical Character Recognition and Machine Translation of Historical Documents. Haithem Afli and Andy Way, ADAPT Centre, School of Computing, Dublin City University, Dublin, Ireland. {haithem.afli, andy.way}@adaptcentre.ie

    Abstract: Machine Translation (MT) plays a critical role in expanding capacity in the translation industry. However, many valuable documents, including digital documents, are encoded in non-accessible formats for machine processing (e.g., Historical or Legal documents). Such documents must be passed through a process of Optical Character Recognition (OCR) to render the text suitable for MT. No matter how good the OCR is, this process introduces recognition errors, which often renders MT ineffective. In this paper, we propose a new OCR to MT framework based on adding a new OCR error correction module to enhance the overall quality of translation. Experimentation shows that our new system correction based on the combination of Language Modeling and Translation methods outperforms the baseline system by nearly 30% relative improvement.

    1 Introduction. While research on improving Optical Character Recognition (OCR) algorithms is ongoing, our assessment is that Machine Translation (MT) will continue to produce unacceptable translation errors (or non-translations) based solely on the automatic output of OCR systems. The problem comes from the fact that current OCR and Machine Translation systems are commercially distinct and separate technologies. There are often mistakes in the scanned texts as the OCR system occasionally misrecognizes letters and falsely identifies scanned text, leading to misspellings and linguistic errors in the output text (Niklas, 2010). Works involved in improving translation services purchase off-the-shelf OCR technology but have limited capability to adapt the OCR processing to improve overall machine translation performance.
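    As a rough illustration of the language-modelling side of such an OCR error-correction module (the system described above additionally combines it with translation-based methods), one can re-rank spelling-variant candidates of suspicious OCR tokens by corpus frequency. The vocabulary and counts below are toy data.

```python
# Rough illustration of OCR post-correction with a language model: for each
# token not in the vocabulary, pick the most frequent close spelling variant.
# Toy data only; the actual system described above is more sophisticated.
from difflib import get_close_matches

UNIGRAM_COUNTS = {"translation": 120, "historical": 80, "documents": 95, "machine": 150}

def correct_token(token):
    if token in UNIGRAM_COUNTS:
        return token
    candidates = get_close_matches(token, list(UNIGRAM_COUNTS), n=3, cutoff=0.75)
    # Rank surviving candidates by unigram frequency.
    return max(candidates, key=UNIGRAM_COUNTS.get) if candidates else token

print([correct_token(t) for t in "machlne translatlon of histor1cal documents".split()])
```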
  • Champollion: a Robust Parallel Text Sentence Aligner
    Champollion: A Robust Parallel Text Sentence Aligner. Xiaoyi Ma, Linguistic Data Consortium, 3600 Market St. Suite 810, Philadelphia, PA 19104. [email protected]

    Abstract: This paper describes Champollion, a lexicon-based sentence aligner designed for robust alignment of potential noisy parallel text. Champollion increases the robustness of the alignment by assigning greater weights to less frequent translated words. Experiments on a manually aligned Chinese – English parallel corpus show that Champollion achieves high precision and recall on noisy data. Champollion can be easily ported to new language pairs. It's freely available to the public.

    1. Introduction. Parallel text is a very valuable resource for a number of natural language processing tasks, including machine translation (Brown et al. 1993; Vogel and Tribble 2002; Yamada and Knight 2001), cross language information retrieval, and word disambiguation. Parallel text provides the maximum utility when it is sentence aligned. The sentence alignment process maps sentences in the source text to their translation. The labor intensive and time consuming nature of manual sentence alignment makes large parallel text corpus development difficult. [...] framework to find the maximum likelihood alignment of sentences. The length based approach works remarkably well on language pairs with high length correlation, such as French and English. Its performance degrades quickly, however, when the length correlation breaks down, such as in the case of Chinese and English. Even with language pairs with high length correlation, the Gale-Church algorithm may fail at regions that contain many sentences with similar length. A number of algorithms, such as (Wu 1994), try to overcome the weaknesses of length based approaches by utilizing lexical
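    The weighting idea can be illustrated with a toy scorer that rewards a candidate sentence pair for containing translated word pairs, giving rarer words more weight. This is not Champollion's actual scoring function; the mini-lexicon and frequencies below are invented.

```python
# Toy illustration of lexicon-based alignment scoring: translated word pairs
# contribute to a sentence-pair score, and rarer words contribute more.
import math

LEXICON = {"bank": "banque", "mountain": "montagne", "the": "le"}
FREQ = {"bank": 40, "mountain": 15, "the": 5000}    # toy corpus frequencies
TOTAL = 10000                                       # toy corpus size

def pair_score(en_sentence, fr_sentence):
    en_words = set(en_sentence.lower().split())
    fr_words = set(fr_sentence.lower().split())
    score = 0.0
    for en, fr in LEXICON.items():
        if en in en_words and fr in fr_words:
            score += math.log(TOTAL / FREQ[en])     # rare words weigh more
    return score

print(pair_score("the bank near the mountain", "la banque près de la montagne"))
```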
  • Machine Translation for Academic Purposes Grace Hui-Chin Lin
    Proceedings of the International Conference on TESOL and Translation 2009 December 2009: pp.133-148 Machine Translation for Academic Purposes Grace Hui-chin Lin PhD Texas A&M University College Station Master of Science, University of Southern California Paul Shih Chieh Chien Center for General Education, Taipei Medical University Abstract Due to the globalization trend and knowledge boost in the second millennium, multi-lingual translation has become a noteworthy issue. For the purposes of learning knowledge in academic fields, Machine Translation (MT) should be noticed not only academically but also practically. MT should be informed to the translating learners because it is a valuable approach to apply by professional translators for diverse professional fields. For learning translating skills and finding a way to learn and teach through bi-lingual/multilingual translating functions in software, machine translation is an ideal approach that translation trainers, translation learners, and professional translators should be familiar with. In fact, theories for machine translation and computer assistance had been highly valued by many scholars. (e.g., Hutchines, 2003; Thriveni, 2002) Based on MIT’s Open Courseware into Chinese that Lee, Lin and Bonk (2007) have introduced, this paper demonstrates how MT can be efficiently applied as a superior way of teaching and learning. This article predicts the translated courses utilizing MT for residents of global village should emerge and be provided soon in industrialized nations and it exhibits an overview about what the current developmental status of MT is, why the MT should be fully applied for academic purpose, such as translating a textbook or teaching and learning a course, and what types of software can be successfully applied.
  • A Comparison of Knowledge Extraction Tools for the Semantic Web
    A Comparison of Knowledge Extraction Tools for the Semantic Web. Aldo Gangemi, (1) LIPN, Université Paris 13, CNRS, Sorbonne Cité, France; (2) STLab, ISTC-CNR, Rome, Italy.

    Abstract. In the last years, basic NLP tasks: NER, WSD, relation extraction, etc. have been configured for Semantic Web tasks including ontology learning, linked data population, entity resolution, NL querying to linked data, etc. Some assessment of the state of art of existing Knowledge Extraction (KE) tools when applied to the Semantic Web is then desirable. In this paper we describe a landscape analysis of several tools, either conceived specifically for KE on the Semantic Web, or adaptable to it, or even acting as aggregators of extracted data from other tools. Our aim is to assess the currently available capabilities against a rich palette of ontology design constructs, focusing specifically on the actual semantic reusability of KE output.

    1 Introduction. We present a landscape analysis of the current tools for Knowledge Extraction from text (KE), when applied on the Semantic Web (SW). Knowledge Extraction from text has become a key semantic technology, and has become key to the Semantic Web as well (see e.g. [31]). Indeed, interest in ontology learning is not new (see e.g. [23], which dates back to 2001, and [10]), and an advanced tool like Text2Onto [11] was set up already in 2005. However, interest in KE was initially limited in the SW community, which preferred to concentrate on manual design of ontologies as a seal of quality. Things started changing after the linked data bootstrapping provided by DBpedia [22], and the consequent need for substantial population of knowledge bases, schema induction from data, natural language access to structured data, and in general all applications that make joint exploitation of structured and unstructured content.
  • Machine Translation for Language Preservation
    Machine translation for language preservation. Steven Bird (1,2), David Chiang (3). (1) Department of Computing and Information Systems, University of Melbourne; (2) Linguistic Data Consortium, University of Pennsylvania; (3) Information Sciences Institute, University of Southern California. [email protected], [email protected]

    Abstract: Statistical machine translation has been remarkably successful for the world's well-resourced languages, and much effort is focussed on creating and exploiting rich resources such as treebanks and wordnets. Machine translation can also support the urgent task of documenting the world's endangered languages. The primary object of statistical translation models, bilingual aligned text, closely coincides with interlinear text, the primary artefact collected in documentary linguistics. It ought to be possible to exploit this similarity in order to improve the quantity and quality of documentation for a language. Yet there are many technical and logistical problems to be addressed, starting with the problem that, for most of the languages in question, no texts or lexicons exist. In this position paper, we examine these challenges, and report on a data collection effort involving 15 endangered languages spoken in the highlands of Papua New Guinea.

    Keywords: endangered languages, documentary linguistics, language resources, bilingual texts, comparative lexicons.

    Proceedings of COLING 2012: Posters, pages 125-134, COLING 2012, Mumbai, December 2012.

    1 Introduction. Most of the world's 6800 languages are relatively unstudied, even though they are no less important for scientific investigation than major world languages. For example, before Hixkaryana (Carib, Brazil) was discovered to have object-verb-subject word order, it was assumed that this word order was not possible in a human language, and that some principle of universal grammar must exist to account for this systematic gap (Derbyshire, 1977).
  • A Combined Method for E-Learning Ontology Population Based on NLP and User Activity Analysis
    A Combined Method for E-Learning Ontology Population based on NLP and User Activity Analysis. Dmitry Mouromtsev, Fedor Kozlov, Liubov Kovriguina and Olga Parkhimovich, ITMO University, St. Petersburg, Russia. [email protected], [email protected], [email protected], [email protected]

    Abstract. The paper describes a combined approach to maintaining an E-Learning ontology in dynamic and changing educational environment. The developed NLP algorithm based on morpho-syntactic patterns is applied for terminology extraction from course tasks that allows to interlink extracted terms with the instances of the system's ontology whenever some educational materials are changed. These links are used to gather statistics, evaluate quality of lectures' and tasks' materials, analyse students' answers to the tasks and detect difficult terminology of the course in general (for the teachers) and its understandability in particular (for every student).

    Keywords: Semantic Web, Linked Learning, terminology extraction, education, educational ontology population

    1 Introduction. Nowadays reusing online educational resources becomes one of the most promising approaches for e-learning systems development. A good example of using semantics to make education materials reusable and flexible is the SlideWiki system [1]. The key feature of an ontology-based e-learning system is the possibility for tutors and students to treat elements of educational content as named objects and named relations between them. These names are understandable both for humans as titles and for the system as types of data. Thus educational materials in the e-learning system thoroughly reflect the structure of education process via relations between courses, modules, lectures, tests and terms.
  • The Web As a Parallel Corpus
    The Web as a Parallel Corpus. Philip Resnik (University of Maryland), Noah A. Smith (Johns Hopkins University).

    Parallel corpora have become an essential resource for work in multilingual natural language processing. In this article, we report on our work using the STRAND system for mining parallel text on the World Wide Web, first reviewing the original algorithm and results and then presenting a set of significant enhancements. These enhancements include the use of supervised learning based on structural features of documents to improve classification performance, a new content-based measure of translational equivalence, and adaptation of the system to take advantage of the Internet Archive for mining parallel text from the Web on a large scale. Finally, the value of these techniques is demonstrated in the construction of a significant parallel corpus for a low-density language pair.

    1. Introduction. Parallel corpora, bodies of text in parallel translation, also known as bitexts, have taken on an important role in machine translation and multilingual natural language processing. They represent resources for automatic lexical acquisition (e.g., Gale and Church 1991; Melamed 1997), they provide indispensable training data for statistical translation models (e.g., Brown et al. 1990; Melamed 2000; Och and Ney 2002), and they can provide the connection between vocabularies in cross-language information retrieval (e.g., Davis and Dunning 1995; Landauer and Littman 1990; see also Oard 1997). More recently, researchers at Johns Hopkins University and the
  • Information Extraction Using Natural Language Processing
    INFORMATION EXTRACTION USING NATURAL LANGUAGE PROCESSING Cvetana Krstev University of Belgrade, Faculty of Philology Information Retrieval and/vs. Natural Language Processing So close yet so far Outline of the talk ◦ Views on Information Retrieval (IR) and Natural Language Processing (NLP) ◦ IR and NLP in Serbia ◦ Language Resources (LT) in the core of NLP ◦ at University of Belgrade (4 representative resources) ◦ LR and NLP for Information Retrieval and Information Extraction (IE) ◦ at University of Belgrade (4 representative applications) Wikipedia ◦ Information retrieval ◦ Information retrieval (IR) is the activity of obtaining information resources relevant to an information need from a collection of information resources. Searches can be based on full-text or other content-based indexing. ◦ Natural Language Processing ◦ Natural language processing is a field of computer science, artificial intelligence, and computational linguistics concerned with the interactions between computers and human (natural) languages. As such, NLP is related to the area of human–computer interaction. Many challenges in NLP involve: natural language understanding, enabling computers to derive meaning from human or natural language input; and others involve natural language generation. Experts ◦ Information Retrieval ◦ As an academic field of study, Information Retrieval (IR) is finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collection (usually stored on computers). ◦ C. D. Manning, P. Raghavan, H. Schutze, “Introduction to Information Retrieval”, Cambridge University Press, 2008 ◦ Natural Language Processing ◦ The term ‘Natural Language Processing’ (NLP) is normally used to describe the function of software or hardware components in computer system which analyze or synthesize spoken or written language.
  • Machine Translation and Computer-Assisted Translation: a New Way of Translating? Author: Olivia Craciunescu E-Mail: Olivia [email protected]
    Machine Translation and Computer-Assisted Translation: a New Way of Translating? Authors: Olivia Craciunescu ([email protected]), Constanza Gerding-Salas ([email protected]), Susan Stringer-O'Keeffe ([email protected]). Source: http://www.translationdirectory.com

    The Need for Translation Technology. Advances in information technology (IT) have combined with modern communication requirements to foster translation automation. The history of the relationship between technology and translation goes back to the beginnings of the Cold War, as in the 1950s competition between the United States and the Soviet Union was so intensive at every level that thousands of documents were translated from Russian to English and vice versa. However, such high demand revealed the inefficiency of the translation process, above all in specialized areas of knowledge, increasing interest in the idea [...]

    IT has produced a screen culture that tends to replace the print culture, with printed documents being dispensed with and information being accessed and relayed directly through computers (e-mail, databases and other stored information). These computer documents are instantly available and can be opened and processed with far greater flexibility than printed matter, with the result that the status of information itself has changed, becoming either temporary or permanent according to need. Over the last two decades we have witnessed the enormous growth of information technology with the accompanying advantages of speed, visual
  • Language Service Translation SAVER Technote
    TechNote, August 2020. LANGUAGE TRANSLATION APPLICATIONS. AEL Number: 09ME-07-TRAN

    The U.S. Department of Homeland Security (DHS) established the System Assessment and Validation for Emergency Responders (SAVER) Program to help emergency responders improve their procurement decisions. Located within the Science and Technology Directorate, the National Urban Security Technology Laboratory (NUSTL) manages the SAVER Program and conducts objective operational assessments of commercial equipment and systems relevant to the emergency responder community. The SAVER Program gathers and reports information about equipment that falls within the categories listed in the DHS Authorized Equipment List.

    Language translation equipment used by emergency services has evolved into commercially available, mobile device applications (apps). The wide adoption of cell phones has enabled language translation applications to be useful in emergency scenarios where first responders may interact with individuals who speak a different language. Most language translation applications can identify a spoken language and translate in real-time. Applications vary in functionality, such as the ability to translate voice or text offline, and are unique in the number of available translatable languages. These applications are useful tools to remedy communication barriers in any emergency response scenario.

    Overview. During an emergency response, language barriers can challenge a first responder's ability to quickly assess and respond to a situation. First responders must be able to accurately communicate with persons at an emergency scene in order to provide a prompt, appropriate response. The quick translations provided by mobile apps remove the language barrier and are essential to the safety of both first responders and civilians with whom they interact. Before the adoption of smartphones, first responders used language interpreters and later language translation equipment such as hand-held language translation devices (Figure 1).
  • Terminology Extraction, Translation Tools and Comparable Corpora Helena Blancafort, Béatrice Daille, Tatiana Gornostay, Ulrich Heid, Claude Méchoulam, Serge Sharoff
    TTC: Terminology Extraction, Translation Tools and Comparable Corpora. Helena Blancafort, Béatrice Daille, Tatiana Gornostay, Ulrich Heid, Claude Méchoulam, Serge Sharoff.

    To cite this version: Helena Blancafort, Béatrice Daille, Tatiana Gornostay, Ulrich Heid, Claude Méchoulam, et al.. TTC: Terminology Extraction, Translation Tools and Comparable Corpora. 14th EURALEX International Congress, Jul 2010, Leeuwarden/Ljouwert, Netherlands. pp.263-268. HAL Id: hal-00819365, https://hal.archives-ouvertes.fr/hal-00819365, submitted on 2 May 2013.

    HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

    TTC: Terminology Extraction, Translation Tools and Comparable Corpora. Helena Blancafort, Syllabs, Universitat Pompeu Fabra, Barcelona, Spain; Béatrice Daille, LINA, Université de Nantes, France; Tatiana Gornostay, Tilde, Latvia; Ulrich Heid, IMS, Universität Stuttgart, Germany; Claude Méchoulam, Sogitec, France; Serge Sharoff, CTS, University of Leeds, England.

    The need for linguistic resources in any natural language application is undeniable. Lexicons and terminologies
  • Acknowledging the Needs of Computer-Assisted Translation Tools Users: the Human Perspective in Human-Machine Translation Annemarie Taravella and Alain O
    The Journal of Specialised Translation Issue 19 – January 2013 Acknowledging the needs of computer-assisted translation tools users: the human perspective in human-machine translation AnneMarie Taravella and Alain O. Villeneuve, Université de Sherbrooke, Quebec ABSTRACT The lack of translation specialists poses a problem for the growing translation markets around the world. One of the solutions proposed for the lack of human resources is automated translation tools. In the last few decades, organisations have had the opportunity to increase their use of technological resources. However, there is no consensus on the way that technological resources should be integrated into translation service providers (TSP). The approach taken by this article is to set aside both 100% human translation and 100% machine translation (without human intervention), to examine a third, more realistic solution: interactive translation where humans and machines co-operate. What is the human role? Based on the conceptual framework of information systems and organisational sciences, we recommend giving users, who are mainly translators for whom interactive translation tools are designed, a fundamental role in the thinking surrounding the implementation of a technological tool. KEYWORDS Interactive translation, information system, implementation, business process reengineering, organisational science. 1. Introduction Globalisation and the acceleration of world trade operations have led to an impressive growth of the global translation services market. According to EUATC (European Union of Associations of Translation Companies), translation industry is set to grow around five percent annually in the foreseeable future (Hager, 2008). With its two official languages, Canada is a good example of an important translation market, where highly skilled professional translators serve not only the domestic market, but international clients as well.