Sensible: L2 Translation Assistance by Emulating the Manual Post-Editing Process

Liling Tan, Anne-Kathrin Schumann, Jose M.M. Martinez and Francis Bond¹
Universität des Saarlandes, Saarbrücken, Germany
¹Nanyang Technological University, 14 Nanyang Drive, Singapore
[email protected], [email protected], [email protected], [email protected]

Abstract

This paper describes the Post-Editor Z system submitted to the L2 writing assistant task in SemEval-2014. The aim of the task is to build a translation assistance system that translates untranslated sentence fragments. This is not unlike the task of post-editing, in which human translators improve machine-generated translations. Post-Editor Z emulates the manual process of post-editing by (i) crawling and extracting parallel sentences that contain the untranslated fragments from a web-based translation memory, (ii) extracting the possible translations of the fragments indexed by the translation memory and (iii) applying simple cosine-based sentence similarity to rank the possible translations for the untranslated fragment.

1 Introduction

In this paper, we present a collaborative submission between Saarland University and Nanyang Technological University to the L2 Translation Assistant task in SemEval-2014. Our team name is Sensible and the participating system is Post-Editor Z (PEZ).

The L2 Translation Assistant task concerns the translation of an untranslated fragment within a partially translated sentence. For instance, given the sentence "Ich konnte Bärbel noch on the border in einen letzten S-Bahn-Zug nach Westberlin setzen.", the aim is to provide an appropriate translation for the untranslated phrase "on the border", i.e. an der Grenze.

The aim of the task is not unlike the task of post-editing, where human translators correct errors in machine-generated translations. The main difference is that in the context of post-editing the source text is provided. A translation workflow that incorporates post-editing begins with a source sentence, e.g. "I could still sit on the border in the very last tram to West Berlin.", and the human translator is provided with a machine-generated translation containing untranslated fragments, as in the previous example; sometimes "fixing" the translation would simply require substituting the appropriate translation for the untranslated fragment.

2 Related Tasks and Previous Approaches

The L2 writing assistant task lies between machine translation and crosslingual word sense disambiguation (CLWSD) or crosslingual lexical substitution (CLS) (Lefever and Hoste, 2013; Mihalcea et al., 2010).

While CLWSD systems resolve the correct semantics of the translation by providing the correct lemma in the target language, CLS also attempts to provide the correct form of the translation, with the right morphology. Machine translation tasks focus on producing translations of whole sentences/documents, while crosslingual word sense disambiguation targets a single lexical item.

Previously, CLWSD systems have tried distributional semantics and string matching methods (Tan and Bond, 2013), unsupervised clustering of word alignment vectors (Apidianaki, 2013) and supervised classification-based approaches trained on local context features for a window of three words containing the focus word (van Gompel, 2010; van Gompel and van den Bosch, 2013; Rudnick et al., 2013). Interestingly, Carpuat (2013) approached the CLWSD task with a statistical MT system.

Short of concatenating the outputs of CLWSD/CLS systems and dealing with the resulting reordering issues, and in keeping with the task organizers' call to avoid implementing a full machine translation system to tackle the task, we designed PEZ as an Automatic Post-Editor (APE) that attempts to resolve untranslated fragments.

3 Automatic Post-Editors

APEs target various types of MT errors, from determiner selection (Knight and Chander, 1994) to grammatical agreement (Mareček et al., 2011). Untranslated fragments in machine translations are the result of out-of-vocabulary (OOV) words.¹

Previous approaches to handling untranslated fragments include using a pivot language to translate the OOV word(s) into a third language and then back into the source language, thereby extracting paraphrases of the OOV word(s) (Callison-Burch and Osborne, 2006), combining sub-lexical/constituent translations of the OOV word(s) to generate the translation (Huang et al., 2011), or finding paraphrases of the OOV words that have available translations (Marton et al., 2009; Razmara et al., 2013).

However, the simplest approach to handling untranslated fragments is to increase the size of the parallel data. The web is vast and infinite, and a human translator would consult the web when encountering a word that he/she cannot translate easily. The most human-like approach to post-editing a foreign untranslated fragment is to search the web or a translation memory and choose the most appropriate translation of the fragment from the search results, given the context of the machine-translated sentence.

¹ In MT, evaluation is normally performed using automatic metrics that compare string/word similarity between the machine-generated translation and a reference output; simply removing the OOV words would thus have improved the metric "scores" of a system (Habash, 2008; Tan and Pal, 2014).

4 Motivation

When post-editing an untranslated fragment, a human translator would (i) first query a translation memory or parallel corpus for the untranslated fragment in the source language, (ii) then attempt to understand the various contexts in which the fragment can occur and (iii) finally surmise appropriate translations for the untranslated fragment based on the semantic and grammatical constraints of the chosen translations.

The PEZ system was designed to emulate this manual post-editing process by (i) first crawling a web-based translation memory, (ii) then extracting the parallel sentences that contain the untranslated fragments and the corresponding translations of the fragments indexed by the translation memory and (iii) finally ranking them based on the cosine similarity of the context words.

5 System Description

The PEZ system consists of three components, viz. (i) a Web Translation Memory (WebTM) crawler, (ii) the XLING reranker and (iii) a longest ngram/string match module.

5.1 WebTM Crawler

Given the query fragment and the context sentence "Die Frau kehrte alone nach Lima zurück", the crawler queries www.bab.la and returns sentences containing the untranslated fragment with various possible translations, e.g.:

• isoliert : Darum sollten wir den Kaffee nicht isoliert betrachten.
• alleine : Die Kommission kann nun aber für ihr Verhalten nicht alleine die Folgen tragen.
• Allein : Allein in der Europäischen Union sind.

The retrieval mechanism relies on the fact that the target translations of the queried word/phrase are bolded on the web-based TM, so they can be easily extracted by manipulating the text between <bold>...</bold> tags. Although the indexed translations were easy to extract, there were few instances where the translations were actually embedded between the bold tags on the web-based TM.
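To make the extraction step concrete, the short Python sketch below pulls the bolded translations and their example sentences out of TM result markup. This is a minimal sketch under assumptions: the crawling itself (querying www.bab.la and fetching pages) is omitted, the literal <bold> tags follow the paper's description rather than the site's actual markup, and the one-sentence-per-line layout and sample strings are invented for illustration.

import re

def extract_indexed_translations(html):
    """Return (translation, example sentence) pairs from TM result markup.

    Assumes one example sentence per line, with the indexed translation
    of the queried fragment wrapped in <bold>...</bold> tags.
    """
    pairs = []
    for line in html.splitlines():
        match = re.search(r"<bold>(.*?)</bold>", line)
        if match:
            sentence = re.sub(r"</?bold>", "", line).strip()  # drop the markup
            pairs.append((match.group(1), sentence))
    return pairs

sample = ("Darum sollten wir den Kaffee nicht <bold>isoliert</bold> betrachten.\n"
          "Die Kommission kann nun aber für ihr Verhalten nicht <bold>alleine</bold> die Folgen tragen.")

for translation, sentence in extract_indexed_translations(sample):
    print(translation, ":", sentence)

Run on the sample markup, this recovers exactly the translation-sentence pairs listed above, which are then handed to the reranker described next.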
5.2 XLING Reranker

XLING is a light-weight cosine-based sentence similarity script used in the previous CLWSD shared task in SemEval-2013 (Tan and Bond, 2013). Given the sentences from the WebTM crawler, the reranker first removes all stopwords from the sentences and then ranks the sentences based on the number of overlapping stems. In situations where there are no overlapping content words between the sentences, XLING falls back on the most common translation of the untranslated fragment.
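The Python sketch below illustrates this ranking idea; it is not the original XLING code. The stopword list and the crude suffix-stripping stemmer are toy stand-ins for whatever resources the actual script uses, and the fallback mirrors the behaviour described above.

from collections import Counter
from math import sqrt

STOPWORDS = {"der", "die", "das", "und", "nicht", "aber", "nach", "in", "nun", "kann"}

def stem(token):
    # Toy suffix-stripper standing in for a real stemmer.
    for suffix in ("en", "er", "e", "n"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[:-len(suffix)]
    return token

def bag_of_stems(sentence):
    # Lowercase, strip punctuation, drop stopwords, then stem.
    tokens = (t.strip('.,!?"').lower() for t in sentence.split())
    return Counter(stem(t) for t in tokens if t and t not in STOPWORDS)

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def best_translation(context, candidates, most_common):
    """Rank (translation, sentence) pairs by similarity to the context
    sentence; fall back to the most frequent indexed translation when
    no content words overlap."""
    ctx = bag_of_stems(context)
    ranked = sorted(candidates, key=lambda p: cosine(ctx, bag_of_stems(p[1])), reverse=True)
    if ranked and cosine(ctx, bag_of_stems(ranked[0][1])) > 0:
        return ranked[0][0]
    return most_common

For the running example, best_translation("Die Frau kehrte alone nach Lima zurück", pairs, "alleine") scores each crawled sentence against the context and returns the fallback translation if none of their content stems overlap.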
5.3 Longest Ngram/String Matches

Due to the low coverage of the indexed translations on the web TM, it is necessary to extract more candidate translations. Assuming little knowledge of the target language, a human translator would find parallel sentences containing the untranslated fragment and resort to finding repeating phrases that occur among the target language sentences (a code sketch of this matching step is given after Section 6 below).

For instance, when we query the phrase history book from the context "Von ihr habe ich mehr gelernt als aus manchem history book.", the longest ngram/string match module retrieves several target language sentences without any indexed translation:

• Ich weise darauf hin oder nehme an, dass dies in den Geschichtsbüchern auch so erwähnt wird.

The three system runs we submitted were configured as follows:

1. WebTM: a baseline configuration which outputs the most frequent indexed translation of the untranslated fragment from the Web TM.
2. XLING: reranks the WebTM outputs based on cosine similarity.
3. PEZ: similar to XLING, but when the WebTM fetches no output, the system looks for the longest common substring and reranks the outputs based on cosine similarity.

6 Evaluation

The evaluation of the task is based on three metrics, viz. absolute accuracy (acc), word-based accuracy (wac) and recall (rec).

Absolute accuracy measures the number of fragments that exactly match the gold translation of the untranslated fragments. Word-based accuracy assigns a score according to the longest consecutive sequence of words that matches the gold translation.

         en-de                en-es                fr-en                nl-en
         acc    wac    rec    acc    wac    rec    acc    wac    rec    acc    wac    rec
WebTM    0.160  0.184  0.647  0.145  0.175  0.470  0.055  0.067  0.210  0.092  0.099  0.214
XLING    0.152  0.178  0.647  0.141  0.171  0.470  0.055  0.067  0.210  0.088  0.095  0.214
PEZ      0.162  0.233  0.878  0.239  0.351  0.819  0.081  0.116  0.321  0.115  0.152  0.335

Table 1: Results for Best Evaluation of the System Runs.
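Finally, to make the longest ngram/string match module of Section 5.3 concrete, here is a minimal Python sketch of one way to surface phrases that repeat across the retrieved target-language sentences. The paper does not specify the module's implementation; the use of difflib and the minimum-length cutoff are assumptions for illustration.

from difflib import SequenceMatcher
from itertools import combinations

def longest_common_substring(a, b):
    # Longest contiguous match between two sentences.
    m = SequenceMatcher(None, a, b).find_longest_match(0, len(a), 0, len(b))
    return a[m.a:m.a + m.size]

def candidate_translations(sentences, min_chars=4):
    """Collect long substrings that recur across pairs of retrieved
    target-language sentences, as candidate fragment translations."""
    candidates = set()
    for s1, s2 in combinations(sentences, 2):
        common = longest_common_substring(s1, s2).strip()
        if len(common) >= min_chars:  # assumed cutoff to skip trivial matches
            candidates.add(common)
    return candidates

For the history book example, retrieved sentences that all mention Geschichtsbüchern would surface that string as a candidate translation, which can then be scored against the context sentence by the XLING component of Section 5.2.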