Multi-Task Neural Model for Agglutinative Language Translation


Multi-Task Neural Model for Agglutinative Language Translation

Yirong Pan 1,2,3, Xiao Li 1,2,3, Yating Yang 1,2,3, and Rui Dong 1,2,3
1 Xinjiang Technical Institute of Physics & Chemistry, Chinese Academy of Sciences, China
2 University of Chinese Academy of Sciences, China
3 Xinjiang Laboratory of Minority Speech and Language Information Processing, China
[email protected] {xiaoli, yangyt, dongrui}@ms.xjb.ac.cn

Abstract

Neural machine translation (NMT) has recently achieved impressive performance by using large-scale parallel corpora. However, it struggles in the low-resource and morphologically-rich scenarios of agglutinative language translation. Inspired by the finding that monolingual data can greatly improve NMT performance, we propose a multi-task neural model that jointly learns to perform bi-directional translation and agglutinative language stemming. Our approach employs a shared encoder and decoder to train a single model without changing the standard NMT architecture, instead adding a token before each source-side sentence to specify the desired target output of the two different tasks. Experimental results on Turkish-English and Uyghur-Chinese show that our proposed approach can significantly improve translation performance on agglutinative languages by using a small amount of monolingual data.

1 Introduction

Neural machine translation (NMT) has achieved impressive performance on many high-resource machine translation tasks (Bahdanau et al., 2015; Luong et al., 2015a; Vaswani et al., 2017). The standard NMT model uses the encoder to map the source sentence to a continuous representation vector, and then feeds the resulting vector to the decoder to produce the target sentence.

However, the NMT model still suffers in the low-resource and morphologically-rich scenarios of agglutinative language translation tasks, such as Turkish-English and Uyghur-Chinese. Both Turkish and Uyghur are agglutinative languages with complex morphology. The morpheme structure of a word can be denoted as: prefix1 + … + prefixN + stem + suffix1 + … + suffixN (Ablimit et al., 2010). Since the suffixes have many inflected and morphological variants, the vocabulary size of an agglutinative language is considerable even with small-scale training data. Moreover, many words have different morphemes and meanings in different contexts, which leads to inaccurate translation results.

Recently, researchers have shown great interest in utilizing monolingual data to further improve NMT model performance (Cheng et al., 2016; Ramachandran et al., 2017; Currey et al., 2017). Sennrich et al. (2016) pair target-side monolingual data with automatic back-translation as additional training data to train the NMT model. Zhang and Zong (2016) use source-side monolingual data and employ the multi-task learning framework for translation and source sentence reordering. Domhan and Hieber (2017) modify the decoder to enable multi-task learning for translation and language modeling. However, the above works mainly focus on boosting translation fluency, and lack consideration of morphological and linguistic knowledge.

Stemming is a morphological analysis method that is widely used for information retrieval tasks (Kishida, 2005). By removing the suffixes in a word, stemming allows variants of the same word to share representations and reduces data sparseness. We consider that stemming can lead to better generalization on agglutinative languages, which helps NMT to capture in-depth semantic information. Thus we use stemming as an auxiliary task for agglutinative language translation.

In this paper, we investigate a method to exploit the monolingual data of the agglutinative language to enhance the representation ability of the encoder. This is achieved by training a multi-task neural model to jointly perform bi-directional translation and agglutinative language stemming, which utilizes the shared encoder and decoder. We treat stemming as a sequence generation task.

Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Student Research Workshop, pages 103–110, July 5–10, 2020. © 2020 Association for Computational Linguistics

Figure 1: The architecture of the multi-task neural model that jointly learns to perform bi-directional translation between Turkish and English, and stemming for Turkish sentences. The training data feeds a single encoder-decoder framework:

  Bilingual data:   <MT> + English sentence → Turkish sentence
  Bilingual data:   <MT> + Turkish sentence → English sentence
  Monolingual data: <ST> + Turkish sentence → stem sequence

2 Related Work

Multi-task learning (MTL) aims to improve the generalization performance of a main task by using other related tasks, and has been successfully applied to research fields ranging from language (Liu et al., 2015; Luong et al., 2015a) and vision (Yim et al., 2015; Misra et al., 2016) to speech (Chen and Mak, 2015; Kim et al., 2016). Many natural language processing (NLP) tasks have been chosen as auxiliary tasks to deal with increasingly complex tasks. Luong et al. (2015b) employ small amounts of syntactic parsing and image captioning data for English-German translation. Hashimoto et al. (2017) present a joint MTL model to handle the tasks of part-of-speech (POS) tagging, dependency parsing, semantic relatedness, and textual entailment for English. Kiperwasser and Ballesteros (2018) utilize POS tagging and dependency parsing for English-German machine translation. To the best of our knowledge, we are the first to incorporate the stemming task into an MTL framework to further improve translation performance on agglutinative languages.

Recently, several works have combined the MTL method with the sequence-to-sequence NMT model for machine translation tasks. Dong et al. (2015) follow a one-to-many setting that utilizes a shared encoder for all the source languages with respective attention mechanisms and multiple decoders for the different target languages. Luong et al. (2015b) follow a many-to-many setting that uses multiple encoders and decoders with two separate unsupervised objective functions. Zoph and Knight (2016) follow a many-to-one setting that employs multiple encoders for all the source languages and one decoder for the desired target language. Johnson et al. (2017) propose a simpler method in a one-to-one setting, which trains a single NMT model with a shared encoder and decoder in order to enable multilingual translation. The method requires no changes to the standard NMT architecture, but instead requires adding a token at the beginning of each source sentence to specify the desired target sentence. Inspired by their work, we employ the standard NMT model with one encoder and one decoder for parameter sharing and model generalization. In addition, we build a joint vocabulary on the concatenation of the source-side and target-side words.

Several works on morphologically-rich NMT have focused on using morphological analysis to pre-process the training data (Luong et al., 2016; Huck et al., 2017; Tawfik et al., 2019). Gulcehre et al. (2015) segment each Turkish sentence into a sequence of morpheme units and remove any non-surface morphemes for Turkish-English translation. Ataman et al. (2017) propose a vocabulary reduction method that considers the morphological properties of the agglutinative language, based on unsupervised morphology learning. This work takes inspiration from our previously proposed segmentation method (Pan et al., 2020), which segments a word into a sequence of sub-word units with morpheme structure and can effectively reduce language complexity.

3 Multi-Task Neural Model

3.1 Overview

We propose a multi-task neural model for machine translation from and into a low-resource and morphologically-rich agglutinative language. We train the model to jointly learn to perform both the bi-directional translation task and the stemming task on an agglutinative language by using the standard NMT framework. Moreover, we add an artificial token before each source sentence to specify the desired target outputs for the different tasks. The architecture of the proposed model is shown in Figure 1. We take the Turkish-English translation task as an example. The "<MT>" token denotes the bilingual translation task and the "<ST>" token denotes the stemming task on Turkish sentences.

3.2 Neural Machine Translation (NMT)

Our proposed multi-task neural model, which uses source-side monolingual data for the agglutinative language translation task, can be applied in any NMT structure with an encoder-decoder framework. In this work, we follow the NMT model proposed by Vaswani et al. (2017), implemented as the Transformer. We briefly summarize it here.

4 Experiment

4.1 Dataset

The statistics of the training, validation, and test datasets on the Turkish-English and Uyghur-Chinese machine translation tasks are shown in Table 1.

Task   Data    # Sent    # Src       # Trg
Tr-En  train   355,251   6,356,767   8,021,161
       valid   2,455     37,153      52,125
       test    4,962     69,006      96,291
Uy-Ch  train   333,097   6,026,953   5,748,298
       valid   700       17,821      17,085
       test    1,000     20,580      18,179

Table 1: Statistics of the training, validation, and test datasets on the Turkish-English and Uyghur-Chinese machine translation tasks. "# Src" denotes the number of source tokens, and "# Trg" denotes the number of target tokens.

For the Turkish-English machine translation, following (Sennrich et al., 2015a), we use the WIT corpus (Cettolo et al., 2012) and the SETimes corpus (Tyers and Alperen, 2010) as the training dataset,
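The paper's task-token data mixing (Sections 1 and 3.1) can be sketched as follows; the `tag_examples` helper and the toy sentences are illustrative assumptions, not the authors' code or data.

```python
# Sketch of the multi-task data preparation: each source sentence is prefixed
# with a task token (<MT> for translation, <ST> for stemming) so that a single
# shared encoder-decoder can learn all three tasks. Sentences are toy examples.

def tag_examples(pairs, task_token):
    """Prefix every source sentence with the given task token."""
    return [(f"{task_token} {src}", trg) for src, trg in pairs]

# Bilingual data in both directions, plus monolingual Turkish paired with its
# stem sequence for the auxiliary stemming task (toy stem, illustration only).
en_tr = [("the book is on the table", "kitap masanin ustunde")]
tr_en = [(trg, src) for src, trg in en_tr]
tr_stem = [("kitaplarimizdan", "kitap")]

training_data = (
    tag_examples(en_tr, "<MT>")
    + tag_examples(tr_en, "<MT>")
    + tag_examples(tr_stem, "<ST>")
)

for src, trg in training_data:
    print(src, "->", trg)
```

All three tagged corpora are then shuffled together and fed to one standard encoder-decoder model, which is what lets the approach work "without changing the standard NMT architecture".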
Recommended publications
  • A Sentiment-Based Chat Bot
    A sentiment-based chat bot: Automatic Twitter replies with Python
    ALEXANDER BLOM, SOFIE THORSEN
    Bachelor's essay at CSC. Supervisor: Anna Hjalmarsson. Examiner: Mårten Björkman. E-mail: [email protected], [email protected]

    Abstract: Natural language processing is a field in computer science which involves making computers derive meaning from human language and input as a way of interacting with the real world. Broadly speaking, sentiment analysis is the act of determining the attitude of an author or speaker, with respect to a certain topic or the overall context, and is an application of the natural language processing field. This essay discusses the implementation of a Twitter chat bot that uses natural language processing and sentiment analysis to construct a believable reply. This is done in the Python programming language, using a statistical method called Naive Bayes classification supplied by the NLTK Python package. The essay concludes that applying natural language processing and sentiment analysis in this isolated fashion was simple, but achieving more complex tasks greatly increases the difficulty.

    Referat (translated from Swedish): Natural language processing is a field within computer science that involves making computers understand human language and input so as to interact with the real world. Sentiment analysis is, generally speaking, the act of trying to determine the attitude of an author or speaker with respect to a specific topic or context, and is an application of the field of natural language processing. This report discusses the implementation of a Twitter chat bot that uses natural language processing and sentiment analysis to reply to tweets based on the sentiment of the tweet.
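The classification step the essay describes can be illustrated with a minimal pure-Python Naive Bayes sketch (the essay itself used NLTK's `NaiveBayesClassifier`; the tiny training set below is invented for illustration):

```python
import math
from collections import Counter, defaultdict

# Minimal Naive Bayes sentiment classifier sketching the statistical method
# the essay describes. Training examples are invented toy data.
train = [
    ("i love this great sunny day", "pos"),
    ("what a wonderful happy time", "pos"),
    ("i hate this awful rainy day", "neg"),
    ("what a terrible sad mess", "neg"),
]

label_counts = Counter(label for _, label in train)
word_counts = defaultdict(Counter)
vocab = set()
for text, label in train:
    for word in text.split():
        word_counts[label][word] += 1
        vocab.add(word)

def classify(text):
    """Pick the label maximizing log P(label) + sum of log P(word|label)."""
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        score = math.log(label_counts[label] / len(train))
        total = sum(word_counts[label].values())
        for word in text.split():
            # Laplace smoothing so unseen words do not zero out the product.
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("a wonderful sunny day"))  # prints "pos"
```

A bot along the essay's lines would classify the incoming tweet this way and then pick a reply template matching the detected sentiment.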
  • Integrating Optical Character Recognition and Machine
    Integrating Optical Character Recognition and Machine Translation of Historical Documents
    Haithem Afli and Andy Way
    ADAPT Centre, School of Computing, Dublin City University, Dublin, Ireland
    {haithem.afli, andy.way}@adaptcentre.ie

    Abstract: Machine Translation (MT) plays a critical role in expanding capacity in the translation industry. However, many valuable documents, including digital documents, are encoded in non-accessible formats for machine processing (e.g., historical or legal documents). Such documents must be passed through a process of Optical Character Recognition (OCR) to render the text suitable for MT. No matter how good the OCR is, this process introduces recognition errors, which often render MT ineffective. In this paper, we propose a new OCR-to-MT framework based on adding a new OCR error correction module to enhance the overall quality of translation. Experimentation shows that our new system correction, based on the combination of Language Modeling and Translation methods, outperforms the baseline system by nearly 30% relative improvement.

    1 Introduction: While research on improving Optical Character Recognition (OCR) algorithms is ongoing, our assessment is that Machine Translation (MT) will continue to produce unacceptable translation errors (or non-translations) based solely on the automatic output of OCR systems. The problem comes from the fact that current OCR and Machine Translation systems are commercially distinct and separate technologies. There are often mistakes in the scanned texts, as the OCR system occasionally misrecognizes letters and falsely identifies scanned text, leading to misspellings and linguistic errors in the output text (Niklas, 2010). Works involved in improving translation services purchase off-the-shelf OCR technology but have limited capability to adapt the OCR processing to improve overall machine translation performance.
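The simplest form of OCR post-correction before MT can be sketched as a closest-match lookup: replace out-of-vocabulary tokens with the nearest in-vocabulary word by edit distance. This is a deliberate simplification of the paper's combined language-modeling and translation-based correction; the vocabulary and sentence below are invented.

```python
# Toy OCR post-correction: out-of-vocabulary tokens are replaced by the
# closest in-vocabulary word under Levenshtein edit distance.

def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

VOCAB = {"the", "historical", "document", "was", "translated"}

def correct(tokens):
    out = []
    for tok in tokens:
        if tok in VOCAB:
            out.append(tok)
        else:
            # Typical OCR confusions ("1" for "l", "rn" for "m") have small
            # edit distance to the intended word.
            out.append(min(VOCAB, key=lambda w: edit_distance(tok, w)))
    return out

print(correct("the historica1 docurnent was translated".split()))
```

A real system would score candidates with a language model rather than edit distance alone, which is the direction the paper's correction module takes.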
  • Applying Ensembling Methods to BERT to Boost Model Performance
    Applying Ensembling Methods to BERT to Boost Model Performance
    Charlie Xu, Solomon Barth, Zoe Solis
    Department of Computer Science, Stanford University
    {cxu2, sbarth, zoesolis}@stanford.edu

    Abstract: Due to its wide range of applications and difficulty to solve, Machine Comprehension has become a significant point of focus in AI research. Using the SQuAD 2.0 dataset as a metric, BERT models have been able to achieve superior performance to most other models. Ensembling BERT with other models has yielded even greater results, since these ensemble models are able to utilize the relative strengths and adjust for the relative weaknesses of each of the individual submodels. In this project, we present various ensemble models that leveraged BiDAF and multiple different BERT models to achieve superior performance. We also studied how dataset manipulations and different ensembling methods can affect the performance of our models. Lastly, we attempted to develop the Synthetic Self-Training Question Answering model. In the end, our best performing multi-BERT ensemble obtained an EM score of 73.576 and an F1 score of 76.721.

    1 Introduction: Machine Comprehension (MC) is a complex task in NLP that aims to understand written language. Question Answering (QA) is one of the major tasks within MC, requiring a model to provide an answer, given a contextual text passage and question. It has a wide variety of applications, including search engines and voice assistants, making it a popular problem for NLP researchers. The Stanford Question Answering Dataset (SQuAD) is a very popular means for training, tuning, testing, and comparing question answering systems.
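One common ensembling scheme for QA, score averaging over candidate answer spans, can be sketched as follows. The models and scores are invented for illustration; the project's actual submodels were BiDAF and several fine-tuned BERT variants, and their exact combination rule is not specified here.

```python
# Toy answer-level ensembling: sum (equivalently, average) each candidate
# answer's score across submodels and return the best-scoring answer.

def ensemble(predictions):
    """predictions: list of dicts mapping candidate answer -> model score."""
    totals = {}
    for model_scores in predictions:
        for answer, score in model_scores.items():
            totals[answer] = totals.get(answer, 0.0) + score
    n = len(predictions)
    # Dividing by n does not change the argmax; it just makes the
    # combined score an average rather than a sum.
    return max(totals, key=lambda a: totals[a] / n)

preds = [
    {"in 1912": 0.7, "1912": 0.2},      # hypothetical BERT-base scores
    {"in 1912": 0.4, "1912": 0.5},      # hypothetical BERT-large scores
    {"in 1912": 0.6, "no answer": 0.3}, # hypothetical BiDAF scores
]
print(ensemble(preds))  # prints "in 1912"
```

The point of such ensembles is exactly what the abstract states: a candidate that several submodels agree on accumulates score even when no single model is most confident in it.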
  • Computational Challenges for Polysynthetic Languages
    Computational Modeling of Polysynthetic Languages
    Judith L. Klavans, Ph.D.
    US Army Research Laboratory, 2800 Powder Mill Road, Adelphi, Maryland 20783
    [email protected] [email protected]

    Abstract: Given advances in computational linguistic analysis of complex languages using Machine Learning as well as standard Finite State Transducers, coupled with recent efforts in language revitalization, the time was right to organize a first workshop to bring together experts in language technology and linguists on the one hand with language practitioners and revitalization experts on the other. This one-day meeting provides a promising forum to discuss new research on polysynthetic languages in combination with the needs of linguistic communities where such languages are written and spoken. Finally, this overview article summarizes the papers to be presented, along with goals and purpose.

    Motivation: Polysynthetic languages are characterized by words that are composed of multiple morphemes, often to the extent that one long word can express the meaning contained in a multi-word sentence in a language like English. To illustrate, consider the following example from Inuktitut, one of the official languages of the Territory of Nunavut in Canada. The morpheme -tusaa- (shown in boldface below) is the root, and all the other morphemes are synthetically combined with it in one unit.1
    (1) tusaa-tsia-runna-nngit-tu-alu-u-junga
    hear-well-be.able-NEG-DOER-very-BE-PART.1.S
    ‘I can't hear very well.’
    Kabardian (Circassian), from the Northwest Caucasus, also shows this phenomenon, with the root -še- shown in boldface below:
    (2) wə-q’ə-d-ej-z-γe-še-ž’e-f-a-te-q’əm
    2SG.OBJ-DIR-LOC-3SG.OBJ-1SG.SUBJ-CAUS-lead-COMPL-POTENTIAL-PAST-PRF-NEG
    ‘I would not let you bring him right back here.’
    Polysynthetic languages are spoken all over the globe and are richly represented among Native North and South American families.
  • Machine Translation for Academic Purposes Grace Hui-Chin Lin
    Proceedings of the International Conference on TESOL and Translation 2009, December 2009: pp. 133-148
    Machine Translation for Academic Purposes
    Grace Hui-chin Lin, PhD, Texas A&M University College Station; Master of Science, University of Southern California
    Paul Shih Chieh Chien, Center for General Education, Taipei Medical University

    Abstract: Due to the globalization trend and knowledge boost in the second millennium, multi-lingual translation has become a noteworthy issue. For the purposes of learning knowledge in academic fields, Machine Translation (MT) should be noticed not only academically but also practically. MT should be introduced to translation learners because it is a valuable approach applied by professional translators in diverse professional fields. For learning translating skills and finding a way to learn and teach through bi-lingual/multilingual translating functions in software, machine translation is an ideal approach that translation trainers, translation learners, and professional translators should be familiar with. In fact, theories of machine translation and computer assistance have been highly valued by many scholars (e.g., Hutchins, 2003; Thriveni, 2002). Based on the translation of MIT's OpenCourseWare into Chinese that Lee, Lin and Bonk (2007) have introduced, this paper demonstrates how MT can be efficiently applied as a superior way of teaching and learning. This article predicts that translated courses utilizing MT for residents of the global village should emerge and be provided soon in industrialized nations, and it exhibits an overview of the current developmental status of MT, why MT should be fully applied for academic purposes, such as translating a textbook or teaching and learning a course, and what types of software can be successfully applied.
  • Aconcorde: Towards a Proper Concordance for Arabic
    aConCorde: towards a proper concordance for Arabic
    Andrew Roberts, Dr Latifa Al-Sulaiti and Eric Atwell
    School of Computing, University of Leeds, LS2 9JT, United Kingdom
    {andyr,latifa,eric}@comp.leeds.ac.uk
    July 12, 2005

    Abstract: Arabic corpus linguistics is currently enjoying a surge in activity. As the growth in the number of available Arabic corpora continues, there is an increased need for robust tools that can process this data, whether it be for research or teaching. One such tool that is useful for both groups is the concordancer — a simple tool for displaying a specified target word in its context. However, obtaining one that can reliably cope with the Arabic language had proved extremely difficult. Therefore, aConCorde was created to provide such a tool to the community.

    1 Introduction: A concordancer is a simple tool for summarising the contents of corpora based on words of interest to the user. Otherwise, manually navigating through (often very large) corpora would be a long and tedious task. Concordancers are therefore extremely useful as a time-saving tool, but also much more. By isolating keywords in their contexts, linguists can use concordance output to understand the behaviour of interesting words. Lexicographers can use the evidence to find if a word has multiple senses, and also towards defining their meaning. There is also much research and discussion about how concordance tools can be beneficial for data-driven language learning (Johns, 1990). A number of studies have demonstrated that providing access to corpora and concordancers benefited students learning a second language.
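The core of any concordancer is the keyword-in-context (KWIC) display the abstract describes. A minimal sketch (sample text invented; real tools like aConCorde add corpus loading, Arabic script handling, and a GUI):

```python
# Minimal keyword-in-context (KWIC) concordance: show each occurrence of a
# target word with a fixed window of surrounding words.

def concordance(text, keyword, window=3):
    tokens = text.split()
    lines = []
    for i, tok in enumerate(tokens):
        if tok.lower() == keyword.lower():
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left} [{tok}] {right}")
    return lines

text = ("the corpus contains many words and the corpus "
        "helps linguists study words in context")
for line in concordance(text, "corpus"):
    print(line)
```

Each output line centers one occurrence of the keyword, which is what lets a linguist scan usage patterns at a glance.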
  • Machine Translation for Language Preservation
    Machine translation for language preservation
    Steven BIRD 1,2, David CHIANG 3
    (1) Department of Computing and Information Systems, University of Melbourne
    (2) Linguistic Data Consortium, University of Pennsylvania
    (3) Information Sciences Institute, University of Southern California
    [email protected], [email protected]

    Abstract: Statistical machine translation has been remarkably successful for the world's well-resourced languages, and much effort is focussed on creating and exploiting rich resources such as treebanks and wordnets. Machine translation can also support the urgent task of documenting the world's endangered languages. The primary object of statistical translation models, bilingual aligned text, closely coincides with interlinear text, the primary artefact collected in documentary linguistics. It ought to be possible to exploit this similarity in order to improve the quantity and quality of documentation for a language. Yet there are many technical and logistical problems to be addressed, starting with the problem that, for most of the languages in question, no texts or lexicons exist. In this position paper, we examine these challenges, and report on a data collection effort involving 15 endangered languages spoken in the highlands of Papua New Guinea.
    KEYWORDS: endangered languages, documentary linguistics, language resources, bilingual texts, comparative lexicons.
    Proceedings of COLING 2012: Posters, pages 125–134, COLING 2012, Mumbai, December 2012.

    1 Introduction: Most of the world's 6800 languages are relatively unstudied, even though they are no less important for scientific investigation than major world languages. For example, before Hixkaryana (Carib, Brazil) was discovered to have object-verb-subject word order, it was assumed that this word order was not possible in a human language, and that some principle of universal grammar must exist to account for this systematic gap (Derbyshire, 1977).
  • “Today's Morphology Is Yesterday's Syntax”
    MOTHER TONGUE: Journal of the Association for the Study of Language in Prehistory, Issue XXI, 2016
    From North to North West: How North-West Caucasian Evolved from North Caucasian
    Viacheslav A. Chirikba, Abkhaz State University, Sukhum, Republic of Abkhazia

    The comparison of (North-)West Caucasian with (North-)East Caucasian languages suggests that early Proto-West Caucasian underwent a fundamental reshaping of its phonological, morphological and syntactic structures, as a result of which it became analytical, with elementary inflection and main grammatical roles being expressed by lexical means, word order and probably also by tones. The subsequent development of compounding and incorporation resulted in a prefixing polypersonal polysynthetic agglutinative language type typical of modern West Caucasian languages. The main evolutionary line, from a North Caucasian dialect close to East Caucasian to the modern West Caucasian languages, was thus from agglutinative to the analytical language type, due to a near complete loss of inflection, and then to the agglutinative polysynthetic type. Although these changes blurred the genetic relationship between West Caucasian and East Caucasian languages, it can nevertheless be proven by applying standard procedures of comparative-historical linguistics.
    1. The West Caucasian languages.1 The West Caucasian (WC), or Abkhazo-Adyghean, languages constitute a branch of the North Caucasian (NC) linguistic family, which consists of five languages: Abkhaz and Abaza (the Abkhaz sub-group), Adyghe and Kabardian (the Circassian sub-group), and Ubykh. The traditional habitat of these languages is the Western Caucasus, where they are still spoken, with the exception of the extinct Ubykh. Typologically, the WC languages represent a rather idiosyncratic linguistic type not occurring elsewhere in Eurasia.
  • Hunspell – the Free Spelling Checker
    Hunspell – The free spelling checker

    About Hunspell: Hunspell is a spell checker and morphological analyzer library and program designed for languages with rich morphology and complex word compounding or character encoding. Hunspell interfaces: Ispell-like terminal interface using the Curses library, Ispell pipe interface, OpenOffice.org UNO module.

    Main features of the Hunspell spell checker and morphological analyzer:
    - Unicode support (affix rules work only with the first 65535 Unicode characters)
    - Morphological analysis (in custom item and arrangement style) and stemming
    - Max. 65535 affix classes and twofold affix stripping (for agglutinative languages, like Azeri, Basque, Estonian, Finnish, Hungarian, Turkish, etc.)
    - Support for complex compounding (for example, Hungarian and German)
    - Support for language-specific features (for example, special casing of the Azeri and Turkish dotted i, or the German sharp s)
    - Handling of conditional affixes, circumfixes, fogemorphemes, forbidden words, pseudoroots and homonyms
    - Free software (LGPL, GPL, MPL tri-license)

    Usage: The src/tools directory contains ten executables after compiling (some of them are in src/win_api):
    - affixcompress: dictionary generation from large (millions of words) vocabularies
    - analyze: example of spell checking, stemming and morphological analysis
    - chmorph: example of automatic morphological generation and conversion
    - example: example of spell checking and suggestion
    - hunspell: main program for spell checking and others (see manual)
    - hunzip: decompressor of hzip format
    - hzip: compressor of
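The affix classes mentioned above live in a pair of files, a word list (.dic) and an affix rule file (.aff). As a rough, hypothetical illustration (a toy fragment, not from any shipped dictionary), a single suffix flag can attach an agglutinative-style plural to listed stems.

A toy `example.dic` (word count on the first line, then stems with affix flags):

```
2
kitap/A
masa/A
```

A toy `example.aff` (flag A is a suffix rule with one entry: strip nothing, append "lar", applicable to any stem):

```
SET UTF-8
SFX A Y 1
SFX A 0 lar .
```

With these two files saved as `example.dic` and `example.aff`, running `hunspell -d example` should accept both "kitap" and "kitaplar"; real dictionaries encode vowel-harmony variants and conditions in the rule's condition field rather than a bare ".".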
  • Machine Translation and Computer-Assisted Translation: a New Way of Translating?
    Machine Translation and Computer-Assisted Translation: a New Way of Translating?
    Author: Olivia Craciunescu, E-mail: [email protected]
    Author: Constanza Gerding-Salas, E-mail: [email protected]
    Author: Susan Stringer-O’Keeffe, E-mail: [email protected]
    Source: http://www.translationdirectory.com

    The Need for Translation Technology

    Advances in information technology (IT) have combined with modern communication requirements to foster translation automation. The history of the relationship between technology and translation goes back to the beginnings of the Cold War, as in the 1950s competition between the United States and the Soviet Union was so intensive at every level that thousands of documents were translated from Russian to English and vice versa. However, such high demand revealed the inefficiency of the translation process, above all in specialized areas of knowledge, increasing interest in the idea

    IT has produced a screen culture that tends to replace the print culture, with printed documents being dispensed with and information being accessed and relayed directly through computers (e-mail, databases and other stored information). These computer documents are instantly available and can be opened and processed with far greater flexibility than printed matter, with the result that the status of information itself has changed, becoming either temporary or permanent according to need. Over the last two decades we have witnessed the enormous growth of information technology with the accompanying advantages of speed, visual
  • Language Service Translation SAVER Technote
    TechNote, August 2020
    LANGUAGE TRANSLATION APPLICATIONS
    AEL Number: 09ME-07-TRAN

    The U.S. Department of Homeland Security (DHS) established the System Assessment and Validation for Emergency Responders (SAVER) Program to help emergency responders improve their procurement decisions. Located within the Science and Technology Directorate, the National Urban Security Technology Laboratory (NUSTL) manages the SAVER Program and conducts objective operational assessments of commercial equipment and systems relevant to the emergency responder community. The SAVER Program gathers and reports information about equipment that falls within the categories listed in the DHS Authorized Equipment List.

    Language translation equipment used by emergency services has evolved into commercially available, mobile device applications (apps). The wide adoption of cell phones has enabled language translation applications to be useful in emergency scenarios where first responders may interact with individuals who speak a different language. Most language translation applications can identify a spoken language and translate in real-time. Applications vary in functionality, such as the ability to translate voice or text offline, and are unique in the number of available translatable languages. These applications are useful tools to remedy communication barriers in any emergency response scenario.

    Overview: During an emergency response, language barriers can challenge a first responder's ability to quickly assess and respond to a situation. First responders must be able to accurately communicate with persons at an emergency scene in order to provide a prompt, appropriate response. The quick translations provided by mobile apps remove the language barrier and are essential to the safety of both first responders and civilians with whom they interact. Before the adoption of smartphones, first responders used language interpreters and later language translation equipment such as hand-held language translation devices (Figure 1).
  • Terminology Extraction, Translation Tools and Comparable Corpora Helena Blancafort, Béatrice Daille, Tatiana Gornostay, Ulrich Heid, Claude Méchoulam, Serge Sharoff
    TTC: Terminology Extraction, Translation Tools and Comparable Corpora
    Helena Blancafort, Béatrice Daille, Tatiana Gornostay, Ulrich Heid, Claude Méchoulam, Serge Sharoff
    14th EURALEX International Congress, Jul 2010, Leeuwarden/Ljouwert, Netherlands. pp. 263-268. HAL Id: hal-00819365, submitted on 2 May 2013.

    HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

    TTC: Terminology Extraction, Translation Tools and Comparable Corpora
    Helena Blancafort, Syllabs / Universitat Pompeu Fabra, Barcelona, Spain; Béatrice Daille, LINA, Université de Nantes, France; Tatiana Gornostay, Tilde, Latvia; Ulrich Heid, IMS, Universität Stuttgart, Germany; Claude Méchoulam, Sogitec, France; Serge Sharoff, CTS, University of Leeds, England

    The need for linguistic resources in any natural language application is undeniable. Lexicons and terminologies