Adam Kilgarriff’s Legacy to Computational Linguistics and Beyond



Roger Evans (University of Brighton), Alexander Gelbukh (CIC, Instituto Politécnico Nacional, Mexico), Gregory Grefenstette (Inria Saclay), Patrick Hanks (University of Wolverhampton), Miloš Jakubíček (Lexical Computing and Masaryk University), Diana McCarthy (DTAL, University of Cambridge), Martha Palmer (University of Colorado), Ted Pedersen (University of Minnesota), Michael Rundell (Lexicography MasterClass), Pavel Rychlý (Lexical Computing and Masaryk University, pary@fi.muni.cz), Serge Sharoff (University of Leeds), David Tugwell (Independent Researcher)

Abstract. This year, the CICLing conference is dedicated to the memory of Adam Kilgarriff, who died last year. Adam leaves behind a tremendous scientific legacy, and those working in computational linguistics, other fields of linguistics, and lexicography are indebted to him. This paper is a summary review of some of Adam’s main scientific contributions. It is not and cannot be exhaustive. It is written by only a small selection of his large network of collaborators. Nevertheless we hope this will provide a useful summary for readers wanting to know more about the origins of work, events and software that are so widely relied upon by scientists today, and undoubtedly will continue to be so in the foreseeable future.

1 Introduction

Last year was marred by the loss of Adam Kilgarriff, who during the last 27 years contributed greatly to the field of computational linguistics [1], as well as to other fields of linguistics and to lexicography. This paper provides a review of some of the key scientific contributions he made. His legacy is impressive, not simply in terms of the numerous academic papers, which are widely cited in many fields, but also the many scientific events and communities he founded and fostered, and the commercial Sketch Engine software. The Sketch Engine has provided computational linguistics tools and corpora to scientists in other fields, notably lexicography [61,50,17], as well as facilitating research in other areas of linguistics [56,12,11,54] and our own subfield of computational linguistics [60,74].

[1] In this paper, natural language processing (NLP) is used synonymously with computational linguistics.

Adam was hugely interested in lexicography from the very inception of his postgraduate career. His DPhil [2] on polysemy and his subsequent interest in word sense disambiguation (WSD) and its evaluation were firmly rooted in examining corpus data and dictionary senses with a keen eye on the lexicographic process [20]. After his DPhil, Adam spent several years as a computational linguist advising Longman Dictionaries on the use of language engineering for the development of lexical databases, and he continued this line of knowledge transfer in consultancies with other publishers until realizing the potential of computational linguistics in the development of his commercial software, the Sketch Engine. The origins of this software lay in his earlier ideas of using computational linguistics tools to provide word profiles from corpus data. For Adam, data was key.
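In spirit, such a word profile is a list of a word’s collocates, grouped by grammatical relation and ranked by an association score. The following is a minimal sketch of that idea in Python, not Adam’s implementation: it ranks collocates with the logDice score that Pavel Rychlý later proposed for the Sketch Engine (logDice = 14 + log2(2·f_xy / (f_x + f_y))); the dependency triples and the counting scheme are toy assumptions.

```python
import math
from collections import Counter

def log_dice(f_xy: int, f_x: int, f_y: int) -> float:
    """logDice (Rychly 2008): 14 + log2(2*f_xy / (f_x + f_y)).
    Bounded above by 14; higher means a stronger collocation."""
    return 14 + math.log2(2 * f_xy / (f_x + f_y))

def word_profile(triples, headword, top_n=10):
    """Rank collocates of `headword` from (head, relation, collocate)
    dependency triples -- a toy stand-in for real corpus queries."""
    pair_freq = Counter(triples)
    head_freq = Counter(h for h, _, _ in triples)
    coll_freq = Counter(c for _, _, c in triples)
    scored = [(rel, coll, log_dice(f, head_freq[head], coll_freq[coll]))
              for (head, rel, coll), f in pair_freq.items()
              if head == headword]
    return sorted(scored, key=lambda t: -t[2])[:top_n]

# Toy usage: triples as a dependency parser might emit them.
triples = [("drink", "object", "tea")] * 5 + [("drink", "object", "water")] * 3
print(word_profile(triples, "drink"))
# [('object', 'tea', 13.62...), ('object', 'water', 13.12...)]
```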
He fully appreciated the need for empirical approaches to both computational linguistics and lexicography. In computational linguistics from the 90s onwards there was a huge swing from symbolic to statistical approaches; however, the choice of input data, in composition and size, was often overlooked in favor of a focus on algorithms. Furthermore, early on in this statistical tsunami, issues of replicability were not always appreciated. A large portion of his work was devoted to these issues: his work on WSD evaluation, and his work on building and comparing corpora. His signature company slogan was ‘corpora for all’.

This paper has come together from a small sample of the very large pool of Adam’s collaborators. The sections have been written by different subsets of the authors, with different perspectives on Adam’s work and on his ideas. We hope that this approach will give the reader an overview of some of Adam’s main scientific contributions to both academia and the commercial world, while not detracting too greatly from the coherence of the article.

The article is structured as follows. Section 2 outlines Adam’s thesis and the origins of his thoughts on word senses and lexicography. Section 3 continues with his subsequent work on WSD evaluation in the Senseval series, as well as discussing his qualms about the adoption of dictionary senses in computational linguistics as an act of faith, without a specific purpose in mind. Section 4 summarizes his early work on using corpus data to provide word profiles in a project known as the WASP-bench, the precursor to his company’s [3] commercial software, the Sketch Engine. Corpus data lay at the very heart of these word profiles, and indeed of just about all of computational linguistics from the mid 90s on. Section 5 discusses Adam’s ideas for building and comparing corpus data, while Section 6 describes the Sketch Engine itself. Finally, Section 7 details some of the impact Adam has had transferring ideas from computational and corpus linguistics to the field of lexicography.

[2] Like Oxford, the University of Sussex, where Adam undertook his doctoral training, uses DPhil rather than PhD as the abbreviation for its doctoral degrees.

[3] The company he founded is Lexical Computing Ltd. He was also a partner – with Sue Atkins and Michael Rundell – in another company, Lexicography MasterClass, which provides consultancy and training and runs the Lexicom workshops in lexicography and lexical computing; http://www.lexmasterclass.com/.

2 Adam’s Doctoral Research

To lay the foundation for an understanding of Adam’s contribution to our field, an obvious place to start is his DPhil thesis [19]. But let us first sketch out the background in which he undertook his doctoral research. Having obtained first class honours in Philosophy and Engineering at Cambridge in 1982, Adam had spent a few years away from academia before arriving at Sussex in 1987 to undertake the Masters in Intelligent Knowledge-Based Systems, a programme which aimed to give non-computer scientists a grounding in Cognitive Science and Artificial Intelligence. This course introduced him to Natural Language Processing (NLP) and lexical semantics, and in 1988 he enrolled on the DPhil programme, supervised by Gerald Gazdar and Roger Evans.
At that time, NLP had moved away from its roots in Artificial Intelligence towards more formal approaches, with increasing interest in formal lexical issues and more elaborate models of lexical structure, such as Copestake’s LKB [6] and Evans & Gazdar’s DATR [9]. In addition, the idea of improving lexical coverage by exploiting digitized versions of dictionaries was gaining currency, although the advent of large-scale corpus-based approaches was still some way off. In this context, Adam set out to explore polysemy, or as he put it himself:

What does it mean to say a word has several meanings? On what grounds do lexicographers make their judgments about the number of meanings a word has? How do the senses a dictionary lists relate to the full range of ways a word might get used? How might NLP systems deal with multiple meanings? [19, p. i]

Two further quotes from Adam’s thesis neatly summarize the broad interdisciplinarity which characterized his approach to his thesis, and throughout his research career. The first is from the Preface:

There are four kinds of thesis in cognitive science: formal, empirical, program-based and discursive. What sort was mine to be? ... I look round in delight to find [my thesis] does a little bit of all the things a cognitive science thesis might do! [19, p. vi]

while the second is in the introduction to his discussion of methodology:

We take the study of the lexicon to be intimately related to the study of the mind ... For an understanding of the lexicon, the contributing disciplines are lexicography, psycholinguistics and theoretical, computational and corpus linguistics. [19, p. 4]

The distinctions made in the first of these quotes provide a neat framework to discuss the content of the thesis in more detail. Adam’s own starting point was empirical: in two studies he demonstrated first that the range and type of sense distinctions found in a typical dictionary defied any simple systematic classification, and second that the so-called bank model of word senses (where senses from dictionaries were considered to be distinct and easy to enumerate and match to textual instances) did not in general reflect actual dictionary sense distinctions (which tend to overlap). A key practical consequence of this is that the then-current NLP WSD systems which assumed the bank model could never achieve the highest levels of performance in sense matching tasks.

From this practical exploration, Adam moved to more discursive territory. He explored the basis on which lexicographers decide which sense distinctions appear in dictionaries, and introduced an informal criterion to characterize it – the Sufficiently Frequent and Insufficiently Predictable (SFIP) condition, which essentially favors senses which are both common and non-obvious. However, he noted that while this criterion had empirical validity as a way of circumscribing polysemy in dictionaries, it did not offer any clear understanding of the nature of polysemy itself. He argued that this is because polysemy is not a ‘natural kind’ but rather a cover term for several other more specific but distinct phenomena: homonymy (the bank model), alternation (systematic usage differences), collocation (lexically contextualized usage) and analogy.
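Returning to the SFIP condition: read operationally, it is simply a conjunction of two tests on a candidate sense. A toy rendering in Python may make this concrete; Adam stated the criterion informally, not as code, and the thresholds and the predictability measure below are invented for illustration.

```python
def merits_own_sense(sense_freq: int, lemma_freq: int,
                     predictability: float,
                     min_share: float = 0.05,
                     max_predictability: float = 0.5) -> bool:
    """SFIP, informally: a sense earns its own dictionary entry if it is
    Sufficiently Frequent (a non-negligible share of the lemma's uses)
    and Insufficiently Predictable (its meaning does not follow from the
    word's other senses). All numeric thresholds here are hypothetical."""
    sufficiently_frequent = sense_freq / lemma_freq >= min_share
    insufficiently_predictable = predictability <= max_predictability
    return sufficiently_frequent and insufficiently_predictable
```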
This characterization of polysemy as a cover term led into the formal/program-based contribution of his thesis (the two collapse into one, in the spirit of logic-based programming paradigms), for which he developed two formal descriptions of lexical alternations using the inheritance-based lexical description language DATR.
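DATR describes lexicons as inheritance hierarchies in which an entry states only what is exceptional about a word and inherits everything else by default. The fragment below mimics that style in Python as a loose illustration of the idea; it is not DATR itself, and not Adam’s analyses of alternations: the morphology example and node names are invented.

```python
class Node:
    """A lexical node with DATR-style default inheritance: a path is
    looked up locally first, then in the parent node."""
    def __init__(self, parent=None, paths=None):
        self.parent = parent
        self.paths = paths or {}

    def get(self, path, query=None):
        # `query` stays bound to the node the question was asked of,
        # mirroring DATR's quoted paths, which are evaluated at the
        # original query node rather than at the defining node.
        query = query or self
        if path in self.paths:
            value = self.paths[path]
            return value(query) if callable(value) else value
        if self.parent is not None:
            return self.parent.get(path, query)
        raise KeyError(path)

# Defaults live high in the hierarchy; entries record only exceptions.
NOUN = Node(paths={"syn cat": "noun",
                   "mor plural": lambda q: q.get("mor root") + "s"})
dog = Node(parent=NOUN, paths={"mor root": "dog"})
sheep = Node(parent=NOUN, paths={"mor root": "sheep",
                                 "mor plural": "sheep"})  # local override

print(dog.get("mor plural"))    # -> "dogs"  (inherited default rule)
print(sheep.get("mor plural"))  # -> "sheep" (exception beats default)
```

The two print calls show the division of labour this style encourages: general rules live on abstract nodes, while lexical entries state only their exceptions.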