
Example-based Acquisition of Fine-grained Collocational Resources

Sara Rodríguez-Fernández¹, Roberto Carlini¹, Luis Espinosa-Anke¹, Leo Wanner¹,²
¹NLP Group, Department of Information and Communication Technologies, Pompeu Fabra University, C/ Roc Boronat, 138, 08018 Barcelona (Spain)
²Catalan Institute for Research and Advanced Studies (ICREA)
{sara.rodriguez.fernandez|roberto.carlini|luis.espinosa|leo.wanner}@upf.edu

Abstract
Collocations such as heavy rain or make [a] decision are combinations of two elements where one (the base) is freely chosen, while the choice of the other (the collocate) is restricted, depending on the base. Collocations present difficulties even to advanced language learners, who usually struggle to find the right collocate to express a particular meaning; e.g., both heavy and strong express the meaning 'intense', but while rain selects heavy, wind selects strong. Lexical Functions (LFs) describe the meanings that hold between the elements of collocations, such as 'intense', 'perform', 'create', 'increase', etc. Language resources with semantically classified collocations would be of great help for students; however, they are expensive to build, since they are manually constructed, and therefore scarce. We present an unsupervised approach to the acquisition and semantic classification of collocations according to LFs, based on word embeddings, in which, given an example of a collocation for each of the target LFs and a set of bases, the system retrieves a list of collocates for each base and LF.

Keywords: collocation retrieval, collocation semantic classification, collocation resources, second language learning, word embeddings

1. Introduction

Collocations of the kind make [a] suggestion, attend [a] lecture, heavy rain, deep thought, etc., are restricted lexical co-occurrences between two syntactically related lexical elements. One of these elements (the base) is freely chosen, while the choice of the other (the collocate) depends on the base (Hausmann, 1984; Cowie, 1994; Mel'čuk, 1995). For instance, in make [a] suggestion, the choice of suggestion is free, while the choice of make is restricted; cf., e.g., *take [a] suggestion. Such collocations are by their nature idiosyncratic and therefore also language-specific. Thus, you make [a] suggestion, but you give advice; you attend [a] lecture, but you assist [an] operation; rain is heavy, while wind is strong. In English, you take [a] walk, while in Spanish you 'give' it (dar [un] paseo) and in German you 'make' it ([einen] Spaziergang machen); in English, rain is heavy, while in Spanish and German it is 'strong' (fuerte lluvia / starker Regen). And so on. The idiosyncrasy of such collocations makes them a real challenge even for advanced second language learners (Hausmann, 1984; Bahns and Eldaw, 1993; Granger, 1998; Lewis and Conzett, 2000; Nesselhauf, 2005; Alonso Ramos et al., 2010). Nesselhauf (2005) and Farghal and Obiedat (1995) report that learners of English paraphrase their discourse in order to avoid using collocations that they do not master. In other words, a learner knows the meaning they want to express, but they fail to express it by means of a collocation, i.e., they fail to pick the collocate that conveys this meaning. Semantically annotated collocation resources would thus be of great aid. A number of collocation dictionaries already either group collocates semantically (as, e.g., the Oxford Collocations Dictionary and the BBI (Benson et al., 2010) for English) or use explicit semantic glosses (as, e.g., the MacMillan Collocations Dictionary for English, the LAF (Mel'čuk and Polguère, 2007) for French, and DiCE (http://dicesp.com) for Spanish). However, due to the high cost of their compilation, such dictionaries are often of limited coverage¹ and available only for a few languages.

In what follows, we present an unsupervised example-based approach to the automatic compilation of semantically typed collocation resources. The typology that underlies our work is that of the glossed lexical functions (LFs) (Mel'čuk, 1996), the most fine-grained semantic collocation typology available to date. Using a state-of-the-art continuous word representation, we take as input seed a single representative example of a specific LF and retrieve from a corpus the collocates of the same LF (i.e., type) for new bases. So far, we have focused in our experiments on Spanish. In the next section, we introduce the LF typology; in Section 3, we describe our methodology for the acquisition of collocation resources. Section 4 outlines the experiments carried out to assess the performance of the implementation of the methodology, and Section 5 discusses their outcome. Section 6, finally, presents the conclusions we draw from the presented work.

¹ To the best of our knowledge, collocation dictionaries of a reasonable coverage are available only for English.
2. Lexical Functions: A Semantic Collocation Typology

Collocation dictionaries such as the Oxford Collocations Dictionary or the MacMillan Collocations Dictionary group collocations in terms of semantic categories, so that language learners can easily retrieve the collocate that expresses the meaning they have in mind. However, this categorization (or classification) is not always homogeneous. For instance, in the MacMillan Dictionary, the entries for admiration and affinity contain the categories 'have' and 'show', each with their own collocates, while for other headwords, such as, e.g., ability, collocates with the meanings 'have' and 'show' are grouped under the same category; in the entry for alarm, cause and express are not assigned to any category, while for other keywords the categories 'cause' and 'show' are used (see, e.g., problem for 'cause' or admiration for 'show'); and so on. On the other hand, for some headwords the categories are very fine-grained (cf., e.g., amount, which includes glosses like 'very large', 'too large', 'rather large', 'at the limit', etc.), while for others they are much more coarse-grained (cf., e.g., analogy, for which collocates with different semantics are included under the same gloss, as, e.g., appropriate, apt, close, exact, good, helpful, interesting, obvious, perfect, simple and useful, which all belong to the category 'good'). This lack of uniformity may confuse learners, who will expect collocates that are grouped together to share similar semantic features. Still, the use of semantic categories that reveal a sufficient level of detail for the presentation of collocations in dictionaries is meaningful.

In computational lexicography, categories of different granularity have been used for the automatic classification of collocations from given lists; cf., e.g., Wanner et al. (in print), who use 16 categories for the classification of verb+noun collocations and 5 categories for the classification of adj+noun collocations; Moreno et al. (2013), who work with 5 broader categories for verb+noun collocations; or Chung-Chi et al. (2009), who also use very coarse-grained semantic categories of the type 'goodness', 'heaviness', 'measures', etc. But all of these categories have the disadvantage of being ad hoc. Therefore, we follow a different approach. Like Wanner et al. (2006), Gelbukh and Kolesnikova (2012) and also Moreno et al. (2013) in their second run of experiments, we use the semantic typology of Lexical Functions (LFs) to classify collocations, assuming that, once LF instances have been obtained, they can be grouped by lexicographers into more generic coherent semantic categories. However, in contrast to these works, we acquire and classify the collocations simultaneously, while they classify only already given collocations.

As already mentioned above, LFs (Mel'čuk, 1996) are a fine-grained semantic typology of collocations. Some examples are given below (glosses in quotes; B stands for the base):

Oper1(decision) = {make}
Oper1(idea) = {have}
Real1 ('realize / do what is expected with B')
Real1(temptation) = {succumb [to ∼], yield [to ∼]}
Real1(exam) = {pass}
Real1(piano) = {play}
IncepOper1 ('begin to do / begin to have B')
IncepOper1(fire_N) = {open}
IncepOper1(debt) = {run up, incur}
CausOper1 ('do something so that B is performed/done')
CausOper1(opinion) = {lead [to ∼]}
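To make the notation above concrete, the following minimal Python sketch shows one possible, purely illustrative way to encode such LF-typed collocation entries as data. The structure, names and glosses used here are hypothetical and are not a format prescribed by this paper.

```python
# A hypothetical, minimal encoding of LF-typed collocation entries,
# mirroring the examples above; not a format defined in the paper.
from typing import Dict, List, Tuple

# LF name -> (gloss, {base -> collocates})
LF_RESOURCE: Dict[str, Tuple[str, Dict[str, List[str]]]] = {
    "Oper1": ("", {                 # gloss not shown in the excerpt above
        "decision": ["make"],
        "idea": ["have"],
    }),
    "Real1": ("realize / do what is expected with B", {
        "temptation": ["succumb [to ~]", "yield [to ~]"],
        "exam": ["pass"],
        "piano": ["play"],
    }),
    "IncepOper1": ("begin to do / begin to have B", {
        "fire": ["open"],
        "debt": ["run up", "incur"],
    }),
    "CausOper1": ("do something so that B is performed/done", {
        "opinion": ["lead [to ~]"],
    }),
}

def collocates(lf: str, base: str) -> List[str]:
    """Return the collocates stored for a given LF and base (empty if unknown)."""
    _gloss, entries = LF_RESOURCE.get(lf, ("", {}))
    return entries.get(base, [])

print(collocates("Real1", "exam"))  # ['pass']
```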
3. Methodology for the Acquisition of Collocation Resources

Taking as inspiration the Neural Probabilistic Model (Bengio et al., 2006), Mikolov et al. (2013c) proposed an approach for computing continuous vector representations of words from large corpora by predicting words given their context, while at the same time predicting the context given an input word. The vectors computed following the approaches described in (Mikolov et al., 2013a; Mikolov et al., 2013c) have been extensively used for semantically intensive tasks, mainly because of the ability of word embeddings to capture relationships among words that are not explicitly encoded in the training data. Among these tasks are: Machine Translation (Mikolov et al., 2013b), where a translation matrix is learned from word pairs of two different languages and then applied to unseen cases in order to provide word-level translation; Knowledge Base (KB) Embedding, i.e., the transformation of structured information in KBs such as Freebase or DBpedia into continuous vectors in a shared space (Bordes et al., 2011); Knowledge Base Completion, i.e., the introduction of novel relationships into existing KBs (Lin et al., 2015); Word Similarity, Syntactic Relations, Synonym Selection and Sentiment Analysis (Faruqui et al., 2015); Word Similarity and Relatedness (Iacobacci et al., 2015); and taxonomy learning.
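As a rough illustration of the kind of embedding-space regularity this section builds on, the sketch below uses gensim and a hypothetical pretrained vector file (vectors.bin) to retrieve candidate collocates for a new base from a single seed collocation via the standard vector-offset analogy. The English seed (rain, heavy) for the meaning 'intense' and the expectation of finding strong for wind are taken from the examples in the abstract. This is only an approximation under these assumptions, not necessarily the authors' actual procedure, which is described in the remainder of Section 3 (not included in this excerpt) and is applied to Spanish.

```python
# Minimal sketch (illustrative only): retrieving candidate collocates for a
# new base from a single seed collocation, using the vector-offset property
# of word embeddings. The vector file name is hypothetical.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

def candidate_collocates(seed_base, seed_collocate, new_base, topn=10):
    """Rank words w such that new_base : w is analogous to seed_base : seed_collocate."""
    return vectors.most_similar(
        positive=[seed_collocate, new_base],
        negative=[seed_base],
        topn=topn,
    )

# Seed example for the meaning 'intense': heavy rain.
# Among the top candidates for 'wind' one would hope to find 'strong'.
# In practice, candidates would still have to be filtered, e.g., by part of
# speech and by whether they actually co-occur with the base in the corpus.
print(candidate_collocates("rain", "heavy", "wind"))
```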