A Comparison of Smoothing Techniques for Bilingual Lexicon Extraction from Comparable Corpora

Total Pages: 16

File Type: PDF, Size: 1020 KB

A Comparison of Smoothing Techniques for Bilingual Lexicon Extraction from Comparable Corpora

Amir Hazem and Emmanuel Morin
Laboratoire d'Informatique de Nantes-Atlantique (LINA)
Université de Nantes, 44322 Nantes Cedex 3, France
{Amir.Hazem, Emmanuel.Morin}@univ-nantes.fr

Proceedings of the 6th Workshop on Building and Using Comparable Corpora, pages 24–33, Sofia, Bulgaria, August 8, 2013. © 2013 Association for Computational Linguistics

Abstract

Smoothing is a central issue in language modeling and a prior step in different natural language processing (NLP) tasks. However, less attention has been given to it for bilingual lexicon extraction from comparable corpora. While a first attempt to improve the extraction of low-frequency words showed significant improvement using distance-based averaging (Pekar et al., 2006), no investigation of the many smoothing techniques has been carried out so far. In this paper, we present a study of some widely used smoothing algorithms for n-gram language modeling (Laplace, Good-Turing, Kneser-Ney, ...). Our main contribution is to investigate how the different smoothing techniques affect the performance of the standard approach (Fung, 1998) traditionally used for bilingual lexicon extraction. We show that using smoothing as a pre-processing step of the standard approach increases its performance significantly.

1 Introduction

Cooccurrences play an important role in many corpus-based approaches in the field of natural language processing (Dagan et al., 1993). They represent the observable evidence that can be distilled from a corpus and are employed for a variety of applications such as machine translation (Brown et al., 1992), information retrieval (Maarek and Smadja, 1989), word sense disambiguation (Brown et al., 1991), etc. In bilingual lexicon extraction from comparable corpora, frequency counts for word pairs often serve as a basis for distributional methods, such as the standard approach (Fung, 1998), which compares the cooccurrence profile of a given source word, a vector of association scores for its translated cooccurrences (Fano, 1961; Dunning, 1993), with the profiles of all words of the target language. The distance between two such vectors is interpreted as an indicator of their semantic similarity and of their translational relation. Although using association measures to extract word translation equivalents has shown better performance than using a raw cooccurrence model, the latter remains the core of any statistical generalisation (Evert, 2005).

As is well known, words and other type-rich linguistic populations do not contain instances of all types in the population, even in the largest samples (Zipf, 1949; Evert and Baroni, 2007). Therefore, the number and distribution of types in the available sample are not reliable estimators (Evert and Baroni, 2007), especially for small comparable corpora. The literature suggests two major approaches for solving the data sparseness problem: smoothing and class-based methods. Smoothing techniques (Good, 1953) are often used to better estimate probabilities when there is insufficient data to estimate them accurately. They tend to make distributions more uniform by adjusting low probabilities (such as zero probabilities) upward and high probabilities downward. Generally, smoothing methods not only prevent zero probabilities, but they also attempt to improve the accuracy of the model as a whole (Chen and Goodman, 1999). Class-based models (Pereira et al., 1993) use classes of similar words to distinguish between unseen cooccurrences. The relationship between given words is modeled by analogy with other words that are in some sense similar to the given ones. Hence, class-based models provide an alternative to the independence assumption on the cooccurrence of given words w1 and w2: the more frequent w2 is, the higher the estimate of P(w2|w1) will be, regardless of w1.

Starting from the observation that smoothing estimates ignore the expected degree of association between words (they assign the same estimate to all unseen cooccurrences) and that class-based models may not structure and generalize word cooccurrences into class cooccurrence patterns without losing too much information, (Dagan et al., 1993) proposed an alternative to these latter approaches to estimate the probabilities of unseen cooccurrences. They presented a method that draws analogies between each specific unseen cooccurrence and other cooccurrences that contain similar words. The analogies are based on the assumption that similar word cooccurrences have similar values of mutual information. Their method showed significant improvement for both word sense disambiguation in machine translation and data recovery tasks. (Pekar et al., 2006) employed the nearest-neighbor variety of the previous approach to extract translation equivalents for low-frequency words from comparable corpora. They used a distance-based averaging technique for smoothing (Dagan et al., 1999). Their method yielded a significant improvement for low-frequency words.

Starting from the assumption that smoothing improves the accuracy of the model as a whole (Chen and Goodman, 1999), we believe that smoothed context vectors should lead to better performance for bilingual terminology extraction from comparable corpora. In this work we carry out an empirical comparison of the most widely used smoothing techniques, including additive smoothing (Lidstone, 1920), the Good-Turing estimate (Good, 1953), Jelinek-Mercer (Mercer, 1980), Katz (Katz, 1987) and Kneser-Ney smoothing (Kneser and Ney, 1995). Unlike (Pekar et al., 2006), the present work does not investigate unseen words; we concentrate only on observed cooccurrences. We believe it constitutes the most systematic comparison made so far of different smoothing techniques for aligning translation equivalents from comparable corpora. We show that using smoothing as a pre-processing step of the standard approach leads to significant improvement, even without considering unseen cooccurrences.

In the remainder of this paper, we present the different smoothing techniques in Section 2. The steps of the standard approach and our extended method are then described in Section 3. Section 4 describes the experimental setup and our resources. Section 5 presents the experiments and comments on several results. We finally discuss the results in Section 6 and conclude in Section 7.
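To make the standard approach discussed in this introduction concrete, here is a minimal illustrative sketch, not the authors' implementation: it builds context vectors of association scores from windowed cooccurrence counts and ranks translation candidates by vector similarity. The PMI-style association measure, the cosine similarity, the window size, and the function names are all assumptions made for the example, and the step of mapping the source vector's dimensions into the target language with a seed dictionary is assumed to have been done already.

```python
import math
from collections import Counter

def context_vectors(sentences, window=3):
    """Build a PMI-style association-score vector for every word
    from windowed cooccurrence counts (illustrative only)."""
    cooc, occ, total = Counter(), Counter(), 0
    for sent in sentences:
        for i, w in enumerate(sent):
            occ[w] += 1
            total += 1
            for c in sent[max(0, i - window):i]:
                cooc[(w, c)] += 1
                cooc[(c, w)] += 1
    vectors = {}
    for (w, c), n in cooc.items():
        pmi = math.log(n * total / (occ[w] * occ[c]))
        vectors.setdefault(w, {})[c] = max(pmi, 0.0)  # keep positive scores only
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_candidates(translated_source_vector, target_vectors):
    """Rank target-language words by the similarity of their context
    vectors to the (already translated) source-word vector."""
    return sorted(target_vectors,
                  key=lambda w: cosine(translated_source_vector, target_vectors[w]),
                  reverse=True)
```

Smoothing, as studied in this paper, would be applied to the cooccurrence counts before the association scores are computed.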
2 Smoothing Techniques

Smoothing describes techniques for adjusting the maximum likelihood estimate of probabilities in order to produce more accurate probabilities. Smoothing techniques tend to make distributions more uniform. In this section we present the most widely used techniques.

2.1 Additive Smoothing

The Laplace estimator, or additive smoothing (Lidstone, 1920; Johnson, 1932; Jeffreys, 1948), is one of the simplest types of smoothing. Its principle is to estimate probabilities P assuming that each unseen word type actually occurred once. Then, if we have N events and V possible words, instead of:

$$P(w) = \frac{occ(w)}{N} \qquad (1)$$

we estimate:

$$P_{addone}(w) = \frac{occ(w) + 1}{N + V} \qquad (2)$$

Applying Laplace estimation to word cooccurrences amounts to supposing that if two words cooccur together n times in a corpus, they can cooccur together (n + 1) times. According to the maximum likelihood estimation (MLE):

$$P(w_{i+1} \mid w_i) = \frac{C(w_i, w_{i+1})}{C(w_i)} \qquad (3)$$

With Laplace smoothing:

$$P^{*}(w_{i+1} \mid w_i) = \frac{C(w_i, w_{i+1}) + 1}{C(w_i) + V} \qquad (4)$$

Several disadvantages emanate from this method:

1. The probability of frequent n-grams is underestimated.
2. The probability of rare or unseen n-grams is overestimated.
3. All the unseen n-grams are smoothed in the same way.
4. Too much probability mass is shifted towards unseen n-grams.

One improvement is to use a smaller added count, following the equation below:

$$P^{*}(w_{i+1} \mid w_i) = \frac{\delta + C(w_i, w_{i+1})}{\delta |V| + C(w_i)} \qquad (5)$$

with $\delta \in \, ]0, 1]$.
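As a concrete illustration of equations (2) to (5), the following is a minimal sketch, not code from the paper, of an add-delta (Lidstone) estimator for bigram probabilities; the function names and the toy corpus are assumptions made for the example.

```python
from collections import Counter

def additive_bigram_prob(bigram_counts, unigram_counts, vocab_size, delta=1.0):
    """Return P*(w_next | w) with add-delta smoothing, eq. (4)/(5):
    (delta + C(w, w_next)) / (delta * |V| + C(w))."""
    def prob(w, w_next):
        return (delta + bigram_counts[(w, w_next)]) / (delta * vocab_size + unigram_counts[w])
    return prob

# Toy example: counts gathered from a tiny corpus.
tokens = "the cat sat on the mat the cat slept".split()
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
V = len(unigrams)

p_laplace = additive_bigram_prob(bigrams, unigrams, V, delta=1.0)   # eq. (4)
p_lidstone = additive_bigram_prob(bigrams, unigrams, V, delta=0.5)  # eq. (5)

print(p_laplace("the", "cat"))    # seen bigram
print(p_laplace("cat", "on"))     # unseen bigram still gets non-zero mass
print(p_lidstone("cat", "on"))    # a smaller delta shifts less mass to unseen events
```

Comparing the last two lines shows the effect listed as disadvantage 4 above: with delta = 1 more probability mass is moved to unseen bigrams than with delta = 0.5.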
2.2 Good-Turing Estimator

The Good-Turing estimator (Good, 1953) provides another way to smooth probabilities. It states that for any n-gram that occurs r times, we should pretend that it occurs r* times. The Good-Turing estimator uses the count of things you have seen once to help estimate the count of things you have never seen. In order to compute the frequency of words, we need to compute N_c, the number of events that occur c times (assuming that all items are binomially distributed). Let N_r be the number of items that occur r times. N_r can be used to provide a better estimate of r, given the binomial distribution. The adjusted frequency r* is then:

$$r^{*} = (r + 1)\,\frac{N_{r+1}}{N_r} \qquad (6)$$

2.3 Jelinek-Mercer Smoothing

As one alternative to missing n-grams, useful in- [...] Good-Turing estimator in the linear interpolation as follows:

$$c^{*}_{int}(w_{i+1} \mid w_i) = \lambda\, c^{*}(w_{i+1} \mid w_i) + (1 - \lambda)\, P(w_i) \qquad (8)$$

2.4 Katz Smoothing

(Katz, 1987) extends the intuitions of the Good-Turing estimate by adding the combination of higher-order models with lower-order models. For a bigram w_{i-1}^{i} with count r = c(w_{i-1}^{i}), its corrected count is given by the equation:

$$c_{katz}(w_{i-1}^{i}) = \begin{cases} r & \text{if } r > 0 \\ \alpha(w_{i-1})\, P_{ML}(w_i) & \text{if } r = 0 \end{cases} \qquad (9)$$

and:

$$\alpha(w_{i-1}) = \frac{1 - \sum_{w_i : c(w_{i-1}^{i}) > 0} P_{katz}(w_{i-1}^{i})}{1 - \sum_{w_i : c(w_{i-1}^{i}) > 0} P_{ML}(w_{i-1}^{i})} \qquad (10)$$

According to (Katz, 1987), the general discounted estimate c* of Good-Turing is not used for all counts c. Large counts where c > k for some threshold k are assumed to be reliable. (Katz, 1987) suggests k = 5. Thus, we define c* = c for c > k, and otherwise:

$$c^{*} = \frac{(c + 1)\,\frac{N_{c+1}}{N_c} - c\,\frac{(k+1)\,N_{k+1}}{N_1}}{1 - \frac{(k+1)\,N_{k+1}}{N_1}} \qquad (11)$$

2.5 Kneser-Ney Smoothing

Kneser and Ney introduced an extension of absolute discounting (Kneser and Ney, 1995). The estimate of the higher-order distribution is created by subtracting a fixed discount D from each non-zero count.
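As a rough illustration of equations (6) and (11), again not code from the paper, the sketch below computes Good-Turing adjusted counts from a count-of-counts table N_c and applies the Katz threshold k, trusting raw counts above the threshold. The variable names and the toy table are assumptions; a real implementation would also smooth the N_c curve so that empty buckets do not make the estimate undefined.

```python
def good_turing_adjusted(c, N):
    """Good-Turing adjusted count r* = (c + 1) * N_{c+1} / N_c  (eq. 6).
    N maps a count value to the number of items seen with that count."""
    if N.get(c, 0) == 0 or N.get(c + 1, 0) == 0:
        return float(c)  # undefined bucket: keep the raw count in this toy sketch
    return (c + 1) * N[c + 1] / N[c]

def katz_discounted(c, N, k=5):
    """Katz discounted count (eq. 11): counts above the threshold k are trusted,
    smaller counts are discounted with the Good-Turing ratio."""
    if c > k:
        return float(c)
    gt = (c + 1) * N[c + 1] / N[c]
    mass = (k + 1) * N[k + 1] / N[1]
    return (gt - c * mass) / (1 - mass)

# Toy count-of-counts table: 50 items seen once, 20 seen twice, and so on.
N = {1: 50, 2: 20, 3: 10, 4: 6, 5: 4, 6: 3}

for c in range(1, 8):
    print(c, good_turing_adjusted(c, N), katz_discounted(c, N))
```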
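The excerpt breaks off just as the Kneser-Ney description starts, so the following is a hedged sketch of the standard interpolated Kneser-Ney bigram estimate built on the idea stated above (subtract a fixed discount D from each non-zero count and interpolate with a continuation probability). It follows the textbook formulation (Kneser and Ney, 1995; Chen and Goodman, 1999) rather than anything given in this excerpt, and the discount value, function names, and toy sentence are assumptions made for the example.

```python
from collections import Counter, defaultdict

def kneser_ney_bigram(tokens, D=0.75):
    """Interpolated Kneser-Ney bigram estimate (textbook form):
    P(w_next | w) = max(c(w, w_next) - D, 0) / c(w)
                    + (D * |{w': c(w, w') > 0}| / c(w)) * P_cont(w_next),
    where P_cont(w) is the fraction of distinct bigram types ending in w."""
    bigrams = Counter(zip(tokens, tokens[1:]))
    unigrams = Counter(tokens[:-1])
    followers = defaultdict(set)   # w -> distinct continuations of w
    preceders = defaultdict(set)   # w -> distinct left contexts of w
    for (w, w_next) in bigrams:
        followers[w].add(w_next)
        preceders[w_next].add(w)
    total_bigram_types = len(bigrams)

    def prob(w, w_next):
        p_cont = len(preceders[w_next]) / total_bigram_types
        if unigrams[w] == 0:
            return p_cont  # unseen history: fall back to the continuation probability
        discounted = max(bigrams[(w, w_next)] - D, 0.0) / unigrams[w]
        backoff_weight = D * len(followers[w]) / unigrams[w]
        return discounted + backoff_weight * p_cont
    return prob

tokens = "the cat sat on the mat the dog sat on the rug".split()
p = kneser_ney_bigram(tokens)
print(p("on", "the"))   # frequent, observed bigram
print(p("on", "dog"))   # unseen bigram: mass comes from the continuation term
```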
Recommended publications
  • Sequence Models
  • Introduction to Machine Learning Lecture 3
  • Additive Smoothing for Relevance-Based Language Modelling of Recommender Systems
  • Smoothing Parameter and Model Selection for General Smooth Models
  • Introduction to Naivebayes Package
  • Adding Improvements to Multinomial Naive Bayes for Increasing the Accuracy of Aggressive Tweets Classification
  • Efficient Learning of Smooth Probability Functions from Bernoulli Tests with Guarantees
  • Bias/Variance Tradeoff
  • Package 'Naivebayes'
  • Language Modeling and Probability
  • How to Count Thumb-Ups and Thumb-Downs: User-Rating Based Ranking of Items from an Axiomatic Perspective
  • Language Models