Unsupervised Record Matching with Noisy and Incomplete Data

Noname manuscript No. (will be inserted by the editor)

Yves van Gennip · Blake Hunter · Anna Ma · Daniel Moyer · Ryan de Vera · Andrea L. Bertozzi

Abstract We consider the problem of duplicate detection in noisy and incomplete data: given a large data set in which each record has multiple entries (attributes), detect which distinct records refer to the same real world entity. This task is complicated by noise (such as misspellings) and missing data, which can lead to records being different, despite referring to the same entity. Our method consists of three main steps: creating a similarity score between records, grouping records together into "unique entities", and refining the groups. We compare various methods for creating similarity scores between noisy records, considering different combinations of string matching, term frequency-inverse document frequency methods, and n-gram techniques. In particular, we introduce a vectorized soft term frequency-inverse document frequency method, with an optional refinement step. We also discuss two methods to deal with missing data in computing similarity scores.

We test our method on the Los Angeles Police Department Field Interview Card data set, the Cora Citation Matching data set, and two sets of restaurant review data. The results show that the methods that use words as the basic units are preferable to those that use 3-grams. Moreover, in some (but certainly not all) parameter ranges soft term frequency-inverse document frequency methods can outperform the standard term frequency-inverse document frequency method. The results also confirm that our method for automatically determining the number of groups works well in many cases and allows for accurate results in the absence of a priori knowledge of the number of unique entities in the data set.

Keywords duplicate detection · data cleaning · data integration · record linkage · entity matching · identity uncertainty · transcription error

Y. van Gennip, University of Nottingham. E-mail: [email protected]
B. Hunter, Claremont McKenna College. E-mail: [email protected]
A. Ma, Claremont Graduate University. E-mail: [email protected]
D. Moyer, University of Southern California. E-mail: [email protected]
R. de Vera, formerly California State University, Long Beach. E-mail: [email protected]
A. L. Bertozzi, University of California, Los Angeles. E-mail: [email protected]

1 Introduction

Fast methods for matching records in databases that are similar or identical have growing importance as database sizes increase [69,71,21,43,2]. Slight errors in observation, processing, or entering data may cause multiple unlinked, nearly duplicated records to be created for a single real world entity. Furthermore, records are often made up of multiple attributes, or fields; a small error or missing entry in any one of these fields can cause duplication.

For example, one of the data sets we consider in this paper is a database of personal information generated by the Los Angeles Police Department (LAPD). Each record contains information such as first name, last name, and address. Misspellings, different ways of writing names, and even address changes over time can all lead to duplicate entries in the database for the same person.

Duplicate detection problems do not scale well. The number of comparisons required grows quadratically with the number of records, and the number of possible subsets grows exponentially. Unlinked duplicate records bloat the storage size of the database and make compression into other formats difficult. Duplicates also make analyses of the data much more complicated, much less accurate, and may render many forms of analysis impossible, as the data is no longer a true representation of the real world.

After a detailed description of the problem in Section 2 and a review of related methods in Section 3, we present in Section 4 a vectorized soft term frequency-inverse document frequency (soft TF-IDF) solution for string and record comparison. In addition to creating a vectorized version of the soft TF-IDF scheme, we also present an automated thresholding and refinement method, which uses the computed soft TF-IDF similarity scores to cluster together likely duplicates. In Section 5 we explore the performance of different variations of our method on four text databases that contain duplicates.

2 Terminology and problem statement

We define a data set D to be an n × a array in which each element of the array is a string (possibly the empty string). We refer to a column as a field, and denote the k-th field c^k. A row is referred to as a record, with r_i denoting the i-th record of the data set. An element of the array is referred to as an entry, denoted e_{i,k} (the i-th entry in the k-th field). Each entry can contain multiple features, where a feature is a string of characters.

There is significant freedom in choosing how to divide the string that makes up entry e_{i,k} into multiple features. In our implementations in this paper we compare two different methods: (1) cutting the string at white spaces and (2) dividing the string into N-grams. For example, consider an entry e_{i,k} made up of the string "Albert Einstein". Following method (1), this entry has two features: "Albert" and "Einstein". Method (2), the N-gram representation, creates features f^k_1, ..., f^k_L, corresponding to all possible substrings of e_{i,k} containing N consecutive characters (if an entry contains N characters or fewer, the full entry is considered to be a single token). Hence L is equal to the length of the string minus (N − 1). In our example, if we use N = 3, e_{i,k} contains 13 features. Ordered alphabetically (with white space " " preceding "A"), the features are

f^k_1 = " Ei", f^k_2 = "Alb", f^k_3 = "Ein", f^k_4 = "ber", f^k_5 = "ein", f^k_6 = "ert", f^k_7 = "ins", f^k_8 = "lbe", f^k_9 = "nst", f^k_10 = "rt ", f^k_11 = "ste", f^k_12 = "t E", f^k_13 = "tei".

In our applications we remove any N-grams that consist purely of white spaces. When discussing our results we will specify where we have used method (1) and where we have used method (2), by indicating whether we have used word features or N-gram features, respectively.

For each field we create a dictionary of all features in that field and then remove stop words or words that are irrelevant, such as "and", "the", "or", "None", "NA", or "" (the empty string). We refer to such words collectively as "stop words" (as is common in practice) and to this reduced dictionary as the set of features, f^k, where

f^k := {f^k_1, f^k_2, ..., f^k_{m−1}, f^k_m},

if the k-th field contains m features. This reduced dictionary represents an ordered set of unique features found in field c^k. Note that m, the number of features in f^k, depends on k, since a separate set of features is constructed for each field. To keep the notation as simple as possible, we will not make this dependence explicit in our notation. Since, in this paper, m is always used in the context of a given, fixed k, this should not lead to confusion.

We will write f^k_j ∈ e_{i,k} if the entry e_{i,k} contains the feature f^k_j. Multiple copies of the same feature can be contained in any given entry. This will be explored further in Section 3.2. Note that an entry can be "empty" if it only contains stop words, since those are not included in the set of features f^k.

We refer to a subset of records as a cluster and denote it R = {r_{t_1}, ..., r_{t_p}}, where each t_i ∈ {1, 2, ..., n} is the index of a record in the data set.

The duplicate detection problem can then be stated as follows: given a data set containing duplicate records, find clusters of records that represent a single entity, i.e., subsets containing those records that are duplicates of each other. Duplicate records, in this sense, are not necessarily identical records but can also be "near identical" records: they are allowed to vary due to spelling errors or missing entries.

3 Related methods

Numerous algorithms for duplicate detection exist, including various probabilistic methods [33], string comparison metrics [32,68], feature frequency methods [57], and hybrid methods [16]. There are many other proposed methods for data matching, record linkage, and various stages of data cleaning that have a range of success in specific applications but also come with their own limitations and drawbacks. Surveys of various duplicate detection methods can be found in [21,4,29,1,54].

Probabilistic rule based methods, such as Fellegi-Sunter based models [68], attempt to learn features and rules for record matching using conditional probabilities; however, these […]ings a priori, but they can be incorporated into our method.

Pay-as-you-go [67] or progressive duplicate detection methods [52,34] have been developed for applications in which the duplicate detection has to happen in limited time on data which is acquired in small batches or in (almost) real-time [41]. In our paper we consider the situation in which we have all data available from the start.

In [9] the authors suggest using trainable similarity measures that can adapt to the different domains from which the data originate.
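The two feature-extraction choices described in Section 2 can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function names `word_features` and `ngram_features` are our own. It reproduces the "Albert Einstein" example: method (1) yields two word features, and method (2) with N = 3 yields 13 three-grams, with " Ei" sorting first because white space precedes "A".

```python
def word_features(entry):
    """Method (1): split an entry into word features at white spaces."""
    return entry.split()

def ngram_features(entry, n=3):
    """Method (2): all substrings of n consecutive characters.

    If the entry has n characters or fewer, the whole entry is a single
    token; otherwise there are len(entry) - (n - 1) N-grams. N-grams
    consisting purely of white space are discarded, as in the paper.
    """
    if len(entry) <= n:
        return [entry]
    grams = [entry[i:i + n] for i in range(len(entry) - n + 1)]
    return [g for g in grams if not g.isspace()]

entry = "Albert Einstein"
print(word_features(entry))            # ['Albert', 'Einstein']
feats = sorted(ngram_features(entry))  # " " sorts before "A"
print(len(feats))                      # 13
print(feats[:4])                       # [' Ei', 'Alb', 'Ein', 'ber']
```

Building the per-field feature dictionary f^k then amounts to collecting these features over all entries of field c^k, removing stop words, and keeping the unique features in sorted order.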
