Adaptive Name Matching in Information Integration


Information Integration on the Web

Mikhail Bilenko and Raymond Mooney, University of Texas at Austin
William Cohen, Pradeep Ravikumar, and Stephen Fienberg, Carnegie Mellon University

IEEE Intelligent Systems, September/October 2003. 1094-7167/03/$17.00 © 2003 IEEE. Published by the IEEE Computer Society.

Identifying approximately duplicate database records that refer to the same entity is essential for information integration. The authors compare and describe methods for combining and learning similarity measures for name matching.

When you combine information from heterogeneous information sources, you must identify data records that refer to equivalent entities. However, records that describe the same object might differ syntactically—for example, the same person can be referred to as "William Jefferson Clinton" and "bill clinton." Figure 1 presents more complex examples of duplicate records that are not identical.

Figure 1. Sample duplicate records from (a) a restaurant database and (b) a scientific citation database.

(a)
Name                 Address            City            Phone         Cuisine
Fenix                8358 Sunset Blvd.  West Hollywood  213/848-6677  American
Fenix at the Argyle  8358 Sunset Blvd.  W. Hollywood    213-848-6677  French (New)

(b)
L.P. Kaelbling. An architecture for intelligent reactive systems. In Reasoning About Actions and Plans: Proceedings of the 1986 Workshop. Morgan Kaufmann, 1986.
Kaelbling, L.P., 1987. An architecture for intelligent reactive systems. In M.P. Georgeff & A.L. Lansky, eds., Reasoning about Actions and Plans, Morgan Kaufmann, Los Altos, CA, pp. 395–410.

Variations in representation across information sources can arise from differences in the formats that store data, typographical and optical character recognition (OCR) errors, and abbreviations. Variations are particularly pronounced in data that's automatically extracted from Web pages and unstructured or semistructured documents, making the matching task essential for information integration on the Web.

Researchers have investigated the problem of identifying duplicate objects under several monikers, including record linkage, merge-purge, duplicate detection, database hardening, identity uncertainty, coreference resolution, and name matching. Such diversity reflects research in several areas: statistics, databases, digital libraries, natural language processing, and data mining. The sidebar summarizes various traditional approaches to name matching.

Our research explores approaches to the name-matching problem that improve accuracy. Particularly, we employ methods that adapt to a specific domain by combining multiple string similarity methods that capture different notions of similarity.

Static string similarity metrics

Most methods we discuss in the sidebar use metrics that measure the similarity between individual record fields. As Figure 1 suggests, individual record fields are often stored as strings. This means that functions that accurately measure two strings' similarity are important in duplicate identification. We examine a few effective and widely used metrics for measuring similarity.

Edit distance

An important class of such metrics are edit distances. Here, the distance between strings s and t is the cost of the best sequence of edit operations that converts s to t. For example, consider mapping the string s = "Willlaim" to t = "William" using these edit operations:

• Copy the next letter in s to the next position in t.
• Insert a new letter in t that does not appear in s.
• Substitute a different letter in t for the next letter in s.
• Delete the next letter in s; that is, don't copy it to t.

Table 1 shows one possible sequence of operations. (The vertical bar represents a "cursor" in s or t, indicating the next letter.) If the copy operation has cost zero and all other operations have cost one, this sequence costs three. (It is not quite the least expensive sequence: substituting "i" for the third "l" and then deleting the leftover "i" converts s to t at cost two, so the edit distance between s and t is two.)

A fairly efficient scheme exists for computing the lowest-cost edit sequence for these operations. The trick is to consider a slightly more complex function, D(s, t, i, j), which is the edit distance between the first i letters of s and the first j letters of t. Let s_i denote the ith letter of s, and, similarly, let t_j be the jth letter of t. It's easy to see that we can define D(s, t, i, j) recursively, where D(s, t, 0, 0) = 0 and

  D(s, t, i, j) = min of:
    D(s, t, i−1, j−1),      if s_i = t_j and you copy s_i to t
    D(s, t, i−1, j−1) + 1,  if you substitute t_j for s_i
    D(s, t, i, j−1) + 1,    if you insert the letter t_j
    D(s, t, i−1, j) + 1,    if you delete the letter s_i        (1)
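As an illustration, the recursion of Equation 1 can be implemented by filling in a table of D values; the following minimal Python sketch (the function name is illustrative, not from the article) uses unit costs for everything but copies:

```python
def edit_distance(s: str, t: str) -> int:
    """Unit-cost edit distance via the recursion of Equation 1.

    D[i][j] holds the distance between the first i letters of s and
    the first j letters of t; copies cost 0, all other operations 1.
    """
    m, n = len(s), len(t)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = i                      # delete the first i letters of s
    for j in range(1, n + 1):
        D[0][j] = j                      # insert the first j letters of t
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            copy_or_sub = D[i - 1][j - 1] + (0 if s[i - 1] == t[j - 1] else 1)
            insert = D[i][j - 1] + 1     # insert letter t_j
            delete = D[i - 1][j] + 1     # delete letter s_i
            D[i][j] = min(copy_or_sub, insert, delete)
    return D[m][n]

print(edit_distance("Willlaim", "William"))  # -> 2
```

Filling the table row by row guarantees that the three subproblems each entry depends on are already computed.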
We can evaluate this recursive definition efficiently using dynamic programming techniques. Specifically, for a fixed s and t, we can store the D(s, t, i, j) values in a matrix that is filled in a particular order. The total computational effort for D(s, t, |s|, |t|), the edit distance between s and t, is thus approximately O(|s||t|).

Table 1. An example of an edit-distance computation.

s           t          Operation
|Willlaim   |
W|illlaim   W|         Copy "W"
Wi|lllaim   Wi|        Copy "i"
Wil|llaim   Wil|       Copy "l"
Will|laim   Will|      Copy "l"
Willl|aim   Will|      Delete "l"
Willla|im   Willi|     Substitute "i" for "a"
Willlai|m   Willia|    Substitute "a" for "i"
Willlaim|   William|   Copy "m"

Edit distance metrics are widely used—not only for text processing but also for biological sequence alignment1—and many variations are possible. The simple unit-cost metric just described is usually called Levenshtein distance. The Needleman-Wunsch distance is a natural extension to Levenshtein distance that introduces additional parameters defining each possible character substitution's cost and the cost of insertions and deletions. We can implement it straightforwardly by modifying Equation 1—replacing the second term of the min with something such as D(s, t, i − 1, j − 1) + substitutionCost(s_i, t_j). The Smith-Waterman distance lets us easily modify the metric to discount mismatching text at the beginning and the end of strings. The affine gap cost is another widely used variation that introduces two costs for insertion: one for inserting the first character and a second (usually lower) cost for inserting additional characters.
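To illustrate the Needleman-Wunsch-style generalization, here is a sketch with a pluggable substitution cost and separate insertion and deletion costs; the example cost function and its values are illustrative assumptions, not parameters from the article:

```python
def weighted_edit_distance(s, t, sub_cost, ins_cost=1.0, del_cost=1.0):
    """Edit distance with a character-pair substitution cost and
    separate insertion/deletion costs (Needleman-Wunsch style)."""
    m, n = len(s), len(t)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = D[i - 1][0] + del_cost
    for j in range(1, n + 1):
        D[0][j] = D[0][j - 1] + ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = min(
                D[i - 1][j - 1] + sub_cost(s[i - 1], t[j - 1]),  # copy/substitute
                D[i][j - 1] + ins_cost,                          # insert t_j
                D[i - 1][j] + del_cost,                          # delete s_i
            )
    return D[m][n]

# Illustrative cost function: copies are free, changing only the case
# of a letter is cheap, and any other substitution costs 1.
def case_insensitive_cost(a, b):
    if a == b:
        return 0.0
    if a.lower() == b.lower():
        return 0.1
    return 1.0

print(weighted_edit_distance("bill clinton", "Bill Clinton", case_insensitive_cost))
```

With a cost function of 0 for identical characters and 1 otherwise, this reduces to the unit-cost Levenshtein distance of Equation 1.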
In information integration, Alvaro Monge and Charles Elkan performed several experiments in which they used an affine-cost variant of the Smith-Waterman distance to perform duplicate detection.2,3

The Jaro metric and variants

Another effective similarity metric is the Jaro metric,4–6 which is based on the number and order of common characters between two strings. Given strings s = a_1…a_K and t = b_1…b_L, define a character a_i in s to be "in common" with t iff there is a b_j = a_i in t such that i − H ≤ j ≤ i + H, where H = min(|s|, |t|)/2. Let s′ = a′_1…a′_K′ be the characters in s that are in common with t (in the same order they appear in s), and let t′ = b′_1…b′_L′ be analogous. Then define a transposition for s′, t′ to be a position i such that a′_i ≠ b′_i. Let T_{s′,t′} be one-half the number of transpositions for s′ and t′. The Jaro metric for s and t is

  Jaro(s, t) = 1/3 · ( |s′|/|s| + |t′|/|t| + (|s′| − T_{s′,t′})/|s′| ).

To better understand the intuition behind this metric, consider the matrix M in Figure 2, which compares the strings s = "WILLLAIM" and t = "WILLIAM." The boxed entries are the main diagonal, and M(i, j) = 1 if and only if the ith character of s equals the jth character of t. The Jaro metric is based on the number of characters in s that are in common with t. In terms of the matrix M of the figure, the ith character of s is in common with t if M(i, j) = 1 for some entry (i, j) that is "sufficiently close" to M's main diagonal, where sufficiently close means that |i − j| < min(|s|, |t|)/2 (shown in the matrix in bold).

      W I L L I A M
  W   1 0 0 0 0 0 0
  I   0 1 0 0 1 0 0
  L   0 0 1 1 0 0 0
  L   0 0 1 1 0 0 0
  L   0 0 1 1 0 0 0
  A   0 0 0 0 0 1 0
  I   0 1 0 0 1 0 0
  M   0 0 0 0 0 0 1

Figure 2. The Jaro metric. The boxed entries are the main diagonal, and each bold character is in common with the string "WILLIAM" ("WILLLAIM").

William Winkler proposed a variant of the Jaro metric that also uses the length P of the longest common prefix of s and t.6 Letting P′ = min(P, 4), we define

  Jaro-Winkler(s, t) = Jaro(s, t) + (P′/10) · (1 − Jaro(s, t)).

This emphasizes matches in the first few characters. The Jaro and Jaro-Winkler metrics seem to be intended primarily for short strings (for example, personal first or last names).
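A minimal sketch of the Jaro and Jaro-Winkler computations follows. It resolves one detail the definition above leaves open by using a common greedy one-to-one matching scheme for finding the common characters, under which |s′| = |t′|:

```python
def jaro(s: str, t: str) -> float:
    """Jaro similarity, based on the number and order of common characters."""
    if s == t:
        return 1.0
    if not s or not t:
        return 0.0
    H = min(len(s), len(t)) // 2          # matching window
    t_used = [False] * len(t)
    s_common = []                         # s': chars of s in common with t
    for i, ch in enumerate(s):
        for j in range(max(0, i - H), min(len(t), i + H + 1)):
            if not t_used[j] and t[j] == ch:  # greedy one-to-one match
                t_used[j] = True
                s_common.append(ch)
                break
    if not s_common:
        return 0.0
    t_common = [t[j] for j in range(len(t)) if t_used[j]]  # t', in order
    # T is one-half the number of positions where s' and t' disagree.
    T = sum(a != b for a, b in zip(s_common, t_common)) / 2
    c = len(s_common)
    return (c / len(s) + c / len(t) + (c - T) / c) / 3

def jaro_winkler(s: str, t: str) -> float:
    """Winkler's variant: boost by the common-prefix length P, capped at 4."""
    j = jaro(s, t)
    P = 0
    for a, b in zip(s, t):
        if a != b or P == 4:
            break
        P += 1
    return j + (P / 10) * (1 - j)
```

For the classic pair ("MARTHA", "MARHTA"), all six characters match and the H/T swap yields one transposition, so jaro returns 17/18 ≈ 0.944 and the prefix boost raises jaro_winkler to about 0.961.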
Token-based and hybrid distances

In many situations, word order is unimportant. For instance, the strings "Ray Mooney" and "Mooney, Ray" are likely to be duplicates, even if they aren't close in edit distance. In such cases, we might convert the strings s and t to token multisets (where each token is a word) and consider similarity metrics on these multisets.

One simple and often effective token-based metric is Jaccard similarity. Between the word sets S and T, Jaccard similarity is simply (|S ∩ T|)/(|S ∪ T|). We can define term frequency-inverse document frequency (TF-IDF) or cosine similarity—which the information retrieval community widely uses—as

  TF-IDF(S, T) = Σ_{w ∈ S ∩ T} V(w, S) · V(w, T),

where V(w, S) is a normalized TF-IDF weight of the word w in S.

Sidebar: Traditional Name-Matching Approaches

Record linkage—the task of matching equivalent records that differ syntactically—was first explored in the late 1950s and 1960s.1 Ivan Fellegi and Alan Sunter's seminal paper—where they studied record linkage in the context of matching population records—provides a theoretical foundation for subsequent work on the problem.2 They described several key insights that still lie at the base of many modern name-matching systems.

To keep the number of comparisons manageable, records can be sorted so that candidates for matching lie in nearby blocks ("windows").3 Alternatively, the canopies method uses a computationally cheap and general string similarity metric such as term frequency-inverse document frequency (TF-IDF) cosine similarity to create overlapping clusters that contain possible matching pairs.4

We can roughly categorize methods for improving matching accuracy by how much human expertise they require and the extent to which they use machine learning and probabilistic methods.
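The token-based metrics described in the article can be sketched as follows. The IDF smoothing and the log-scaled term frequency used for V(w, S) are assumptions here, since weighting conventions vary; the corpus used to estimate document frequencies is supplied by the caller:

```python
import math
from collections import Counter

def jaccard(s: str, t: str) -> float:
    """Jaccard similarity between the word sets of two strings."""
    S, T = set(s.split()), set(t.split())
    if not S or not T:
        return 0.0
    return len(S & T) / len(S | T)

def tfidf_cosine(s: str, t: str, corpus: list) -> float:
    """TF-IDF cosine similarity: sum over shared words of the product of
    length-normalized TF-IDF weights, with IDF estimated from `corpus`."""
    N = len(corpus)
    docs = [set(d.split()) for d in corpus]
    def weights(text):
        tf = Counter(text.split())
        w = {}
        for word, f in tf.items():
            df = sum(word in d for d in docs)
            idf = math.log(N / (1 + df))      # smoothed IDF (an assumption)
            w[word] = math.log(f + 1) * idf   # log-scaled TF (an assumption)
        norm = math.sqrt(sum(v * v for v in w.values())) or 1.0
        return {word: v / norm for word, v in w.items()}
    V_s, V_t = weights(s), weights(t)
    return sum(V_s[w] * V_t[w] for w in V_s.keys() & V_t.keys())
```

Because both metrics operate on word sets, reordered names such as "Ray Mooney" and "Mooney Ray" score as identical, while strings with no words in common score zero.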