Word normalization in Indian languages. Prasad Pingali and Vasudeva Varma. In the Proceedings of the 4th International Conference on Natural Language Processing (ICON 2005), December 2005. Report No: IIIT/TR/2008/81, Centre for Search and Information Extraction Lab, International Institute of Information Technology, Hyderabad - 500 032, INDIA. June 2008.

Word normalization in Indian languages

Prasad Pingali and Vasudeva Varma
Language Technologies Research Centre, IIIT, Hyderabad, India
[email protected], [email protected]

Abstract

Indian language words face spelling standardization issues, resulting in multiple spelling variants for the same word. The major reasons for this phenomenon can be attributed to the phonetic nature of Indian languages and their multiple dialects, the transliteration of proper names, words borrowed from foreign languages, and the phonetic variety in Indian language alphabets. Given such variations in spelling, it becomes difficult to build web Information Retrieval applications for Indian languages, since finding relevant documents requires more than performing an exact string match. In this paper we examine the characteristics of such word spelling variations and explore how to model them computationally. We compare a set of language specific rules with several approximate string matching algorithms in our evaluation.

1 Problem statement

India is rich in languages, boasting not only the indigenous sprouting of Dravidian and Indo-Aryan tongues, but the absorption of Middle-Eastern and European influences as well. This richness is also evident in the written form of the languages. A remarkable feature of the alphabets of India is the manner in which they are organised: according to a phonetic principle, unlike the Roman alphabet, which has a random sequence of letters. This richness has also led to a set of problems over a period of time. The variety in the alphabet, the different dialects and the influence of foreign languages have resulted in spelling variations of the same word. Some of these variations can be treated as writing errors, while others are too widely used to be called errors. In this paper we consider all types of spelling variations of a word in the language.

This study on Indian language words is part of a web search engine project for Indian languages. Real web data can be quite problematic, yet Information Retrieval and web search systems rarely mention the problems of real-world web data explicitly. When comparing strings from the web, they assume the data to be homogeneous and comparable across different sources. In practice, real web data contains many string variations that need to be handled, and in Indian languages such variations occur far more often. Some of the reasons we could identify are the phonetic nature of Indian languages, the larger size of the alphabet, the lack of standardization in the use of that alphabet, words entering from foreign languages such as English and Persian, and, not least, variations in the transliteration of proper names. In order to quantify these issues, we randomly picked 10 Hindi and 10 Telugu news articles and manually counted the number of proper names and words borrowed from English. We found that on average 5.19% of the words were proper names in the Hindi documents and 4.8% in the Telugu documents, while on average 5.73% of the words were borrowed from English in the Hindi documents and 6.9% in the Telugu documents. Therefore, apart from Indian language words, we should also be able to handle proper names and English words transliterated into Indian languages, since they form a substantial percentage of words. To give an idea of the data problem, the following words were found on various websites:

अँगरेजी, अँगरेजी, अँगेजी, अँगेजी, अंगरेजी, अंगरेजी, अंगेजी, अंगेजी
अनतरराषरीय, अनतरराषरीय, अनतरारिषरय, अनतरारषरीय, अंतरराषरीय, अंतराषर रीय, अंतरराषरीय, अनतरािषरय, अनतराषरीय

We found empirically that there is a lot of disagreement among website authors with regard to the spellings of words: 65,774 out of 278,529 words had spelling variations, and these 65,774 variant forms correspond to 28,038 distinct words. Therefore about 23.61% of the Indian language words we encountered had at least one variant, and a word with variants had about 2.34 variant forms on average. We also found that the more websites we studied, the greater the amount of disagreement. This phenomenon was observed in other Indian languages as well, such as Telugu, Tamil and Bengali. Given such a large percentage of affected words, it becomes important to study the characteristics of these spelling variations and to see whether we can model them computationally.

We propose two solutions for this problem and compare them. One solution is to come up with a set of rules specific to a language which can handle such variations; this could result in more precise performance, but it is not scalable to new languages, since a separate program needs to be written for each Indian language. The other solution is to use approximate string matching algorithms. Such algorithms are easily extensible to other languages but may not perform as well as language specific rules in terms of precision.

2 Rule based algorithm

In this section we discuss an algorithm using a set of language specific rules, taking Hindi as an example. In this algorithm we achieve normalization of words by mapping the alphabet of the given language L into another alphabet L', where L' ⊂ L. Before discussing the actual rules we introduce the chandra-bindu, bindu, nukta, halanth, maatra and chandra of the Hindi alphabet, which are referred to in the rules. A chandra-bindu is a half-moon with a dot, which has the function of vowel nasalization. A bindu (also called anusvar) is a dot written on top of a consonant which achieves consonant nasalization. A nukta is a dot under a consonant which produces sounds mostly used in words of Persian and Arabic origin. A halanth is a consonant reducer. A maatra is a vowel character that occurs in combination with a consonant. A chandra is a special character which achieves vowel rounding, such as the sound of 'o' in the word 'documentary'. The following rules are applied to words, before two words are compared, to achieve normalization.

if found              map to                           Examples
chandra-bindu         bindu                            अँगेज, अंगेज
consonant + nukta     corresponding consonant          अंगेज, अंगेज
consonant + halanth   corresponding consonant          अँगरेज, अँगेज
longer vowel maatra   equivalent shorter vowel maatra  अनतरारिषरय, अनतरारषरीय
character + chandra   corresponding character          डॉकयमुटेरी, डाकयमुटेरी

Table 1: Rules applied to achieve normalization in Hindi.

While we employed these basic rules, we also tried mapping aspirated consonants to their unaspirated counterparts. We found that this operation did not yield much gain in recall and deteriorated precision, so we dropped this feature from our algorithm.
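For concreteness, the rules in Table 1 can be realized as a small character mapping routine. The Python sketch below is illustrative only: the specific Unicode code points, the longer-to-shorter maatra pairs and the chandra mapping chosen here are assumptions made for the sketch, not the exact rule set used in the experiments.

```python
# Illustrative sketch of the Table 1 normalization rules for Hindi.
# The code points and mapping choices below are assumptions made for
# this sketch; the paper does not enumerate them explicitly.
import unicodedata

CHANDRABINDU = "\u0901"   # candrabindu
BINDU        = "\u0902"   # anusvara (bindu)
NUKTA        = "\u093C"   # nukta
HALANTH      = "\u094D"   # virama (halanth)

# longer vowel maatra -> equivalent shorter vowel maatra (assumed pairs)
MAATRA_MAP = {
    "\u0940": "\u093F",   # long ii sign -> short i sign
    "\u0942": "\u0941",   # long uu sign -> short u sign
}

# character + chandra -> corresponding character (assumed mapping,
# e.g. the candra-O sign becomes the plain AA sign, as in 'documentary')
CHANDRA_MAP = {
    "\u0949": "\u093E",   # candra O sign -> AA sign
    "\u0945": "\u093E",   # candra E sign -> AA sign
}

def normalize_hindi(word: str) -> str:
    # Decompose so that precomposed nukta consonants split into
    # base consonant + nukta, which the nukta rule can then strip.
    word = unicodedata.normalize("NFD", word)
    out = []
    for ch in word:
        if ch == CHANDRABINDU:
            out.append(BINDU)            # rule 1: chandra-bindu -> bindu
        elif ch in (NUKTA, HALANTH):
            continue                     # rules 2-3: drop nukta / halanth
        elif ch in MAATRA_MAP:
            out.append(MAATRA_MAP[ch])   # rule 4: longer -> shorter maatra
        elif ch in CHANDRA_MAP:
            out.append(CHANDRA_MAP[ch])  # rule 5: chandra -> plain character
        else:
            out.append(ch)
    return unicodedata.normalize("NFC", "".join(out))

# Two spelling variants are treated as the same word when
# normalize_hindi(variant1) == normalize_hindi(variant2).
```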
3 Approximate string matching algorithms

We used a set of approximate string matching algorithms from the SecondString project (http://secondstring.sourceforge.net) to evaluate to what extent they help solve the problem of normalizing Indian language words. We briefly discuss each of these algorithms in this section before proceeding to the experimental results.

Approximate string matching algorithms decide whether two given strings are equal by using a distance function between the two strings. Distance functions map a pair of strings s and t to a real number r, where a smaller value of r indicates greater similarity between s and t. Similarity functions are analogous, except that larger values indicate greater similarity; at some risk of confusion to the reader, we will use these terms interchangeably, depending on which interpretation is most natural. One important class of distance functions are edit distances, in which the distance is the cost of the best sequence of edit operations that converts s to t. Typical edit operations are character insertion, deletion and substitution, and each operation must be assigned a cost. We consider two edit-distance functions. The simple Levenshtein distance assigns a unit cost to all edit operations. As an example of a more complex, well-tuned distance function, we also consider the Monge-Elkan distance function (Monge & Elkan 1996), which is an affine variant of the Smith-Waterman distance function (Durbin et al. 1998) with particular cost parameters, scaled to the interval [0,1].
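As a reference point for the unit cost edit distance model just described, the following Python sketch computes the Levenshtein distance with the standard dynamic program. The experiments rely on the SecondString implementations, so this sketch is purely illustrative.

```python
def levenshtein(s: str, t: str) -> int:
    """Unit-cost edit distance: the minimum number of insertions,
    deletions and substitutions needed to turn s into t."""
    # prev[j] holds the distance between the first i-1 characters of s
    # and the first j characters of t (the previous row of the table).
    prev = list(range(len(t) + 1))
    for i, sc in enumerate(s, start=1):
        curr = [i]
        for j, tc in enumerate(t, start=1):
            cost = 0 if sc == tc else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution or match
        prev = curr
    return prev[-1]

# e.g. levenshtein("anusvar", "anuswar") == 1
```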
A broadly similar metric, which is not based on an edit-distance model, is the Jaro metric (Jaro 1995; 1989; Winkler 1999). In the record-linkage literature, good results have been obtained using variants of this method, which is based on the number and order of the characters common to two strings. Given strings s = a_1...a_K and t = b_1...b_L, define a character a_i in s to be common with t if there is a b_j = a_i in t such that i - H ≤ j ≤ i + H, where H = min(|s|, |t|)/2. Let s' = a'_1...a'_{K'} be the characters in s which are common with t (in the same order in which they appear in s), and let t' = b'_1...b'_{L'} be analogous. Now define a transposition for s', t' to be a position i such that a'_i ≠ b'_i, and let T_{s',t'} be half the number of transpositions for s' and t'. The Jaro similarity metric for s and t is

    Jaro(s, t) = 1/3 * ( |s'|/|s| + |t'|/|t| + (|s'| - T_{s',t'}) / |s'| )

A variant of this metric due to Winkler (1999) also uses the length P of the longest common prefix of s and t.
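A minimal Python sketch of this computation is given below. The greedy pairing of common characters and the prefix-scaled Winkler adjustment at the end follow the standard formulations of these metrics; they are included only for illustration and are assumptions about, rather than a description of, the SecondString implementations used in the experiments.

```python
def jaro(s: str, t: str) -> float:
    """Jaro similarity as defined above: characters are common if they
    match within a window H = min(|s|, |t|) / 2, and T is half the
    number of positions where the common sequences s' and t' disagree."""
    if not s or not t:
        return 0.0
    H = min(len(s), len(t)) // 2
    used = [False] * len(t)              # positions of t already paired
    s_prime, t_positions = [], []
    for i, ch in enumerate(s):
        lo, hi = max(0, i - H), min(len(t), i + H + 1)
        for j in range(lo, hi):
            if not used[j] and t[j] == ch:
                used[j] = True
                s_prime.append(ch)
                t_positions.append(j)
                break
    if not s_prime:
        return 0.0
    # t' = the common characters of t in the order they appear in t;
    # the greedy pairing makes |s'| == |t'|, a common simplification.
    t_prime = [t[j] for j in sorted(t_positions)]
    T = sum(a != b for a, b in zip(s_prime, t_prime)) / 2.0
    m = len(s_prime)
    return (m / len(s) + m / len(t) + (m - T) / m) / 3.0

def jaro_winkler(s: str, t: str, boost: float = 0.1, max_prefix: int = 4) -> float:
    """Winkler's variant: reward a common prefix of length P (capped),
    using the customary 0.1 scaling factor."""
    base = jaro(s, t)
    p = 0
    for a, b in zip(s, t):
        if a != b or p == max_prefix:
            break
        p += 1
    return base + p * boost * (1.0 - base)
```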
As shown in figure 1, we find that the Indian Language Normalizer algorithm which is the set of