Word Segmentation for Chinese Wikipedia Using N-Gram Mutual Information

Ling-Xiang Tang1, Shlomo Geva1, Yue Xu1 and Andrew Trotman2

1School of Information Technology, Faculty of Science and Technology, Queensland University of Technology, Queensland 4000, Australia
{l4.tang, s.geva, yue.xu}@qut.edu.au

2Department of Computer Science, University of Otago, Dunedin 9054, New Zealand
[email protected]

Abstract

In this paper, we propose an unsupervised segmentation approach, named "n-gram mutual information", or NGMI, which is used to segment Chinese documents into n-character words or phrases, using language statistics drawn from the Chinese Wikipedia corpus. The approach alleviates the tremendous effort that is required in preparing and maintaining manually segmented Chinese text for training purposes, and in manually maintaining ever-expanding lexicons. Previously, mutual information was used to achieve automated segmentation into 2-character words. The NGMI approach extends this to handle longer n-character words. Experiments with heterogeneous documents from the …

… a laser printer is called 激光打印机 in mainland China, but 鐳射打印機 in Hong Kong, and 雷射印表機 in Taiwan.

In digital representations of Chinese text, different encoding schemes have been adopted to represent the characters. However, most encoding schemes are incompatible with each other. To avoid conflicts between the different encoding standards and to cater for people's linguistic preferences, Unicode is often used in collaborative work, for example in Wikipedia articles. With Unicode, Chinese articles can be composed by people from all the above Chinese-speaking areas in a collaborative way without encoding difficulties. As a result, these different forms of Chinese writing and variants may coexist within the same pages.
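The bigram mutual-information idea that NGMI generalises can be made concrete with a short sketch. The Python below is an illustration under stated assumptions, not the authors' NGMI algorithm: it computes pointwise mutual information over adjacent character pairs drawn from a corpus and then segments greedily by thresholding. The function names (bigram_pmi, segment) and the threshold rule are hypothetical.

    import math
    from collections import Counter

    def bigram_pmi(corpus):
        # Count character unigrams and adjacent-character bigrams.
        unigrams, bigrams = Counter(), Counter()
        for text in corpus:
            unigrams.update(text)
            bigrams.update(zip(text, text[1:]))
        n_uni = sum(unigrams.values())
        n_bi = sum(bigrams.values())
        # Pointwise MI: log P(ab) / (P(a) P(b)); a high value suggests
        # the pair belongs to one word, a low value suggests a boundary.
        return {
            (a, b): math.log((c / n_bi) /
                             ((unigrams[a] / n_uni) * (unigrams[b] / n_uni)))
            for (a, b), c in bigrams.items()
        }

    def segment(text, pmi, threshold=0.0):
        # Greedy rule (an assumption): cut wherever the adjacent-pair
        # PMI falls below the threshold; unseen pairs always cut.
        if not text:
            return []
        words, current = [], text[0]
        for a, b in zip(text, text[1:]):
            if pmi.get((a, b), float("-inf")) >= threshold:
                current += b
            else:
                words.append(current)
                current = b
        words.append(current)
        return words

Words grow while each adjacent character pair scores at or above the threshold, which yields mostly 2-character merges; NGMI's contribution, per the abstract, is extending the pairwise score to n-character strings, which this pairwise sketch does not attempt.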