Proceedings of the Twelfth Australasian Data Mining Conference (AusDM 2014), Brisbane, Australia

Detecting Digital Newspaper Duplicates with Focus on Eliminating OCR Errors

Yeshey Peden
ICT Officer
Department of Public Accounts, Ministry of Finance
Thimphu, Bhutan
[email protected]

Richi Nayak
Associate Professor, Higher Degree Research Director
Science and Engineering Faculty
Queensland University of Technology
r.nayak@qut.edu.au

Abstract

With the advancement in digitization, archived documents such as newspapers have increasingly been converted into electronic documents and made available for user search. Many of these newspaper articles appear in several publication avenues with some variations. Their presence decreases both the effectiveness and the efficiency of search engines, which directly affects the user experience. This emphasizes the need for a duplicate detection method; digitized newspapers, however, pose their own unique challenges. One important challenge discussed in this paper is the presence of OCR (Optical Character Recognition) errors, which negatively affects the value of

These duplicate documents are not only an annoyance to users in search results, but they also decrease the efficiency of search engines (Uyar, 2009). The processing of these duplicate documents and results is not only time consuming, but their presence also adds no value to the information presented to users. Duplicate document detection has gained research interest as it assists search engines in increasing effectiveness and storage efficiency (Uyar, 2009). While carrying out this task, consideration of the domain or application is very important. For example, in a plagiarism detection scenario, even if only a sentence or a paragraph of one document is found in another document, the two documents could be seen as near-duplicates.
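To make the exact-duplicate versus near-duplicate distinction concrete, the sketch below scores document pairs with a generic word-shingle and Jaccard-similarity comparison. This is a minimal illustration of near-duplicate scoring in general, not the detection method proposed in this paper; the sample articles and the shingle size k=3 are assumptions for demonstration only.

# Minimal illustrative sketch of near-duplicate scoring via word shingles and
# Jaccard similarity. NOT the method proposed in the paper; the sample
# documents and the k=3 shingle size are assumptions for demonstration only.
import re


def shingles(text, k=3):
    """Build the set of k-word shingles from lower-cased, punctuation-free text."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    if len(words) < k:
        return {" ".join(words)} if words else set()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}


def jaccard(a, b):
    """Jaccard similarity of two shingle sets (1.0 = identical, 0.0 = disjoint)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


article_a = "The council approved the new bridge across the river on Tuesday."
article_b = "The council approved the new bridge across the river on Tuesday evening."
article_c = "Severe storms are expected along the coast later this week."

print(jaccard(shingles(article_a), shingles(article_b)))  # high score: near-duplicates
print(jaccard(shingles(article_a), shingles(article_c)))  # low score: unrelated articles

Under a scheme like this, the application determines where the similarity threshold is set: a plagiarism detector may flag pairs with even modest overlap, whereas a search engine removing redundant newspaper articles would require a much higher score.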