
Relating Articles Textually and Visually

Nachum Dershowitz, School of Computer Science, Tel Aviv University, Tel Aviv, Israel ([email protected])
Daniel Labenski, School of Computer Science, Tel Aviv University, Tel Aviv, Israel ([email protected])
Adi Silberpfennig, School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel ([email protected])
Lior Wolf, School of Computer Science, Tel Aviv University, Tel Aviv, Israel ([email protected])
Yaron Tsur, Department of Jewish History, Tel Aviv University, Tel Aviv, Israel ([email protected])

Abstract—Historical documents have been undergoing large-scale digitization over the past years, placing massive image collections online. Optical character recognition (OCR) often performs poorly on such material, which makes searching within these resources problematic and textual analysis of such documents difficult. We present two approaches to overcome this obstacle, one textual and one visual. We show that, for tasks like finding newspaper articles related by topic, poor-quality OCR text suffices. An ordinary vector-space model is used to represent articles. Additional improvements are obtained by adding words with similar distributional representations. As an alternative to OCR-based methods, one can perform image-based search, using word spotting. Synthetic images are generated for every word in a lexicon, and word spotting is used to compile vectors of their occurrences. Retrieval is by means of a usual nearest-neighbor search. The results of this visual approach are comparable to those obtained using noisy OCR. We report on experiments applying both methods, separately and together, on historical Hebrew newspapers, with their added problem of rich morphology.

I. INTRODUCTION

Recent large-scale digitization and preservation efforts have made a huge quantity of images of historical books, manuscripts, and other media readily available over the Internet, opening doors to new opportunities, like distant reading1 and data mining.

The task we tackle is topic detection – specifically, looking for newspaper articles that discuss the same ongoing event. Identifying related articles in historical archives can help scholars explore historical stories from a broad perspective, can provide visitors of archival websites with effective navigation – as in modern news sites – and can help compare the coverage of events between different publications.2 Unfortunately, optical character recognition (OCR) for older documents is not yet satisfactory, making it quite challenging to search within those images for something in particular, limiting research options.3

We adopt the following nomenclature [1]: An event is "something that happens at a specific time and location". A story is "a topically cohesive segment of news that includes two or more declarative independent clauses about a single event"; in our case, this is always a single newspaper article. A topic is "a set of news stories that are strongly related by some seminal real world event". We use subject to refer to thematically related topics. For example [2], the eruption of Mount Pinatubo on June 15th, 1991 is an event, whereas the ensuing crisis and the aftermath caused by the cataclysmic event are part of the larger topic, and the subject is "natural disasters". The following is one of the stories from that day (New York Times):

   Mount Pinatubo rumbled today with explosions that hurled ash and gas more than 12 miles high. President Corazon C. Aquino dismissed a British newspaper report that the Americans had warned her of possible radioactive contamination if the volcano damaged nuclear storage sites at Clark Air Base. She said the story was "baseless and purely fabricated."

We first consider the case where printed texts have been digitized, resulting in noisy OCR text. We approach the problem traditionally and then enhance it by using word embeddings to augment document representations. This helps solve the sparsity problem caused by language variation, morphology, and OCR noise. Augmentation proved helpful in finding more related articles, and proved to be even better than language-specific standardization methods.

As an alternative to working with OCR results, we propose a novel image-based approach for retrieval. Given a query image of a word, one seeks all occurrences of that same word within the dataset. To that end, we utilize a word-spotting engine that is both simple to implement and fast to run, making it extremely easy to incorporate for a wide range of resources. The method is completely unsupervised: words in the document images are unsegmented and unlabeled.

The next section summarizes some recent approaches to topic detection and to word spotting. It is followed by a section on the newspaper corpus we experimented with, its problematics, and the additional linguistic/visual resources and tools that we used. Section IV describes the textual method, and Section V describes the visual one. They are followed by experimental results for the two methods. We conclude with a brief discussion.

Research supported in part by Grant #I-145-101.3-2013 from the German-Israeli Foundation for Scientific Research and Development (GIF), Grant #01019841 from the Deutsch-Israelische Projektkooperation (DIP), Grant #1330/14 of the Israel Science Foundation (ISF), and a grant from the Blavatnik Family Fund. It forms part of D.L.'s and A.S.'s M.Sc. theses at Tel Aviv University.
1 http://www.nytimes.com/2011/06/26/books/review/the-mechanic-muse-what-is-distant-reading.html.
2 A list of archives may be found at https://en.wikipedia.org/wiki/Wikipedia:LOONA.
3 See, for example, the famous newspaper article by Craig Claiborne (The New York Times, November 14, 1975), "Just a quiet dinner for two in Paris" (https://ia801009.us.archive.org/18/items/354644-claiborne/354644-claiborne.pdf, last item), which read as gibberish not too long ago (https://assets.documentcloud.org/documents/354644/claiborne.txt).

II. RELATED WORK

Topic Detection: The task we study is related to the field of topic detection and tracking (TDT). This research program, which began in 1997, aims to find core technologies for organizing streams of news articles by topic [3]. Event detection has recently regained popularity with the emergence of social media [4], [5]. We look for stories in a retrospective fashion on a static database (Retrospective Event Detection [RED]), unlike TDT, where one wishes "to identify new events as they occur, based on an on-line stream of stories" (New Event Detection [NED]). For example, a common way of approaching detection is by representing stories via a set of features and assigning every new story to the cluster of the most similar past one. Yang et al. [6] found that retrospective event detection can obtain much better results than the online setup.

Vector Representations: For solving such tasks, it is common to represent each story as a vector in a conventional vector space [3]. Various term-weighting schemes for combining term frequency (tf) and inverse document frequency (idf) can be employed [7]. We use this standard method for document representation and for measuring similarity.

Allan and Harding [2], [8] used a story representation consisting of the tf-idf weights of the 1000 most heavily weighted terms in the document, computed the cosine similarity of the story to every previous story, and assigned the story to the cluster of its nearest neighbor if the similarity was above a threshold; otherwise, a new cluster was created containing the story. We take this basic approach. Other document representations and similarity measures have been suggested; for example, [9], [10] use language models for clusters to compute the probability that a new story is related to a cluster and use that as a similarity measure.

Lexical Expansions: Petrović and Osborne [4] use lexical paraphrases from various sources, including WordNet, to expand document vectors and overcome lexical variation in documents that discuss the same topic with different terms. Moran and McCreadie [5] show how to use word embeddings to improve accuracy by expanding tweets with semantically related phrases. Their method does not depend on complex linguistic resources like WordNet and still obtains better results. This resource independence allows one to run it on low-resource languages. We use a similar approach to augment the document bag of words.

Morphological Expansions: Avraham and Goldberg [11] reported that, when using word embeddings in Hebrew, which is a morphologically rich inflectional language, word vectors bear a mix of semantic and morphological properties. Many neighboring word vectors are of inflected forms, rather than having semantic similarity. We will leverage morphological properties of word embeddings to expand terms with others sharing the same base form.

Noisy Texts: Agarwal et al. [12] experimented with texts from different resource types in order to understand how different levels of noise in the text harm text-classification success. They developed a spelling-error simulator and created synthetic texts with several levels of noise. They found that with up to a 40% word error rate, the accuracy of the classification was not significantly affected. Vinciarelli [13], [14] reported similar results in text clustering and categorization (classification) tasks. We were encouraged by these results to work on topic detection despite the noisy OCR text.

Word Spotting: Whereas topic tracking in the field of NLP is widely researched, we suggest here a novel approach using images as sole input. This appears to be the first attempt to take an image-based approach to solving the topic-detection problem. The work of Wilkinson et al. [15] is somewhat related. They use a word-spotting engine for computing word clouds, whereas we are applying spotting to topic detection and categorization.

The average word error rate for this newspaper is about 20% [Ido Kissos, private communication], but is somewhat worse for the test suite. (For example, the error rate in the first two paragraphs of the lead article of the February 1 issue is 30/85 ≈ 35%, and may even be considerably worse.) Only about 45% of the tokens produced by OCR of the test corpus are found in the MILA lexicon of modern Hebrew,7 on account of OCR
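The basic threshold clustering adopted from Allan and Harding can be sketched in a few dozen lines. The following is an illustrative sketch, not the authors' code: tokenized stories are turned into tf-idf vectors, and each story joins the cluster of its most similar earlier story when the cosine similarity clears a threshold, starting a new cluster otherwise. The threshold value and tokenization here are placeholders, not values from the paper.

```python
# Sketch of threshold-based retrospective clustering over tf-idf vectors
# (illustrative; thresholds and tokenization are hypothetical).
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute sparse tf-idf vectors (dicts) for tokenized documents."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))          # document frequency of each term
    idf = {t: math.log(n / df[t]) for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] * idf[t] for t in tf})
    return vecs

def cosine(u, v):
    """Cosine similarity of two sparse vectors."""
    dot = sum(u[t] * v[t] for t in u if t in v)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def cluster(docs, threshold=0.1):
    """Assign each story to the cluster of its nearest earlier story,
    or open a new cluster if no earlier story is similar enough."""
    vecs = tfidf_vectors(docs)
    labels, next_label = [], 0
    for i, v in enumerate(vecs):
        best, best_sim = None, 0.0
        for j in range(i):           # only look back, as in RED over a static set
            sim = cosine(v, vecs[j])
            if sim > best_sim:
                best, best_sim = j, sim
        if best is not None and best_sim >= threshold:
            labels.append(labels[best])
        else:
            labels.append(next_label)
            next_label += 1
    return labels
```

For instance, two stories sharing volcano-related vocabulary would receive the same label, while an unrelated election story would open a new cluster.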
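The embedding-based augmentation of the document bag of words, used above to combat sparsity from OCR noise and morphology, can likewise be sketched. This is a minimal illustration under assumed details (the neighbor count, similarity cutoff, and damping weight are hypothetical, as are the toy embeddings): each in-vocabulary term contributes its nearest embedding neighbors to the bag with a reduced weight.

```python
# Sketch of bag-of-words expansion via embedding neighbors
# (illustrative; parameters and toy embeddings are invented).
import math

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def expand_bag(bag, embeddings, k=2, min_sim=0.7, weight=0.5):
    """Return a copy of the term->count bag, augmented with up to k
    embedding neighbors per term, each added at a damped weight."""
    expanded = dict(bag)
    for term, count in bag.items():
        if term not in embeddings:
            continue                 # OCR-garbled tokens may be out of vocabulary
        sims = sorted(
            ((cos(embeddings[term], vec), w)
             for w, vec in embeddings.items() if w != term),
            reverse=True)
        for sim, neighbor in sims[:k]:
            if sim >= min_sim:
                expanded[neighbor] = expanded.get(neighbor, 0) + weight * count
    return expanded
```

With real distributional embeddings for Hebrew, the neighbors of a term include its inflected forms, so this same mechanism yields the morphological expansion discussed above.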