
Text-Driven Toponym Resolution using Indirect Supervision

Michael Speriosu and Jason Baldridge
Department of Linguistics
University of Texas at Austin
Austin, TX 78712 USA
{speriosu,jbaldrid}@utexas.edu

Abstract

Toponym resolvers identify the specific locations referred to by ambiguous placenames in text. Most resolvers are based on heuristics using spatial relationships between multiple toponyms in a document, or metadata such as population. This paper shows that text-driven disambiguation for toponyms is far more effective. We exploit document-level geotags to indirectly generate training instances for text classifiers for toponym resolution, and show that textual cues can be straightforwardly integrated with other commonly used ones. Results are given for both 19th century texts pertaining to the American Civil War and 20th century newswire articles.

1 Introduction

It has been estimated that at least half of the world's stored knowledge, both printed and digital, has geographic relevance, and that geographic information pervades many more aspects of humanity than previously thought (Petras, 2004; Skupin and Esperbé, 2011). Thus, there is value in connecting linguistic references to places (e.g. placenames) to formal references to places (coordinates) (Hill, 2006). Allowing for the querying and exploration of knowledge in a geographically informed way requires more powerful tools than a keyword-based search can provide, in part due to the ambiguity of toponyms (placenames).

Toponym resolution is the task of disambiguating toponyms in natural language contexts to geographic locations (Leidner, 2008). It plays an essential role in automated geographic indexing and information retrieval. This is useful for historical research that combines age-old geographic issues like territoriality with modern computational tools (Guldi, 2009), studies of the effect of historically recorded travel costs on the shaping of empires (Scheidel et al., 2012), and systems that convey the geographic content in news articles (Teitler et al., 2008; Sankaranarayanan et al., 2009) and microblogs (Gelernter and Mushegian, 2011).

Entity disambiguation systems such as those of Kulkarni et al. (2009) and Hoffart et al. (2011) disambiguate references to people and organizations as well as locations, but these systems do not take into account any features or measures unique to geography such as physical distance. Here we demonstrate the utility of incorporating distance measurements in toponym resolution systems.

Most work on toponym resolution relies on heuristics and hand-built rules. Some use simple rules based on information from a gazetteer, such as population or administrative level (city, state, country, etc.), resolving every instance of the same toponym type to the same location regardless of context (Ladra et al., 2008). Others use relationships between multiple toponyms in a context (local or whole document) and look for containment relationships, e.g. London and England occurring in the same paragraph or as the bigram London, England (Li et al., 2003; Amitay et al., 2004; Zong et al., 2005; Clough, 2005; Li, 2007; Volz et al., 2007; Jones et al., 2008; Buscaldi and Rosso, 2008; Grover et al., 2010). Still others first identify unambiguous toponyms and then disambiguate other toponyms based on geopolitical relationships with or distances to the unambiguous ones (Ding et al., 2000). Many favor resolutions of toponyms within a local context or document that cover a smaller geographic area over those that are more dispersed (Rauch et al., 2003; Leidner, 2008; Grover et al., 2010; Loureiro et al., 2011; Zhang et al., 2012). Roberts et al. (2010) use relationships learned between people, organizations, and locations from Wikipedia to aid in toponym resolution when such named entities are present, but do not exploit any other textual context.

Most of these approaches suffer from a major weakness: they rely primarily on spatial relationships and metadata about locations (e.g., population). As such, they often require nearby toponyms (including unambiguous or containing toponyms) to resolve ambiguous ones. This reliance can result in poor coverage when the required information is missing in the context or when a document mentions locations that are neither nearby geographically nor in a geopolitical relationship.

There is a clear opportunity that most ignore: use non-toponym textual context. Spatially relevant words like downtown that are not explicit toponyms can be strong cues for resolution (Hollenstein and Purves, 2012). Furthermore, the connection between non-spatial words and locations has been successfully exploited in data-driven approaches to document geolocation (Eisenstein et al., 2010, 2011; Wing and Baldridge, 2011; Roller et al., 2012) and other tasks (Hao et al., 2010; Pang et al., 2011; Intagorn and Lerman, 2012; Hecht et al., 2012; Louwerse and Benesh, 2012; Adams and McKenzie, 2013).

In this paper, we learn resolvers that use all words in local or document context. For example, the word lobster appearing near the toponym Portland indicates the location is Portland in Maine rather than Oregon or Michigan. Essentially, we learn a text classifier per toponym. There are no massive collections of toponyms labeled with locations, so we train models indirectly using geotagged Wikipedia articles. Our results show these text classifiers are far more accurate than algorithms based on spatial proximity or metadata. Furthermore, they are straightforward to combine with such algorithms and lead to error reductions for documents that match those algorithms' assumptions.
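This excerpt states the idea of indirect supervision but not the exact instance-generation procedure, so the sketch below is only a plausible illustration, not Fieldspring's implementation. It assumes a stream of tokenized GeoWiki articles paired with their geotags, a gazetteer lookup candidates(name) returning (lat, lon) candidates in a stable order, and a great_circle(p, q) distance helper; these names, the context-window size, and the choice of a scikit-learn logistic regression are all hypothetical. Each mention of an ambiguous toponym in a geotagged article is labeled with the candidate location nearest the article's geotag, and one bag-of-words classifier is trained per toponym type.

```python
# Hypothetical sketch of indirect instance generation; function and variable
# names are illustrative and do not reflect the actual Fieldspring code.
from collections import defaultdict

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

WINDOW = 20  # context words kept on each side of a toponym mention


def build_instances(geotagged_articles, toponyms, candidates, great_circle):
    """geotagged_articles: iterable of (tokens, (lat, lon)) pairs from GeoWiki.
    candidates(name) -> list of (lat, lon) gazetteer candidates, in a stable
    order so that label indices are comparable across articles."""
    data = defaultdict(lambda: ([], []))  # toponym -> (context strings, labels)
    for tokens, geotag in geotagged_articles:
        for i, tok in enumerate(tokens):
            if tok not in toponyms:
                continue
            cands = candidates(tok)
            if len(cands) < 2:  # unambiguous: nothing to learn
                continue
            # Indirect supervision: the document-level geotag stands in for the
            # (unavailable) gold label on this particular toponym mention.
            label = min(range(len(cands)),
                        key=lambda j: great_circle(cands[j], geotag))
            context = " ".join(tokens[max(0, i - WINDOW):i + WINDOW + 1])
            data[tok][0].append(context)
            data[tok][1].append(label)
    return data


def train_resolvers(data):
    """Train one bag-of-words classifier per toponym type."""
    resolvers = {}
    for toponym, (contexts, labels) in data.items():
        if len(set(labels)) < 2:  # a classifier needs at least two classes
            continue
        clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
        clf.fit(contexts, labels)
        resolvers[toponym] = clf
    return resolvers
```

At resolution time one would feed a toponym's context window to its classifier and pick the highest-scoring candidate, falling back to a proximity- or metadata-based heuristic for toponyms that never received training instances.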
Our primary focus is toponym resolution, so we evaluate on toponyms identified by human annotators. However, it is important to consider the utility of an end-to-end toponym identification and resolution system, so we also demonstrate that performance is still strong when toponyms are detected with a standard named entity recognizer.

We have implemented all the models discussed in this paper in an open source software package called Fieldspring, which is available on GitHub: http://github.com/utcompling/fieldspring. Explicit instructions are provided for preparing data and running code to reproduce our results.

2 Data

2.1 Gazetteer

Toponym resolvers need a gazetteer to obtain candidate locations for each toponym. Additionally, many gazetteers include other information such as population and geopolitical hierarchy information. We use GEONAMES, a freely available gazetteer containing over eight million entries worldwide.[1] Each location entry contains a name (sometimes more than one) and latitude/longitude coordinates. Entries also include the location's administrative level (e.g. city or state) and its position in the geopolitical hierarchy of countries, states, etc.

GEONAMES gives the locations of regional items like states, provinces, and countries as single points. This is clearly problematic when we seek connections between words and locations: e.g. we might learn that many words associated with the USA are connected to a point in Kansas. To get around this, we represent regional locations as a set of points derived from the gazetteer. Since regional locations are named in the entries for locations they contain, all locations contained in the region are extracted (in some cases over 100,000 of them) and then k-means is run to find a smaller set of spatial centroids. These act as a tractable proxy for the spatial extent of the entire region. k is set to the number of 1° by 1° grid cells covered by that region. Figure 1 shows the points computed for the United States.[2] A nice property of this representation is that it does not involve region shape files and the additional programming infrastructure they require.

[Figure 1: Points representing the United States.]

[1] Downloaded April 16, 2013 from www.geonames.org.
[2] The representation also contains three points each in Hawaii and Alaska not shown in Figure 1.

A location for present purposes is thus a set of points on the earth's surface. The distance between two locations is computed as the great circle distance between the closest pair of representative points, one from each location.
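To make the representation and distance measure above concrete, here is a minimal sketch under stated assumptions; it is not Fieldspring's code. It assumes the locations contained in a region have already been extracted from the gazetteer as (lat, lon) pairs, collapses them to k centroids with scikit-learn's KMeans (k set to the number of 1° by 1° cells covered by those points), and measures the distance between two locations as the smallest great-circle (haversine) distance over pairs of representative points.

```python
# Minimal sketch, not the Fieldspring implementation; assumes each location is
# (or will be represented as) a list/array of (lat, lon) points.
import math

import numpy as np
from sklearn.cluster import KMeans

EARTH_RADIUS_KM = 6371.0


def great_circle_km(p, q):
    """Haversine great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))


def region_points(contained_points):
    """Collapse the points contained in a region to k centroids, where k is the
    number of 1-degree-by-1-degree grid cells those points cover. Clustering
    raw lat/lon treats degrees as Euclidean, a rough but serviceable proxy."""
    pts = np.asarray(contained_points, dtype=float)
    cells = {(math.floor(lat), math.floor(lon)) for lat, lon in pts}
    k = max(1, min(len(cells), len(pts)))
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(pts).cluster_centers_


def location_distance(points_a, points_b):
    """Distance between two locations: great-circle distance between the
    closest pair of representative points, one from each location."""
    return min(great_circle_km(a, b) for a in points_a for b in points_b)
```

Since a city with a single gazetteer entry is simply a one-element point list, the same location_distance function applies uniformly to cities and to regions.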
2.2 Toponym Resolution Corpora

We need corpora with toponyms identified and resolved by human annotators for evaluation. The TR-CONLL corpus (Leidner, 2008) contains 946 REUTERS news articles published in August 1996. It has about 204,000 words and articles range in length from a few hundred words to several thousand words. Each toponym in the corpus was identified and resolved by hand.[3] We place every third article into a test portion (TRC-TEST) and the rest in a development portion. [...] toponyms for TR-CONLL.[4] We use the pre-trained English NER from the OpenNLP project.[5]

Corpus         docs  toks  types  toks_top  types_top  amb_avg  amb_max
TRC-DEV         631  136k    17k      4356        613     15.0      857
TRC-DEV-NER       -     -      -      3165        391     18.2      857
TRC-TEST        315   68k    11k      1903        440     13.7      857
TRC-TEST-NER      -     -      -      1346        305     15.7      857
CWAR-DEV        228   33m   200k      157k        850     29.9      231
CWAR-TEST       113   25m   305k       85k        760     31.5      231

Table 1: Statistics of the corpora used for evaluation. Columns subscripted by "top" give figures for toponyms. The last two columns give the average number of candidate locations per toponym token and the number of candidate locations for the most ambiguous toponym.

2.3 Geolocated Wikipedia Corpus

The GEOWIKI dataset contains over one million English articles from the February 11, 2012 dump of Wikipedia. Each article has human-annotated latitude/longitude coordinates. We divide the corpus into training (80%), development (10%), and test (10%) at random and perform preprocessing to remove markup in the same manner as Wing and Baldridge (2011). The training portion is used here to learn models for text-driven resolvers.

3 Toponym Resolvers

Given a set of toponyms provided via annotations