
NoWaC: a large web-based corpus for Norwegian

Emiliano Guevara
Tekstlab, Institute for Linguistics and Nordic Studies, University of Oslo
[email protected]

Proceedings of the NAACL HLT 2010 Sixth Web as Corpus Workshop, pages 1–7, Los Angeles, California, June 2010. © 2010 Association for Computational Linguistics

Abstract

In this paper we introduce the first version of noWaC, a large web-based corpus of Bokmål Norwegian currently containing about 700 million tokens. The corpus has been built by crawling, downloading and processing web documents in the .no top-level internet domain. The procedure used to collect the noWaC corpus is largely based on the techniques described by Ferraresi et al. (2008). In brief, first a set of "seed" URLs containing documents in the target language is collected by sending queries to commercial search engines (Google and Yahoo). The obtained seeds (overall 6900 URLs) are then used to start a crawling job with the Heritrix web-crawler, limited to the .no domain. The downloaded documents are then processed in various ways in order to build a linguistic corpus (e.g. filtering by document size, language identification, duplicate and near-duplicate detection, etc.).

1 Introduction and motivations

The development, training and testing of NLP tools requires suitable electronic sources of linguistic data (corpora, lexica, treebanks, ontological databases, etc.), which demand a great deal of work in order to be built and are very often copyright protected. Furthermore, the ever-growing importance of heavily data-intensive NLP techniques for strategic tasks such as machine translation and information retrieval has created the additional requirement that these electronic resources be very large and general in scope.

Since most of the current work in NLP is carried out with data from the economically most impactful languages (and especially with English data), an amazing wealth of tools and resources is available for them. However, researchers interested in "smaller" languages (whether by the number of speakers or by their market relevance in the NLP industry) must struggle to transfer and adapt the available technologies because the suitable sources of data are lacking. Using the web as corpus is a promising option for the latter case, since it can provide reasonably large and reliable amounts of data in a relatively short time and at a very low production cost.

In this paper we present the first version of noWaC, a large web-based corpus of Bokmål Norwegian, a language with a limited web presence, built by crawling the .no internet top-level domain. The computational procedure used to collect the noWaC corpus is by and large based on the techniques described by Ferraresi et al. (2008). Our initiative was originally aimed at collecting a 1.5–2 billion word general-purpose corpus comparable to the corpora made available by the WaCky initiative (http://wacky.sslmit.unibo.it). However, carrying out this project on a language with a relatively small online presence such as Bokmål has led to results which differ from previously reported similar projects. In its current, first version, noWaC contains about 700 million tokens.

1.1 Norwegian: linguistic situation and available corpora

Norway is a country with a population of ca. 4.8 million inhabitants that has two official national written standards: Bokmål and Nynorsk (respectively, 'book language' and 'new Norwegian'). Of the two standards, Bokmål is the most widely used, being actively written by about 85% of the country's population (cf. http://www.sprakrad.no/ for detailed, up-to-date statistics). The two written standards are extremely similar, especially from the point of view of their orthography. In addition, Norway recognizes a number of regional minority languages (the largest of which, North Sami, has ca. 15,000 speakers).

While the written language is generally standardized, the spoken language in Norway is not, and using one's dialect on any occasion is tolerated and even encouraged. This tolerance is rapidly extending to informal writing, especially in modern means of communication and media such as internet forums, social networks, etc.

There is a fairly large number of corpora of the Norwegian language, both spoken and written (in both standards). However, most of them are of a limited size (under 50 million words, cf. http://www.hf.uio.no/tekstlab/ for an overview). To our knowledge, the largest existing written corpus of Norwegian is the Norsk Aviskorpus (Hofland 2000, cf. http://avis.uib.no/), an expanding newspaper-based corpus currently containing 700 million words. However, the Norsk Aviskorpus is only available through a dedicated web interface for non-commercial use, and advanced research tasks cannot be freely carried out on its contents.

Even though we have only worked on building a web corpus for Bokmål Norwegian, we intend to apply the same procedures to create web corpora also for Nynorsk and North Sami, thus covering the whole spectrum of written languages in Norway.

1.2 Obtaining legal clearance

The legal status of openly accessible web documents is not clear. In practice, when one visits a web page with a browsing program, an exact electronic copy of the remote document is created locally; this logically implies that any online document must be, at least to a certain extent, copyright-free if it is to be visited/viewed at all. This is a major difference with respect to other types of documents (e.g. printed materials, films, music records) which cannot be copied at all.

However, when building a web corpus, we do not only wish to visit (i.e. download) web documents: we would also like to process them in various ways, index them and, finally, make them available to other researchers and users in general. All of this would ideally require clearance from the copyright holders of each single document in the corpus, something which is simply impossible to realize for corpora that contain millions of different documents. [1]

In short, web corpora are, from the legal point of view, still a very dark spot in the field of computational linguistics. In most countries, there is simply no legal background to refer to, and the internet is a sort of no-man's land.

Norway is a special case: while the law explicitly protects online content as intellectual property, a rather new piece of legislation, Forskrift til åndsverkloven av 21.12 2001 nr. 1563, § 1-4, allows universities and other research institutions to ask for permission from the Ministry of Culture and Church in order to use copyright-protected documents for research purposes that do not conflict with the right holders' own use or their economic interests (cf. http://www.lovdata.no/cgi-wift/ldles?ltdoc=/for/ff-20011221-1563.html). We have been officially granted this permission for this project, and we can proudly say that noWaC is a totally legal and recognized initiative. The results of this work will be legally made available free of charge for research (i.e. non-commercial) purposes. NoWaC will be distributed in association with the WaCky initiative and also directly by the University of Oslo.

[1] Search engines are in clear contradiction to the copyright policies of most countries: they crawl, download and index billions of documents with no clearance whatsoever, and also redistribute whole copies of the cached documents.

2 Building a corpus of Bokmål by web-crawling

2.1 Methods and tools

In this project we decided to follow the methods used to build the WaCky corpora, and to use the related tools as much as possible (e.g. the BootCaT tools). In particular, we tried to reproduce the procedures described by Ferraresi et al. (2008) and Baroni et al. (2009). This methodology has already produced web corpora ranging from 1.7 to 2.6 billion tokens (German, Italian, British English). However, most of the steps needed some adaptation, fine-tuning and some extra programming. In particular, given the relatively complex linguistic situation in Norway, a step dedicated to document language identification was added.

In short, the building and processing chain used for noWaC comprises the following steps:

1. Extraction of a list of mid-frequency Bokmål words from Wikipedia and building of query strings
2. Retrieval of seed URLs from search engines by sending automated queries, limited to the .no top-level domain

The obtained URLs were collapsed into a single list of root URLs, deduplicated and filtered, keeping only those in the .no top-level domain. After one week of automated queries (limited to 1000 queries per day per search engine by the respective APIs) we had about 6900 filtered seed URLs.

2.3 Crawling

We used the Heritrix open-source, web-scale crawler (http://crawler.archive.org/), seeded with the 6900 URLs we had obtained, to traverse the .no internet domain and to download only HTML documents (all other document types were discarded from the archive). We instructed the crawler to use a multi-threaded breadth-first strategy and to follow a very strict politeness policy, respecting all robots.txt exclusion directives while downloading pages at a moderate rate (90-second pause before retrying any URL) in order not to disrupt normal website activity.

The final crawling job was stopped after 15 days. In this period of time, a total size of 1 terabyte was crawled, with approximately 90 million URLs being processed by the crawler. Circa 17 million HTML documents were downloaded, adding up to
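To illustrate the first step of the chain (building query strings from mid-frequency words), here is a minimal BootCaT-style sketch in Python. It is not the noWaC code: the function name, its parameters and the sample word list are our own illustrative choices; the idea of querying search engines with random tuples of mid-frequency words is the one described by Ferraresi et al. (2008).

```python
import itertools
import random

def build_queries(mid_freq_words, words_per_query=2, n_queries=10, seed=0):
    """Build search-engine query strings from random tuples of
    mid-frequency words (BootCaT-style). Every query is a distinct
    combination, so no two queries repeat the same word tuple."""
    rng = random.Random(seed)  # fixed seed: reproducible query sets
    combos = list(itertools.combinations(mid_freq_words, words_per_query))
    rng.shuffle(combos)
    return [" ".join(c) for c in combos[:n_queries]]

# A handful of sample Bokmål words (illustrative, not the real list):
words = ["likevel", "mellom", "kanskje", "utvikling", "samfunn", "vanlig"]
print(build_queries(words, n_queries=3))
```

Each resulting string would then be submitted to the search-engine APIs, and the returned hit URLs collected as candidate seeds.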
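The seed post-processing described above (collapsing hits to root URLs, deduplicating, and keeping only the .no top-level domain) can be sketched as follows. This is a simplified stand-in for whatever scripts were actually used; the function name and sample inputs are hypothetical.

```python
from urllib.parse import urlsplit

def filter_seed_urls(urls):
    """Collapse search-engine hits to root URLs, deduplicate them,
    and keep only hosts under the .no top-level domain."""
    roots, seen = [], set()
    for url in urls:
        parts = urlsplit(url)
        host = parts.hostname or ""
        if not (host == "no" or host.endswith(".no")):
            continue  # outside the .no domain: discard
        root = f"{parts.scheme}://{host}/"
        if root not in seen:  # one seed per site
            seen.add(root)
            roots.append(root)
    return roots

hits = [
    "http://www.uio.no/studier/emner/index.html",
    "http://www.uio.no/om/",            # same host: collapsed away
    "http://example.com/no/page.html",  # wrong TLD: dropped
]
print(filter_seed_urls(hits))  # → ['http://www.uio.no/']
```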
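The document language identification step added for Norway's situation could, in outline, be approached with a marker-word heuristic like the sketch below. The marker lists are illustrative only (a few forms that genuinely differ between the standards, e.g. Bokmål "ikke", "jeg" vs. Nynorsk "ikkje", "eg"); a real language identifier would use much richer evidence, and the paper does not specify the tool actually used.

```python
def guess_variety(tokens,
                  bokmal=("ikke", "jeg", "hvordan"),
                  nynorsk=("ikkje", "eg", "korleis")):
    """Crude heuristic for telling Bokmål from Nynorsk text: count
    hits from two small discriminative word lists and return the
    label with more hits ('unknown' on a tie or with no evidence)."""
    toks = [t.lower() for t in tokens]
    b = sum(toks.count(w) for w in bokmal)
    n = sum(toks.count(w) for w in nynorsk)
    if b > n:
        return "bokmal"
    if n > b:
        return "nynorsk"
    return "unknown"

print(guess_variety("eg veit ikkje korleis".split()))  # → nynorsk
```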
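The politeness policy used in the crawl (breadth-first frontier, per-host delay before re-hitting a site) is handled internally by Heritrix; as a purely illustrative toy, not a reproduction of Heritrix's scheduler, the core idea can be sketched as a FIFO frontier that skips hosts visited too recently.

```python
import collections
import time

class PoliteFrontier:
    """Toy breadth-first URL frontier with a fixed per-host delay:
    URLs are served in FIFO order, but a URL whose host was fetched
    less than `delay` seconds ago is pushed to the back of the queue."""

    def __init__(self, seeds, delay=90.0):
        self.queue = collections.deque(seeds)
        self.delay = delay
        self.last_hit = {}  # host -> time of last fetch

    def _host(self, url):
        return url.split("/")[2]  # naive host extraction for the sketch

    def next_url(self, now=None):
        """Return the first queued URL whose host is ready, else None."""
        now = time.monotonic() if now is None else now
        for _ in range(len(self.queue)):
            url = self.queue.popleft()
            host = self._host(url)
            if now - self.last_hit.get(host, -self.delay) >= self.delay:
                self.last_hit[host] = now
                return url
            self.queue.append(url)  # host not ready: re-queue at the back
        return None

frontier = PoliteFrontier(["http://a.no/1", "http://b.no/1"], delay=90)
print(frontier.next_url(now=0.0))  # → http://a.no/1
```

A production crawler would additionally honour robots.txt exclusion directives and run many such fetch threads in parallel, as Heritrix does.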