
Measuring the Wikisphere*

Jeff Stuckman
Department of Computer Science
University of Maryland
College Park, Maryland USA
[email protected]

James Purtilo
Department of Computer Science
University of Maryland
College Park, Maryland USA
[email protected]

* Authors are supported in part by Office of Naval Research contract N000140710329 while doing this research.

ABSTRACT

Due to the inherent difficulty in obtaining experimental data from wikis, past quantitative wiki research has largely been focused on Wikipedia, limiting the degree to which it can be generalized. We developed WikiCrawler, a tool that automatically downloads and analyzes wikis, and studied 151 popular wikis running Mediawiki (none of them Wikipedias). We found that our studied wikis displayed signs of collaborative authorship, validating them as objects of study. We also discovered that, as in Wikipedia, the relative contribution levels of users in the studied wikis were highly unequal, with a small number of users contributing a disproportionate amount of work. In addition, power-law distributions were successfully fitted to the contribution levels of most of the studied wikis, and the parameters of the fitted distributions largely predicted the high inequality that was found. Along with demonstrating our methodology of analyzing wikis from diverse sources, the discovered similarities between wikis suggest that most wikis accumulate edits through a similar underlying mechanism, which could motivate a model of user activity that is applicable to wikis in general.

Categories and Subject Descriptors
H.3.4 [Information Storage and Retrieval]: Systems and Software - performance evaluation; K.4.3 [Computers and Society]: Organizational Impacts - Computer-supported collaborative work

General Terms
Measurement

Keywords
wiki, Mediawiki, crawler, gini, power law, distribution, metrics

1. INTRODUCTION

Mediawiki is an open-source platform used to host wikis (user-editable repositories of content). Wikipedia, the most popular instance of Mediawiki, has popularized wikis, and many webmasters have installed their own wikis using this platform as a result. Many prior studies have analyzed Wikipedia and its user base, but few have examined other wikis. This is because Wikipedia makes database dumps available to researchers for analysis¹, while obtaining dumps of other wikis would require the cooperation of their individual webmasters. The result is a lack of information on how wikis other than Wikipedia are being used, information that could increase the knowledge available to wiki practitioners by making the increasingly sophisticated models and analysis of Wikipedia applicable to wikis in general.

To narrow this knowledge gap, we developed WikiCrawler, a tool that converts wikis running Mediawiki into machine-readable data suitable for research by parsing their generated HTML pages. We then assembled a collection of 151 popular wikis of varying sizes (totaling 132,393 wiki pages) and observed that nearly all were authored collaboratively (like Wikipedia). Then, to demonstrate that our methodology can produce useful data for analysis, we analyzed the distributions of activity (across users and articles) for each wiki. We determined that the studied wikis have highly unequal distributions of activity across authors, and that the inequality in these observed distributions can largely be predicted by their underlying power-law forms. We conclude that large-scale quantitative analysis of wikis is practical, and that our methodology can be used to generalize findings for Wikipedia to a broader population of wikis, enabling new applications such as calibration of wiki metrics. Our findings also support the notion that a generative model could be developed that accurately reflects the activity distributions of most wikis.

¹ http://en.wikipedia.org/wiki/Wikipedia_database

2. RELATED WORK

The methodologies of many previous studies have included quantitative analysis of Wikipedia. Some of these studies measured the distributions of edits across articles or users, such as [12], which found that article lengths had a log-normal distribution and the number of unique editors per article had a power-law distribution. The mechanisms that generate these distributions were studied by Wilkinson and Huberman [13], who proposed a stochastic model that produces a log-normal distribution for the lengths of articles with a given age.

One actively debated question involves the role of frequent users versus occasional contributors to Wikipedia. Kittur et al. [4] tracked words and revisions of articles to suggest that occasional users are responsible for an increasing amount of Wikipedia's content, while Ortega et al. [8] used a Gini coefficient metric to conclude that few users are responsible for the bulk of the activity. However, the scope of all such studies was limited to Wikipedia.

The S23² website, which provides statistics for tens of thousands of public wikis, has been used for research that compared multiple wikis, such as [9] and [10]. We did not use this data source because it only collects a few simple statistics for each wiki, precluding in-depth analysis of distributions of work. Other research involved case studies of individual wikis [1] or analysis of multiple Wikipedia projects [8], but we were not able to find any prior research that analyzed (in depth) multiple wikis with diverse administration, motivating our current work.

² http://s23.org/wikistats/
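Because contribution inequality also figures centrally in our own analysis, we sketch for concreteness how a Gini coefficient can be computed from per-user edit counts. The listing below is an illustrative Java sketch only (Java is also the WikiCrawler's implementation language, but this is not code from our tool), and the class name and sample edit counts are invented:

    import java.util.Arrays;

    // Minimal sketch: Gini coefficient of a set of per-user edit counts,
    // using the standard rank-based formula for a finite sample.
    public class GiniSketch {
        static double gini(long[] editCounts) {
            long[] x = editCounts.clone();
            Arrays.sort(x);                          // ascending order
            double total = 0, weighted = 0;
            for (int i = 0; i < x.length; i++) {
                total += x[i];
                weighted += (i + 1) * (double) x[i]; // rank-weighted sum
            }
            int n = x.length;
            if (total == 0) return 0;
            return (2 * weighted) / (n * total) - (double) (n + 1) / n;
        }

        public static void main(String[] args) {
            // Invented edit counts for five users of a small wiki;
            // prints roughly 0.75, i.e. highly unequal contributions.
            System.out.println(gini(new long[] {1, 1, 2, 5, 120}));
        }
    }

A value near 0 indicates that edits are spread evenly across users, while a value near 1 indicates that a few users account for nearly all of the activity.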
3. DATA COLLECTION FROM WIKIS

We developed a Java-based tool, which we will call the "WikiCrawler," to collect the data for our analysis. Through the use of webcrawling and screen-scraping techniques, the WikiCrawler converts the HTML served by wikis into data suitable for analysis. Although Mediawiki permits extensive user interface customization, most Mediawiki sites use consistent HTML element IDs on their UI elements. For example, the two wikis depicted in Figure 1 look different, but similarities in the underlying HTML allow information to be extracted from both. There are a few variations in the HTML generated by different versions of Mediawiki, but most of the processing rules are identical across versions.

[Figure 1: Two examples of wikis hosted by Mediawiki]

Unlike general-purpose webcrawlers, the WikiCrawler selectively downloads the subset of pages required for the researcher to compute the desired statistics. The researcher accomplishes this by programming rules that indicate which quantities the WikiCrawler should measure, allowing the WikiCrawler to determine which pages to download and parse. The desired quantities are then stored in an SQL database for later analysis.

Lists of pages to download are extracted from the wiki's "all pages" index, and revision histories are obtained by following the appropriate links. User histories are obtained in a similar manner. Only the current revisions of the page text were downloaded, but the complete revision histories were retrieved. To work around server-side URL rewriting rules, the WikiCrawler often harvests links from previously parsed pages instead of generating them, which avoids having to teach the crawler to generate every type of URL needed under each specific rewriting rule.
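As an illustration of this screen-scraping step, the sketch below harvests article links from a wiki's "all pages" index. The jsoup HTML parser and the bodyContent element ID are choices made for this illustration only (the WikiCrawler's own processing rules are more general and are not reproduced here), and the example URL is hypothetical.

    import java.io.IOException;
    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;

    // Minimal sketch: extract article titles and URLs from a Mediawiki
    // "all pages" index page by screen-scraping its HTML.
    public class AllPagesSketch {
        public static void main(String[] args) throws IOException {
            // Hypothetical wiki; any Mediawiki Special:AllPages URL would do.
            String index = "http://wiki.example.org/index.php/Special:AllPages";
            Document doc = Jsoup.connect(index)
                                .userAgent("WikiCrawler-sketch")
                                .get();

            // Harvest links from the main content area. Mediawiki commonly
            // assigns it id="bodyContent", though the exact ID can vary by
            // version and skin. Links are harvested rather than constructed,
            // so server-side URL rewriting rules do not matter here.
            for (Element link : doc.select("#bodyContent a[href]")) {
                String title = link.text();
                String url = link.absUrl("href");   // resolve relative links
                if (!title.isEmpty()) {
                    System.out.println(title + "\t" + url);
                    // A real crawler would filter out navigation links such
                    // as "Next page", record the rest in an SQL database, and
                    // queue each page's history link for a later request.
                }
            }
        }
    }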
3.1 Finding wikis

We obtained a list of candidate wikis to analyze by using the Yahoo and Microsoft Live search web services to retrieve the first 1000 results for the string Main_Page. Because Mediawiki creates a page called Main_Page by default, this allows us to easily find Mediawiki instances. We did not use wiki directory sites such as WikiIndex³ or the previously mentioned S23 because it was unclear whether the wikis in those directories had been collected by humans, which could introduce a selection bias into our experimental population. Using search engines to form our sample ensures that the only bias is that the wikis were popular enough to appear in a search result. After filtering non-wikis and duplicates between the two search engines out of our 2000 seed URLs, we found 1445 wikis which could potentially be analyzed. Wikimedia projects were excluded because they can be downloaded and analyzed more easily by using the database dumps mentioned earlier.

The Robot Exclusion Standard⁴ allows websites to indicate pages that robots and crawlers should not visit. (This standard is not enforced by technical means; therefore, compliance is voluntary.) We rejected 77 of these 1445 wikis where our crawling would have violated this protocol⁵ (a minimal sketch of such a compliance check appears below).

Because Mediawiki is an open-source product, users often customize it to improve the appearance of the wiki or to add new features. These customizations sometimes confound the screen-scraping component of our crawler; to compensate, the WikiCrawler checks the downloaded pages for inconsistencies. 182 sites were excluded due to such inconsistencies, or because a password was required to access one or more wiki pages, preventing accurate statistics from being compiled. In addition, we manually removed 3 sites because they contained illegal or pornographic content, or because they were duplicates (which happens when multiple virtual hosts are backed by the same wiki). In the end, 1183 wikis were available for analysis.

3.2 The study population

We estimated the sizes of the 1183 available wikis by ex-

³ http://www.wikiindex.org/
⁴ http://www.robotstxt.org/orig.html
⁵ This is an apparent contradiction, because our list of wikis originally came from search engines. This would mean that major search engines are violating this protocol, or that the wikis can be accessed from an alternate URL that falls outside of the restrictions.
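Returning to the Robot Exclusion Standard check described in Section 3.1, the sketch below shows one minimal way such a compliance test could be implemented. It is an illustration under simplifying assumptions (only "User-agent: *" groups and literal "Disallow:" prefixes are honored) rather than a description of the WikiCrawler's actual check, and the example URL is hypothetical.

    import java.io.BufferedReader;
    import java.io.FileNotFoundException;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.net.URL;
    import java.util.ArrayList;
    import java.util.List;

    // Minimal sketch: a deliberately simplified robots.txt compliance check.
    public class RobotsCheckSketch {
        static boolean crawlingAllowed(String site, String path) throws IOException {
            List<String> disallowed = new ArrayList<>();
            boolean inWildcardGroup = false;
            URL robots = new URL(new URL(site), "/robots.txt");
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(robots.openStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    line = line.trim();
                    if (line.toLowerCase().startsWith("user-agent:")) {
                        // Track whether the current rule group applies to all robots.
                        inWildcardGroup = line.substring(11).trim().equals("*");
                    } else if (inWildcardGroup && line.toLowerCase().startsWith("disallow:")) {
                        String prefix = line.substring(9).trim();
                        if (!prefix.isEmpty()) disallowed.add(prefix);
                    }
                }
            } catch (FileNotFoundException missing) {
                return true;  // no robots.txt at all: nothing is disallowed
            }
            for (String prefix : disallowed) {
                if (path.startsWith(prefix)) return false;  // rule matches: stay away
            }
            return true;
        }

        public static void main(String[] args) throws IOException {
            // Hypothetical wiki URL and path.
            System.out.println(crawlingAllowed("http://wiki.example.org/",
                                               "/index.php/Special:AllPages"));
        }
    }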