Around the Web in Six Weeks: Documenting a Large-Scale Crawl

Sarker Tanzir Ahmed, Clint Sparkman†, Hsin-Tsang Lee, Dmitri Loguinov∗
Department of Computer Science and Engineering
Texas A&M University, College Station, TX 77843, USA
Email: [email protected], [email protected], [email protected], [email protected]

Abstract—Exponential growth of the web continues to present challenges to the design and scalability of web crawlers. Our previous work on a high-performance platform called IRLbot [28] led to the development of new algorithms for realtime URL manipulation, domain ranking, and budgeting, which were tested in a 6.3B-page crawl. Since very little is known about the crawl itself, our goal in this paper is to undertake an extensive measurement study of the collected dataset and document its crawl dynamics. We also propose a framework for modeling the scaling rate of various data structures as crawl size goes to infinity and offer a methodology for comparing crawl coverage to that of commercial search engines.

there exists no standard methodology for examining web crawls and comparing their performance against one another. With each paper providing different, and often very limited, types of information, little can be said about the relative strengths of various crawling techniques or even their web coverage. Setting aside the financial aspect discussed above, this lack of transparency has helped stifle innovation, allowing industry to take a technological and scientific lead in this area.

Finally, the majority of existing web studies have no provisions to handle spam [8], [12], [17], [30], [36], [37], [41]. One technique for tackling the massive scale, infinite script-

I.