Risk Management Resolutions: Easier Than a Diet; Good for the Health of Your Nonprofit by Melanie L. Herman

Total Pages: 16

File Type: PDF, Size: 1020 KB

Risk Management Resolutions: Easier Than a Diet; Good for the Health of Your Nonprofit
by Melanie L. Herman
A publication of the Nonprofit Risk Management Center. Volume 14, No. 1, January/February 2005.

Were you one of thousands of Americans who received a gym membership gift tucked neatly in a card from a loved one? Across the country millions of Americans are jotting down resolutions, most of which have something to do with the three Cs: calories, cardio-workouts, or carcinogens.

Resolutions about the three health-inducing Cs are awfully tough to keep, as are risk management resolutions that could change the health of your nonprofit. Does the organization you serve have a mission worth preserving? Are there any risks lurking in your nonprofit's future that could spell significant setback or disaster? Are you losing any sleep over risks related to HR, financial management, fundraising, reputation, or staff/participant injuries?

This article offers some simple but important Risk Management Resolutions for 2005. The resolutions can be easily adapted to reflect the circumstances and resources of your nonprofit. And best of all, they don't require the equivalent of giving up dessert or enrolling in a spinning class.

Remember that the Nonprofit Risk Management Center exists to help nonprofits address risk management challenges. We're here to take your phone calls and e-mails on any imaginable risk management topic. We offer frank risk management advice and point you to resources and materials that will help you better understand the dilemma you're facing and the solution we recommend. To access this free service, visit www.nonprofitrisk.org and click on the ADVICE tab, or give us a call at (202) 785-3891.

Resolution #1: Blow the Dust off Your Policies

Throughout the year the Nonprofit Risk Management Center receives phone calls from chief financial officers, executive directors, program managers and board members who ask various forms of the same question: "How will I know if my nonprofit is adequately insured?" During the conversation that follows, a staff member of the Center often asks, "Have you read your insurance policies?"

Most callers quickly admit to having never actually read the insurance policies purchased by their nonprofits. Most callers can name the insurance agent or broker who sold them the coverage, and a small percentage have a clear idea of the exposures the policies address. Yet only a scant number of the professionals responsible for managing their nonprofit's insurance program have taken the first and perhaps most fundamental step necessary to discharging this fiduciary duty.

Reading an insurance policy is neither an easy nor a necessarily enjoyable task. But doing so almost always opens the reader's eyes with regard to how the coverage works. See the sidebar on page 7 for steps to reading your policies.

Resolution #2: Establish Goals for Your Risk Management Program

It is not uncommon for risk management activities to arise in a nonprofit even prior to a conscious decision on the part of the organization's leaders to embrace the discipline. For example, many nonprofit leaders who report that their organizations have negligible risk management activities later admit to the existence of a rigorous screening program, periodic training for supervisors, safety training for volunteers, and a long history of purchasing insurance to finance certain exposures. The elements of a risk management program do indeed exist, but the elements are not linked or effectively organized into a risk management effort that looks backwards and projects into the future.

The process of formalizing risk management in a nonprofit begins with a goal-setting exercise: What do you hope to accomplish by stepping up your risk management efforts in 2005? What are your top risk management goals in light of past losses, current concerns, and changes looming on the horizon? An organization's risk management goals will also depend on the number of losses it has incurred and whether it changes minimally from year to year or expects to grow exponentially in the year ahead. They also depend on its risk management appetite. An organization with a hearty appetite for risk-taking may vow to gain greater control over its risk financing strategy by shifting to self-insurance. Another nonprofit may choose to focus its energies on avoiding recurring losses, while a third organization may identify forecasting catastrophic losses as a top concern.

Another aspect of goal setting concerns the role of risk management in the organization. Some nonprofits aspire to involve personnel throughout the organization in risk management activities, from managers to line staff to volunteers and participants. Other organizations aim to consolidate risk management responsibility under one person or department.

No single set of risk management goals will work for every nonprofit, nor is there an ideal structure or framework that meets the needs of a single type of nonprofit or all nonprofits. Consider your risk management goals. What do you hope to accomplish in 2005 with regard to:

❑ Protecting assets (including reputation, and financial, property and people assets)
❑ Planning for surprises
❑ Forecasting trends
❑ Financing risk
❑ Managing key relationships
❑ Working with outside advisors
❑ Involving key personnel in risk management activities.

Resolution #3: Kick the Template Habit (or Cut Back Your Consumption)

The second most common call fielded by the experts at the Nonprofit Risk Management Center comes from a nonprofit leader seeking a "template" or "sample policy" that "I can use in my nonprofit." The desire to avoid costly legal expenses is understandable. And the goal of developing new policies efficiently and without delay is admirable. What gets too many nonprofits into trouble is the tendency to take another organization's template and simply change the name at the top of the page. As risk management consultants, we…

❑ Should this document be expanded or altered to reflect the unique nature of our operations or special circumstances?
❑ Does the wording of this document suit the culture of our organization, or does it contain archaic or other language that contrasts brightly with other policy documents?
❑ What additional issues or concerns should be addressed in our version of this policy (e.g., the fact that we work with vulnerable clients or the fact that we're located in California and special rules apply)?
❑ Who else should review this policy prior to its adoption to make certain it reflects our circumstances, is legally appropriate, and will be readily understood by all who must follow…

NEW! Risk Management Essentials Series

Visit www.nonprofitrisk.org to check out a special offer from the Nonprofit Risk Management Center. The Risk Management Essentials Series takes the guesswork out of building a library of valuable risk management resources. We've assembled our best-selling books whose contents apply to all nonprofits in an affordable package. You'll save 20% off the price of the books when you compare the cost of purchasing them separately. Plus, shipping is free for the bundle, and we'll send them to you in a handy totebag. The cost is only $129. To order, visit www.nonprofitrisk.org or call (202) 785-3891.

Peter Bernstein's Big Ideas

This sidebar is reprinted with permission from Risk Management Reports, the monthly commentary on risk management issues written and published by H. Felix Kloman. For more information on this informative and thought-provoking publication, visit www.riskreports.com.

My favorite guru, the octogenarian Peter Bernstein, the author of Against the Gods: The Remarkable Story of Risk, was the subject of an interview with Jason Zweig in the November 2004 issue of Money magazine. Zweig called him "patience personified," a characterization to which I subscribe. But the best part of this article was Zweig's synopsis of Peter's four "Big Ideas":

1. Anything can happen. We do not and cannot know the future.
2. Whether you should take a risk depends not just on the probability that you are right but also on the consequences if you are wrong.
3. The riskiest moment is when you are right.
4. If you're comfortable with all you own, you're not diversified. Diversification is an explicit recognition of your ignorance.

Marvelous words with which to start the new year!

About This Publication

Community Risk Management & Insurance, Volume 14, Number 1, January/February 2005. Published three times annually by the Nonprofit Risk Management Center, 1130 Seventeenth Street, NW, Suite 210, Washington, DC 20036; (202) 785-3891; FAX: (202) 296-0349; www.nonprofitrisk.org. Questions about the content of this publication should be directed to the editor, Barbara Oliver. For information on advertising, contact Suzanne Hensell. They can be reached at (202) 785-3891.

Staff Directory (all staff can be reached at 202.785.3891)

Sheryl Augustine, Customer Service Representative, [email protected]
Suzanne M. Hensell, Program and Research Assistant, [email protected]
Melanie L. Herman, Executive Director, [email protected]
Barbara B. Oliver, Director of Communications, [email protected]
John C. Patterson, Senior Program Director, [email protected]
Special Advisors: George L. Head, Ph.D., [email protected]; Richard B. Vincelette, [email protected]
Recommended publications
  • UvA-DARE (Digital Academic Repository)
    UvA-DARE (Digital Academic Repository). Search engine freedom: on the implications of the right to freedom of expression for the legal governance of Web search engines. van Hoboken, J.V.J. Publication date: 2012. Link to publication.
    Citation for published version (APA): van Hoboken, J. V. J. (2012). Search engine freedom: on the implications of the right to freedom of expression for the legal governance of Web search engines.
    Excerpt, Chapter 2: A Short History of Search Engines and Related Market Developments. 2.1 The Internet, the Web and the rise of navigational media. 2.1.1 Early visions of navigation in digitized information environments: The way in which digital computing would lead to a revolution in information and knowledge navigation was already being explored more than half a century ago, when computers were still a rarity and neither the Internet nor the World Wide Web existed.
  • WebCrawler: Finding What People Want
    © Copyright 2000 Brian Pinkerton. WebCrawler: Finding What People Want. Brian Pinkerton. A dissertation submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, University of Washington, 2000. Program Authorized to Offer Degree: Department of Computer Science & Engineering.
    University of Washington Graduate School. This is to certify that I have examined this copy of a doctoral dissertation by Brian Pinkerton and have found that it is complete and satisfactory in all respects and that any and all revisions required by the final examining committee have been made. Co-chairs of the Supervisory Committee: Edward Lazowska and John Zahorjan. Reading Committee: Edward Lazowska, John Zahorjan, and David Notkin.
    In presenting this dissertation in partial fulfillment of the requirements for the Doctoral degree at the University of Washington, I agree that the Library shall make its copies freely available for inspection. I further agree that extensive copying of the dissertation is allowable only for scholarly purposes, consistent with "fair use" as prescribed in the U.S. Copyright Law. Requests for copying or reproduction of this dissertation may be referred to Bell and Howell Information and Learning, 300 North Zeeb Road, Ann Arbor, MI 48106-1346, to whom the author…
  • The Commodification of Search
    San Jose State University, SJSU ScholarWorks. Master's Theses, Master's Theses and Graduate Research, 2008. The commodification of search. Hsiao-Yin Chen, San Jose State University.
    Recommended Citation: Chen, Hsiao-Yin, "The commodification of search" (2008). Master's Theses, 3593. DOI: https://doi.org/10.31979/etd.wnaq-h6sz. https://scholarworks.sjsu.edu/etd_theses/3593
    THE COMMODIFICATION OF SEARCH. A Thesis Presented to The School of Journalism and Mass Communications, San Jose State University, in Partial Fulfillment of the Requirement for the Degree Master of Science, by Hsiao-Yin Chen, December 2008. UMI Microform 1463396. Copyright 2009 by ProQuest LLC. All rights reserved.
  • Web Crawler for Mining Web Data
    International Research Journal of Engineering and Technology (IRJET), e-ISSN: 2395-0056, p-ISSN: 2395-0072, Volume 04, Issue 02, Feb 2017, www.irjet.net
    WEB CRAWLER FOR MINING WEB DATA. S. Amudha, B.Sc., M.Sc., M.Phil., Assistant Professor, VLB Janakiammal College of Arts and Science, Tamilnadu, India, [email protected]
    ABSTRACT: A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner. Web crawlers assist the full-text search engines that help users navigate the web, and web crawling is an important method for collecting data on, and keeping up with, the rapidly expanding Internet. Users can find their resources by following hypertext links. A vast number of web pages are added every day, and information is constantly changing. Search engines are used to extract valuable information from the internet, and web crawlers are their principal component: software that browses the World Wide Web automatically, creating a copy of all visited pages for later indexing to provide fast searches. Crawlers can also be used to automate maintenance tasks on a Web site, such as checking links or validating HTML code. This paper is an overview of various types of Web crawlers and of policies such as selection, revisit, politeness, and parallelization.
    Key Words: Web Crawler, World Wide Web, Search Engine, Hyperlink, Uniform Resource Locator.
    1. INTRODUCTION: A Web crawler starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
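The seed-and-frontier cycle described in this introduction maps directly onto a short program. Below is a minimal sketch in Python using only the standard library; the seed URL, the page limit, and the breadth-first (FIFO) frontier are illustrative assumptions, not details taken from the paper.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class LinkExtractor(HTMLParser):
    """Collects the href value of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seeds, max_pages=20):
    frontier = deque(seeds)   # the crawl frontier; FIFO gives breadth-first order
    visited = set()
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        try:
            with urlopen(url, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue              # skip pages that fail to download
        visited.add(url)
        parser = LinkExtractor()
        parser.feed(html)         # identify all hyperlinks in the page
        for link in parser.links:
            absolute = urljoin(url, link)   # resolve relative links
            if absolute.startswith("http") and absolute not in visited:
                frontier.append(absolute)   # add to the frontier
    return visited

if __name__ == "__main__":
    print(crawl(["https://example.com/"]))
```

A production crawler would layer the selection, revisit, politeness, and parallelization policies the paper surveys on top of this basic loop.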
  • Effective Performance of Information Retrieval on Web by Using Web Crawling
    International Journal of Web & Semantic Technology (IJWesT), Vol. 3, No. 2, April 2012
    EFFECTIVE PERFORMANCE OF INFORMATION RETRIEVAL ON WEB BY USING WEB CRAWLING. Sk. AbdulNabi, Department of CSE, AVN Inst. of Engg. & Tech, Hyderabad, A.P., India, [email protected]; Dr. P. Premchand, Department of CSE, Osmania University, Hyderabad, A.P., India, [email protected]
    ABSTRACT: The World Wide Web consists of more than 50 billion pages online. It is highly dynamic [6], i.e., the web continuously introduces new capabilities and attracts many people. Due to this explosion in size, an effective information retrieval system or search engine is needed to access the information. In this paper we propose the EPOW (Effective Performance of WebCrawler) architecture. It is a software agent whose main objective is to minimize the overload of a user locating needed information. We have designed the web crawler by considering the parallelization policy. Since our EPOW crawler is a highly optimized system, it can download a large number of pages per second while being robust against crashes. We also propose to use data structure concepts for the implementation of the scheduler and circular queue to improve the performance of our web crawler.
    KEYWORDS: EPOW, Effective Web Crawler, Circular Queue, Scheduler, Basic Crawler, Precision & Recall.
    1. INTRODUCTION: On the World Wide Web, an information retrieval system [1] plays a vital role in extracting useful information with a minimum amount of time. It has the ability to store, retrieve, and manage information from a repository. The general objective of any information retrieval method is to minimize the overhead of a user locating needed information.
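The abstract names a scheduler and a circular queue as the data structures behind EPOW but does not give their implementation; the sketch below is one guess at how a fixed-capacity ring buffer could serve as the crawl frontier. The capacity and the overwrite-the-oldest-entry behavior are assumptions made here for illustration.

```python
class CircularFrontier:
    """Fixed-capacity ring buffer used as a crawl frontier.

    When the buffer is full, the oldest queued URL is overwritten,
    which bounds memory use at the cost of possibly dropping links.
    """
    def __init__(self, capacity=1024):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0   # index of the next URL to dequeue
        self.size = 0   # number of URLs currently queued

    def enqueue(self, url):
        tail = (self.head + self.size) % self.capacity
        self.buf[tail] = url
        if self.size < self.capacity:
            self.size += 1
        else:
            # Buffer was full: the oldest entry was just overwritten,
            # so advance head past it.
            self.head = (self.head + 1) % self.capacity

    def dequeue(self):
        if self.size == 0:
            raise IndexError("frontier is empty")
        url = self.buf[self.head]
        self.head = (self.head + 1) % self.capacity
        self.size -= 1
        return url
```

Both operations run in constant time with no allocation, so a scheduler thread can keep refilling the buffer while downloader threads drain it, which is one way to realize the parallelization policy the authors mention.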
  • History of Search Engines
    International Journal of Management & Information Systems, Fourth Quarter 2011, Volume 15, Number 4. History of Search Engines. Tom Seymour, Minot State University, USA; Dean Frantsvog, Minot State University, USA; Satheesh Kumar, Minot State University, USA.
    ABSTRACT: As the number of sites on the Web increased in the mid-to-late 90s, search engines started appearing to help people find information quickly. Search engines developed business models to finance their services, such as pay-per-click programs offered by Open Text in 1996 and then Goto.com in 1998. Goto.com later changed its name to Overture in 2001, was purchased by Yahoo! in 2003, and now offers paid search opportunities for advertisers through Yahoo! Search Marketing. Google also began to offer advertisements on search results pages in 2000 through the Google AdWords program. By 2007, pay-per-click programs proved to be primary money-makers for search engines. In a market dominated by Google, in 2009 Yahoo! and Microsoft announced the intention to forge an alliance. The Yahoo! & Microsoft Search Alliance eventually received approval from regulators in the US and Europe in February 2010. Search engine optimization consultants expanded their offerings to help businesses learn about and use the advertising opportunities offered by search engines, and new agencies focusing primarily upon marketing and advertising through search engines emerged. The term "Search Engine Marketing" was proposed by Danny Sullivan in 2001 to cover the spectrum of activities involved in performing SEO, managing paid listings at the search engines, submitting sites to directories, and developing online marketing strategies for businesses, organizations, and individuals.
  • Indexing the World Wide Web: The Journey So Far
    Indexing the World Wide Web: The Journey So Far. Abhishek Das, Google Inc., USA; Ankit Jain, Google Inc., USA.
    ABSTRACT: In this chapter, we describe the key indexing components of today's web search engines. As the World Wide Web has grown, the systems and methods for indexing have changed significantly. We present the data structures used, the features extracted, the infrastructure needed, and the options available for designing a brand new search engine. We highlight techniques that improve relevance of results, discuss trade-offs to best utilize machine resources, and cover distributed processing concepts in this context. In particular, we delve into the topics of indexing phrases instead of terms, storage in memory vs. on disk, and data partitioning. We finish with some thoughts on information organization for newly emerging data forms.
    INTRODUCTION: The World Wide Web is considered to be the greatest breakthrough in telecommunications after the telephone. Quoting the new media reader from MIT Press [Wardrip-Fruin, 2003]: "The World-Wide Web (W3) was developed to be a pool of human knowledge, and human culture, which would allow collaborators in remote sites to share their ideas and all aspects of a common project." The last two decades have witnessed many significant attempts to make this knowledge "discoverable". These attempts broadly fall into two categories: (1) classification of webpages in hierarchical categories (directory structure), championed by the likes of Yahoo! and the Open Directory Project; (2) full-text index search engines such as Excite, AltaVista, and Google. The former is an intuitive method of arranging web pages, where subject-matter experts collect and annotate pages for each category, much like books are classified in a library.
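One of the chapter's topics is indexing phrases instead of single terms. A standard structure for this is a positional inverted index, which records where each term occurs so that adjacency can be checked at query time. The toy sketch below is invented here to illustrate the idea; real engines add the compression, memory-vs.-disk trade-offs, and data partitioning the chapter discusses.

```python
from collections import defaultdict

def build_positional_index(docs):
    """Map term -> {doc_id: [positions]} so phrase queries can be answered."""
    index = defaultdict(lambda: defaultdict(list))
    for doc_id, text in docs.items():
        for pos, term in enumerate(text.lower().split()):
            index[term][doc_id].append(pos)
    return index

def phrase_query(index, phrase):
    """Return doc_ids containing the exact phrase, by checking consecutive positions."""
    terms = phrase.lower().split()
    if not terms or any(t not in index for t in terms):
        return set()
    results = set()
    for doc_id, positions in index[terms[0]].items():
        for start in positions:
            # The i-th following term must appear at position start + i.
            if all(doc_id in index[t] and start + i in index[t][doc_id]
                   for i, t in enumerate(terms[1:], start=1)):
                results.add(doc_id)
                break
    return results

docs = {1: "the world wide web grew fast", 2: "wide world of web indexing"}
idx = build_positional_index(docs)
print(phrase_query(idx, "world wide web"))   # -> {1}
```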
  • Reading Sample (Web Search: Multidisciplinary Perspectives)
    Information Science and Knowledge Management 14. Web Search: Multidisciplinary Perspectives. Edited by Amanda Spink and Michael Zimmer. 1st edition, 2008. Hardcover, xii + 352 pp. ISBN 978-3-540-75828-0. Format: 15.5 x 23.5 cm, weight 703 g. Subject areas: computing (general), modern communication, and its social, security-related, and ethical aspects.
    Chapter 16: Web Searching: A Quality Measurement Perspective. D. Lewandowski and N. Höchstötter.
    Summary: The purpose of this paper is to describe various quality measures for search engines and to ask whether these are suitable. We especially focus on user needs and their use of Web search engines. The paper presents an extensive literature review and a first quality measurement model as well. Findings include that Web search engine quality cannot be measured by just retrieval effectiveness (the quality of the results), but should also consider index quality, the quality of the search features, and Web search engine usability. For each of these sections, empirical results from studies conducted in the past, as well as from our own research, are presented. These results have implications for the evaluation of Web search engines and for the development of better search systems that give the user the best possible search experience.
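Retrieval effectiveness, the first of the quality dimensions named in this summary, is conventionally computed from human relevance judgments over a ranked result list. The sketch below shows precision at rank k, one common such measure; the result list and judgments are invented for illustration.

```python
def precision_at_k(ranked_results, relevant, k):
    """Fraction of the top-k results that were judged relevant."""
    top_k = ranked_results[:k]
    return sum(1 for doc in top_k if doc in relevant) / k

# Hypothetical engine output and human relevance judgments.
ranked = ["d3", "d7", "d1", "d9", "d2"]
judged_relevant = {"d3", "d1", "d2"}
print(precision_at_k(ranked, judged_relevant, 5))   # -> 0.6
```

As the summary argues, a score like this captures only the quality of the results; index quality, the search features, and usability each need their own measures.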
  • Intelligent Web Agent for Search Engines
    International Conference on Trends and Advances in Computation and Engineering, TRACE-2010. Intelligent Web Agent for Search Engines. Avinash N. Bhute, Harsha A. Bhute, Dr. B. B. Meshram.
    Abstract: In this paper we review studies of the growth of the Internet and technologies that are useful for information search and retrieval on the Web, where search engines retrieve the relevant information. We collected data on the Internet from several different sources, e.g., current as well as projected numbers of users, hosts, and Web sites. The trends cited by the sources are consistent and point to exponential growth in the past and in the coming decade. Hence it is not surprising that about 85% of Internet users surveyed claim to use search engines and search services to find specific information, while users are not satisfied with the performance of the current generation of search engines: slow retrieval speed, communication delays, and poor quality of retrieved results. Web agents, programs acting autonomously on some task, are already present in the form of spiders, crawlers, and robots.
    II. WEB CRAWLER: A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner. Other terms for Web crawlers are ants, automatic indexers, bots, worms [3], Web spiders, and Web robots; "Web crawling" or "spidering" is the term used for the activity itself. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code.
  • LookSmart Expands Strategic Relationship With InfoSpace, Bringing Its Search Results to Excite and WebCrawler
    LookSmart Expands Strategic Relationship With InfoSpace, Bringing Its Search Results to Excite and Webcrawler SAN FRANCISCO & BELLEVUE, Wash.--(BUSINESS WIRE)--May 7, 2002-- LookSmart's Search Results, Found Within InfoSpace's Next Generation Meta-Search Product, Launches Today at Excite.com and WebCrawler.com LookSmart (Nasdaq:LOOK; ASX:LOK), a global leader in search targeted marketing, and InfoSpace, Inc. (Nasdaq:INSP), a provider of wireless and Internet software and application services, today announced that they have extended and expanded their search relationship. Under the agreement, LookSmart search listings will now appear on InfoSpace's newly-launched, next generation, meta-search product found at Excite (www.excite.com) and WebCrawler (www.webcrawler.com). Excite is one of the leading personalization portals on the Internet, offering world-class content and functionality from more than 75 leading sources with more than 14 million monthly unique users. Additionally, LookSmart will continue to provide results for InfoSpace's other meta-search solutions, which include Dogpile (www.dogpile.com) and MetaCrawler (www.metacrawler.com). InfoSpace's proprietary meta-search technology allows users to search multiple engines at once, returning comprehensive results fast. "LookSmart is pleased to expand its relationship with InfoSpace by playing a significant role in the new search product that is served on Excite and WebCrawler," said Scott Stanford, senior vice president of business development for LookSmart. "LookSmart will meet an immediate need for high-quality search results and we look forward to continue working together to create great search for users of InfoSpace's properties." "LookSmart's search listings produce highly relevant results for users of our meta-search products, including the next generation engine launched at Excite and WebCrawler," said York Baur, InfoSpace executive vice president, wireline and broadband.
  • Effective Searching Policies for Web Crawler
    International Journal of Modern Engineering Research (IJMER), www.ijmer.com, Vol. 3, Issue 5, Sep.-Oct. 2013, pp. 3137-3139, ISSN: 2249-6645
    Effective Searching Policies for Web Crawler. Suyash Gupta, KrishanDev Mishra, Prerna Singh (Department of Information Technology, Mahamaya Technical University, India).
    ABSTRACT: Search engines cannot index every Web page, due to limited storage, bandwidth, and computational resources and the dynamic nature of the web; they cannot continuously monitor all parts of the web for changes. It is therefore important to develop effective searching policies. The technique described here combines different searching techniques into one effective searching policy: best-first search, focused crawling, InfoSpiders, recrawling pages for updating, and crawling hidden web pages. The combined technique also includes a selection policy (such as PageRank, path-ascending, or focused crawling), a revisit policy (freshness, age), politeness, and parallelization, allowing distributed web crawling (see the politeness sketch below).
    Keywords: Best-first search, recrawling pages for updating, crawling hidden web pages, focused crawling, revisit policy, politeness, parallelization.
    I. INTRODUCTION: A Web crawler is a software program that helps locate information stored on a computer system, typically on the WWW. Web crawlers are mainly used to create a copy of all visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used to automate maintenance tasks on a Web site, such as checking links or validating HTML code.
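Of the policies listed above, politeness is the most mechanical to make concrete: honor robots.txt and rate-limit requests to each host. Here is a minimal sketch using Python's standard urllib.robotparser; the user-agent string and the two-second delay are arbitrary choices, not values from the paper.

```python
import time
from urllib import robotparser
from urllib.parse import urlparse

USER_AGENT = "ExampleCrawler/0.1"   # hypothetical crawler name
MIN_DELAY = 2.0                     # seconds between requests to the same host

robots_cache = {}   # host -> RobotFileParser
last_hit = {}       # host -> timestamp of the previous request

def polite_can_fetch(url):
    """Check robots.txt (cached per host) and enforce a per-host crawl delay."""
    host = urlparse(url).netloc
    if host not in robots_cache:
        rp = robotparser.RobotFileParser()
        rp.set_url("https://" + host + "/robots.txt")
        try:
            rp.read()
        except OSError:
            pass   # robots.txt unreachable: can_fetch() then denies, erring safe
        robots_cache[host] = rp
    if not robots_cache[host].can_fetch(USER_AGENT, url):
        return False
    wait = MIN_DELAY - (time.time() - last_hit.get(host, 0.0))
    if wait > 0:
        time.sleep(wait)   # pause so the host is not hit too frequently
    last_hit[host] = time.time()
    return True

if __name__ == "__main__":
    print(polite_can_fetch("https://example.com/index.html"))
```

A distributed crawler would keep this per-host state in shared storage so that parallel workers do not collectively overload a single server.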
  • Web Crawling with Carlos Castillo
    Modern Information Retrieval, Chapter 12: Web Crawling, with Carlos Castillo (Addison Wesley, 2010). Topics: applications of a Web crawler, architecture and implementation, scheduling algorithms, crawling evaluation, extensions, examples of Web crawlers, and trends and research issues.
    Introduction and a Brief History: A Web crawler is software for downloading pages from the Web, also known as a Web spider, Web robot, or simply bot. The cycle of a Web crawling process: the crawler starts by downloading a set of seed pages, which are parsed and scanned for new links; links to pages that have not yet been downloaded are added to a central queue for download later; next, the crawler selects a new page for download, and the process is repeated until a stop criterion is met.
    The first known web crawler was created in 1993 by Matthew Gray, an undergraduate student at MIT. In June of that year, Gray sent the following message to the www-talk mailing list: "I have written a perl script that wanders the WWW collecting URLs, keeping track of where it's been and new hosts that it finds. Eventually, after hacking up the code to return some slightly more useful information (currently it just returns URLs), I will produce a searchable index of this." This Web crawler project was called WWWW (World Wide Web Wanderer) and was used mostly for Web characterization studies.
    In June 1994, Brian Pinkerton, a PhD student at the University of Washington, posted the following message to the comp.infosystems.announce newsgroup: "The WebCrawler Index is now available for searching! The index is broad: it contains information from as many different servers as possible.…"