CS345 Data Mining, Link Analysis 2: Topic-Specific Page Rank, Hubs and Authorities, Spam Detection


Anand Rajaraman, Jeffrey D. Ullman

Some problems with Page Rank
It measures the generic popularity of a page, so it is biased against topic-specific authorities and handles ambiguous queries (e.g., jaguar) poorly. It uses a single measure of importance, whereas other models exist, e.g., hubs-and-authorities. It is susceptible to link spam: artificial link topographies created in order to boost page rank.

Topic-Specific Page Rank
Instead of generic popularity, can we measure popularity within a topic, e.g., computer science or health? The idea is to bias the random walk: when the random walker teleports, he picks a page from a set S of web pages, where S contains only pages relevant to the topic, e.g., Open Directory (DMOZ) pages for a given topic (www.dmoz.org). For each teleport set S, we get a different rank vector rS.

Matrix formulation (see the code sketch below):

Aij = βMij + (1-β)/|S|   if i ∈ S
Aij = βMij               otherwise

Show that A is stochastic. We have weighted all pages in the teleport set S equally; we could also assign different weights to them.

Example
Suppose S = {1} and β = 0.8. [Figure: a four-node web graph (nodes 1-4) with the corresponding transition probabilities on its edges.] The biased iteration converges as follows:

Node   Iteration 0   1     2     ...   stable
1      1.0           0.2   0.52  ...   0.294
2      0             0.4   0.08  ...   0.118
3      0             0.4   0.08  ...   0.327
4      0             0     0.32  ...   0.261

Note how we initialize the page rank vector differently from the unbiased page rank case.

How well does TSPR work?
Experimental results [Haveliwala 2000]: 16 topics were picked, with teleport sets determined using DMOZ (e.g., arts, business, sports, ...). A "blind study" using volunteers ran 35 test queries; results were ranked using Page Rank and the TSPR of the most closely related topic (e.g., a bicycling query using the Sports ranking). In most cases volunteers preferred the TSPR ranking.

Which topic ranking to use?
The user can pick from a menu, or Bayesian classification schemes can classify the query into a topic. We can also use the context of the query: the page the query is launched from (if it talks about a known topic), the history of queries (e.g., "basketball" followed by "jordan"), or user context (e.g., the user's My Yahoo settings, bookmarks, ...).
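The matrix formulation above translates directly into a short power-iteration loop. Below is a minimal sketch in Python; the toy column-stochastic matrix M, the teleport set S, and the iteration count are illustrative assumptions, not the four-node example from the lecture.

```python
import numpy as np

beta = 0.8
# M[i, j] = probability of following a link from page j to page i
# (column-stochastic); this tiny graph is made up for illustration.
M = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 0.0],
              [0.5, 0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0, 0.0]])
S = [0]                          # teleport set: pages relevant to the topic
N = M.shape[0]

# A_ij = beta*M_ij + (1-beta)/|S| if i is in S, and beta*M_ij otherwise.
A = beta * M
A[S, :] += (1 - beta) / len(S)   # A stays column-stochastic

# Start the walk on the teleport set (not uniformly, as in unbiased
# Page Rank) and iterate to the topic-specific rank vector r_S.
r = np.zeros(N)
r[S] = 1.0 / len(S)
for _ in range(100):
    r = A @ r

print(np.round(r, 3))
```

Re-running the loop with a different teleport set S yields the rank vector rS biased toward that topic.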
Hubs and Authorities
Suppose we are given a collection of documents on some broad topic (e.g., stanford, evolution, iraq), perhaps obtained through a text search. Can we organize these documents in some manner? Page rank offers one solution; HITS (Hypertext-Induced Topic Selection), proposed at approximately the same time (1998), is another.

HITS Model
Interesting documents fall into two classes. Authorities are pages containing useful information, e.g., course home pages or the home pages of auto manufacturers. Hubs are pages that link to authorities, e.g., a course bulletin or a list of US auto manufacturers. [Figure: idealized view of hubs pointing to authorities.]

Mutually recursive definition
A good hub links to many good authorities; a good authority is linked from many good hubs. We model this using two scores for each node, a hub score and an authority score, represented as vectors h and a.

Transition Matrix A
HITS uses a matrix A with A[i, j] = 1 if page i links to page j and 0 if not. Aᵀ, the transpose of A, is similar to the Page Rank matrix M, but Aᵀ has 1's where M has fractions.

Example (Yahoo, Amazon, M'soft):

         y  a  m
     y   1  1  1
A =  a   1  0  1
     m   0  1  0

Hub and Authority Equations
The hub score of page P is proportional to the sum of the authority scores of the pages it links to: h = λAa, where the constant λ is a scale factor. The authority score of page P is proportional to the sum of the hub scores of the pages it is linked from: a = µAᵀh, where the constant µ is a scale factor.

Iterative algorithm (see the code sketch below)
Initialize h and a to all 1's. Then repeat: h = Aa, and scale h so that its max entry is 1.0; a = Aᵀh, and scale a so that its max entry is 1.0. Continue until h and a converge.

Example

A  =  1 1 1      Aᵀ =  1 1 0
      1 0 1            1 0 1
      0 1 0            1 1 0

a(yahoo)  = 1    1     1      1     ...  1
a(amazon) = 1    1     4/5    0.75  ...  0.732
a(m'soft) = 1    1     1      1     ...  1
h(yahoo)  = 1    1     1      1     ...  1.000
h(amazon) = 1    2/3   0.71   0.73  ...  0.732
h(m'soft) = 1    1/3   0.29   0.27  ...  0.268

Existence and Uniqueness
From h = λAa and a = µAᵀh we get h = λµ AAᵀ h and a = λµ AᵀA a. Under reasonable assumptions about A, the dual iterative algorithm converges to vectors h* and a* such that h* is the principal eigenvector of the matrix AAᵀ and a* is the principal eigenvector of the matrix AᵀA.

Bipartite cores
[Figure: hubs and authorities forming bipartite cores; the most densely-connected core is the primary core, and less densely-connected cores are secondary cores.]

Secondary cores
A single topic can have many bipartite cores corresponding to different meanings or points of view: abortion (pro-choice, pro-life), evolution (darwinian, intelligent design), jaguar (auto, Mac, NFL team, panthera onca). How do we find such secondary cores?

Non-primary eigenvectors
AAᵀ and AᵀA have the same set of eigenvalues. An eigenpair is the pair of eigenvectors with the same eigenvalue. The primary eigenpair (largest eigenvalue) is what we get from the iterative algorithm; non-primary eigenpairs correspond to other bipartite cores. The eigenvalue is a measure of the density of links in the core.

Finding secondary cores
Once we find the primary core, we can remove its links from the graph and repeat the HITS algorithm on the residual graph to find the next bipartite core. Technically, this is not exactly equivalent to the non-primary eigenpair model.

Creating the graph for HITS
We need a well-connected graph of pages for HITS to work well.

Page Rank and HITS
Page Rank and HITS are two solutions to the same problem: what is the value of an inlink from S to D? In the page rank model, the value of the link depends on the links into S; in the HITS model, it depends on the value of the other links out of S. The destinies of Page Rank and HITS post-1998 were very different. Why?
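The iterative algorithm and the three-page example above can be reproduced in a few lines. This is a minimal sketch (numpy is an implementation choice, not something prescribed by the slides); it converges to the same scaled scores listed in the example, with a(amazon) ≈ h(amazon) ≈ 0.732 and h(m'soft) ≈ 0.268.

```python
import numpy as np

# Link matrix for the example above: rows/columns are Yahoo, Amazon, M'soft.
# A[i, j] = 1 if page i links to page j, 0 otherwise.
A = np.array([[1., 1., 1.],   # Yahoo  links to Yahoo, Amazon, M'soft
              [1., 0., 1.],   # Amazon links to Yahoo, M'soft
              [0., 1., 0.]])  # M'soft links to Amazon

h = np.ones(3)                # hub scores, initialized to all 1's
a = np.ones(3)                # authority scores, initialized to all 1's

for _ in range(50):           # 50 iterations is plenty for this tiny graph
    h = A @ a                 # hub score: sum of authority scores linked to
    h = h / h.max()           # scale so the max entry is 1.0
    a = A.T @ h               # authority score: sum of hub scores linking in
    a = a / a.max()           # scale so the max entry is 1.0

print(np.round(a, 3))         # [1.    0.732 1.   ]
print(np.round(h, 3))         # [1.    0.732 0.268]
```

The scaling step only fixes the normalization; the directions the loop converges to are the principal eigenvectors of AAᵀ and AᵀA, as noted above.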
Web Spam
Search has become the default gateway to the web, so there is a very high premium on appearing on the first page of search results, e.g., for e-commerce sites and advertising-driven sites. What is web spam?

Spamming = any deliberate action solely in order to boost a web page's position in search engine results, incommensurate with the page's real value. Spam = web pages that are the result of spamming. This is a very broad definition, and the SEO industry might disagree! (SEO = search engine optimization.) Approximately 10-15% of web pages are spam.

Web Spam Taxonomy
We follow the treatment by Gyongyi and Garcia-Molina [2004]. Boosting techniques are techniques for achieving high relevance/importance for a web page. Hiding techniques are techniques to hide the use of boosting from humans and web crawlers.

Boosting techniques
Term spamming: manipulating the text of web pages in order to appear relevant to queries. Link spamming: creating link structures that boost page rank or hubs-and-authorities scores.

Term Spamming
Repetition of one or a few specific terms (e.g., free, cheap, viagra), with the goal of subverting TF.IDF ranking schemes. Dumping of a large number of unrelated terms (e.g., copying entire dictionaries). Weaving: copying legitimate pages and inserting spam terms at random positions. Phrase stitching: gluing together sentences and phrases from different sources.

Link spam
There are three kinds of web pages from a spammer's point of view: inaccessible pages; accessible pages (e.g., web log comment pages, where the spammer can post links to his pages); and own pages, which are completely controlled by the spammer and may span multiple domain names.

Link Farms
The spammer's goal is to maximize the page rank of a target page t. The technique: get as many links as possible from accessible pages to the target page t, and construct a "link farm" to get a page rank multiplier effect. [Figure: accessible, inaccessible, and own pages; the target page t exchanges links with M "farm" pages.] This is one of the most common and effective organizations for a link farm.

Analysis
Suppose the rank contributed by the accessible pages is x, the page rank of the target page is y, and N is the total number of pages. The rank of each "farm" page is βy/M + (1-β)/N, so

y = x + βM[βy/M + (1-β)/N] + (1-β)/N
  = x + β²y + β(1-β)M/N + (1-β)/N

The last term (1-β)/N is very small and can be ignored, giving

y = x/(1-β²) + cM/N, where c = β/(1+β).

For β = 0.85, 1/(1-β²) = 3.6: a multiplier effect for the "acquired" page rank. By making M large, we can make y as large as we want.
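The closed form derived above is easy to sanity-check numerically. A small sketch follows; the values of x, M, and N passed in are illustrative assumptions, while β = 0.85 and the 3.6 multiplier come from the analysis above.

```python
def farm_target_rank(x, M, N, beta=0.85):
    """Page rank y of the link-farm target page t: y = x/(1 - beta^2) + c*M/N,
    where x is the rank contributed by accessible pages, M is the number of
    farm pages, N is the total number of pages, and c = beta/(1 + beta)."""
    c = beta / (1 + beta)
    return x / (1 - beta ** 2) + c * M / N

# Multiplier for the "acquired" rank x when beta = 0.85.
print(round(1 / (1 - 0.85 ** 2), 1))                   # 3.6
# Illustrative numbers: growing the farm (M) grows y linearly.
print(farm_target_rank(x=0.0001, M=1_000, N=1_000_000))
print(farm_target_rank(x=0.0001, M=10_000, N=1_000_000))
```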
Detecting Spam
Term spamming can be detected by analyzing text using statistical methods (e.g., Naïve Bayes classifiers), similar to email spam filtering; detecting approximate duplicate pages is also useful. Link spamming is an open research area; one approach is TrustRank.

TrustRank idea
The basic principle is approximate isolation: it is rare for a "good" page to point to a "bad" (spam) page. Sample a set of "seed pages" from the web and have an oracle (a human) identify the good pages and the spam pages in the seed set. This is an expensive task, so we must make the seed set as small as possible.

Trust Propagation
Call the subset of seed pages that are identified as "good" the "trusted pages". Set the trust of each trusted page to 1 and propagate trust through links; each page gets a trust value between 0 and 1. Use a threshold value and mark all pages below the trust threshold as spam. [Figure: a small example graph with pages 1-7, some labelled good and some bad.]

Rules for trust propagation
Trust attenuation: the degree of trust conferred by a trusted page decreases with distance. Trust splitting: the larger the number of outlinks from a page, the less scrutiny the page author gives each outlink, so trust is "split" across outlinks.

Simple model
Suppose the trust of page p is t(p) and its set of outlinks is O(p). For each q ∈ O(p), p confers the trust βt(p)/|O(p)| on q, for 0 < β < 1. Trust is additive: the trust of p is the sum of the trust conferred on p by all its in-linking pages. Note the similarity to Topic-Specific Page Rank: within a scaling factor, trust rank = biased page rank with the trusted pages as the teleport set.

Picking the seed set
There are two conflicting considerations. The human has to inspect each seed page,
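A minimal sketch of the trust-propagation model above, run as biased page rank with the trusted pages as the teleport set. The five-page graph, the trusted seed pages, and the spam threshold below are illustrative assumptions, not data from the slides.

```python
import numpy as np

beta = 0.85
links = {0: [1, 2], 1: [2], 2: [0, 3], 3: [3], 4: [3]}   # page -> outlinks
trusted = [0, 1]                                          # oracle-labelled good seeds
N = len(links)

# Column-stochastic transition matrix built from the outlinks.
M = np.zeros((N, N))
for p, outs in links.items():
    for q in outs:
        M[q, p] = 1.0 / len(outs)

t = np.zeros(N)
t[trusted] = 1.0 / len(trusted)               # all trust starts on the seeds
for _ in range(100):
    t = beta * (M @ t)                        # trust attenuates (beta) and splits (1/|O(p)|)
    t[trusted] += (1 - beta) / len(trusted)   # teleport back to the trusted set

threshold = 0.05
print(np.round(t, 3))
print([p for p in range(N) if t[p] < threshold])   # pages marked as spam
```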
Recommended publications
  • Search Engine Optimization: A Survey of Current Best Practices
    Grand Valley State University, ScholarWorks@GVSU, Technical Library, School of Computing and Information Systems, 2013. Search Engine Optimization: A Survey of Current Best Practices. Niko Solihin, Grand Valley State University. Follow this and additional works at: https://scholarworks.gvsu.edu/cistechlib
    ScholarWorks Citation: Solihin, Niko, "Search Engine Optimization: A Survey of Current Best Practices" (2013). Technical Library. 151. https://scholarworks.gvsu.edu/cistechlib/151. This Project is brought to you for free and open access by the School of Computing and Information Systems at ScholarWorks@GVSU. It has been accepted for inclusion in Technical Library by an authorized administrator of ScholarWorks@GVSU. For more information, please contact [email protected].
    Search Engine Optimization: A Survey of Current Best Practices. By Niko Solihin. A project submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Information Systems at Grand Valley State University, April 2013. Niko Solihin, Grand Valley State University, Grand Rapids, MI, USA, [email protected]
    ABSTRACT: With the rapid growth of information on the web, search engines have become the starting point of most web-related tasks. In order to reach more viewers, a website must improve its organic ranking in search engines. This paper introduces the concept of search engine optimization (SEO) and provides an architectural overview of the predominant search engine, Google.
    2. Build and maintain an index of sites' keywords and links (indexing). 3. Present search results based on reputation and relevance to users' keyword combinations (searching). The primary goal is to effectively present high-quality, precise search results while efficiently handling a potentially
  • Prospects, Leads, and Subscribers
    YOU SHOULD READ THIS eBOOK IF: You are looking for ideas on finding leads. You are looking for ideas on converting leads to subscribers. You want to improve your deliverability. You want to better maintain your lists. You want to minimize your list attrition.
    Spider Trainers can help. Marketing automation has been shown to increase qualified leads for businesses by as much as 451%. As experts in drip and nurture marketing, Spider Trainers is chosen by companies to amplify lead and demand generation while setting standards for design, development, and deployment. Our publications are designed to help you get started, and while we may be guilty of giving too much information, we know that the empowered and informed client is the successful client. We hope this white paper does that for you. We look forward to learning more about your needs. Please contact us at 651 702 3793 or [email protected]. ©2013 SPIDER TRAINERS
    TABLE OF CONTENTS: How to capture subscribers (p. 2): Tipping point (p. 2); Create email lists (p. 3); Pop-up forms (p. 4); Negative consent. How to use paid programs to gain subscribers (p. 29): Buy lists (p. 29); Rent lists (p. 31)
  • SpamRank – Fully Automatic Link Spam Detection (Work in Progress)
    SpamRank – Fully Automatic Link Spam Detection (work in progress). András A. Benczúr, Károly Csalogány, Tamás Sarlós, Máté Uher. Computer and Automation Research Institute, Hungarian Academy of Sciences (MTA SZTAKI), 11 Lagymanyosi u., H–1111 Budapest, Hungary; Eötvös University, Budapest. {benczur, cskaresz, stamas, umate}@ilab.sztaki.hu, www.ilab.sztaki.hu/websearch
    Abstract: Spammers intend to increase the PageRank of certain spam pages by creating a large number of links pointing to them. We propose a novel method based on the concept of personalized PageRank that detects pages with an undeserved high PageRank value without the need of any kind of white or blacklists or other means of human intervention. We assume that spammed pages have a biased distribution of pages that contribute to the undeserved high PageRank value. We define SpamRank by penalizing pages that originate a suspicious PageRank share and personalizing PageRank on the penalties. Our method is tested on a 31 M page crawl of the .de domain with a manually classified 1000-page stratified random sample with bias towards large PageRank values.
    1 Introduction. Identifying and preventing spam was cited as one of the top challenges in web search engines in a 2002 paper [24]. Amit Singhal, principal scientist of Google Inc., estimated that the search engine spam industry had a revenue potential of $4.5 billion in year 2004 if they had been able to completely fool all search engines on all commercially viable queries [36]. Due to the large and ever increasing financial gains resulting from high search engine ratings, it is no wonder that a significant amount of human and machine resources are devoted to artificially inflating the rankings of certain web pages.
  • Clique-Attacks Detection in Web Search Engine for Spamdexing Using K-Clique Percolation Technique
    International Journal of Machine Learning and Computing, Vol. 2, No. 5, October 2012. Clique-Attacks Detection in Web Search Engine for Spamdexing using K-Clique Percolation Technique. S. K. Jayanthi and S. Sasikala, Member, IACSIT.
    Abstract: Search engines make the information retrieval task easier for the users. A highly ranking position in the search engine query results brings great benefits for websites. Some website owners interpret the link architecture to improve ranks. To handle the search engine spam problems, especially link farm spam, clique identification in the network structure would help a lot. This paper proposes a novel strategy to detect the spam based on the K-Clique Percolation method. Data collected from websites is classified with the NaiveBayes classification algorithm. The suspicious spam sites are analyzed for clique-attacks. Observations and findings were given regarding the spam. Performance of the system seems to be good in terms of accuracy.
    A clique cluster groups the set of nodes that are completely connected to each other. Specifically, if connections are added between objects in the order of their distance from one another, a cluster is formed when the objects form a clique. If a web site is considered as a clique, then incoming and outgoing link analysis reveals the existence of cliques in the web. It means strong interconnection between a few websites with mutual link interchange, which improves the rank of all websites participating in the clique cluster. In Fig. 2 one particular case of link spam, link farm spam, is portrayed. That figure shows one particular node (website) pointed to by many nodes (websites); this structure gives a higher rank for that website as per the PageRank algorithm.
  • Understanding and Combating Link Farming in the Twitter Social Network
    Understanding and Combating Link Farming in the Twitter Social Network. Saptarshi Ghosh (IIT Kharagpur, India), Bimal Viswanath (MPI-SWS, Germany), Farshad Kooti (MPI-SWS, Germany), Naveen K. Sharma (IIT Kharagpur, India), Gautam Korlam (IIT Kharagpur, India), Fabricio Benevenuto (UFOP, Brazil), Niloy Ganguly (IIT Kharagpur, India), Krishna P. Gummadi (MPI-SWS, Germany).
    ABSTRACT: Recently, Twitter has emerged as a popular platform for discovering real-time information on the Web, such as news stories and people's reaction to them. Like the Web, Twitter has become a target for link farming, where users, especially spammers, try to acquire large numbers of follower links in the social network. Acquiring followers not only increases the size of a user's direct audience, but also contributes to the perceived influence of the user, which in turn impacts the ranking of the user's tweets by search engines. In this paper, we first investigate link farming in the Twitter network and then explore mechanisms to discourage the activity.
    Web, such as current events, news stories, and people's opinion about them. Traditional media, celebrities, and marketers are increasingly using Twitter to directly reach audiences in the millions. Furthermore, millions of individual users are sharing the information they discover over Twitter, making it an important source of breaking news during emergencies like revolutions and disasters [17, 23]. Recent estimates suggest that 200 million active Twitter users post 150 million tweets (messages) containing more than 23 million URLs (links to web pages) daily [3, 28]. As the information shared over Twitter grows rapidly, search is increasingly being used to find interesting trending
  • Adversarial Web Search by Carlos Castillo and Brian D. Davison
    Foundations and Trends® in Information Retrieval, Vol. 4, No. 5 (2010) 377–486. © 2011 C. Castillo and B. D. Davison. DOI: 10.1561/1500000021. Adversarial Web Search. By Carlos Castillo and Brian D. Davison.
    Contents:
    1 Introduction (379): 1.1 Search Engine Spam (380); 1.2 Activists, Marketers, Optimizers, and Spammers (381); 1.3 The Battleground for Search Engine Rankings (383); 1.4 Previous Surveys and Taxonomies (384); 1.5 This Survey (385)
    2 Overview of Search Engine Spam Detection (387): 2.1 Editorial Assessment of Spam (387); 2.2 Feature Extraction (390); 2.3 Learning Schemes (394); 2.4 Evaluation (397); 2.5 Conclusions (400)
    3 Dealing with Content Spam and Plagiarized Content (401): 3.1 Background (402); 3.2 Types of Content Spamming (405); 3.3 Content Spam Detection Methods (405); 3.4 Malicious Mirroring and Near-Duplicates (408); 3.5 Cloaking and Redirection (409); 3.6 E-mail Spam Detection (413); 3.7 Conclusions (413)
    4 Curbing Nepotistic Linking (415): 4.1 Link-Based Ranking (416); 4.2 Link Bombs (418); 4.3 Link Farms (419); 4.4 Link Farm Detection (421); 4.5 Beyond Detection (424); 4.6 Combining Links and Text (426); 4.7 Conclusions (429)
    5 Propagating Trust and Distrust (430): 5.1 Trust as a Directed Graph (430); 5.2 Positive and Negative Trust (432); 5.3 Propagating Trust: TrustRank and Variants (433); 5.4 Propagating Distrust: BadRank and Variants (434); 5.5 Considering In-Links as well as Out-Links (436); 5.6 Considering Authorship as well as Contents (436); 5.7 Propagating Trust in Other Settings (437); 5.8 Utilizing Trust (438); 5.9 Conclusions (438)
    6 Detecting Spam in Usage Data (439): 6.1 Usage Analysis for Ranking (440); 6.2 Spamming Usage Signals (441); 6.3 Usage Analysis to Detect Spam (444); 6.4 Conclusions (446)
    7 Fighting Spam in User-Generated Content (447): 7.1 User-Generated Content Platforms (448); 7.2 Splogs (449); 7.3 Publicly-Writable Pages (451); 7.4 Social Networks and Social Media Sites (455); 7.5 Conclusions (459)
    8 Discussion (460): 8.1 The (Ongoing) Struggle Between Search Engines and Spammers (460); 8.2 Outlook (463); 8.3 Research Resources (464); 8.4 Conclusions (467)
    Acknowledgments (468); References (469)
  • Link Spam Detection Based on Mass Estimation
    Link Spam Detection Based on Mass Estimation. Zoltan Gyongyi (Computer Science Department, Stanford University, Stanford, CA 94305, USA), Pavel Berkhin (Yahoo! Inc., 701 First Avenue, Sunnyvale, CA 94089, USA), Hector Garcia-Molina (Computer Science Department, Stanford University), Jan Pedersen (Yahoo! Inc.). [email protected], [email protected], [email protected], [email protected]
    ABSTRACT: Link spamming intends to mislead search engines and trigger an artificially high link-based ranking of specific target web pages. This paper introduces the concept of spam mass, a measure of the impact of link spamming on a page's ranking. We discuss how to estimate spam mass and how the estimates can help identifying pages that benefit significantly from link spamming. In our experiments on the host-level Yahoo! web graph we use spam mass estimates to successfully identify tens of thousands of instances of heavy-weight link spamming.
    tivity remains largely undetected by search engines, often manage to obtain very high rankings for their spam pages. This paper proposes a novel method for identifying the largest and most sophisticated spam farms, by turning the spammers' ingenuity against themselves. Our focus is on spamming attempts that target PageRank. We introduce the concept of spam mass, a measure of how much PageRank a page accumulates through being linked to by spam pages. The target pages of spam farms, whose PageRank is boosted by many spam pages, are expected to have a large spam mass. At the same time, popular reputable pages, which have high PageRank because other reputable pages point to
  • Lapland UAS Thesis Template
    DELIVERING USER-FRIENDLY EXPERIENCES BY SEARCH ENGINE OPTIMIZATION. Lian Heng. Bachelor's Thesis, School of Business and Culture, Degree Programme in Business Information Technology, Bachelor of Business Administration, 2014.
    Abstract of thesis. School of Business and Culture, Degree Programme in Business Information Technology. Author: Heng Lian. Year: 2014. Supervisor: Tuomo Lindholm. Commissioned by: N/A. Title of thesis: Delivering User-Friendly Experiences by Search Engine Optimization. No. of pages + app.: 60
    The objective of this research is to study different and recent Google algorithms to encourage the use of White Hat search engine optimization in order to create a user-friendly experience on websites. Moreover, reasons for not using Black Hat search engine optimization are justified. This research focuses on the Google search engine, specifically on search engine optimization principles and their benefits in delivering user-friendly experiences. The exploratory research approach was adopted in this thesis research due to the formulation of the research questions. The descriptive method was also utilized as a secondary research method to enhance the findings. To ensure the validity of this thesis research, the sources drawn from are more recent than year 2007. On the basis of the research analyses and findings, this thesis research proposes different techniques and principles to be applied to webmasters' websites in order to create websites acceptable by Google standards and, more importantly, user-friendly experiences.
    Keywords: user-friendly experience, SEO, Google algorithm, Google Panda, PageRank, Google Penguin, Page Layout
    Contents: Abstract; Figures; 1 Introduction (5): 1.1 Motivation and background (5); 1.2 Objectives (6); 1.3 Structure of the thesis
  • Mining Page Farms and Its Application in Link Spam Detection
    MINING PAGE FARMS AND ITS APPLICATION IN LINK SPAM DETECTION, by Bin Zhou, B.Sc., Fudan University, 2005. A thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in the School of Computing Science. © Bin Zhou 2007, SIMON FRASER UNIVERSITY, Spring 2007. All rights reserved. This work may not be reproduced in whole or in part, by photocopy or other means, without the permission of the author.
    APPROVAL. Name: Bin Zhou. Degree: Master of Science. Title of thesis: Mining Page Farms and Its Application in Link Spam Detection. Examining Committee: Dr. Jiangchuan Liu (Chair), Dr. Jian Pei (Senior Supervisor), Dr. Joseph Peters (Supervisor), Dr. Martin Ester (SFU Examiner). Date Approved:
    Abstract. Understanding the general relations of Web pages and their environments is important, with a few interesting applications such as Web spam detection. In this thesis, we study the novel problem of page farm mining and its application in link spam detection. A page farm is the set of Web pages contributing to (a major portion of) the PageRank score of a target page. We show that extracting page farms is computationally expensive, and propose heuristic methods. We propose the concept of link spamicity based on page farms to evaluate the degree of a Web page being link spam. Using a real sample of more than 3 million Web pages, we analyze the statistics of page farms. We examine the effectiveness of our spamicity-based link spam detection methods using a newly available real data set of spam pages. The empirical study results strongly indicate that our methods are effective.
  • Brave River Solutions 875 Centerville Rd Suite #3, Warwick, RI 02886
    Brave River Solutions ● 875 Centerville Rd Suite #3, Warwick, RI 02886 ● (401) 828-6611. Is your head spinning from acronyms like PPC, SEO, SEM, SMM, or terms like bot, canonical, duplicate content, and link sculpting? Well, this is the resource you've been looking for. In this guide we have compiled a comprehensive collection of SEO terms and practices that marketers, businesses, and clients have needed further clarification on.
    301 redirect. To begin, let's cover the basics of redirects. A redirection occurs if you visit a specific webpage and are then immediately redirected to an entirely new webpage with a different URL. The two main types of redirects are temporary and permanent. To users, there is no obvious difference, but for search engines a 301 redirect informs them that the page being accessed has changed permanently and must have its link juice transferred to the new address (which would not happen with a temporary redirect).
    Affiliate. An affiliate site markets products or services that are sold by another website or business in exchange for fees or commissions. An example is a site that markets Amazon or eBay products on its own website. Affiliate sites often assist in increasing traffic to the site for which they are marketing products or services.
    Algorithm (algo). An algorithm is a type of program that search engines use when deciding which pages to put forth as most relevant for a particular search query.
    Alt tag. An alt tag is an HTML attribute of the IMG tag. The IMG tag is responsible for displaying images, while the alt tag/attribute is the text that gets displayed if an image is unable to be loaded (or if the file happens to be missing).