A Co-Classification Framework for Detecting Web Spam

Total Pages: 16

File Type: PDF, Size: 1020 KB

A Co-classification Framework for Detecting Web Spam and Spammers in Social Media Web Sites

Feilong Chen, Pang-Ning Tan, and Anil K. Jain
Department of Computer Science & Engineering
Michigan State University, East Lansing, Michigan 48824
{chenfeil, ptan, jain}@cse.msu.edu

ABSTRACT

Social media are becoming increasingly popular and have attracted considerable attention from spammers. Using a sample of more than ninety thousand known spam Web sites, we found that between 7% and 18% of their URLs are posted on two popular social media Web sites, digg.com and delicious.com. In this paper, we present a co-classification framework to detect Web spam and the spammers who are responsible for posting it on social media Web sites. The rationale for our approach is that, since the two detection tasks are related, it is advantageous to train them simultaneously so that each can make use of the labeled examples in both the Web spam and the spammer training data. We evaluated the effectiveness of our algorithm on the delicious.com data set. Our experimental results showed that the proposed co-classification algorithm significantly outperforms classifiers that learn each detection task independently.

Categories and Subject Descriptors
H.3.3 [Information Storage and Retrieval]: Information Search and Retrieval—Information filtering

General Terms
Algorithms, Experimentation, Measurement, Security

Keywords
Social Media, Web Spam, Classification

1. INTRODUCTION

Web spamming refers to any deliberate activity that promotes fraudulent Web pages in order to mislead Web users into believing they are viewing legitimate pages. Since search engines play a pivotal role in guiding users to relevant information on the World Wide Web, much of the early spamming activity was directed toward misguiding search engines into ranking spam pages unjustifiably high. Despite extensive research in this area, Web spam detection remains a critically important but unsolved problem, because spammers adjust their strategies to adapt to the defense mechanisms employed against them.

Social media are becoming increasingly popular and have attracted considerable attention from spammers. From a list of 94,198 spam Web sites extracted from a benchmark email spam data set [9], we found that 6,420 (≈ 7%) of them were posted at digg.com and 16,537 (≈ 18%) were posted at delicious.com. While there has been considerable research on detecting spam among hyperlinked Web pages to improve search engine performance, spam detection in social media is still in its infancy, with existing work focusing primarily on detecting spam in blogs [3, 7, 6] and online review forums [4]. Unlike spam detection from hyperlinked Web pages, there is a richer variety of data available in social media that can be utilized for Web spam detection, such as the links between users, the hyperlinks between Web content, and the links between users and Web content. Users may also assign a set of keyword tags to annotate the Web content they have posted. A key challenge, therefore, is to systematically incorporate all of this heterogeneous data into a unified framework to improve Web spam detection.

In addition to Web spam detection, it is useful to identify the spammers responsible for posting such links, so that they can be prevented from future spamming activities. This is a challenging task because some legitimate users may unknowingly post links to spam Web sites, while some spammers may deliberately add links to non-spam Web sites to avoid detection. This paper assumes that spammers tend to post a considerably higher amount of Web spam than non-spammers. Since the Web spam and spammer detection tasks are related, it is advantageous to train their classifiers simultaneously to make use of both sets of labeled examples. To the best of our knowledge, there has been no prior work on the joint detection of Web spam and spammers, an approach we term co-classification.

This paper presents a co-classification framework for Web spam and spammer detection in social media based on the maximum margin principle. Specifically, we formalize the joint detection tasks as a constrained optimization problem in which the relationships between users and their submitted Web content are represented as constraints in the form of graph regularization. To ground the discussion of our framework, we use the social bookmarking Web site delicious.com as an example. While the concepts in this paper are presented for the social bookmarking domain [2], our framework is applicable to any social media Web site where the following data are available: (1) links between users, (2) links between users and their submitted Web content (social news, blogs, opinions, etc.), and (3) tags or other content-based features derived from the Web content. We also show that our framework is applicable to both supervised and semi-supervised learning settings. Experimental results using the delicious.com data set showed that our co-classification framework significantly outperforms classifiers that learn each detection task independently.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. CIKM'09, November 2–6, 2009, Hong Kong, China. Copyright 2009 ACM 978-1-60558-512-3/09/11 ...$5.00.

2. PRELIMINARIES

We begin with a brief discussion of the terminology and notation used in this paper. While the terminology introduced here is based on the features available at delicious.com, it is also applicable to other social media Web sites.

• User: A registered visitor of the social bookmarking Web site. Let U be the set of all users.
• Bookmark: A shortcut to the URL of a Web page. Let B be the set of all bookmarks.
• Tag: A keyword or text description assigned by a user to a bookmark. Let T be the set of all tags.
• Post: A 4-tuple (b, u, t, τ), where b ∈ B, u ∈ U, t ⊆ T, and τ is the timestamp at which the user posted the bookmark on the Web site. We denote the set of all posts, also known as the posting history, as Π. Furthermore, let E_b = {(u, b) | u ∈ U, b ∈ B, ∃ t, τ : (b, u, t, τ) ∈ Π}.
• Fan: A directional link from one user to another. If a user u adds another user v to his/her network, then u becomes a fan of v. Let E_u = {(u, v) | u, v ∈ U, u is a fan of v} be the set of all such pairs of users.

The overall data can be represented as a graph G = (V, E), where V = B ∪ U is the set of nodes (bookmarks and users) and E = E_b ∪ E_u is the set of links. Our work is based on two assumptions: (1) if (u, b) ∈ E_b and b is a spam bookmark, then u is more likely to be a spammer; and (2) if (u, v) ∈ E_u and v is a spammer, then u is also likely to be a spammer. As shown in Section 3, these assumptions are enforced as graph regularization constraints in our co-classification framework.

Throughout this paper, matrices are denoted by boldface capital letters such as X, and vectors by boldface lowercase letters such as y (for a column vector) or y^T (for a row vector), where T is the transpose operator. Elements of matrices and vectors are written X_ij and y_i, respectively.

The input to the two detection tasks includes:

• a set of l_u labeled users, {(x_1^(u), y_1^(u)), ..., (x_{l_u}^(u), y_{l_u}^(u))}, where x_i^(u) is the feature vector for the i-th user and y_i^(u) is its class label (+1 for a spammer and −1 for a non-spammer);
• a set of n_u − l_u unlabeled users, U_u = {x_{l_u+1}^(u), x_{l_u+2}^(u), ..., x_{n_u}^(u)};
• a set of user-bookmark pairs, E_b, where (u, b) ∈ E_b if user u posts a bookmark b;
• a set of user-user pairs, E_u, where (u, v) ∈ E_u if user u is a fan of user v.

Furthermore, let X_b (or X_u) be a matrix whose i-th row corresponds to the feature vector of bookmark (or user) i, i.e., x_i^(b)T (or x_i^(u)T). Given V_b, V_u, E_b, and E_u, the goal of the Web spam and spammer detection tasks is to learn a pair of classifiers: (1) f_b(x^(b)), which accurately maps a bookmark x^(b) to its class label y^(b) ∈ {−1, +1}, and (2) f_u(x^(u)), which accurately maps a user x^(u) to its class label y^(u) ∈ {−1, +1}.

3.2 Maximum Margin Classifier

The classifiers used in this paper are extensions of the least-squares support vector machine (LS-SVM), a variant of the maximum margin classifier proposed by Suykens et al. [8]. Specifically, we may construct an LS-SVM classifier for bookmarks by minimizing the following objective function:

    L_b(w_b, b_b, e, α) = (1/2) w_b^T w_b + (γ_1/2) Σ_{i=1}^{l_b} e_i^2 − Σ_{i=1}^{l_b} α_i { y_i^(b) (w_b^T x_i^(b) + b_b) − 1 + e_i }

Similarly, the corresponding objective function for classifying users can be expressed as:

    L_u(w_u, b_u, ξ, β) = (1/2) w_u^T w_u + (γ_2/2) Σ_{i=1}^{l_u} ξ_i^2 − Σ_{i=1}^{l_u} β_i { y_i^(u) (w_u^T x_i^(u) + b_u) − 1 + ξ_i }
Recommended publications
  • Copyright 2009 Cengage Learning. All Rights Reserved. May Not Be Copied, Scanned, Or Duplicated, in Whole Or in Part
  • Liferay Portal 6 Enterprise Intranets
  • Address Munging: the Practice of Disguising, Or Munging, an E-Mail Address to Prevent It Being Automatically Collected and Used
  • Adversarial Web Search by Carlos Castillo and Brian D
  • Efficient and Trustworthy Review/Opinion Spam Detection
  • The History of Digital Spam
  • History of Spam
  • Spam in Blogs and Social Media
  • Detecting Web Robots with Passive Behavioral Analysis and Forced Behavior by Douglas Brewer (Under the Direction of Kang Li)
  • D2.4 Weblog Spider Prototype and Associated Methodology
  • A Survey on Web Spam and Spam 2.0
  • Pranam Kolari, On Leveraging Social Media