NLSDF FOR BOOSTING THE RECITAL OF WEB SPAMDEXING CLASSIFICATION
DOI: 10.21917/ijsc.2016.0183

S.K. Jayanthi(1) and S. Sasikala(2)
(1)Department of Computer Science, Vellalar College for Women, India. E-mail: [email protected]
(2)Department of Computer Science, Hindusthan College of Arts and Science, India. E-mail: [email protected]

Abstract

Spamdexing is the art of black-hat SEO. Features that are most influential for high rank and visibility are manipulated in the SEO task. The motivation behind this work is to utilize state-of-the-art Website optimization features to enhance the performance of spamdexing detection: features that play a focal role in current SEO strategies show a significant deviation between spam and non-spam samples. This paper proposes 44 features named NLSDF (New Link Spamdexing Detection Features). Social media also creates an impact on search engine results ranking, so features pertaining to social media were incorporated with the NLSDF features to boost the recital of spamdexing classification. The NLSDF features, with 44 attributes along with 5 social media features, boost the classification performance on the WEBSPAM-UK2007 dataset. A one-tailed paired t-test with 95% confidence, performed on the AUC values of the learning models, shows the significance of the NLSDF.

Keywords:
Web Spam, Search Engine, SVM, Decision Table, HMM

1. INTRODUCTION

The inevitable role of search engines is proved day by day, whenever a user seeks a piece of information from the Web. Digital content flourishes at lightning speed every second. The challenge for every content provider on the Web is to get higher visibility in the SERP for their domain. A survey report [1] indicates the importance of the results on the first page:
91.5% of the traffic in Google goes to the first SERP, and when moving from page one to page two the traffic drops by 95%, then by 78% and 58% for the subsequent pages.

The report also examines the traffic share among Websites based on their result position, using data drawn from May 21st to May 27th, 2013. A website in the first position of the search results received 33% of the traffic, compared to 18% for the second position. The data also show that the first position on any results page received more traffic than the second position on that page (i.e. traffic dropped by 27%, 11.3%, and 5.4% from the first to the second position on pages two, three, and four). The traffic dropped by 140% going from the 10th to the 11th position and by 86% going from the 20th to the 21st position. The drop in cumulative traffic when moving from one page to the next was even more significant.

Getting higher visibility therefore brings many potential benefits, and this is where the SEO task gains attention. SEO is the process of improving the visibility of a website or a web page in a search engine results page (SERP). SEO analyses the working method of the ranking algorithms in search engines, the factors they concentrate on while ranking, and the influential features of a Website that improve its rank under a specific ranking algorithm. SEO practitioners also focus on user profiling: which content users seek in trending topics, what type of users seek which type of information, and the class of audience targeted by specific search engines.

SEO is a task of manipulating the content and hyperlinks of a Website. SEO techniques can be classified into two broad categories: white-hat SEO and black-hat SEO. If a Website is subject to white-hat SEO, it obtains a genuine rank and visibility in the SERP. In contrast, if it is subject to black-hat SEO, it obtains a higher-than-deserved rank and visibility in the SERP. Search engines attempt to minimize the effect of the latter, called spamdexing. Gyongyi and Garcia-Molina (2004) [2] discuss SEO in the web spam arena. The activity of some SEOs benefits the whole web community, as they help authors create well-structured, high-quality pages. However, most SEOs engage in practices that are called spamming.

For instance, there are optimizers who define spamming exclusively as increasing relevance for queries not related to the topics of the page. These SEOs endorse and practice techniques that have an impact on importance scores, to achieve what they call ethical web page positioning or optimization. According to them, all types of actions intended to boost ranking (either relevance, or importance, or both) without improving the true value of a page are considered spamming.

The WEBSPAM-UK2007 dataset is the largest compiled collection with class labels, and it was the most prominent dataset publicly available for studying the characteristics of spamdexing when this research was started in 2010. Since spamdexing strategies change day by day, the pace of spamming changes ever more frequently. It was therefore decided to collect SEO-related features for selected samples in the dataset; these samples can be used to elucidate the structural traits and temporal changes of the spamdexing phenomenon.

Selecting the influential SEO features is the next task, and it involves an analysis of four renowned SEO sources [3]: SEOmoz, Searchengineland, Search Engine Journal and SEObook. Page-specific and domain-specific link-based metrics play a vital role in optimizing a Website.

The page-specific metrics considered important in SEO are: keyword-focused anchor text from external links, external link popularity, diversity of link sources (links from many unique root domains), page-specific TrustRank, link popularity, focus of external link sources, keyword-focused anchor text from internal links, location in the information architecture of the site, and internal link popularity.

The site-specific metrics of high importance in SEO are: trustworthiness of the domain based on link distance from trusted domains (TrustRank, Domain mozTrust), global link popularity of the domain (PageRank on the domain graph, Domain mozRank), link diversity of the domain (based on the number of unique root domains linking to pages on this domain), links from hubs/authorities in a given topic-specific neighborhood, temporal growth/shrinkage of links to the domain (the quantity/quality of links earned over time and their temporal distribution), and links from domains with restricted-access extensions (e.g. .edu, .gov, .mil, .ac.uk).

It is clear that Domain Trust/Authority acts as the dominant factor in the success of rankings at Google.
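
Two of the link-based metrics listed above can be made concrete with a short example. The following is a minimal sketch, assuming a toy host-level link graph built with networkx, of how the diversity of link sources (unique root domains linking to a host) and the number of inlinks from restricted-access extensions (.edu, .gov, .mil, .ac.uk) could be computed. The host names and the crude root-domain heuristic are illustrative assumptions, not the feature-extraction code used in this paper.

    import networkx as nx

    # Toy host-level link graph: an edge (a, b) means host a links to host b.
    G = nx.DiGraph()
    G.add_edges_from([
        ("blog.example.com",    "target.co.uk"),
        ("news.example.org",    "target.co.uk"),
        ("cs.someuniv.edu",     "target.co.uk"),
        ("spamfarm1.biz",       "target.co.uk"),
        ("links.spamfarm1.biz", "target.co.uk"),
    ])

    RESTRICTED_SUFFIXES = (".edu", ".gov", ".mil", ".ac.uk")

    def root_domain(host):
        # Crude root-domain approximation: keep the last two labels.
        # (A real implementation would consult the public suffix list.)
        parts = host.split(".")
        return ".".join(parts[-2:]) if len(parts) >= 2 else host

    def link_diversity(graph, host):
        # Diversity of link sources: unique root domains linking to the host.
        return len({root_domain(src) for src in graph.predecessors(host)})

    def restricted_inlinks(graph, host):
        # Links from domains with restricted-access extensions.
        return sum(1 for src in graph.predecessors(host)
                   if src.endswith(RESTRICTED_SUFFIXES))

    print(link_diversity(G, "target.co.uk"))      # 4 (the two spamfarm1.biz hosts count once)
    print(restricted_inlinks(G, "target.co.uk"))  # 1 (only the .edu host)

Spam hosts that inflate raw link counts through a link farm typically score much lower on such diversity- and trust-oriented metrics than on plain link popularity, which is what makes these metrics attractive for spamdexing detection.
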
A link strategy that positions a site as an authority, or a hub, in a web community is a powerful way to get the attention of search engines [4]. According to a correlation study done by SEOmoz, a well-known SEO concern, the link metrics that have significance for the visibility and visitability of a Website are: Domain Authority, Page Authority, mozRank, mozTrust, and link counts (external and internal) [3].

MozRank and PageRank scores have recently been used to show the value of sites offered for sale. Page and Domain Authority represent SEOmoz's best ranking predictions and use MozRank and MozTrust in their calculations. MozRank is part of the Page/Domain Authority mix, which gives high granularity in spamdexing detection.

Cervino and Malae [5] conducted a cybermetric analysis of the correlation between applied metric indicators and accessibility problems in academic websites. They used mozRank and Domain Authority as their focal indicators for determining the visibility of a Website. After analyzing the SEO sources available on the Web, a set of features was demarcated. As a naming convention, the additional features collected, computed and compiled here are called NLSDF (New Link Spam Detection Features).

This paper is organized as follows: Section 2 discusses the related work on this problem. Section 3 describes the 27 new features used for this work along with 17 computed features. Section 4 gives a brief account of the suite of classifiers used in this paper; the parameter settings of the classifiers are also briefed. Section 5 describes the evaluation metrics and presents the results.

2. RELATED WORK

… dataset. Newly included features improve the classification performance on the dataset by 3% in accuracy.

Chung et al. [8] proposed a new set of features including white score, spam score, relative trust, outgoing- and incoming-link-related features, PageRank, and hijacked score.

Kariampor et al. [9] perform classification of web spam using the imperialist competitive algorithm and a genetic algorithm. The imperialist competitive algorithm is a novel optimization algorithm inspired by the socio-political process of imperialism in the real world. Experiments carried out on the WEBSPAM-UK2007 dataset show that feature selection improves classification accuracy and that the imperialist competitive algorithm outperforms the genetic algorithm.

Geng et al. [10] used re-extracted features based on the host-level link graph and on predicted spamicity, clustering, propagation, and neighbor details, with the WEBSPAM-UK2006 dataset as a base. They use bagging, a well-known meta-learning algorithm, with C4.5.

Shen et al. [11] propose a method for web spam detection that uses genetic programming to derive new features from existing link-based features and feeds them as inputs to support vector machine and genetic programming classifiers. According to the authors, the classifiers that use the new features achieve better results than those using the features provided in the original database.

Jayanthi and Sasikala used a genetic algorithm for Web spam classification [12] and later utilized a REPTree-based classifier for the same task [13]. A Naive Bayes classifier was also proposed by the authors for this problem [14]. They also applied Artificial Immune Recognition System based classifiers to the web spam problem and showed that AIRS1 and AIRS2 Parallel are the two methods that give the best results when compared with the pioneering literature [15].
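
The abstract reports that the contribution of the NLSDF and social media features is assessed with a one-tailed paired t-test at 95% confidence on the AUC values of the learning models. The sketch below shows how such a comparison can be carried out with SciPy, assuming per-fold AUC scores are available for a baseline feature set and for the NLSDF-augmented set; the AUC numbers are invented for illustration and are not results from the paper, and the alternative= keyword of ttest_rel requires SciPy 1.6 or later.

    from scipy import stats

    # Hypothetical per-fold AUC values for the same cross-validation folds
    # (illustrative numbers only, not the paper's reported results).
    auc_baseline = [0.71, 0.74, 0.69, 0.73, 0.72, 0.70, 0.75, 0.71, 0.72, 0.74]
    auc_nlsdf    = [0.76, 0.78, 0.74, 0.77, 0.75, 0.74, 0.79, 0.76, 0.77, 0.78]

    # One-tailed paired t-test: H1 is that the NLSDF-augmented model has higher AUC.
    t_stat, p_value = stats.ttest_rel(auc_nlsdf, auc_baseline, alternative="greater")

    alpha = 0.05  # 95% confidence level
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
    if p_value < alpha:
        print("AUC improvement is significant at the 95% confidence level.")
    else:
        print("No significant AUC improvement detected.")

A paired rather than independent test is the appropriate choice here because both feature sets are evaluated on the same folds of the same dataset, so the per-fold AUC values are naturally matched.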