Trustworthy Website Detection Based on Social Hyperlink Network Analysis

TRANSACTIONS ON NETWORK SCIENCE AND ENGINEERING

Xiaofei Niu, Guangchi Liu, Student Member, and Qing Yang, Senior Member

X. Niu is with the School of Computer Science and Technology, Shandong Jianzhu University, Jinan, Shandong 250101, China. Email: [email protected]. G. Liu is with Stratifyd Inc., Charlotte, NC 28207, USA. Email: [email protected]. Q. Yang is with the Department of Computer Science and Engineering, University of North Texas, Denton, Texas 76203, USA. Email: [email protected]. Q. Yang is the corresponding author of this article. Manuscript received January 31, 2018; revised July 16, 2018.

Abstract—Trustworthy website detection plays an important role in enabling a search engine to provide users with meaningful web pages. Current solutions to this problem, however, mainly focus on detecting spam websites rather than promoting more trustworthy ones. In this paper, we propose the enhanced OpinionWalk (EOW) algorithm, which computes the trustworthiness of all websites and identifies those with higher trust values as trustworthy. The EOW algorithm treats the hyperlink structure of websites as a social network and applies social trust analysis to calculate the trustworthiness of individual websites. To integrate social trust analysis into trustworthy website detection, we model the trustworthiness of a website based on the quantity and quality of the websites it points to. We further design a mechanism in EOW to record which websites' trustworthiness needs to be updated while the algorithm "walks" through the network. As a result, the execution time of EOW is reduced by 27.1% compared to the OpinionWalk algorithm. Using the public dataset WEBSPAM-UK2006, we validate the EOW algorithm and analyze the impact of seed selection, seed set size, maximum search depth and starting nodes on the algorithm. Experimental results indicate that the EOW algorithm identifies 5.35% to 16.5% more trustworthy websites than TrustRank.

Index Terms—Trust model, social trust network, trustworthy website detection, social hyperlink network.

1 INTRODUCTION

Search engines have become increasingly important in our daily lives, due to their ability to provide relevant information and web pages to users. Although a search engine typically returns thousands of web pages in answer to a query, users usually read only the few at the top of the list of recommended pages [1]. The advantage of a company's website being ranked at the top of the list can be converted into increased sales, revenue and profits. As a result, several techniques have been created to clandestinely inflate a web page's ranking position and achieve an undeserved click-through rate (CTR). These deceptive actions produce untrustworthy websites that are generally referred to as spam websites [2]. It has been shown that 22.08% of English websites/hosts are classified as spam [3]; similarly, about 15% of Chinese web pages are spam. With spam websites ranked at the top of search results, users waste their time processing useless information, which degrades their quality of experience (QoE). It is therefore critical to design a mechanism that promotes more trustworthy websites and eliminates spam from the search results provided to users.

1.1 Limitations of Prior Art

Existing solutions to trustworthy website detection focus mainly on identifying spam websites, i.e., more trustworthy websites are promoted by removing spam websites from the search results. Web spam can be broadly classified into four groups: content spam, link spam, cloaking and redirection, and click spam [4]. Content spam refers to deliberate changes to the HTML fields of a web page so that the spam page becomes more relevant to certain queries; for example, keywords relevant to popular query terms can be inserted into a spam page. Link spam allows a web page to be highly ranked by manipulating the page's connections to other pages, thereby confusing hyperlink structure analysis algorithms such as PageRank [5] and HITS [6]. Cloaking is a technique that serves different versions of a page to users, based on the information contained in their queries. Redirection sends users to malicious pages by executing JavaScript code. Click spam generates fraudulent clicks with the intention of raising a spam page's ranking position.

Although humans can easily recognize spam websites, it is unrealistic to label all spam websites manually. Numerous anti-spam techniques have therefore been proposed, including solutions based on genetic algorithms [7], genetic programming [8], [9], [10], [11], [12], [13], artificial immune systems [14], swarm intelligence [15], particle swarm optimization [16] and ant colony optimization [17]. It is very difficult, however, to detect all types of web spam, because new spam techniques are created almost as soon as a particular type of spam is identified and banned on the Internet. Instead of classifying and detecting spam websites, PageRank [5] and TrustRank [18] explore the possibility of promoting more trustworthy websites to users in the search results. These solutions first rank all web pages or websites in descending order of their trust scores; then, only the websites at the top of the list are presented to users. As such, the number of spam websites that a user may encounter is significantly reduced. Unfortunately, existing trust-ranking algorithms do not accurately model the trustworthiness of web pages, and thus often mistakenly identify spam as trustworthy websites. To improve the performance of trustworthy website detection, we approach the problem by studying the trust relations among websites, leveraging social network analysis techniques.

1.2 Proposed Solution

Trust has been intensively studied in online social networks [19], [20], and the knowledge obtained from this field can be applied to analyze the hyperlink network consisting of websites, in order to understand website trustworthiness. By considering each website as an individual user, and the hyperlinks connecting websites as the social relations among them, we can model the network of websites as a social hyperlink network.

According to the three-valued subjective logic model (3VSL) [21], the trust relation between two websites can be modeled as a trust opinion <b, d, n>. Based on the opinion operations defined in 3VSL, the trustworthiness of every website can be computed using the OpinionWalk algorithm [22]. The algorithm starts from a seed node and searches the network in a breadth-first manner to compute the trustworthiness of all other nodes from the seed node's perspective. If multiple seed nodes are chosen, the algorithm computes several different trust opinions of the same node; these opinions are then combined to obtain the node's trustworthiness. As such, the websites with higher trust values can be ranked at the top of the list provided to users.

To apply the OpinionWalk algorithm to trustworthy website detection, however, we need to address two challenges. First, 3VSL models trust as an opinion vector containing three values: b (belief), d (distrust) and n (uncertainty). It unfortunately does not specify how the three values of an opinion are obtained. To initialize the trust … algorithm. The key contributions of this paper are as follows.

For the first time, the hyperlinks between websites are viewed as "social" connections between websites. By leveraging a trust model designed for social networks, the trustworthiness of websites can be quantified; due to the accuracy of 3VSL in modeling trustworthiness, each website's trustworthiness can be precisely calculated using the proposed EOW algorithm. Based on previous research results, the trustworthiness of a website is mainly determined by the numbers of trustworthy and spam websites it links to. As only the labeled websites' trustworthiness is known, we treat all other websites as uncertain/undecided. Therefore, by counting the numbers of trustworthy, spam and uncertain websites a website points to, the website's trustworthiness opinion can be formed.

We enhance the OpinionWalk algorithm by identifying which opinions need to be updated while the algorithm searches the social hyperlink network. Specifically, a Boolean vector keeps track of the websites that are linked from the websites whose trustworthiness values were just updated; when the EOW algorithm searches the next level of the network, only these websites' trustworthiness is changed.

The proposed EOW algorithm is validated by experiments on the WEBSPAM-UK2006 dataset, which contains both trustworthy and spam websites crawled within the .uk domain. Experimental results indicate that the EOW algorithm identifies 16.5%, 12.65%, 8.77% and 5.35% more trustworthy websites in the top 1000, 2000, 3000 and 4000 websites, respectively, compared to TrustRank (the state-of-the-art solution), when the number of trustworthy seeds is 200. In addition, EOW saves on average about 27.1% of the execution time of the OpinionWalk algorithm when computing the trustworthiness of all websites.

The rest of this paper is organized as follows. In Section 2, we introduce the proposed EOW algorithm, followed by an example illustrating how it works. In Section 3, we describe how EOW
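The opinion-formation rule sketched in the contributions above — counting the trustworthy, spam and uncertain websites a site links to — can be illustrated as follows. This is a minimal sketch: the paper's exact mapping from link counts to an opinion <b, d, n> is not given in this excerpt, so the proportional normalization, the function name `init_opinion` and its arguments are our assumptions.

```python
def init_opinion(out_links, labels):
    """Form a 3VSL-style opinion (b, d, n) for a website from the
    labels of the websites it points to.

    `labels` maps a site to "trustworthy" or "spam"; unlabeled sites
    are treated as uncertain. The proportional normalization used
    here is an illustrative assumption, not the paper's exact rule.
    """
    t = sum(1 for w in out_links if labels.get(w) == "trustworthy")
    s = sum(1 for w in out_links if labels.get(w) == "spam")
    u = len(out_links) - t - s          # everything else is uncertain
    total = t + s + u
    if total == 0:
        # A site with no out-links carries full uncertainty.
        return (0.0, 0.0, 1.0)
    return (t / total, s / total, u / total)
```

For example, a site linking to one labeled-trustworthy, one labeled-spam and one unlabeled site would receive the opinion (1/3, 1/3, 1/3).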

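The enhanced walk described above — a level-by-level search that uses a Boolean vector to recompute only the successors of nodes whose trust just changed — can be sketched as below. The 3VSL opinion operators are defined outside this excerpt, so the `merge` callback, the seed's default opinion (full belief) and all names here are placeholders rather than the paper's definitions.

```python
def eow_sketch(graph, seed, opinions, merge, max_depth):
    """Level-by-level trust propagation from `seed`.

    A Boolean map (`updated`) marks nodes whose trust changed at the
    previous level; at each new level, only the successors of those
    nodes are recomputed. `merge(parent_trust, edge_opinion)` stands
    in for the 3VSL opinion operations, which this excerpt does not
    define.
    """
    trust = {seed: opinions.get(seed, (1.0, 0.0, 0.0))}  # assumed seed opinion
    updated = {seed: True}      # the paper's Boolean update vector, as a dict
    for _ in range(max_depth):
        frontier = [v for v, dirty in updated.items() if dirty]
        if not frontier:
            break               # nothing changed at the previous level
        updated = {}
        for v in frontier:
            for w in graph.get(v, []):
                incoming = merge(trust[v], opinions.get(w, (0.0, 0.0, 1.0)))
                if trust.get(w) != incoming:
                    trust[w] = incoming
                    updated[w] = True   # w's successors are revisited next level
    return trust
```

With a simple multiplicative discount as `merge`, a chain seed → a → b propagates belief two levels and then stops, since the final level leaves the Boolean vector empty.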