Detection of Crawler Traps: Formalization and Implementation Defeating Protection on Internet and on the TOR Network

Maxence Delong (1), Baptiste David (1) and Eric Filiol (2,3)
(1) Laboratoire de Virologie et de Cryptologie Opérationnelles, ESIEA, Laval, France
(2) Department of Computing, ENSIBS, Vannes, France
(3) High School of Economics, Moscow, Federation of Russia
[email protected], [email protected], efi[email protected]

Keywords: Crawler Trap, Information Distance, Bots, TOR Network.

Abstract: In the domain of web security, websites want to protect themselves from data gathering performed by automatic programs called bots. To that end, crawler traps are an efficient brake against this kind of program. By dynamically creating similar pages or random content, crawler traps feed fake information to the bot and make it waste time and resources. Nowadays, no available bot is able to detect the presence of a crawler trap. Our aim was to find a generic solution to escape any type of crawler trap. Since the random generation is potentially endless, the only way to perform crawler trap detection is on the fly. Using machine learning, it is possible to compare datasets of webpages extracted from regular websites with those generated by crawler traps. Since machine learning requires distances, we designed our system using information theory. We compared widely used distances with a new one designed to take heterogeneous data into account: two pages do not necessarily share the same words, and it is operationally impossible to know all possible words in advance. To address this, our new distance compares two webpages, and the results showed that it is more accurate than the other distances we tested. By extension, our distance has a much larger potential range than crawler trap detection alone. This opens many new possibilities in the scope of data classification and data mining.

1 INTRODUCTION

For most people, surfing the web is done through a browser and amounts to clicking on a few links. But the ecosystem is not made of humans only. There are automatic programs, called bots, that browse websites automatically. Their motivations are multiple: archiving, search engine optimization, content retrieval, collection of user information (with or without consent), search for unpublished and potentially confidential documents, intelligence gathering for attack preparation, and so on. Bots are usually efficient and far faster than humans at collecting all this information. Even if the motivations behind such tools are more or less legitimate, it is understandable that some sites do not wish to be analyzed in this way.

To counter these unwanted automatic analysts, websites have implemented different strategies over the last years, from basic bot detection methods to challenges that only humans are supposed to be able to solve (Turing tests). In a few cases, the most advanced ones have the ability to neutralize the importune bot. The main technique used to deceive and hinder bots' operations is generically described as "crawler traps". As always in IT security, whatever refers to detection and protection measures on one side also refers, symmetrically, to bypass techniques. Hence the aim of our paper is to formalize the concept of crawler trap detection and management, to present different techniques capable of dealing with anti-bot countermeasures, and to present our detection results.

Rather than trying to be stealthy (which would reduce the bot's performance and therefore defeat its main purpose), we want to counter the measures put in place to neutralize bots. The objective is to have ways to pass where other widely used bots generally stop and fail, trapped by the website's defenses. Our whole problem is to have the operational means to detect whenever a website traps our bot and therefore to be able to avoid it. Surprisingly, there is neither published research nor any real public operational solution on the subject, which means we are exploring a new field.

This paper is organized to address this problem. Section 2 presents the state of the art on the different website protection mechanisms. Section 3 addresses the formalization we have considered and presents the solutions implemented on top of these theoretical aspects. Section 4 presents the results obtained with our solutions during a large number of experiments, both on the Internet and on the TOR network (a.k.a. the Darkweb). Finally, Section 5 summarizes our approach and its possible uses.

2 STATE OF THE ART

As far as web bots are concerned, there is a great diversity of uses. While the majority of bots, such as chatbots (Rouse, 2019) or Search Engine Optimization (SEO) bots like Googlebot (Google, 2019), are very helpful to website owners, some of them are malicious. The purposes of malicious bots are multiple; among many examples, we can mention spambots, which collect email addresses to prepare phishing campaigns. Consequently, to fight against malicious bots' activities, many security measures have been developed.

Most of these solutions degrade the user experience or turn out to be impractical from a user's point of view. An example of a repressive solution is IP banishment: if the web server guesses that a bot is crawling it, an error code is returned instead of the legitimate webpages, thus denying access to the content. When the website guesses incorrectly and a real user gets blocked, the frustration is high. Another example of common but impractical security on the web is the captcha (Hasan, 2016). While captchas were originally used to validate specific operations for which a human user was mandatory (online payment, registration...), websites nowadays increasingly condition access to their entire content on solving a captcha first. This is not a sufficient solution either, since there are practical ways to bypass captchas automatically (Sivakorn et al., 2016). As a consequence, other security systems have been developed to be more efficient.

Usually, the official way to keep web crawlers away is to create a file named "robots.txt" placed at the root of the website. This file lists all the sections that are forbidden to crawling, so that legitimate bots avoid them. It is purely declarative and offers no deeper protection against crawling; in a way, it is a gentleman's agreement that determined bots shamelessly bypass. Moreover, the "robots.txt" file gives an attacker direct information about the sensitive parts of the website.
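To make the declarative nature of this mechanism concrete, the following minimal sketch (Python, with an invented robots.txt and a hypothetical user-agent string) shows how a compliant crawler would consult such a file before fetching a page. Nothing in this check is enforced by the server: a hostile bot can simply skip it.

    from urllib.robotparser import RobotFileParser

    # An invented robots.txt of the kind described above: purely declarative,
    # listing the sections that bots are asked (not forced) to avoid.
    robots_lines = [
        "User-agent: *",
        "Disallow: /calendar/",
        "Disallow: /private/",
    ]

    parser = RobotFileParser()
    parser.parse(robots_lines)

    # A polite crawler checks every URL before requesting it.
    for path in ("/", "/calendar/2019-05/", "/private/report.pdf"):
        allowed = parser.can_fetch("ExampleBot/1.0", "http://www.example.com" + path)
        print(path, "->", "allowed" if allowed else "disallowed by robots.txt")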
The new trend for providing real protection focuses on systems called "spider traps" or "crawler traps". The goal is to ambush the malicious bot in a dedicated trap inside the website. The intent is to imprison the bot in a specific place where it keeps receiving endless false information (or the same information, again and again) so that it ultimately wastes time and resources. There are many ways to create and deploy a crawler trap.

Involuntary Crawler Traps. Surprisingly, the notion of crawler trap has existed for a long time, since some of them were not even designed to be crawler traps. This is what we call involuntary crawler traps. For instance, for management purposes, one can find online interactive calendars made of an infinite number of pages, one for each month, generated on the fly by the web server. Nowadays, most calendars are developed in JavaScript, but this language is prohibited on the TOR network for security reasons, which is why such objects can still be found online. Another example lies in websites that generate pages on the fly from a general link with different parameters. If the bot manages such links incorrectly, it can be made to loop endlessly on them.

Infinite Depth or Infinite Loop. In contrast, we can find intentional crawler traps, designed for that very purpose. The most common solution consists of pages generated automatically when the system detects a bot. The generated content does not matter as long as the crawler hits the trap and crawls fake sub-pages. In the majority of cases, a loop over relative URLs is created and the bot crawls the same set of two pages over and over. For instance, one can use two twin directories referencing each other, so that a bot reaching one is sent to the other and vice versa:

http://www.example.com/foo/bar/foo/bar/foo/bar/somepage.php

Identifiers. For some websites, the strategy is to generate a unique identifier stored in the page URL. It is assigned to each visiting client in order to track it and is usually called a session ID. The bot, like any user, receives such an identifier. The trap lies in the fact that the session ID changes randomly, at random times. In this way, the bot can be redirected to already visited webpages under a different ID, a transparent action that a regular user would never perform. The session ID is usually embedded in the URL as follows:

http://example.com/page?sessionid=3B95930229709341E9D8D7C24510E383
http://example.com/page?sessionid=D27E522CBFBFE72457F5479117E3B7D0

There exist countermeasures that allow bots to bypass such protections, ranging from registering hashes of already visited webpages to smarter implementations.
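Both trap families above leave signatures in the URLs themselves, which suggests a first line of defense on the crawler side. The sketch below is a minimal illustration under assumptions, not the detection method developed in this paper: it normalizes URLs by dropping volatile session parameters, flags paths built from endlessly repeated segment patterns, and fingerprints page bodies so that revisits served under new URLs are recognized. The parameter names and the repetition threshold are invented for the example.

    import hashlib
    from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

    # Query parameters treated as volatile; "sessionid" comes from the URLs
    # quoted above, the others are common guesses a real crawler would extend.
    VOLATILE_PARAMS = {"sessionid", "sid", "phpsessid"}

    def normalize(url: str) -> str:
        """Drop volatile query parameters so that two URLs differing only by
        a session ID are treated as the same page."""
        parts = urlparse(url)
        query = [(k, v) for k, v in parse_qsl(parts.query)
                 if k.lower() not in VOLATILE_PARAMS]
        return urlunparse(parts._replace(query=urlencode(query)))

    def looks_like_loop(url: str, max_repeats: int = 2) -> bool:
        """Flag URLs whose path repeats the same one- or two-segment pattern
        more than max_repeats times (e.g. /foo/bar/foo/bar/foo/bar/...)."""
        segments = [s for s in urlparse(url).path.split("/") if s]
        for size in (1, 2):
            for start in range(len(segments) - size):
                pattern = segments[start:start + size]
                repeats, pos = 1, start + size
                while segments[pos:pos + size] == pattern:
                    repeats += 1
                    pos += size
                if repeats > max_repeats:
                    return True
        return False

    def content_fingerprint(html: str) -> str:
        """Hash of the page body, used to notice that a 'new' URL actually
        serves content that was already visited."""
        return hashlib.sha256(html.encode("utf-8", "replace")).hexdigest()

    # Both trap URLs quoted in the text are caught by these checks.
    print(looks_like_loop("http://www.example.com/foo/bar/foo/bar/foo/bar/somepage.php"))
    print(normalize("http://example.com/page?sessionid=3B95930229709341E9D8D7C24510E383"))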
3 USE OF DISTANCES

The objective is to measure the distance between several data sets in order to distinguish between several families of webpages. Section 3.1 defines precisely what we are really trying to measure and the approach that drives our forthcoming analyses.
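The distance actually proposed by the authors is introduced in the rest of section 3 and is not part of this excerpt. Purely to illustrate what measuring a distance between webpages can look like in practice, the sketch below compares the word-frequency distributions of two pages with the standard total variation distance; vocabularies need not match, since a word absent from one page simply gets probability zero there. The sample strings are invented, and this is not the new distance defined by the paper.

    import re
    from collections import Counter

    def word_distribution(page_text: str) -> Counter:
        """Bag-of-words frequency profile of a page (markup assumed removed)."""
        return Counter(re.findall(r"[a-z0-9]+", page_text.lower()))

    def total_variation_distance(p: Counter, q: Counter) -> float:
        """Total variation distance between two word distributions; 0 means
        identical profiles, 1 means completely disjoint vocabularies."""
        if not p or not q:
            return 1.0
        n_p, n_q = sum(p.values()), sum(q.values())
        vocab = set(p) | set(q)
        return 0.5 * sum(abs(p[w] / n_p - q[w] / n_q) for w in vocab)

    # Invented near-duplicate pages, as a crawler trap might serve them.
    page_a = "conference program monday keynote crawler traps detection"
    page_b = "conference program tuesday keynote crawler traps detection"
    print(total_variation_distance(word_distribution(page_a), word_distribution(page_b)))

Computed on the fly between successive pages, such a measure indicates how much genuinely new content the crawler is receiving, which is the kind of quantity section 3.1 sets out to define precisely.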
