Towards Detection and Prevention of Malicious Activities Against Web Applications and Internet Services

Total Pages: 16

File Type: PDF, Size: 1020 KB

Dissertation submitted in fulfillment of the requirements for the degree of Doktor-Ingenieur at the Faculty of Electrical Engineering and Information Technology of Ruhr-Universität Bochum, by Apostolos Zarras from Athens, Greece. Bochum, August 2015. Date of the oral examination: 27.08.2015. Examiner: Prof. Dr. Thorsten Holz, Ruhr-Universität Bochum. Second examiner: Prof. Dr. Herbert Bos, Vrije Universiteit Amsterdam.

Abstract

The Internet has witnessed tremendous growth in recent years. Its services, and above all the World Wide Web, have become an integral part of the lives of hundreds of millions of people who use them on a daily basis. Unfortunately, as the Internet's popularity increases, so does the interest of attackers who seek to exploit vulnerabilities in users' machines. What was once primarily a competition among computer experts to test and improve their technical skills has turned into a multi-billion dollar business. Nowadays, attackers try to take control of vulnerable computers that allow them to perform nefarious tasks, such as sending spam emails, launching distributed denial-of-service (DDoS) attacks, generating revenue from online advertisements through click fraud, or stealing personal data like email accounts and banking credentials.

In this dissertation, we address the security issues online users face every day from two points of view. First, we investigate how infected computers that constitute a botnet (a network of compromised machines remotely controlled by an entity known as the botmaster) perform their malicious activities, and we propose countermeasures against them. We study two of the most fundamental Internet protocols, SMTP and HTTP, and leverage the fact that numerous entities, including cybercriminals, implement these protocols with subtle but perceivable differences, which we can accurately detect. We then develop novel mitigation techniques that utilize these discrepancies to block compromised computers from successfully executing the commands issued by a botmaster. Second, we examine different ways in which attackers exploit the web infrastructure to infect new victims. We initially study the alliances formed among web spammers that aim to boost the page rank of their websites; as a result, these websites gain more popularity and can potentially host exploits. We then investigate the extent to which attackers can abuse the logic of search engine crawlers in order to perform various attacks. Next, we evaluate the ecosystem of online advertisements and analyze the risks related to their abuse. Finally, we propose a system that can protect users from online threats while they surf the web.

Zusammenfassung

The Internet has experienced enormous growth in recent years. Its applications, above all the World Wide Web, have become an essential part of daily life for millions of people. With the rising popularity of the Internet, the interest of attackers has grown as well; they search for vulnerabilities in computers in order to take them over. What began as a kind of contest among computer experts to test and develop technical skills has meanwhile turned into a multi-billion dollar business.
Nowadays, attackers try to bring vulnerable computers under their control in order to send spam emails, carry out distributed denial-of-service (DDoS) attacks, generate profit from online advertisements through so-called click fraud, or steal sensitive data from email and bank accounts.

In this dissertation, we approach the security problems that Internet users face every day from two sides. First, we examine how infected computers in a botnet (a network of infiltrated computers remotely controlled by a botmaster) carry out their attacks, and we present corresponding countermeasures. To this end, we study two integral protocols of the Internet, SMTP and HTTP, and exploit the fact that these protocols are used in slightly different ways by different actors, for example cybercriminals. By analyzing these differences, we can develop new mitigation methods that prevent infected computers from executing the commands of a botmaster. Second, we examine various methods by which attackers infect new computers over the web. On the one hand, we study alliances of web spammers whose goal is to raise the so-called PageRank of malicious websites; a higher PageRank leads to increased visibility of a website and therefore typically also to higher infection rates. On the other hand, we determine to what extent attackers exploit the logic of search engine web crawlers in order to carry out various attacks. We also consider how criminals abuse online advertising for their purposes. Finally, building on our investigations, we propose a system that can help users surf the web safely.

Acknowledgements

It is a great pleasure to acknowledge everyone who has significantly influenced and inspired my life during my graduate studies. In this part, I would like to take the time to thank every single one of them.

First and foremost, I would like to express my special appreciation to my advisor Thorsten Holz for the valuable guidance and support he showed me over the past four years. I would like to thank him for giving me the opportunity to focus on topics that I was interested in, encouraging my research, and allowing me to grow as a research scientist.

Next, I would like to thank all former and current members of the Chair for Systems Security at Ruhr-University Bochum who contributed to a pleasant and productive environment. I recall countless meetings in which we had to switch to English because of me. Thank you guys, and sorry for the inconvenience.

During my Ph.D. studies I had the privilege of collaborating with researchers from different universities and countries. This work would not have been possible without all these folks. Apart from this remote collaboration, I also had the chance to visit the security labs of the University of California, Santa Barbara (UCSB) and Institut Eurécom. There, I met and worked together with some really intelligent people.

I cannot help but thank all my old friends, along with the new friends I made here in Bochum. The time we spent together was a welcome break from my daily routine.

I would also like to express my deepest gratitude to my parents Nikos and Maria, my beloved little sister Nantia, and my grandparents for their support, patience, encouragement, and wise advice throughout my whole life.
I never would have made it through without their love and support. Last, but certainly not least, I would like to thank my wife Eva, who has been a bright light in my life. I am truly grateful for her unconditional love, support, and patience. Without her, I would not have been able to balance my research with everything else. I cannot wait until the day we meet our baby for the first time.

This dissertation is dedicated to all these people who believed in me from the time of my birth until now.

Contents

1 Introduction
  1.1 Motivation
  1.2 Dissertation Scope
  1.3 Contributions
  1.4 Publications
  1.5 Outline

I Botnet Detection and Mitigation

2 Spam Mitigation
  2.1 Introduction
  2.2 SMTP Protocol
  2.3 SMTP Dialects
  2.4 Learning Dialects
    2.4.1 Learning Algorithm
    2.4.2 Collecting SMTP Conversations
  2.5 Matching Conversations to Dialects
    2.5.1 Building the Decision State Machine
    2.5.2 Making a Decision
    2.5.3 Applying the Decision
  2.6 Botnet Feedback Mechanism
  2.7 System Overview
  2.8 Evaluation
    2.8.1 Evaluating the Dialects
    2.8.2 Evaluating the Feedback Manipulation
  2.9 Discussion
    2.9.1 Evading Dialects Detection
    2.9.2 Mitigating Feedback Manipulation
  2.10 Related Work
  2.11 Summary

3 HTTP-Based Malware Mitigation
  3.1 Introduction
  3.2 HTTP Protocol
  3.3 HTTP-Based Malware
  3.4 HTTP-Level Detection
    3.4.1 Header Chains
    3.4.2 HTTP Templates
  3.5 System Overview
    3.5.1 Virtual Machine Zoo
    3.5.2 Learner
    3.5.3 Decision Maker
  3.6 Evaluation
    3.6.1 Establishing the Ground Truth
    3.6.2 Model Generation in Various Web Clients
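The abstract and Chapter 2 of the contents above revolve around SMTP dialects: different mail clients (and bots) emit the same protocol commands with small but telltale formatting differences. The following is only a rough, hypothetical illustration of that idea, not the dissertation's learning algorithm (which learns full protocol state machines); the dialect strings and helper functions are invented for this sketch.

```python
# Minimal sketch: fingerprint an "SMTP dialect" from the exact command strings a
# client sends, then match it against a few known dialects. Toy data only.

KNOWN_DIALECTS = {
    # dialect name -> commands exactly as that client formats them (hypothetical)
    "legit-mta":  ("EHLO mail.example.com", "MAIL FROM:<a@example.com>", "RCPT TO:<b@example.org>", "DATA"),
    "spam-bot-x": ("HELO localhost", "MAIL FROM: <a@example.com>", "RCPT TO: <b@example.org>", "DATA"),
}

def fingerprint(commands):
    """Keep only the features that distinguish dialects here: the verb used
    (HELO vs. EHLO) and whether a space follows the colon in MAIL/RCPT."""
    verbs = tuple(cmd.split(":")[0] if cmd.upper().startswith(("MAIL", "RCPT")) else cmd.split()[0]
                  for cmd in commands)
    spacing = tuple(": " in cmd for cmd in commands)
    return verbs, spacing

def classify(commands):
    obs = fingerprint(commands)
    for name, dialect in KNOWN_DIALECTS.items():
        if fingerprint(dialect) == obs:
            return name
    return "unknown"

if __name__ == "__main__":
    observed = ["HELO localhost", "MAIL FROM: <x@spam.test>", "RCPT TO: <y@victim.test>", "DATA"]
    # HELO instead of EHLO and the extra space after the colon betray the bot dialect.
    print(classify(observed))   # -> "spam-bot-x"
```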
Recommended publications
  • Search Engine Optimization: A Survey of Current Best Practices
    Grand Valley State University, ScholarWorks@GVSU, Technical Library, School of Computing and Information Systems, 2013. Follow this and additional works at: https://scholarworks.gvsu.edu/cistechlib

    ScholarWorks Citation: Solihin, Niko, "Search Engine Optimization: A Survey of Current Best Practices" (2013). Technical Library. 151. https://scholarworks.gvsu.edu/cistechlib/151

    This project is brought to you for free and open access by the School of Computing and Information Systems at ScholarWorks@GVSU. It has been accepted for inclusion in Technical Library by an authorized administrator of ScholarWorks@GVSU. For more information, please contact [email protected].

    Search Engine Optimization: A Survey of Current Best Practices. A project submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Information Systems at Grand Valley State University, April 2013. Niko Solihin, Grand Valley State University, Grand Rapids, MI, USA, [email protected].

    ABSTRACT: With the rapid growth of information on the web, search engines have become the starting point of most web-related tasks. In order to reach more viewers, a website must improve its organic ranking in search engines. This paper introduces the concept of search engine optimization (SEO) and provides an architectural overview of the predominant search engine, Google.
    2. Build and maintain an index of sites' keywords and links (indexing)
    3. Present search results based on reputation and relevance to users' keyword combinations (searching)
    The primary goal is to effectively present high-quality, precise search results while efficiently handling a potentially
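The excerpt lists indexing and searching among a search engine's core functions. As a toy illustration of those two steps only (not taken from Solihin's paper; the page data, index structure, and function names below are hypothetical), an inverted index can be built and queried like this:

```python
# Toy inverted index: map each keyword to the set of pages containing it, then
# answer a query by intersecting those sets. Real engines layer ranking signals
# (link analysis, relevance scores) on top of this structure.
from collections import defaultdict

pages = {  # hypothetical crawled pages
    "p1": "search engine optimization best practices",
    "p2": "search engines crawl index and rank pages",
    "p3": "organic ranking depends on links and keywords",
}

index = defaultdict(set)
for url, text in pages.items():           # indexing
    for word in text.split():
        index[word].add(url)

def search(query):                        # searching
    terms = query.lower().split()
    hits = [index.get(t, set()) for t in terms]
    return set.intersection(*hits) if hits else set()

print(sorted(search("search engine")))    # -> ['p1'] (p2 contains "engines", not "engine")
```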
  • Prospects, Leads, and Subscribers
    YOU SHOULD READ THIS eBOOK IF:
    - You are looking for ideas on finding leads.
    - You are looking for ideas on converting leads to subscribers.
    - You want to improve your deliverability.
    - You want to better maintain your lists.
    - You want to minimize your list attrition.

    Spider Trainers can help. Marketing automation has been shown to increase qualified leads for businesses by as much as 451%. As experts in drip and nurture marketing, Spider Trainers is chosen by companies to amplify lead and demand generation while setting standards for design, development, and deployment. Our publications are designed to help you get started, and while we may be guilty of giving too much information, we know that the empowered and informed client is the successful client. We hope this white paper does that for you. We look forward to learning more about your needs. Please contact us at 651 702 3793 or [email protected]. ©2013 Spider Trainers

    TABLE OF CONTENTS
    - How to capture subscribers: Tipping point; Create email lists; Pop-up forms; Negative consent
    - How to use paid programs to gain subscribers: Buy lists; Rent lists
  • SpamRank – Fully Automatic Link Spam Detection (Work in Progress)
    SpamRank – Fully Automatic Link Spam Detection (work in progress)
    András A. Benczúr, Károly Csalogány, Tamás Sarlós, Máté Uher
    Computer and Automation Research Institute, Hungarian Academy of Sciences (MTA SZTAKI), 11 Lagymanyosi u., H-1111 Budapest, Hungary; Eötvös University, Budapest
    {benczur, cskaresz, stamas, umate}@ilab.sztaki.hu, www.ilab.sztaki.hu/websearch

    Abstract: Spammers intend to increase the PageRank of certain spam pages by creating a large number of links pointing to them. We propose a novel method based on the concept of personalized PageRank that detects pages with an undeserved high PageRank value without the need of any kind of white or blacklists or other means of human intervention. We assume that spammed pages have a biased distribution of pages that contribute to the undeserved high PageRank value. We define SpamRank by penalizing pages that originate a suspicious PageRank share and personalizing PageRank on the penalties. Our method is tested on a 31 M page crawl of the .de domain with a manually classified 1000-page stratified random sample with bias towards large PageRank values.

    1 Introduction. Identifying and preventing spam was cited as one of the top challenges in web search engines in a 2002 paper [24]. Amit Singhal, principal scientist of Google Inc., estimated that the search engine spam industry had a revenue potential of $4.5 billion in year 2004 if they had been able to completely fool all search engines on all commercially viable queries [36]. Due to the large and ever increasing financial gains resulting from high search engine ratings, it is no wonder that a significant amount of human and machine resources are devoted to artificially inflating the rankings of certain web pages.
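SpamRank builds on personalized PageRank: pages whose supporters look suspicious are penalized, and PageRank is then personalized on those penalties. The sketch below shows only the underlying primitive, plain power-iteration PageRank with an optional personalization (restart) vector, on an invented link graph; it is not the SpamRank pipeline itself.

```python
# Power-iteration PageRank with an optional personalization vector. With a
# uniform vector this is ordinary PageRank; concentrating the vector on chosen
# nodes (penalties, in SpamRank's case) biases the random walk toward them.
def pagerank(graph, damping=0.85, personalization=None, iters=100):
    nodes = list(graph)
    if personalization is None:
        personalization = {v: 1.0 / len(nodes) for v in nodes}
    rank = dict(personalization)
    for _ in range(iters):
        new = {}
        for v in nodes:
            incoming = sum(rank[u] / len(graph[u]) for u in nodes if v in graph[u])
            new[v] = (1 - damping) * personalization[v] + damping * incoming
        rank = new
    return rank

# Hypothetical graph: s1/s2/s3 exist only to point at "spam", inflating its score.
graph = {"a": ["b"], "b": ["a"], "s1": ["spam"], "s2": ["spam"], "s3": ["spam"], "spam": ["s1"]}
print(pagerank(graph))   # "spam" gets the highest score purely from its dedicated supporters
```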
  • Clique-Attacks Detection in Web Search Engine for Spamdexing Using K-Clique Percolation Technique
    International Journal of Machine Learning and Computing, Vol. 2, No. 5, October 2012
    Clique-Attacks Detection in Web Search Engine for Spamdexing using K-Clique Percolation Technique
    S. K. Jayanthi and S. Sasikala, Member, IACSIT

    Abstract: Search engines make the information retrieval task easier for the users. A highly ranking position in the search engine query results brings great benefits for websites. Some website owners interpret the link architecture to improve ranks. To handle the search engine spam problems, especially link farm spam, clique identification in the network structure would help a lot. This paper proposes a novel strategy to detect the spam based on the K-Clique Percolation method. Data are collected from websites and classified with the Naive Bayes classification algorithm. The suspicious spam sites are analyzed for clique-attacks. Observations and findings are given regarding the spam. Performance of the system seems to be good in terms of accuracy.

    A clique cluster groups the set of nodes that are completely connected to each other. Specifically, if connections are added between objects in the order of their distance from one another, a cluster is formed when the objects form a clique. If a web site is considered as a clique, then incoming and outgoing link analysis reveals the existence of cliques in the web. It means strong interconnection between a few websites with mutual link interchange, which improves the rank of all websites participating in the clique cluster. In Fig. 2, one particular case of link spam, link farm spam, is portrayed. That figure shows one particular node (website) pointed to by many nodes (websites); this structure gives a higher rank for that website as per the PageRank algorithm.
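The excerpt describes flagging tightly interlinked website groups via k-clique percolation. A minimal sketch of just that clustering step, using networkx's community module on a hypothetical link graph (the paper's full pipeline also involves a Naive Bayes classification stage, which is omitted here), could look like this:

```python
# k-clique percolation on an undirected link graph: communities are unions of
# adjacent k-cliques, so a mutually interlinked link farm surfaces as one community.
import networkx as nx
from networkx.algorithms.community import k_clique_communities

G = nx.Graph()
# Hypothetical link farm: four sites all exchanging links with each other.
farm = ["farm1.test", "farm2.test", "farm3.test", "farm4.test"]
G.add_edges_from((a, b) for i, a in enumerate(farm) for b in farm[i + 1:])
# Some ordinary, sparsely linked sites.
G.add_edges_from([("news.test", "blog.test"), ("blog.test", "farm1.test")])

for community in k_clique_communities(G, 3):   # k = 3: unions of adjacent triangles
    print(sorted(community))                   # -> the four farm sites cluster together
```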
  • Ubuntu UNLEASHED 2012 Edition Covering 11.10 and 12.04
    Matthew Helmke with Andrew Hudson and Paul Hudson. Ubuntu Unleashed 2012 Edition: Covering Ubuntu 11.10 and 12.04. 800 East 96th Street, Indianapolis, Indiana 46240 USA.

    Copyright © 2012 by Pearson Education, Inc. All rights reserved. No part of this book shall be reproduced, stored in a retrieval system, or transmitted by any means, electronic, mechanical, photocopying, recording, or otherwise, without written permission from the publisher. No patent liability is assumed with respect to the use of the information contained herein. Although every precaution has been taken in the preparation of this book, the publisher and author assume no responsibility for errors or omissions. Nor is any liability assumed for damages resulting from the use of the information contained herein.
    ISBN-13: 978-0-672-33578-5
    ISBN-10: 0-672-33578-6
    Library of Congress Cataloging-in-Publication Data: Helmke, Matthew. Ubuntu unleashed / Matthew Helmke. 2012 ed. p. cm. "Covering 11.10 and 12.04." ISBN-13: 978-0-672-33578-5 (pbk. : alk. paper). ISBN-10: 0-672-33578-6 (pbk. : alk. paper). 1. Ubuntu (Electronic resource) 2. Linux. 3. Operating systems (Computers) I. Title. QA76.76.O63U36 2012 005.4'32 dc23 2011041953.
    Printed in the United States of America. First Printing: January 2012.
    Trademarks: All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized.
    Credits: Editor-in-Chief: Mark Taub. Executive Editor: Debra Williams Cauley. Senior Development Editor: Chris Zahn. Managing Editor: Kristy Hart. Project Editor: Andrew Beaster. Copy Editor: Keith Cline. Indexer: Christine Karpeles. Proofreader: Water Crest Publishing. Technical Editors
  • Strider Web Security
    Adversarial Web Crawling with Strider Monkeys
    Yi-Min Wang, Director, Cyber-Intelligence Lab, Internet Services Research Center (ISRC), Microsoft Research

    Search Engine Basics
    - Crawler (crawling policy)
    - Page classification & indexing
    - Static ranking
    - Query processing
    - Document-query matching & dynamic ranking (diversity)
    - Goals of web crawling: retrieve the web page content seen by browser users; classify and index the content for search ranking
    - What is a monkey? An automation program that mimics human user behavior

    Stateless Static Crawling
    - Assumptions: the input to the web server is the URL (stateless client); the output from the web server is the page content in HTML
    - A static crawler ignores scripts

    Stateful Static Crawling
    - We all know that cookies affect the web server response
    - The HTTP User-Agent field affects the response too: some servers may refuse low-value crawlers; some spammers use crawler-browser cloaking
      - Give crawlers a page that maximizes ranking (= traffic)
      - Give users a page that maximizes profit

    Dynamic Crawling
    - Simple crawler-browser cloaking can be achieved by returning HTML with scripts
      - Crawlers only parse the static HTML text that maximizes ranking/traffic
      - Users' browsers additionally execute the dynamic scripts that maximize profit, usually redirecting to a third-party domain to serve ads
    - Need browser-based dynamic crawlers to index the true content

    Search Spam Example: Google search "coach handbag"
    - Spam doorway URL = http://coach-handbag-top.blogspot.com
    - Script execution on http://coach-handbag-top.blogspot.com/ led to redirection
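The slides describe crawler-browser cloaking, where the response depends on who appears to be asking. A rough, hypothetical sketch of the simplest check, covering only the User-Agent dimension (detecting script-based cloaking requires an actual browser-based crawler, as the slides note), might fetch a page twice and compare the two responses; the User-Agent strings and threshold-free "score" below are illustrative only:

```python
# Fetch the same URL as a "crawler" and as a "browser" and measure how similar
# the two responses are. A low similarity (high score) hints at User-Agent cloaking.
import difflib
import urllib.request

def fetch(url, user_agent):
    req = urllib.request.Request(url, headers={"User-Agent": user_agent})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def cloaking_score(url):
    as_crawler = fetch(url, "ExampleBot/1.0 (+http://example.test/bot)")
    as_browser = fetch(url, "Mozilla/5.0 (Windows NT 10.0; Win64; x64)")
    return 1.0 - difflib.SequenceMatcher(None, as_crawler, as_browser).ratio()

if __name__ == "__main__":
    print(cloaking_score("http://example.com/"))   # near 0.0 when both get the same page
```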
  • Understanding and Combating Link Farming in the Twitter Social Network
    Understanding and Combating Link Farming in the Twitter Social Network
    Saptarshi Ghosh (IIT Kharagpur, India), Bimal Viswanath (MPI-SWS, Germany), Farshad Kooti (MPI-SWS, Germany), Naveen K. Sharma (IIT Kharagpur, India), Gautam Korlam (IIT Kharagpur, India), Fabricio Benevenuto (UFOP, Brazil), Niloy Ganguly (IIT Kharagpur, India), Krishna P. Gummadi (MPI-SWS, Germany)

    ABSTRACT: Recently, Twitter has emerged as a popular platform for discovering real-time information on the Web, such as news stories and people's reaction to them. Like the Web, Twitter has become a target for link farming, where users, especially spammers, try to acquire large numbers of follower links in the social network. Acquiring followers not only increases the size of a user's direct audience, but also contributes to the perceived influence of the user, which in turn impacts the ranking of the user's tweets by search engines. In this paper, we first investigate link farming in the Twitter network and then explore mechanisms to discourage the activity.

    Web, such as current events, news stories, and people's opinion about them. Traditional media, celebrities, and marketers are increasingly using Twitter to directly reach audiences in the millions. Furthermore, millions of individual users are sharing the information they discover over Twitter, making it an important source of breaking news during emergencies like revolutions and disasters [17, 23]. Recent estimates suggest that 200 million active Twitter users post 150 million tweets (messages) containing more than 23 million URLs (links to web pages) daily [3, 28]. As the information shared over Twitter grows rapidly, search is increasingly being used to find interesting trending
  • Adversarial Web Search by Carlos Castillo and Brian D. Davison
    Foundations and Trends in Information Retrieval, Vol. 4, No. 5 (2010) 377–486, © 2011 C. Castillo and B. D. Davison, DOI: 10.1561/1500000021
    Adversarial Web Search, by Carlos Castillo and Brian D. Davison

    Contents
    1 Introduction
      1.1 Search Engine Spam
      1.2 Activists, Marketers, Optimizers, and Spammers
      1.3 The Battleground for Search Engine Rankings
      1.4 Previous Surveys and Taxonomies
      1.5 This Survey
    2 Overview of Search Engine Spam Detection
      2.1 Editorial Assessment of Spam
      2.2 Feature Extraction
      2.3 Learning Schemes
      2.4 Evaluation
      2.5 Conclusions
    3 Dealing with Content Spam and Plagiarized Content
      3.1 Background
      3.2 Types of Content Spamming
      3.3 Content Spam Detection Methods
      3.4 Malicious Mirroring and Near-Duplicates
      3.5 Cloaking and Redirection
      3.6 E-mail Spam Detection
      3.7 Conclusions
    4 Curbing Nepotistic Linking
      4.1 Link-Based Ranking
      4.2 Link Bombs
      4.3 Link Farms
      4.4 Link Farm Detection
      4.5 Beyond Detection
      4.6 Combining Links and Text
      4.7 Conclusions
    5 Propagating Trust and Distrust
      5.1 Trust as a Directed Graph
      5.2 Positive and Negative Trust
      5.3 Propagating Trust: TrustRank and Variants
      5.4 Propagating Distrust: BadRank and Variants
      5.5 Considering In-Links as well as Out-Links
      5.6 Considering Authorship as well as Contents
      5.7 Propagating Trust in Other Settings
      5.8 Utilizing Trust
      5.9 Conclusions
    6 Detecting Spam in Usage Data
      6.1 Usage Analysis for Ranking
      6.2 Spamming Usage Signals
      6.3 Usage Analysis to Detect Spam
      6.4 Conclusions
    7 Fighting Spam in User-Generated Content
      7.1 User-Generated Content Platforms
      7.2 Splogs
      7.3 Publicly-Writable Pages
      7.4 Social Networks and Social Media Sites
      7.5 Conclusions
    8 Discussion
      8.1 The (Ongoing) Struggle Between Search Engines and Spammers
      8.2 Outlook
      8.3 Research Resources
      8.4 Conclusions
    Acknowledgments
    References
  • Guide to Open Source Solutions
    White paper: Guide to open source solutions ("Guide to open source" by Smile)

    PREAMBLE

    Smile is a company of engineers specialising in the implementation of open source solutions and the integration of systems relying on open source. Smile is a member of APRIL (the association for the promotion and defence of free software), Alliance Libre, PLOSS, and PLOSS RA, which are regional cluster associations of free software companies. Smile has 600 throughout the world, which makes it the largest company in Europe specialising in open source.

    Since approximately 2000, Smile has been actively supervising developments in technology, which enables it to discover the most promising open source products, to qualify and assess them so as to offer its clients the most accomplished, robust and sustainable products. This approach has led to a range of white papers covering various fields of application: content management (2004), portals (2005), business intelligence (2006), PHP frameworks (2007), virtualisation (2007), and electronic document management (2008), as well as PGIs/ERPs (2008). Among the works published in 2009, we would also cite "Open source VPNs", "Firewall open source flow control", and "Middleware", within the framework of the "System and Infrastructure" collection. Each of these works presents a selection of the best open source solutions for the domain in question, their respective qualities, as well as operational feedback.

    As open source solutions continue to acquire new domains, Smile will be there to help its clients benefit from these in a risk-free way. Smile is present in the European IT landscape as the integration architect of choice to support the largest companies in the adoption of the best open source solutions.
  • Escape from Monkey Island: Evading High-Interaction Honeyclients
    Escape from Monkey Island: Evading High-Interaction Honeyclients
    Alexandros Kapravelos (1), Marco Cova (2), Christopher Kruegel (1), Giovanni Vigna (1)
    1 UC Santa Barbara {kapravel,chris,vigna}@cs.ucsb.edu
    2 University of Birmingham, UK {m.cova}@cs.bham.ac.uk

    Abstract. High-interaction honeyclients are the tools of choice to detect malicious web pages that launch drive-by-download attacks. Unfortunately, the approach used by these tools, which, in most cases, is to identify the side-effects of a successful attack rather than the attack itself, leaves open the possibility for malicious pages to perform evasion techniques that allow one to execute an attack without detection or to behave in a benign way when being analyzed. In this paper, we examine the security model that high-interaction honeyclients use and evaluate their weaknesses in practice. We introduce and discuss a number of possible attacks, and we test them against several popular, well-known high-interaction honeyclients. Our attacks evade the detection of these tools, while successfully attacking regular visitors of malicious web pages.

    1 Introduction. In a drive-by-download attack, a user is lured into visiting a malicious web page, which contains code that exploits vulnerabilities in the user's browser and/or its environment. If successful, the exploits can execute arbitrary code on the victim's machine [33]. This ability is typically used to automatically download and run malware programs on the compromised machine, which, as a consequence, often becomes part of a botnet [31]. Drive-by-download attacks are one of the most pervasive threats on the web, and past measurements have found millions of malicious web pages [3, 32].
  • Tracking and Mitigation of Malicious Remote Control Networks
    Tracking and Mitigation of Malicious Remote Control Networks

    Inaugural dissertation submitted for the academic degree of Doctor of Natural Sciences at the Universität Mannheim, by Thorsten Holz from Trier. Mannheim, 2009. Dean: Prof. Dr. Felix Christoph Freiling, Universität Mannheim. Referee: Prof. Dr. Felix Christoph Freiling, Universität Mannheim. Co-referee: Prof. Dr. Christopher Krügel, University of California, Santa Barbara. Date of the oral examination: 30 April 2009.

    Abstract: Attacks against end-users are one of the negative side effects of today's networks. The goal of the attacker is to compromise the victim's machine and obtain control over it. This machine is then used to carry out denial-of-service attacks, to send out spam mails, or for other nefarious purposes. From an attacker's point of view, this kind of attack is even more efficient if she manages to compromise a large number of machines in parallel. In order to control all these machines, she establishes a malicious remote control network, i.e., a mechanism that enables an attacker to control a large number of compromised machines for illicit activities. The most common type of these networks observed so far are so-called botnets. Since these networks are one of the main factors behind current abuses on the Internet, we need to find novel approaches to stop them in an automated and efficient way. In this thesis we focus on this open problem and propose a general root cause methodology to stop malicious remote control networks. The basic idea of our method consists of three steps. In the first step, we use honeypots to collect information.
  • DRBL-Winroll: The Free Configuration Program for Microsoft Windows
    DRBL-Winroll: The Free Configuration Program for Microsoft Windows
    Ceasar Sun, Steven Shiau, Thomas Tsai
    http://drbl-winroll.org , http://drbl.org , http://clonezilla.org/
    RMLL (LSM) 2015, Q3 2015

    Outline
    - Introduction to DRBL-Winroll: development team; common issues for Windows replication; features/framework
    - Cases of usage: basic installation and usage; how to do centralized management; advanced usage
    - Limitations/Development/Contribution
    - Q&A

    About us
    - From Taiwan, working for the NPO NCHC (National Center for High-Performance Computing)
    - Developers of free/open-source software: DRBL, Clonezilla, DRBL-Winroll, Tux2live, Partclone, Tuxboot, Cloudboot, ... more
    (Taiwan image source: wikipedia.org)

    Developers/Contributors
    - Steven Shiau, Ceasar Sun, Thomas Tsai, Jazz Wang, Jean René Mérou Sánchez, K. L. Huang, Jean-Francois Nifenecker, Louie Chen, Nagappan Alagappan, ...

    Replication Issue
    - Copy & paste? Data vs. configurations: for small-scale replication, it's easy.
    - Deployment is one thing, but configuration is another: not only copy-and-paste.

    Configuration with Massive Scale
    - Not possible by hand; automatic configuration is better ("Hello, I'm Robot #1", "Hello, I'm Robot #2", "Hello, I'm Robot #3", ...)

    Mass Deployment
    - What is "mass deployment"