Suspicious URL and Device Detection by Log Mining


SUSPICIOUS URL AND DEVICE DETECTION BY LOG MINING

by

Yu Tao
B.Sc., University of Science and Technology of China, 2012

Thesis Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science in the School of Computing Science, Faculty of Applied Sciences

© Yu Tao 2014
SIMON FRASER UNIVERSITY
Spring 2014

All rights reserved. However, in accordance with the Copyright Act of Canada, this work may be reproduced without authorization under the conditions for "Fair Dealing." Therefore, limited reproduction of this work for the purposes of private study, research, criticism, review and news reporting is likely to be in accordance with the law, particularly if cited appropriately.

Approval

Name: Yu Tao
Degree: Master of Science
Title of Thesis: Suspicious URL and Device Detection by Log Mining
Examining Committee:
  Dr. Greg Mori, Associate Professor (Chair)
  Dr. Jian Pei, Professor (Senior Supervisor)
  Dr. Jiangchuan Liu, Associate Professor (Supervisor)
  Dr. Wo-Shun Luk, Professor (Internal Examiner)
Date Approved: April 22, 2014

Abstract

Malicious URL detection is a very important task in Internet security intelligence. Existing works rely on inspecting web page content and URL text to determine whether a URL is malicious or not. Many new malicious URLs emerge on the web every day, which makes scanning URLs one by one with traditional methods inefficient and unscalable. In this thesis, we harness the power of big data to detect unknown malicious URLs based on known ones, with the help of Internet access logs. Using our method, we can find not only related malicious URLs, but also the URLs of new updates and C&C (command and control) servers for existing malware, botnets and viruses. In addition, we can detect possibly infected devices. We also discuss how to scale our method to huge data sets, up to hundreds of gigabytes in our experiments. Our extensive empirical study using real data sets from Fortinet, a leader in the Internet security industry, shows the effectiveness and efficiency of our method.

Dedication

To my parents.

Quotation

"Men love to wonder, and that is the seed of science."
- Ralph Waldo Emerson (1803-1882)

Acknowledgments

I would like to express my sincerest gratitude to my senior supervisor, Dr. Jian Pei, who provided creative ideas for my research and warm encouragement for my life. Throughout my master's study, he shared with me not only valuable knowledge, but also the wisdom of life. Without his help, I could never have accomplished this thesis. My gratitude also goes to my supervisor, Dr. Jiangchuan Liu, for reviewing my work and for helpful suggestions that improved my thesis. I am grateful to Dr. Wo-Shun Luk and Dr. Greg Mori for serving on my examining committee. I thank Guanting Tang, Xiao Meng, Juhua Hu, Xiangbo Mao, Xiaoning Xu, Chuancong Gao, Yu Yang, Li Xiong, Lin Liu, Beier Lu and Jiaxing Liang for their kind help during my study at SFU. I am also grateful to my friends at Fortinet. I thank Kai Xu for his guidance and insightful suggestions. My deepest gratitude goes to my parents. Their endless love supported me in overcoming all the difficulties in my study and life.
Contents

Approval
Partial Copyright License
Abstract
Dedication
Quotation
Acknowledgments
Contents
List of Tables
List of Figures

1 Introduction
  1.1 Background and Motivation
  1.2 Challenges
  1.3 Major Idea
  1.4 Contributions
  1.5 Thesis Organization

2 Related Work
  2.1 Blacklisting
  2.2 Heuristics Based Methods
  2.3 Classification Based Methods
    2.3.1 Content Based Methods
    2.3.2 URL Based Methods

3 Problem Definition and Graph Representation
  3.1 Problem Definition
  3.2 Bipartite Graph Representation
  3.3 Assumptions

4 Scalable Methods
  4.1 The Basic Method
  4.2 Limitation of Our Method
  4.3 Data Storage
    4.3.1 Data Storage of Graph Structure
    4.3.2 Data Storage of URLs' Suspicious Scores
    4.3.3 Data Storage of Devices' Suspicious Scores
  4.4 MapReduce Approach
  4.5 Relationship between Scalable Version and MapReduce Version

5 Experimental Results
  5.1 Data Sets
  5.2 Efficiency of Our Basic Method
  5.3 Effectiveness of Our Method
    5.3.1 Effectiveness of Malicious URLs Found by Our Method
    5.3.2 Effectiveness of Infected Devices Found by Our Method
  5.4 Efficiency of Our MapReduce Method
    5.4.1 Number of Mappers
    5.4.2 Number of Reducers
    5.4.3 Number of Machines in Hadoop Cluster

6 Conclusions

Bibliography

List of Tables

1.1 Malicious URLs with the same IP address
1.2 Malicious URLs from the same family of virus
5.1 Top popular websites that we have filtered
5.2 Malicious URLs detected by traditional methods
5.3 Comparison of top 10 URLs of first and second iteration
5.4 Suspicious URLs that D1 has visited
5.5 Suspicious URLs that D2 has visited

List of Figures

3.1 First example of bipartite graph representation
3.2 Second example of bipartite graph representation
4.1 Store the adjacency list of the bipartite graph on disk
4.2 Store the suspicious scores of URLs on disk with the neighbors
4.3 Partition the suspicious scores of devices and the graph structure
4.4 Overview of MapReduce framework
5.1 Degree distribution of URLs
5.2 Running time with size of data set
5.3 Memory storage with size of data set
5.4 Disk storage with size of data set
5.5 Accuracy of Top K URLs found by our method
5.6 Accuracy of Top K URLs that end with 'exe' or 'php'
5.7 Accuracy of Top K URLs with different definitions of being malicious
5.8 Accuracy after one week and two weeks
5.9 Accuracy of different iterations
5.10 Running time with different numbers of reducers
5.11 Running time with size of data set

Chapter 1 Introduction

In this chapter, we first briefly introduce the background of Internet security, how web based attacks work, and the motivation and challenges of malicious URL detection. Then, we summarize our major contributions and describe the structure of the thesis.

1.1 Background and Motivation

The development of the Internet not only improves our quality of life and drives new opportunities for commerce, but also creates opportunities for malicious attacks. Attackers design web based attacks to achieve goals such as the installation of malware and viruses, spam-advertised commerce, identity theft, financial fraud and botnet information flow. Identifying web based attacks and guarding the safety of users on the Internet is very important.

Several factors make the identification of web based attacks challenging. The first is the large scale of the World Wide Web. The number of websites is huge, and different websites provide different kinds of data and services, which makes it difficult to distinguish attack websites from benign websites. Second, attackers can disguise their attacks at any time and duplicate them in multiple locations.

Most web based attacks share a common pattern: the attackers put their attack code on the web and attract users to visit it via its Uniform Resource Locator (URL). As a result, users need to evaluate the associated risk when deciding whether to click on an unfamiliar URL. Is this URL safe, or will clicking it infect the computer? This is a very difficult decision for users to make.

http://coolstowage.com/ponyb/gate.php
http://coolstowage.com/2/gate.php
http://deeliodin.com/ponyb/gate.php
http://couponwalla.com/ponyb/gate.php
http://dealdin.com/ponyb/gate.php
http://coolstowage.com/ponyb/gate.php

Table 1.1: Malicious URLs with the same IP address

There are various systems that help users decide whether a URL is safe to click on. In recent years, the most common method, used in web filtering applications, search engines and browser toolbars, is blacklisting. The bad URLs that direct users to web based attacks are called malicious URLs. Internet security companies maintain a list of known malicious URLs, called a blacklist. After a user clicks on a URL, the URL is checked against the blacklist; if it is found there, the user is prevented from visiting it. How to maintain and update a blacklist is a key issue for Internet security companies. Currently, blacklists are constructed
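The blacklist check described above reduces to a set-membership test over normalized URLs. Below is a minimal sketch of that mechanism in Python; the normalization rules, the one-URL-per-line file format, and all function names are illustrative assumptions, not the implementation of any particular vendor or of this thesis.

```python
# Minimal sketch of blacklist-based URL filtering.
# All details here (normalization, file format) are illustrative assumptions.
from urllib.parse import urlparse

def normalize(url: str) -> str:
    """Lower-case the host and drop any trailing slash so trivial
    variants of the same URL match a single blacklist entry."""
    parts = urlparse(url)
    return parts.netloc.lower() + parts.path.rstrip("/")

def load_blacklist(path: str) -> set:
    """Read one URL per line into a set, giving O(1) membership tests."""
    with open(path) as f:
        return {normalize(line.strip()) for line in f if line.strip()}

def is_blocked(url: str, blacklist: set) -> bool:
    """Return True when the URL should be blocked."""
    return normalize(url) in blacklist
```

Checking http://coolstowage.com/ponyb/gate.php from Table 1.1 against a blacklist that contains it returns True, but a freshly registered variant on a new host returns False until the list is updated; that coverage gap is exactly what motivates the log-mining approach of this thesis.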
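The alternative proposed in this thesis, as summarized in the abstract, is to infer unknown malicious URLs from known ones through Internet access logs: a device that visits known malicious URLs raises the suspicion of the other URLs it visits. The sketch below illustrates that idea as score propagation on a device-URL bipartite graph; the simple iterative-averaging scheme is an assumption of this sketch, while the thesis's actual scoring method is developed in Chapters 3 and 4.

```python
# Illustrative sketch of suspicion propagation over the device-URL
# bipartite graph built from access logs. The averaging scheme is an
# assumption of this sketch, not the scoring formula used in the thesis.
from collections import defaultdict

def propagate(access_log, known_malicious, iterations=2):
    """access_log: iterable of (device_id, url) visit pairs.
    known_malicious: set of URLs already known to be malicious.
    Returns a suspicion score in [0, 1] for every URL in the log."""
    urls_of = defaultdict(set)     # device -> URLs it visited
    devices_of = defaultdict(set)  # URL -> devices that visited it
    for device, url in access_log:
        urls_of[device].add(url)
        devices_of[url].add(device)

    # Seed scores: known malicious URLs start at 1, all others at 0.
    url_score = {u: 1.0 if u in known_malicious else 0.0 for u in devices_of}
    for _ in range(iterations):
        # A device is suspicious to the degree that it visits suspicious URLs.
        dev_score = {d: sum(url_score[u] for u in urls) / len(urls)
                     for d, urls in urls_of.items()}
        # An unknown URL inherits suspicion from the devices that visit it;
        # known malicious URLs keep their seed score.
        url_score = {u: 1.0 if u in known_malicious
                     else sum(dev_score[d] for d in devs) / len(devs)
                     for u, devs in devices_of.items()}
    return url_score
```

On a toy log such as [("D1", "bad.example/gate.php"), ("D1", "new.example/x.php")] with the first URL as the seed, the second URL receives a nonzero score after one iteration because it shares a visitor with a known malicious URL. Running this computation over hundreds of gigabytes of logs is what the scalable and MapReduce methods of Chapter 4 address.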