INDEX

NUMERICS
1&1 Web Hosting, 62
301 redirects, 75, 88–89

A
A/B landing page tests, 243
accounts
    AdWords (Google), 200, 202–203
    Google Analytics, 152–153
    optimizing, 201, 222–223
acquiring quality links, 124, 132–133
ad copy, testing
    with advanced keyword insertion, 225, 234–235
    overview, 224, 230–231
AdBrite, 267
adding
    reviews to Web sites, 105, 122–123
    target keywords to meta tags, 17
administrator (blog), 114
AdSense (Google), 262, 264–265
advanced keyword insertion, 225, 234–235
AdWords (Google)
    accounts, 202–203
    creating campaigns, 204–207
    opening accounts, 200
    overview, 112–113
    structuring accounts, 200
AdWords Editor (Google), 201, 220–221
AdWords Editorial Guidelines, 210
affiliate marketing search arbitrage, 269
Affiliate Summit, 13
Akismet software, 109
Alexa Toolbar, 282, 286–287
alt image tags, 52–53
Analytics (Google)
    automating reporting, 151, 166–167
    creating accounts, 152–153
    e-mail reports, 167
    excluding
        IP address with filters, 158–159
        traffic from particular domains, 160–161
    finding new keywords with, 168–169
    installing
        overview, 150
        tracking code, 154–155
        tracking code on Thank You page, 172–173
    overview, 150–151
    setting conversion goals, 150, 156–157
    setting up e-commerce tracking, 170–171
    specifying directories, 162–163
    third-party shopping carts, 174–175
    tracking external links, 164–165
analyzing
    competition
        with Compete.com, 14, 18–19
        with SEMRush, 15, 24–25
    keywords with Keyword Discovery, 15, 32–33
anchor text, 125, 134–135
Anybrowser (Web site), 65
applying filters, 150
article directory submissions, 125, 138–139
Ask.com, 203, 245, 252–253
auctions, eBay, 245, 258–259
audience, target, 4–5
automating reporting, 151, 166–167
avoiding
    broken links, 67
    duplicate content, 90, 94–95
    error messages, 67
AZN cost-per-action offers, 263, 274–275

B
Best of the Web Directory, 136–137
bidding strategies, 201, 214–215
Bing, 95, 245, 254–255
blocking bad bots, 82
Blogger, 104, 108–109
blogs
    creating
        with Blogger, 104, 108–109
        with Tumblr, 104, 110–111
        with WordPress, 104, 106–107
    keys to success of, 105, 114–115
    optimizing for Technorati, 244, 246–247
    participating in, 140–141
    preventing spam, 79
Boykin, Jim (blogger), 11
browsers, optimizing for multiple, 58, 64–65
budget setting, 6
building relevancy, 78
buying links, 125, 146–149
Buzz (Google), 177, 190–191

C
call-to-action statements, 93
campaigns
    AdWords, 204–207
    content-targeted, 225, 236–237
    pay-per-click
        AdWords accounts, 202–203
        creating AdWords campaigns, 204–207
        Google AdWords Editor, 220–221
        keyword matching options, 212–213
        optimizing account, 222–223
        overview, 200–201
        running PPC reports, 216–217
        setting bidding strategies, 214–215
        targeting campaigns, 208–209
        tracking conversions, 218–219
        writing effective ad copy, 210–211
    placement-targeted, 225, 238–239
    targeting, 208–209
canonicalization, 296
CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart), 109
Cascading Style Sheets (CSS), 49
Chitika Select Ads, 262, 266–267
choosing
    filenames, 36, 38–39
    head terms, 16
    keywords, 14, 16–17
    tail terms, 17
    topics, 2–3
clicks, external, 151
click-through rate (CTR), 235
Commission Junction, 262, 268–269
communities, creating
    adding reviews, 122–123
    creating
        blogs with Blogger, 104, 108–109
        blogs with WordPress, 104, 106–107
        blogs with Tumblr, 104, 110–111
    keys to successful blogs, 105, 114–115
    keys to successful forums, 105, 120–121
    overview, 104–105
    with phpBB, 105, 118–119
    with vBulletin, 105, 116–117
    writing search-engine-optimized posts, 105, 112–113
community participation, 125, 140–143
company information pages, 59, 70–71
Compete.com, 14, 18–19
competition
    analyzing
        with Compete.com, 18–19
        with SEMRush, 24–25
    evaluating, 124, 126–127
Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA), 109
consumer feedback, evaluating, 120
content
    creating
        avoiding duplicate content, 94–95
        goals, 92–93
        keeping current, 91, 100–101
        keyword density, 90, 96–97
        Latent Semantic Content, 91, 98–99
        optimizing non-HTML documents, 91, 102–103
        overview, 90–91
    development costs, 6
content-targeted campaigns, 225, 236–237
contests
    blog, 114
    forum, 121
controlling bots, 47
conversion goals
    setting, 150, 156–157
    tracking, 150
conversion rates
    defined, 6
    tracking, 201, 218–219
converting
    HTML to PDF, 102
    PDF to HTML, 103
Copyscape, 94–95
cost-per-acquisition model (CPA), 255
cost-per-action offers, 263, 274–275
cost-per-click (CPC), 245
cost-per-impression (CPM), 238–239
costs
    industry conference, 12–13
    Web design and development, 6
CPA (cost-per-acquisition model), 255
CPC (cost-per-click), 245
CPM (cost-per-impression), 238–239
Craigslist, 245, 260–261
creating
    AdWords campaigns, 204–207
    communities
        adding reviews, 122–123
        blogs with Blogger, 104, 108–109
        blogs with Tumblr, 104, 110–111
        blogs with WordPress, 104, 106–107
        keys to successful blogs, 105, 114–115
        keys to successful forums, 105, 120–121
        overview, 104–105
        with phpBB, 105, 118–119
        with vBulletin, 105, 116–117
        writing search-engine-optimized posts, 105, 112–113
    company information pages, 59, 70–71
    content
        avoiding duplicate content, 94–95
        goals, 92–93
        keeping current, 91, 100–101
        keyword density, 90, 96–97
        Latent Semantic Content, 98–99
        optimizing non-HTML documents, 91, 102–103
        overview, 90–91
    Fan pages on Facebook, 176, 180–181
    Google Analytics accounts, 152–153
    links, 37, 54–55
    meta description tags, 36, 42–43
    meta robots tags, 37, 46–47
    MySpace profiles, 182
    pages
        choosing file names, 38–39
        creating links, 54–55
        creating meta robots tags, 46–47
        header tags, 48–49
        optimizing images, 52–53
        optimizing meta description tags, 42–43
        optimizing meta keyword tags, 44–45
        optimizing title tags, 40–41
        overview, 36–37
        text modifiers, 50–51
        validating HTML, 56–57
    privacy policies, 59, 72–73
    robots.txt file, 74, 76–77
    sitemaps with WordPress Sitemap Generator, 283, 294–295
CSS (Cascading Style Sheets), 49
CTR (click-through rate), 235
cybersquatting, 63

D
database, 107
dayparting, 225, 240–241
Del.icio.us, 244, 248–249
descriptions, optimizing with WordPress Description Tag plugin, 283, 292–293
designing
    sitemaps, 59, 68–69
    Web site structure, 59, 66
Digg, 177, 196–197
direct linking, 269
Direct Marketing Association, 73
directories, 162–163
DMOZ (Open Directory Project), 136–137, 144–145
documents, non-HTML, 91, 102–103
domain names
    age of, 128
    establishing, 58, 62–63
    excluding traffic from, 160–161
    purchasing, 6
domaining, 63
Dynamic Keyword Insertion, 211

E
EasyReviewScript v1.0, 123
eBay auctions, 245, 258–259
eBay Database (Keyword Discovery), 31
e-book, 103
e-commerce tracking, 151, 170–171
.edu links, finding, 129
Elance (Web site), 8, 9
e-mail reports (Google Analytics), 167
embedded match, 213
error messages, 67
establishing domain names, 58, 62–63
evaluating
    competition, 124, 126–127
    consumer feedback, 120
    potential linking partners, 124, 128–129
excluding
    IP addresses with filters, 158–159
    traffic from domains, 160–161
expressions, regular, 159
Ezinearticles.com, 138–139

F
Facebook
    creating Fan pages on, 176, 180–181
    networking with, 176, 178–179
Facebook Marketplace, 179
Facebook Platform, 179
feedback (consumer), evaluating, 124, 126–127
files
    choosing names, 36, 38–39
    .htaccess, 75, 82–83
    robots.txt, 74, 76–77, 117
filters/filtering
    applying filters, 150
    excluding IP addresses with, 158–159
    keywords with Keyword Discovery, 15, 34–35
finding
    .edu links in Yahoo, 129
    keywords, 151
    keywords with Google Analytics, 168–169
    target audience, 4–5
    Web hosting, 58, 60–61
Firefox plugin, 282, 288–289
forums
    keys to success of, 105, 120–121
    participating in, 142
funnel, 157

G
gathering link intelligence with Linkscape, 124, 130–131
giveaways
    blog, 114
    forums, 121
Global Premium Database (Keyword Discovery), 31
goals
    conversion, 6, 201, 218–219
    of search engine optimization, 92–93
    setting, 7
GoDaddy, 62
Google, 95. See also specific programs
Google AdSense ads, 262, 264–265
Google AdWords
    accounts, 202–203
    creating campaigns, 204–207
    opening accounts, 200
    overview, 112–113
    structuring accounts, 200
Google AdWords Editor, 201, 220–221
Google Analytics
    automating reporting, 151, 166–167
    creating accounts, 152–153
    e-mail reports, 167
    excluding
        IP address with filters, 158–159
        traffic from particular domains, 160–161
    finding new keywords with, 168–169
    installing
        overview, 150
        tracking code, 154–155
        tracking code on Thank You page, 172–173
    overview, 150–151
    setting conversion goals, 150, 156–157
    setting up e-commerce tracking, 151, 170–171
    specifying directories, 162–163
    third-party shopping carts, 174–175
    tracking external links, 164–165
Google Buzz, 177, 190–191
Google Images, 245, 250–251
Google Keyword Suggestion Tool, 3, 15, 28–29
Google Latent Semantic Content tool, 91, 98–99
Google Toolbar, 282, 284–285
Googlebot spider, 47
Google's Link Query, 126
grammar, avoiding poor, 92
Gray, Michael (blogger), 11

H
head terms, 14, 16
header tags, 37, 48–49
hierarchy, linking in, 67
Historical Global Database (Keyword Discovery), 31
Hitwise (Web site), 21
hook, 113
hosted marketing pages, 281
hosting (Web), 58, 60–61
HotScripts Internet directory, 119
.htaccess file, 75, 82–83
HTML
    converting
        to PDF, 102
        PDF to, 103
    validating, 37,

K
KEI (keyword effectiveness indicator), 32
KeyCompete tool, 14, 20–21

Recommended publications
  • Uncovering Social Network Sybils in the Wild
    Uncovering Social Network Sybils in the Wild. ZHI YANG, Peking University; CHRISTO WILSON, University of California, Santa Barbara; XIAO WANG, Peking University; TINGTING GAO, Renren Inc.; BEN Y. ZHAO, University of California, Santa Barbara; YAFEI DAI, Peking University. Sybil accounts are fake identities created to unfairly increase the power or resources of a single malicious user. Researchers have long known about the existence of Sybil accounts in online communities such as file-sharing systems, but they have not been able to perform large-scale measurements to detect them or measure their activities. In this article, we describe our efforts to detect, characterize, and understand Sybil account activity in the Renren Online Social Network (OSN). We use ground truth provided by Renren Inc. to build measurement-based Sybil detectors and deploy them on Renren to detect more than 100,000 Sybil accounts. Using our full dataset of 650,000 Sybils, we examine several aspects of Sybil behavior. First, we study their link creation behavior and find that contrary to prior conjecture, Sybils in OSNs do not form tight-knit communities. Next, we examine the fine-grained behaviors of Sybils on Renren using clickstream data. Third, we investigate behind-the-scenes collusion between large groups of Sybils. Our results reveal that Sybils with no explicit social ties still act in concert to launch attacks. Finally, we investigate enhanced techniques to identify stealthy Sybils. In summary, our study advances the understanding of Sybil behavior on OSNs and shows that Sybils can effectively avoid existing community-based Sybil detectors. We hope that our results will foster new research on Sybil detection that is based on novel types of Sybil features.
  • SVMs for the Blogosphere: Blog Identification and Splog Detection
    SVMs for the Blogosphere: Blog Identification and Splog Detection. Pranam Kolari, Tim Finin and Anupam Joshi, University of Maryland, Baltimore County, Baltimore MD, {kolari1, finin, joshi}@umbc.edu.
    Abstract: Weblogs, or blogs have become an important new way to publish information, engage in discussions and form communities. The increasing popularity of blogs has given rise to search and analysis engines focusing on the “blogosphere”. A key requirement of such systems is to identify blogs as they crawl the Web. While this ensures that only blogs are indexed, blog search engines are also often overwhelmed by spam blogs (splogs). Splogs not only incur computational overheads but also reduce user satisfaction. In this paper we first describe experimental results of blog identification using Support Vector Machines (SVM). We compare results of using different feature sets and introduce new features for blog identification.
    Most blog search engines identify blogs and index content based on update pings received from ping servers or directly from blogs, or through crawling blog directories and blog hosting services. To increase their coverage, blog search engines continue to crawl the Web to discover, identify and index blogs. This enables staying ahead of competition in a domain where “size does matter”. Even if a web crawl is inessential for blog search engines, it is still possible that processed update pings are from non-blogs. This requires that the source of the pings need to be verified as a blog prior to indexing content. In the first part of this paper we address blog identification by experimenting with different feature sets.
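    As a rough illustration of the approach this abstract describes (classifying pages as blog vs. non-blog with a linear SVM over word features), here is a minimal Python sketch using scikit-learn. It is not the authors' pipeline: their feature sets, training corpus, and tuning are not reproduced, and the sample texts below are invented.

        # Minimal sketch: blog vs. non-blog classification with a linear SVM
        # over TF-IDF word features. Illustrative only; not the paper's
        # exact features, data, or parameters.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Invented labeled pages: 1 = blog, 0 = non-blog.
        pages = [
            "posted by alice 3 comments permalink archives blogroll",
            "product catalog add to cart checkout shipping information",
        ]
        labels = [1, 0]

        # TF-IDF unigrams and bigrams feeding a linear SVM.
        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
        clf.fit(pages, labels)

        print(clf.predict(["my weblog trackback recent posts comments"]))

    In the paper's setting the same learner is trained on a large labeled crawl, and the interesting work lies in the choice of feature sets rather than in the classifier itself.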
  • By Nilesh Bansal a Thesis Submitted in Conformity with the Requirements
    Online Analysis of High-Volume Social Text Streams, by Nilesh Bansal. A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy, Graduate Department of Computer Science, University of Toronto. © Copyright 2013 by Nilesh Bansal.
    Abstract: Social media is one of the most disruptive developments of the past decade. The impact of this information revolution has been fundamental on our society. Information dissemination has never been cheaper and users are increasingly connected with each other. The line between content producers and consumers is blurred, leaving us with abundance of data produced in real-time by users around the world on multitude of topics. In this thesis we study techniques to aid an analyst in uncovering insights from this new media form which is modeled as a high volume social text stream. The aim is to develop practical algorithms with focus on the ability to scale, amenability to reliable operation, usability, and ease of implementation. Our work lies at the intersection of building large scale real world systems and developing theoretical foundation to support the same. We identify three key predicates to enable online methods for analysis of social data, namely:
    • Persistent Chatter Discovery to explore topics discussed over a period of time,
    • Cross-referencing Media Sources to initiate analysis using a document as the query, and
    • Contributor Understanding to create aggregate expertise and topic summaries of authors contributing online.
    The thesis defines each of the predicates in detail and covers proposed techniques, their practical applicability, and detailed experimental results to establish accuracy and scalability for each of the three predicates.
  • Blogosphere: Research Issues, Tools, and Applications
    Blogosphere: Research Issues, Tools, and Applications. Nitin Agarwal, Huan Liu, Computer Science and Engineering Department, Arizona State University, Tempe, AZ 85287, {Nitin.Agarwal.2, [email protected]}.
    ABSTRACT: Weblogs, or Blogs, have facilitated people to express their thoughts, voice their opinions, and share their experiences and ideas. Individuals experience a sense of community, a feeling of belonging, a bonding that members matter to one another and their niche needs will be met through online interactions. Its open standards and low barrier to publication have transformed information consumers to producers. This has created a plethora of open-source intelligence, or “collective wisdom” that acts as the storehouse of overwhelming amounts of knowledge about the members, their environment and the symbiosis between them. Nonetheless, vast amounts of this knowledge still remain to be discovered and exploited in its suitable way.
    ...ging. Acknowledging this fact, Times has named “You” as the person of the year 2006. This has created a considerable shift in the way information is assimilated by the individuals. This paradigm shift can be attributed to the low barrier to publication and open standards of content generation services like blogs, wikis, collaborative annotation, etc. These services have allowed the mass to contribute and edit articles publicly. Giving access to the mass to contribute or edit has also increased collaboration among the people unlike previously where there was no collaboration as the access to the content was limited to a chosen few. Increased collaboration has developed collective wisdom on the Internet. “We the media” [21], is a phenomenon named by Dan Gillmor: a world in which “the former audience”, not a few
  • Web Spam Taxonomy
    Web Spam Taxonomy. Zoltán Gyöngyi, Computer Science Department, Stanford University, [email protected]; Hector Garcia-Molina, Computer Science Department, Stanford University, [email protected].
    Abstract: Web spamming refers to actions intended to mislead search engines into ranking some pages higher than they deserve. Recently, the amount of web spam has increased dramatically, leading to a degradation of search results. This paper presents a comprehensive taxonomy of current spamming techniques, which we believe can help in developing appropriate countermeasures.
    1 Introduction: As more and more people rely on the wealth of information available online, increased exposure on the World Wide Web may yield significant financial gains for individuals or organizations. Most frequently, search engines are the entryways to the Web; that is why some
    ...techniques, but as far as we know, they still lack a fully effective set of tools for combating it. We believe that the first step in combating spam is understanding it, that is, analyzing the techniques the spammers use to mislead search engines. A proper understanding of spamming can then guide the development of appropriate countermeasures. To that end, in this paper we organize web spamming techniques into a taxonomy that can provide a framework for combating spam. We also provide an overview of published statistics about web spam to underline the magnitude of the problem. There have been brief discussions of spam in the scientific literature [3, 6, 12]. One can also find details for several specific techniques on the Web itself (e.g., [11]). Nevertheless, we believe that this paper offers the first comprehensive taxonomy of all important spamming techniques known to date.
  • Robust Detection of Comment Spam Using Entropy Rate
    Robust Detection of Comment Spam Using Entropy Rate. Alex Kantchelian (UC Berkeley), Justin Ma (UC Berkeley), Ling Huang (Intel Labs), Sadia Afroz (Drexel University, Philadelphia), Anthony D. Joseph (UC Berkeley), J. D. Tygar (UC Berkeley).
    Keywords: Spam filtering, Comment spam, Content complexity, Noisy label, Logistic regression.
    ABSTRACT: In this work, we design a method for blog comment spam detection using the assumption that spam is any kind of uninformative content. To measure the “informativeness” of a set of blog comments, we construct a language and tokenization independent metric which we call content complexity, providing a normalized answer to the informal question “how much information does this text contain?” We leverage this metric to create a small set of features well-adjusted to comment spam detection by computing the content complexity over groupings of messages sharing the same author, the same sender IP, the same included links, etc. We evaluate our method against an exact set of tens of millions of comments collected over a four months period and containing
    1. INTRODUCTION: Online social media have become indispensable, and a large part of their success is that they are platforms for hosting user-generated content. An important example of how users contribute value to a social media site is the inclusion of comment threads in online articles of various kinds (news, personal blogs, etc). Through comments, users have an opportunity to share their insights with one another.
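    The abstract's central assumption, that spam is uninformative and therefore highly redundant, lends itself to a compression-based measurement. The sketch below approximates a content-complexity score with zlib; it is an illustrative proxy under that assumption, not the paper's exact length-normalized metric, and the comment groups are invented.

        # Sketch: a compression-based "informativeness" score for a group
        # of comments. Redundant (spammy) groups compress well and score
        # low; diverse groups score higher. Proxy only, not the paper's
        # normalized content-complexity metric.
        import zlib

        def content_complexity(messages):
            """Return compressed size / raw size for a group of messages."""
            blob = "\n".join(messages).encode("utf-8")
            if not blob:
                return 0.0
            return len(zlib.compress(blob, 9)) / len(blob)

        # Invented groupings: comments sharing one author vs. mixed authors.
        spam_group = ["Buy cheap watches now!!! http://example.invalid"] * 50
        ham_group = ["Great point about caching behavior.",
                     "I disagree; the RFC says otherwise.",
                     "Thanks, this finally fixed my build."] * 17

        print(content_complexity(spam_group))  # low ratio: redundant
        print(content_complexity(ham_group))   # higher ratio: informative

    Grouping messages by shared author, sender IP, or included links, as the abstract describes, turns scores like this into a small feature set for a downstream classifier.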
  • Spam in Blogs and Social Media
    Spam in Blogs and Social Media. Pranam Kolari, Tim Finin, Akshay Java, Anupam Joshi. March 25, 2007.
    • Spam on the Internet – Variants – Social Media Spam
    • Reason behind Spam in Blogs
    • Detecting Spam Blogs
    • Trends and Issues
    • How can you help?
    • Conclusions
    Pranam Kolari is a UMBC PhD student. His dissertation is on spam blog detection, with tools developed in use both by academia and industry. He has active research interest in internal corporate blogs, the Semantic Web and blog analytics.
    Tim Finin is a UMBC Professor with over 30 years of experience in applying AI to information systems, intelligent interfaces and robotics. Current interests include social media, the Semantic Web and multi-agent systems.
    Akshay Java is a UMBC PhD student. His dissertation is on identifying influence and opinions in social media. His research interests include blog analytics, information retrieval, natural language processing and the Semantic Web.
    Anupam Joshi is a UMBC Professor with research interests in the broad area of networked computing and intelligent systems. He currently serves on the editorial board of the International Journal of the Semantic Web and Information.
    • Early form seen around 1992 with MAKE MONEY FAST
    • 80-85% of all e-mail traffic is spam
    • In numbers: 2005 (June) 30 billion per day; 2006 (June) 55 billion per day; 2006 (December) 85 billion per day; 2007 (February) 90 billion per day
    Sources: IronPort, Wikipedia http://www.ironport.com/company/ironport_pr_2006-06-28.html
    • “Unsolicited usually commercial e-mail sent to a large
  • Advances in Online Learning-Based Spam Filtering
    Advances in Online Learning-Based Spam Filtering. A dissertation submitted by D. Sculley, M.Ed., M.S., in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science, Tufts University, August 2008. Adviser: Carla E. Brodley.
    Acknowledgments: I would like to take this opportunity to thank my advisor Carla Brodley for her patient guidance, my parents David and Paula Sculley for their support and encouragement, and my bride Jessica Evans for making everything worth doing. I gratefully acknowledge Rediff.com for funding the writing of this dissertation. D. Sculley, Tufts University, August 2008.
    Abstract: The low cost of digital communication has given rise to the problem of email spam, which is unwanted, harmful, or abusive electronic content. In this thesis, we present several advances in the application of online machine learning methods for automatically filtering spam. We detail a sliding-window variant of Support Vector Machines that yields state of the art results for the standard online filtering task. We explore a variety of feature representations for spam data. We reduce human labeling cost through the use of efficient online active learning variants. We give practical solutions to the one-sided feedback scenario, in which users only give labeling feedback on messages predicted to be non-spam. We investigate the impact of class label noise on machine learning-based spam filters, showing that previous benchmark evaluations rewarded filters prone to overfitting in real-world settings and proposing several modifications for combating these negative effects. Finally, we investigate the performance of these filtering methods on the more challenging task of abuse filtering in blog comments.
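    To make the online-filtering setting concrete, the sketch below processes one labeled message at a time and updates its model after every prediction. It is a hedged illustration of the general loop only: the thesis itself centers on online Support Vector Machine variants and active learning, whereas this example uses logistic regression with the hashing trick, and the two-message stream is invented.

        # Sketch: an online spam filter that predicts, then learns, one
        # message at a time (logistic regression + hashing trick).
        import math

        DIM = 2 ** 18           # hashed feature space
        w = [0.0] * DIM         # weight vector
        LR = 0.1                # learning rate

        def features(text):
            """Map tokens to hashed feature indices."""
            return [hash(tok) % DIM for tok in text.lower().split()]

        def predict(idx):
            """Estimated probability that the message is spam."""
            score = sum(w[i] for i in idx)
            return 1.0 / (1.0 + math.exp(-score))

        def update(text, label):
            """One SGD step on log-loss; label is 1 for spam, 0 for ham."""
            idx = features(text)
            p = predict(idx)
            for i in idx:
                w[i] -= LR * (p - label)

        stream = [("cheap pills free offer", 1),
                  ("meeting notes attached", 0)]
        for text, label in stream:
            print(round(predict(features(text)), 3))  # predict first
            update(text, label)                       # then learn

    The one-sided feedback scenario the abstract mentions corresponds to calling update() only on messages the filter predicted to be non-spam.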
  • The SAGE Handbook of Web History by Niels Brügger, Ian
    38 Spam. Finn Brunton.
    THE COUNTERHISTORY OF THE WEB: This chapter builds on a larger argument about the history of the Internet, and makes the case that this argument has something useful to say about the Web; and, likewise, that the Web has something useful to say about the argument, expressing an aspect of what is distinctive about the Web as a technology. The larger argument is this: spam provides another history of the Internet, a shadow history. In fact, following the history of ‘spam’, in all its different meanings and across different networks and platforms (ARPANET and Usenet, the Internet, email, the Web, user-generated content, comments, search engines, texting, and so on), lets us tell the history of the Internet itself entirely through what its architects and inhabitants sought to exclude.
    ...what the rules are, and who’s in charge. And the first conversation, over and over again: what exactly is ‘spam?’. Briefly looking at how this question got answered will bring us to the Web and what made it different. Before the Web, before the formalization of the Internet, before Minitel and Prestel and America Online, there were graduate students in basements, typing on terminals that connected to remote machines somewhere out in the night (the night because computers, of course, were for big, expensive, labor-intensive projects during the day – if you, a student, could get an account for access at all it was probably for the 3 a.m. slot). Students wrote programs, created games, traded messages, and played pranks and tricks on each other. Being nerds of the sort that would stay up overnight to get a few hours of computer
  • Social Software: Fun and Games, Or Business Tools?
    Social software: fun and games, or business tools? Wendy A. Warr, Wendy Warr & Associates.
    Abstract: This is the era of social networking, collective intelligence, participation, collaborative creation, and borderless distribution. Every day we are bombarded with more publicity about collaborative environments, news feeds, blogs, wikis, podcasting, webcasting, folksonomies, social bookmarking, social citations, collaborative filtering, recommender systems, media sharing, massive multiplayer online games, virtual worlds, and mash-ups. This sort of anarchic environment appeals to the digital natives, but which of these so-called ‘Web 2.0’ technologies are going to have a real business impact? This paper addresses the impact that issues such as quality control, security, privacy and bandwidth may have on the implementation of social networking in hide-bound, large organizations.
    Keywords: blogs; digital natives; folksonomies; internet; podcasts; second life; social bookmarking; social networking; social software; virtual worlds; Web 2.0; wikis
    1. Introduction: Fifty years ago information was stored on punch cards. SDI services appeared about 10 years later and databases were available online from about 1978. In 1988 PCs were in common use and by 1998 the web was being used as a business tool. The web of the 1990s might be thought of as ‘Web 1.0’, for now in 2008 there is much hype about Web 2.0, but what does that mean? Web 2.0 is an umbrella term for a number of new internet services that are not necessarily closely related. Indeed, some people feel that Web 2.0 is not a valid overall title for these technologies. A reductionist view is that of a read–write web and lots of people using it.
  • D2.4 Weblog Spider Prototype and Associated Methodology
    SEVENTH FRAMEWORK PROGRAMME, FP7-ICT-2009-6. BlogForever, Grant agreement no.: 269963.
    BlogForever: D2.4 Weblog spider prototype and associated methodology. Editor: M. Rynning. Revision: First version. Dissemination Level: Public. Author(s): M. Rynning, V. Banos, K. Stepanyan, M. Joy, M. Gulliksen. Due date of deliverable: 30th November 2011. Actual submission date: 30th November 2011. Start date of project: 01 March 2011. Duration: 30 months. Lead Beneficiary name: CyberWatcher.
    Abstract: This document presents potential solutions and technologies for monitoring, capturing and extracting data from weblogs. Additionally, the selected weblog spider prototype and associated methodologies are analysed.
    Project co-funded by the European Commission within the Seventh Framework Programme (2007-2013). The BlogForever Consortium consists of:
    Aristotle University of Thessaloniki (AUTH), Greece
    European Organization for Nuclear Research (CERN), Switzerland
    University of Glasgow (UG), UK
    The University of Warwick (UW), UK
    University of London (UL), UK
    Technische Universitat Berlin (TUB), Germany
    Cyberwatcher, Norway
    SRDC Yazilim Arastrirma ve Gelistrirme ve Danismanlik Ticaret Limited Sirketi (SRDC), Turkey
    Tero Ltd (Tero), Greece
    Mokono GMBH, Germany
    Phaistos SA (Phaistos), Greece
    Altec Software Development S.A. (Altec), Greece