Analysis for the Overwhelming Success of the Web Compared to Microcosm and Hyper-G Systems


Bryar Hassan 1 and Shko Qader 2

1 Kurdistan Institution for Strategic Studies and Scientific Research, Sulaimani, Iraq
2 Sulaimani Governorate, Sulaimani, Kurdistan Region, Iraq

Abstract. After 1989, Microcosm, Hyper-G, and the Web were developed and released. Each of these hypertext systems had strengths and weaknesses relative to the others, and their architectures differed considerably. Standing above its competitors, the Web has become the largest and most popular information system. This paper analyses why the Web became the first successful hypermedia system by examining the architecture and evaluation of the Web and of open hypermedia systems, and then presents three reasons behind this success together with some lessons to learn. The Semantic Web is a recent development of the Web that aims to provide conceptual hypermedia. More importantly, the study of the Web and its impact on technical, social and cultural, and economic agendas has been introduced as web science.

Word count: 1497

Keywords: Open Hypermedia, Hyper-G, Microcosm, the Web, Semantic Web

1 Introduction

There have been many significant developments in hypermedia systems over the decades, but the Web was arguably the most successful and popular one. Before the Web was released in 1991, there were two other hypermedia systems: Hyper-G and Microcosm. Today the Web is considered the most popular and widely used distributed hypermedia system, whereas the others have all but disappeared. This paper first presents an overview of early hypermedia systems; second, it examines open hypermedia systems and the Web through the lens of their architecture and evaluation; third, it identifies the main causes of the Web's growth; fourth, it draws some lessons from the Web's success; and finally, it surveys recent developments of the Web.

2 Early History of Hypertext

Hypertext has a fairly long history. The most influential hypertext systems are outlined in this section.

Memex. This was proposed by Bush in 1945 [4]. Memex was an electro-mechanical device for organising information and knowledge. It is considered the forefather of all subsequent hypertext systems.

Xanadu. After Memex, Nelson launched the Xanadu project in the 1960s as a more comprehensive hypertext system and a revision of Memex [5]. As an idealised hypertext system, the project had the compelling features of link integrity and automatic version management. Furthermore, Nelson coined both the terms hypertext and hypermedia [6]. The former denotes text organised in a non-linear format and connected by links; the latter is an extension of hypertext that combines hypertext with multimedia.

Dexter. This was a formal reference model for open hypertext systems, developed between 1988 and 1990 and used to describe existing and future hypertext systems [7]. The aim of the reference model was to enable system comparison as well as the development of interchange and interoperability standards.

3 Open Hypermedia and the Web

3.1 Architecture

Microcosm. Microcosm was initially designed as a desktop-based hypertext system in a research project at the University of Southampton; it was later extended into a distributed hypermedia system [1]. Microcosm had three layers: an application layer, a link service layer, and a storage layer.

Hyper-G. This system sat midway between Microcosm and the Web and was developed at Graz University of Technology around 1989-1990 [1]. Hyper-G was another distributed hypermedia system, based on the client-server model [9]. It used its own protocol (HG-CSP) and its own resource format (HTF).

The Web. The Web was initially proposed by Tim Berners-Lee at CERN to provide a distributed hypertext environment [3]. Its architecture encompassed three essential technologies. First, the URI served as an identifier to address any resource on the Web. Second, a network protocol, HTTP, defined how messages are sent and received between clients and servers. Third, a mark-up language, HTML, specified the format of documents.
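To make this three-part architecture concrete, the following minimal Python sketch (an illustration added here, not taken from the paper; the URI is a placeholder) dereferences a URI over HTTP and receives an HTML document, touching each of the three technologies in turn.

```python
# Minimal sketch: the Web's three core technologies working together.
# The URI names the resource, HTTP transfers it, HTML is the document format.
from urllib.request import urlopen
from urllib.error import HTTPError

uri = "http://example.org/"  # URI: a global identifier for a Web resource

try:
    with urlopen(uri) as response:                 # HTTP: client-server exchange
        print(response.status)                     # e.g. 200 on success
        print(response.headers["Content-Type"])    # e.g. "text/html"
        html = response.read().decode("utf-8")     # HTML: the mark-up of the page
        print(html[:120])
except HTTPError as err:
    # A dangling link surfaces as an HTTP error, e.g. the familiar 404
    print(f"Broken link: HTTP {err.code}")
```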
3.2 Evaluation

The Web and open hypermedia systems each had strengths and weaknesses. Table 1 summarises the evaluation of the Web and open hypermedia systems.

Linking. The linking model of the Web differed from those of Hyper-G and Microcosm. The Web had a simple node-link model [13]: nodes were interconnected by point-to-point, uni-directional, non-contextual, untyped links that presented the Web's content. This simplicity leads to dangling and broken links on today's Web; for example, an "Error 404" is shown if the requested resource is broken or not found. On the other hand, this simplicity of the linking model made the Web easy to implement [14]. Conversely, in open hypermedia systems links were separated from nodes [13], and this separation of data and links allowed users to navigate in various ways. In Microcosm, links were stored in a database, and dynamic linking was supported via generic links, so that link destinations were managed on the fly automatically by the system. Likewise, Hyper-G used a central link database to separate links from nodes, although it was not as comprehensive as Microcosm's. Consequently, links in Microcosm and Hyper-G did not need to be maintained manually when they broke; their link management was essentially cost-free. This strength of the linking model in Microcosm and Hyper-G, however, was also their weakness [13]: both systems were technically complicated to implement, particularly owing to their link management. (A toy contrast of the two models is sketched at the end of this section.)

Scalability. Scalability, which includes performance as the number of users grows, is always a vital feature of hypermedia systems [13]. The Web was generally more scalable than Microcosm and Hyper-G [16]. First, the Web and Hyper-G were developed as large-scale hypermedia over the Internet, whereas Microcosm was initially intranet-based and designed for cooperative group activities. Second, the Web was designed as a decentralised system, whereas the open hypermedia systems were centralised. Third, the open hypermedia systems had many built-in features, such as the Harmony browser in Hyper-G and dynamic linking in Microcosm, whereas the Web defined only a minimal protocol. Finally, Hyper-G's document format was HTF and it had a built-in search facility, while the Web's text format was HTML and it had no built-in search engine; instead, it allowed third-party search engines [17]. Microcosm, however, did not have a document format of its own [16].

Openness. In the Web, a URI identified any object via a simple text string, whereas Hyper-G and Microcosm had no such concept [16]; this brought openness to the Web. Hyper-G and Microcosm instead relied on their own document systems.
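The toy contrast mentioned above is sketched below in Python (all names and data are invented for illustration). It sets the Web's embedded, uni-directional links against an external linkbase that, in the spirit of Microcosm's generic links, injects link destinations at view time.

```python
# Web model: the link is embedded in the document itself, point-to-point and
# uni-directional; if the target disappears, the link dangles (Error 404).
web_document = '<p>See <a href="http://example.org/microcosm.html">Microcosm</a>.</p>'

# Open-hypermedia model: documents carry no links; a separate linkbase maps
# source anchors to destinations, resolved on the fly by the link service.
linkbase = {
    "Microcosm": "docs/microcosm.html",
    "Hyper-G": "docs/hyperg.html",
}

def resolve_links(text: str, links: dict[str, str]) -> str:
    """Inject links at view time, as a generic-link service might."""
    for word, target in links.items():
        text = text.replace(word, f'<a href="{target}">{word}</a>')
    return text

print(resolve_links("Microcosm and Hyper-G separated links from documents.", linkbase))
```

Moving a target in the second model means updating one linkbase entry rather than editing every document that mentions it, which is exactly the maintenance advantage, and the implementation burden, that the evaluation above describes.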
4 Growth of the Web

There were three main causes, described in this section, of the Web's growth against its competitors.

Technical. There were six main technical causes of the Web's success. First, simplicity and ease of use were the power of the Web, especially after web browser technology was developed and spread [16]; the Web was undoubtedly far less complicated than Hyper-G and Microcosm. Second, the Web was more flexible than the other hypermedia systems; for example, users could easily add plug-ins such as search engines and bi-directional links [15]. Third, the Web was the most scalable hypermedia system compared with Hyper-G and Microcosm. Fourth, the Web was universally standardised and provided open protocols [3]. Fifth, the Web was technically easy to implement compared with the others, thanks to its linking model, protocols, and standardisation [13]. Lastly, the Web addressed all of Halasz's seven issues on hypertext [8], while Hyper-G and Microcosm were not fully compliant with them.

Economical. First and foremost, the Web was entirely non-proprietary [11], while Hyper-G and Microcosm were commercial products [2]; because it was free, the Web could be adopted by everyone, and accordingly users preferred it over the other systems. Second, the Web was developed by CERN [3], whereas Microcosm and Hyper-G were developed by universities [1], and CERN was probably better placed to fund such a project than universities were. Third, the linking models and technical complications of Microcosm and Hyper-G made them economically difficult to implement [13].

Social. Hyper-G and Microcosm were developed in Europe and attempted to spread from there [16], while the Web spread not only through Europe but also through the USA on the Internet [3]. As a result, Europe and the USA together, and the latter especially, probably exerted more influence on people globally than Europe alone.

5 Lessons to Learn

The overwhelming success of the Web teaches three vital lessons. First, the provision of a document format, open protocols, and universal standards was the cornerstone of its success. Second, the Web demonstrated early on the idea that "scruffy works" [16], which made it simpler and more easily usable. Finally, the freeness, openness, decentralisation, and ease of use of the Web were the keys to its adoption.

6 Recent Developments

At present, the Semantic Web is considered the next stage of the Web's development [11]. Its aim is to turn the current machine-readable Web into a machine-understandable one. Furthermore, COHSE endeavours to bring conceptual hypermedia to the Web so that the Web can implement dynamic linking as Microcosm did [12]. Conceptual hypermedia can arguably be regarded as the Semantic Web in this sense, and COHSE can be counted as one of its applications. Meanwhile, technical, social and cultural, and economic agendas all shape the future of the Web.
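As a closing illustration of "machine-understandable" (added here; the URIs are invented and the example assumes the third-party rdflib package), the statement "the Web was created by Tim Berners-Lee" can be expressed as RDF triples that software can query, rather than as free text it can merely display.

```python
# Minimal Semantic Web sketch: facts as subject-predicate-object triples.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/")  # illustrative namespace, not a real vocabulary

g = Graph()
g.add((EX.TheWeb, EX.createdBy, EX.TimBernersLee))               # a fact about the Web
g.add((EX.TimBernersLee, FOAF.name, Literal("Tim Berners-Lee"))) # a fact about its creator

print(g.serialize(format="turtle"))  # Turtle output other machines can consume
```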