Patterns of Content Transclusion in Wikipedia


There and Here: Patterns of Content Transclusion in Wikipedia

Mark Anderson, Leslie Carr, David E. Millard
Electronics and Computer Science, Southampton University
Southampton SO17 1BJ, UK
[email protected] · [email protected] · [email protected]

ABSTRACT

As large, collaboratively authored hypertexts such as Wikipedia grow, so does the requirement both for organisational principles and methods to provide sustainable consistency and to ease the task of contributing editors. Large numbers of (potential) editors are not necessarily a sufficient bulwark against loss of coherence amongst a corpus of many discrete articles. The longitudinal task of curation may benefit from deliberate curatorial roles and techniques.

A potentially beneficial technique for the development and maintenance of hypertext content at scale is hypertext transclusion, by offering controllable re-use of a canonical source. In considering issues of longitudinal support of web collaborative hypertexts, we investigated the current degree and manner of adoption of transclusion facilities by editors of Wikipedia articles. We sampled 20 million articles from ten discrete language wikis within Wikipedia to analyse behaviour both within and across the individual Wikipedia communities.

We show that Wikipedia (as at February 2016) makes limited, inconsistent use of transclusion. Use is localised to subject areas, which differ between sampled languages. A limited number of patterns were observed, including: Lists from transclusion, Lists of Lists, Episodic Media Listings, Tangles, Articles as Macros, and Self-Transclusion. We find little indication of deliberate structural maintenance of the hypertext.

CCS CONCEPTS

• Information systems → Wikis; Document structure; • Human-centered computing → Collaborative content creation; Computer supported cooperative work;

KEYWORDS

Hypertext, Transclusion, Collaboration, Wikis, Wikipedia, Digital Curation

ACM Reference format:
Mark Anderson, Leslie Carr, and David E. Millard. 2017. There and Here: Patterns of Content Transclusion in Wikipedia. In Proceedings of The 28th ACM Conference on Hypertext and Social Media, Prague, Czech Republic, 4-7 July 2017 (HT'17), 10 pages. DOI: 10.475/123_4

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). HT'17, Prague, Czech Republic. © 2017 Copyright held by the owner/author(s). 123-4567-24-567/08/06...$15.00. DOI: 10.475/123_4

1 INTRODUCTION

A large public collaborative hypertext gives free access to allow any person both to read its content and to add to, or improve, the hypertext's data and structure. The hypertext may thus contain the work of many authors, spread across discrete pages. Their varying editing skills can pose a challenge for those trying to maintain the overall coherence and accuracy of the hypertext's content as a whole—as opposed to activity revising individual articles or generating new content. In wikis, where focus is on the rendered page, incremental edits can lead to unseen structural issues. For instance, under 50% of 'articles' in the English Wikipedia are actually content articles; the remainder are re-direction stubs (see Table 2).

The same information may need to be repeated within different articles across a large hypertext. If text is copied, potential exists for thematic drift between different articles through subsequent edits by different authors. Ideally, in order to retain coherence of the hypertext over time, what we call longitudinal coherence, content duplication needs to be identified and consistency maintained.

Transclusion [17] offers one means of avoiding duplication. Deliberate and considered transclusional re-use of canonical sources throughout the hypertext can potentially assist with maintaining coherence and avoiding divergent copy, for example by re-using text summarising a subject in articles referring to that subject. Furthermore, transclusion—if identified as such—also offers the potential to indicate provenance of re-used text.

It therefore follows that the use of transclusion within a large Web hypertext should increase longitudinal coherence, but it is unclear how widely and how effectively these techniques are used in existing examples such as Wikipedia. Wikipedia's MediaWiki software does support transclusion (see Section 3), but wiki studies appear to ignore the implied linkage created by transclusion. Despite some analysis as to the functional nature of edits made in Wikipedia [5], no study has been made of the nature of editing as relating specifically to transclusional (re-)use of content. Built-in Wikipedia queries ('special' pages¹) and API methods can give some indication of transclusion use, but the reports are opaque and do not lend themselves to further exploration, especially as to how or why editors implemented their ideas. Thus more focused study of transclusion is needed.

By analysing the occurrence and nature of Wikipedia content transclusion, the study set out to investigate these questions:

• Does Wikipedia show evidence of deliberate use of transcluded article content? If transclusion is used in Wikipedia, then at minimum transclusion mark-up should be detected in article source code. Where transclusion is used, disparity in usage should become apparent, either within discrete per-language wikis, or between different wikis.

• Does the nature of transclusion vary between discrete areas within per-language wikis, or between different languages? By categorising the subject area of any transclusion activity, disparity in use of transclusion should become apparent, both within discrete per-language wikis and between different wikis.

• Does article content show distinct patterns of transclusion? If common, transclusion link patterns may be identified which aid those maintaining the hypertext.

¹ See: https://en.wikipedia.org/wiki/Special:WhatLinksHere, on all article pages.

2 BACKGROUND

Transclusion, as coined by Nelson in his Literary Machines [17], referred originally to a single hypermedia source occurring in multiple places: "Transclusion means that part of a document may be in several places—in other documents beside the original—without actually being copied there" [18, preface footnote]. Subsequently, he re-defined transclusion as "reuse with original context available, through embedded shared instancing" [19, p32], tying it more closely to ideas expressed in his Xanadu system with its 'transpointing' windows.

Besides giving a canonical source, the inherent transclusion linkage can help establish provenance and copyright. Nelson held that indication of transclusion is a front-end function of the hypertext's reader (renderer) [18, footnote p2/37]. The technique does not preclude changes in transcluded sources; it is left to the user to select which version to link: if the system holds past version(s) of the source these may be linked [18, p2/26]. Web transclusion, e.g. for image placement, generally draws material directly from its source, meaning that the transcluding document will reflect any change […]

[…] transclusion still remains atypical for hypertextual writing for the Web. Research interest tends to focus on either the technical implementation or the social aspect of use. Consideration of the writing of hypertext, in a non-fiction context, can fall between these stools.

Halasz's 'Reflections on "Seven Issues"' [8, p.112] noted that the versioning 'issue' was not fully resolved. In a wiki system [14], the default is to render the current edit of the requested page. All past edits can be rendered by furnishing the UID of the desired edit. However links, including transclusions, are not tied to a target edit; thus rendered content may change if the transcluded source is edited. For a web-based hypertext wiki supporting transclusion this means, in simplest terms, that the rendered article content (the body copy) of a page is able dynamically to include content not present in the article's own source code. Further indication of transclusion, or the ability to traverse such implied links, is left to individual implementation.

Transclusion, applied appropriately, could help Wikipedia's many editors maintain cohesion. A precept of Wikipedia quality is the 'many eyes' theory [15]—that many people have looked at any given fact. However, Wikipedia's Manual of Style makes no mention of transclusion (or transcluding from Wikidata), effectively blinding the 'many eyes' to the concept.

Halfaker et al. [9] find that there is a plateauing in numbers of active editors of Wikipedia, with the suggestion that there may be a natural equilibrium in levels of active editors in collaborative wikis. Wikipedia has a very flat hierarchy of administrators and users, although either of those may have extra roles [1]. There is a notion of a 'quality assurance' role, but this seems to apply more to anti-vandalism than hypertextual coherence. For Wikipedia editors, kudos is most easily acquired, and thus promoted, by concentration on the 'quality' of individual rendered articles. There appears to be […]
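The research questions above assume that transclusion mark-up can be detected in article source code. In MediaWiki wikitext, transclusion is written with double braces: {{Template name}} pulls in a page from the Template namespace, while a leading colon, as in {{:Other article}}, transcludes a page from the main (article) namespace. The Python sketch below illustrates one simple way such mark-up might be detected in raw wikitext; the regular expression and function name are our own simplifications (nested templates, parser functions and magic words are only crudely handled), not the detection method used in the paper.

```python
import re

# Matches the target of a double-brace inclusion: {{Target}} or {{Target|...}}.
# Deliberately simplified: it does not handle nested braces, and magic words
# such as {{PAGENAME}} are not distinguished from template names.
BRACE_RE = re.compile(r"\{\{\s*([^|{}]+?)\s*(?:\|[^{}]*)?\}\}")

def find_transclusions(wikitext: str) -> list[str]:
    """Return the targets of transclusion mark-up found in raw wikitext."""
    targets = []
    for target in BRACE_RE.findall(wikitext):
        if target.startswith("#"):   # parser function (e.g. {{#if:...}}), not a transclusion
            continue
        targets.append(target)
    return targets

sample = (
    "The '''Example''' article.\n"
    "{{Infobox settlement|name=Example}}\n"   # template transclusion
    "{{:List of examples}}\n"                 # main-namespace (article) transclusion
    "{{#if:x|yes|no}}\n"                      # parser function, ignored
)

for t in find_transclusions(sample):
    kind = "article" if t.startswith(":") else "template"
    print(f"{kind}: {t.lstrip(':')}")
```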
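The Introduction also notes that built-in 'special' pages (such as Special:WhatLinksHere) and API methods can give some indication of transclusion use. As a hedged sketch of what such a query looks like, the MediaWiki web API offers list=embeddedin, which returns the pages that embed (transclude) a given title. The snippet below, using only the Python standard library, lists main-namespace pages on the English Wikipedia that transclude a named page; it is an illustration of the API route, not the instrumentation used for the study's 20-million-article sample.

```python
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def pages_transcluding(title: str, limit: int = 50) -> list[str]:
    """List main-namespace pages that transclude `title`, via list=embeddedin."""
    params = {
        "action": "query",
        "list": "embeddedin",
        "eititle": title,
        "einamespace": 0,   # 0 = main/article namespace
        "eilimit": limit,
        "format": "json",
    }
    url = API + "?" + urllib.parse.urlencode(params)
    req = urllib.request.Request(url, headers={"User-Agent": "transclusion-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return [page["title"] for page in data["query"]["embeddedin"]]

if __name__ == "__main__":
    # Example query; the page name is arbitrary, not one examined in the paper.
    for name in pages_transcluding("List of sovereign states"):
        print(name)
```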
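The Background section observes that a wiki renders the current edit by default, that past edits can be rendered by furnishing the UID (revision ID) of the desired edit, and that links and transclusions are not tied to a target edit. The sketch below shows how the wikitext of one specific revision can be retrieved through the MediaWiki API using its revision ID (the oldid seen in page-history URLs); the revision number is a placeholder and the example is ours rather than anything taken from the paper.

```python
import json
import urllib.parse
import urllib.request

API = "https://en.wikipedia.org/w/api.php"

def revision_wikitext(rev_id: int) -> str:
    """Fetch the raw wikitext of one specific revision (its `oldid`) via the API."""
    params = {
        "action": "query",
        "prop": "revisions",
        "revids": rev_id,
        "rvprop": "content",
        "rvslots": "main",
        "format": "json",
        "formatversion": 2,
    }
    url = API + "?" + urllib.parse.urlencode(params)
    req = urllib.request.Request(url, headers={"User-Agent": "transclusion-sketch/0.1"})
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    page = data["query"]["pages"][0]
    return page["revisions"][0]["slots"]["main"]["content"]

if __name__ == "__main__":
    # 123456789 is a placeholder revision ID, not one taken from the paper.
    print(revision_wikitext(123456789)[:500])
```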