Conceptualization and Visualization of Tagging and Folksonomies

Total pages: 16

File type: PDF, size: 1020 KB

Conceptualization and Visualization of Tagging and Folksonomies

Dissertation approved by the Faculty of Engineering, Department of Computer Science and Applied Cognitive Science, of the Universität Duisburg-Essen for the award of the academic degree Doktor der Ingenieurwissenschaften (Dr.-Ing.), submitted by Steffen Lohmann from Hamburg. First examiner: Prof. Dr. Maria Paloma Díaz Pérez. Second examiner: Prof. Dr.-Ing. Jürgen Ziegler. Date of the oral examination: 27.11.2013. Note: This dissertation was written as part of a binational doctoral procedure (cotutelle) in cooperation with the Universidad Carlos III de Madrid.

Abstract

Tagging has become a popular indexing method for interactive systems in the past decade. It offers a simple yet effective way for users to organize an ever increasing amount of digital information for themselves and/or others. The linked user vocabulary resulting from tagging is known as folksonomy and provides a valuable source for the retrieval and exploration of digital resources. Although several models and representations of tagging have been proposed, there is no coherent conceptualization that provides a comprehensive and precise description of the concepts and relationships in the domain. Furthermore, there is little systematic research in the area of folksonomy visualization, and so folksonomies are still mainly depicted as simple tag clouds. Both problems are related, as a well-defined conceptualization is an important prerequisite for the interoperable use and visualization of folksonomies.

The thesis addresses these shortcomings by developing a coherent conceptualization of tagging and visualizations for the interactive exploration of folksonomies. It gives an overview and comparison of tagging models and defines key concepts of the domain. After a comprehensive review of existing tagging ontologies, a unified and coherent conceptualization is presented that incorporates the best parts of the reviewed ontologies. The conceptualization is implemented in a standardized ontology language and enables scalable reasoning on folksonomies. Based on the conceptualization, different tag cloud layouts and their ability to support users in information seeking tasks are investigated. Extensions to tag clouds are developed and combined into an analysis system, enhanced by sophisticated natural language processing and intuitive interaction techniques. Furthermore, existing work on visualizing folksonomies as graphs is analyzed and a new type of graph visualization tailored to the tag-based exploration of digital resources is introduced.

The work presented in this thesis has been successfully applied in different contexts. The developed conceptualization has been used in knowledge and requirements engineering, the tag cloud extensions have been applied in visual text analysis, and the novel graph visualization has proved useful in image retrieval and requirements analysis. These examples demonstrate the utility of the developed approaches beyond the topic of tagging and open up interesting questions for future research.
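The abstract treats a folksonomy as the linked vocabulary that emerges when users assign tags to resources, and tag clouds as its most common depiction. The following Python sketch is purely illustrative (the assignments and tag names are invented, and it is not the conceptualization developed in the thesis); it shows the tripartite user-tag-resource view of tagging and the tag-frequency computation that a simple tag cloud layout typically starts from.

```python
# Illustrative sketch only: tagging as (user, tag, resource) assignments and the
# tag frequencies from which a simple tag cloud could be sized. All example data
# is invented; this is not the ontology developed in the thesis.

from collections import Counter
from typing import NamedTuple

class TagAssignment(NamedTuple):
    user: str
    tag: str
    resource: str

folksonomy = [
    TagAssignment("alice", "semanticweb", "http://example.org/paper1"),
    TagAssignment("bob",   "semanticweb", "http://example.org/paper1"),
    TagAssignment("bob",   "tagging",     "http://example.org/paper2"),
    TagAssignment("carol", "folksonomy",  "http://example.org/paper1"),
]

# Tag frequency: the usual input for sizing tags in a tag cloud
tag_counts = Counter(a.tag for a in folksonomy)
print(tag_counts.most_common())   # [('semanticweb', 2), ('tagging', 1), ('folksonomy', 1)]
```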
Resumen

Tagging has become a popular method for indexing content in interactive systems over the last decade. It offers a simple and effective way for users to organize an ever growing amount of information for use by themselves and/or by others. The user vocabulary that results from tagging is known as a folksonomy and can be very useful for the search and exploration of digital content. Although several models and representations of tagging have been proposed, there is still no coherent conceptualization that completely describes the concepts and relationships of the domain. In addition, there is little systematic research in the area of folksonomy visualization, so folksonomies are mainly represented as simple tag clouds. Both problems are related, since a precise and well-defined conceptualization is important for the interoperable use and visualization of folksonomies.

The thesis addresses these two problems by developing a coherent conceptualization of tagging and visualizations for the interactive exploration of folksonomies. First, an overview of tagging models is given and the key concepts are defined. After an exhaustive analysis of existing tagging ontologies, a unified conceptualization is presented that combines the best features of the analyzed ontologies. The conceptualization is implemented in a standardized ontology language and allows the use of efficient reasoning algorithms. On the basis of the conceptualization, different forms of tag clouds and the properties that help users in information seeking are investigated. Extensions of tag clouds are developed and integrated into a single analysis system that uses advanced interaction and natural language processing techniques. In a second step, approaches to the graph-based visualization of folksonomies are analyzed and classified, and finally a new type of graph visualization is presented that enables the tag-based exploration of digital content.

The results of this thesis have been successfully applied in different contexts. The conceptualization has been used in both knowledge engineering and requirements engineering. The tag cloud extensions have been integrated into a text analysis system. In addition, the graph visualization has been applied both in image retrieval and in requirements analysis. These applications show the diverse uses of the results beyond tagging and raise interesting questions for future research.

Zusammenfassung

Tagging has become a popular method for indexing content in interactive systems over the past decade. It offers users a simple and effective way to organize ever larger amounts of digital information for themselves and/or others. The vocabulary that emerges through tagging is called a folksonomy and can usefully support the search for and exploration of digital content. Although several models and representations of tagging have been proposed, there is as yet no comprehensive description of the concepts and relationships of the domain. Moreover, there is little systematic research in the area of folksonomy visualization, so folksonomies are mainly displayed in the form of simple tag clouds. The two problems are mutually dependent, since a precise and consistent conceptualization is an important prerequisite for the interoperable use and visualization of folksonomies.
This dissertation addresses these problems by developing a comprehensive conceptualization of tagging and visualizations for the interactive analysis of folksonomies. First, an overview of tagging models is given and important terms of the domain are defined. After a careful analysis of existing tagging ontologies, a consistent and well-defined conceptualization is presented that takes up and unifies compatible parts of the ontologies considered. The conceptualization is implemented in a standardized ontology language and enables the use of scalable reasoning techniques on folksonomies. Building on the conceptualization, the dissertation then investigates different layouts for tag clouds and how they support users in information seeking. Visual and interactive extensions for tag clouds are developed and integrated into an analysis system equipped with natural language processing and intuitive interaction techniques. In addition, existing approaches to depicting folksonomies as graphs are analyzed, and finally a novel graph visualization is presented that enables the tag-based exploration of digital content.

The developed approaches have been successfully applied in various contexts. The conceptualization was used in knowledge and requirements management, the extended tag clouds helped in visual text analysis, and the novel graph visualization found application in image search and requirements analysis. The examples show the wide range of possible uses of the developed approaches beyond the topic of tagging and lead to interesting questions for future research.

Acknowledgments

This thesis was conducted under a joint supervision (cotutelle) agreement between Universidad Carlos III de Madrid and Universität Duisburg-Essen. I would like to express my deep gratitude to my thesis supervisors Paloma Díaz and Jürgen Ziegler for making this possible and for their support, trust, and patience throughout the whole PhD process. I am also very grateful to Thomas Ertl, who gave me the opportunity and time to complete the thesis after starting work at the Institute for Visualization and Interactive Systems (VIS) of the University of Stuttgart. My grateful thanks are extended to Manuel Castro, Fabio Crestani, and Heinz Ulrich Hoppe, who participated in the international PhD committee and examined the thesis and defense, as well as to Norbert Fuhr for his efficient support in coordinating the PhD committee and defense. Helpful feedback was also provided by the
Recommended publications
  • Panel: Large Knowledge Bases
    From: AAAI Technical Report SS-02-06. Compilation copyright © 2002, AAAI (www.aaai.org). All rights reserved. Panel: Large Knowledge Bases. Adam Pease (Teknowledge, chair), Chris Welty (Vassar College), Pat Hayes (U. West Florida), Anthony G. Cohn (U. Leeds), Ken Murray (SRI).
    Abstract: It is estimated that 1-2 exabytes of data is now being generated each year, almost all of it in purely digital form (Lyman et al. 2000). Properly structured, this information could form a global knowledge base. Currently, however, this information exists in many different forms, many of which are only suitable for human consumption and which are largely opaque to computer-based understanding. Major efforts to build large formal ontologies or address issues in their construction have been undertaken, funded by the government in the US, such as the DARPA Knowledge Sharing Effort (Patil et al., 1992), High Performance Knowledge Bases (Cohen et al., 1998), and Rapid Knowledge Formation (RKF, 2002), and in Europe, including Advanced Knowledge Technologies (Shadbolt, 2001) and OntoWeb (OntoWeb, 2002), as well as international standards efforts such as the IEEE Standard Upper Ontology (Niles & Pease, 2001).
    References: Fellbaum, C. (1998). WordNet, An Electronic Lexical Database. MIT Press. Lenat, D. (1995). "Cyc: A Large-Scale Investment in Knowledge Infrastructure". Communications of the ACM 38, no. 11, November. See also http://www.cyc.com. Lyman, P., Varian, H., Dunn, J., Strygin, A., Swearingen, K. (2000). How Much Information?, University of California, Berkeley, http://www.sims.berkeley.edu/research/projects/how-much-info. Niles, I., & Pease, A. (2001). Toward a Standard Upper Ontology, in Proceedings of the 2nd International Conference on Formal Ontology in Information Systems (FOIS-2001). See also http://ontology.teknowledge.com and http://suo.ieee.org. OntoWeb (2002).
  • Annotea: an Open RDF Infrastructure for Shared Web Annotations
    Proceedings of the WWW 10th International Conference, Hong Kong, May 2001. Annotea: An Open RDF Infrastructure for Shared Web Annotations. José Kahan (1), Marja-Riitta Koivunen (2), Eric Prud'Hommeaux (2) and Ralph R. Swick (2); (1) W3C / INRIA Rhône-Alpes, (2) W3C / MIT Laboratory for Computer Science; {kahan, marja, eric, swick}@w3.org.
    Abstract. Annotea is a Web-based shared annotation system based on a general-purpose open RDF infrastructure, where annotations are modeled as a class of metadata. Annotations are viewed as statements made by an author about a Web document. Annotations are external to the documents and can be stored in one or more annotation servers. One of the goals of this project has been to re-use as much existing W3C technology as possible. We have reached it mostly by combining RDF with XPointer, XLink, and HTTP. We have also implemented an instance of our system using the Amaya editor/browser and a generic RDF database, accessible through an Apache HTTP server. In this implementation, the merging of annotations with documents takes place within the client. The paper presents the overall design of Annotea and describes some of the issues we have faced and how we have solved them.
    1 Introduction. One of the basic milestones in the road to a Semantic Web [22] is the association of metadata to content. Metadata allows the Web to describe properties about some given content, even if the medium of this content does not directly provide the necessary means to do so. For example, a metadata schema for digital photos [15] allows the Web to describe, among other properties, the camera model used to take a photo, shutter speed, date, and location.
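The Annotea entry above describes annotations as RDF statements made about a Web document and stored apart from it on annotation servers. The following Python sketch (using rdflib) is a rough illustration of that idea, not code from the paper; the URIs are invented, and the annotation namespace and property names are assumptions modeled on the Annotea schema.

```python
# Minimal sketch of the Annotea idea: an annotation is a small RDF graph about a
# Web document, kept separately from that document on an annotation server.
# The annotation namespace and property names are assumptions for illustration.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, RDF

ANNO = Namespace("http://www.w3.org/2000/10/annotation-ns#")  # assumed Annotea namespace

g = Graph()
annotation = URIRef("http://annotations.example.org/anno/42")  # lives on an annotation server

g.add((annotation, RDF.type, ANNO.Annotation))
g.add((annotation, ANNO.annotates, URIRef("http://www.example.org/some-document.html")))
g.add((annotation, ANNO.body, URIRef("http://annotations.example.org/anno/42/body")))  # the comment text
g.add((annotation, DCTERMS.creator, Literal("A. Reviewer")))
g.add((annotation, DCTERMS.created, Literal("2001-05-01")))

print(g.serialize(format="turtle"))
```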
  • A Critique of Pure Reason
    A critique of pure reason. Drew McDermott, Yale University, New Haven, CT 06520, U.S.A. Comput. Intell. 3, 151-160 (1987).
    In 1978, Patrick Hayes promulgated the Naive Physics Manifesto. (It finally appeared as an "official" publication in Hobbs and Moore 1985.) In this paper, he proposed that an all-out effort be mounted to formalize commonsense knowledge, using first-order logic as a notation. This effort had its roots in earlier research, especially the work of John McCarthy, but the scope of Hayes's proposal was new and ambitious. He suggested that the use of Tarskian semantics could allow us to study a large volume of knowledge-representation problems free from the confines of computer programs. The suggestion inspired a small community of people to actually try to write down all (or most) of commonsense knowledge in predicate ...
    ... the knowledge that programs must have before we write the programs themselves. We know what this knowledge is; it's what everybody knows, about physics, about time and space, about human relationships and behavior. If we attempt to write the programs first, experience shows that the knowledge will be shortchanged. The tendency will be to oversimplify what people actually know in order to get a program that works. On the other hand, if we free ourselves from the exigencies of hacking, then we can focus on the actual knowledge in all its complexity. Once we have a rich theory of the commonsense world, we can try to embody it in programs. This theory will become an indispensable aid to writing those programs.
  • Spatial Data Infrastructures and Linked Data
    Spatial Data Infrastructures and Linked Data. Carlos Granell, Centre for Interactive Visualization, Universitat Jaume I, Castellón, Spain; Sven Schade, European Commission - Joint Research Centre, Institute for Environment and Sustainability, Ispra, Italy; Gobe Hobona, Centre for Geospatial Science, University of Nottingham, Nottingham, United Kingdom.
    ABSTRACT. A Spatial Data Infrastructure (SDI) is a type of information infrastructure for enhancing geospatial data sharing and access. At the moment, we face the transition from the service-oriented second generation of SDI to a third generation, characterized by user-centric approaches. This new movement closes the gap between classical SDI and Volunteered Geographic Information (VGI). Public use and acquisition of information provides additional challenges within and beyond the geospatial domain. Linked data has been suggested recently as a possible overall solution. This notion refers to a best practice for exposing, sharing, and connecting resources in the (semantic) web. In this paper, we project the linked data approach to SDI and suggest it as a possibility to combine SDI with VGI. We advocate a Spatial Linked Data Infrastructure, which applies solutions for linked data to classical SDI standards. We detail different implementing strategies, give examples, and argue for benefits, while at the same time trying to outline possible fallbacks. We hope that this contribution will enlighten a way towards a single shared information space.
    INTRODUCTION. A Spatial Data Infrastructure (SDI) is a type of information infrastructure for enhancing geospatial data sharing and access. An SDI embraces a set of rules, standards, procedures, guidelines, policies, institutions, data, networks, technology and human resources for enabling and coordinating the management and exchange of geospatial data between stakeholders in the spatial data community (Nebert, 2004; Rajabifard et al., 2006; Masser, 2007).
  • RDFa in XHTML: Syntax and Processing
    RDFa in XHTML: Syntax and Processing. A collection of attributes and processing rules for extending XHTML to support RDF. W3C Recommendation 14 October 2008. This version: http://www.w3.org/TR/2008/REC-rdfa-syntax-20081014. Latest version: http://www.w3.org/TR/rdfa-syntax. Previous version: http://www.w3.org/TR/2008/PR-rdfa-syntax-20080904. Diff from previous version: rdfa-syntax-diff.html. Editors: Ben Adida, Creative Commons, [email protected]; Mark Birbeck, webBackplane, [email protected]; Shane McCarron, Applied Testing and Technology, Inc., [email protected]; Steven Pemberton, CWI. Please refer to the errata for this document, which may include some normative corrections. This document is also available in these non-normative formats: PostScript version, PDF version, ZIP archive, and Gzip'd TAR archive. The English version of this specification is the only normative version. Non-normative translations may also be available. Copyright © 2007-2008 W3C® (MIT, ERCIM, Keio), All Rights Reserved. W3C liability, trademark and document use rules apply.
    Abstract. The current Web is primarily made up of an enormous number of documents that have been created using HTML. These documents contain significant amounts of structured data, which is largely unavailable to tools and applications. When publishers can express this data more completely, and when tools can read it, a new world of user functionality becomes available, letting users transfer structured data between applications and web sites, and allowing browsing applications to improve the user experience: an event on a web page can be directly imported into a user's desktop calendar; a license on a document can be detected so that users can be informed of their rights automatically; a photo's creator, camera setting information, resolution, location and topic can be published as easily as the original photo itself, enabling structured search and sharing.
  • Where Is the Semantic Web? – an Overview of the Use of Embeddable Semantics in Austria
    Where Is The Semantic Web? – An Overview of the Use of Embeddable Semantics in Austria. Wilhelm Loibl, Institute for Service Marketing and Tourism, Vienna University of Economics and Business, Austria. [email protected]
    Abstract. Improving the results of search engines and enabling new online applications are two of the main aims of the Semantic Web. For a machine to be able to read and interpret semantic information, this content has to be offered online first. With several technologies available, the question arises which one to use. Those who want to build the software necessary to interpret the offered data have to know what information is available and in which format. In order to answer these questions, the author analysed the business websites of different Austrian industry sectors as to what semantic information is embedded. Preliminary results show that, although overall usage numbers are still small, certain differences between individual sectors exist. Keywords: semantic web, RDFa, microformats, Austria, industry sectors.
    1 Introduction. As tourism is a very information-intense industry (Werthner & Klein, 1999), especially novel users resort to well-known generic search engines like Google to find travel related information (Mitsche, 2005). Often, these machines do not provide satisfactory search results as their algorithms match a user's query against the (weighted) terms found in online documents (Berry and Browne, 1999). One solution to this problem lies in "Semantic Searches" (Maedche & Staab, 2002). In order for them to work, web resources must first be annotated with additional metadata describing the content (Davies, Studer & Warren, 2006). Therefore, anyone who wants to provide data online must decide on which technology to use.
  • Peiwen Zhu. Web Annotation Systems: a Literature Review and Case Study
    Peiwen Zhu. Web Annotation Systems: A Literature Review and Case Study. A Master's Paper for the M.S. in I.S. degree. April, 2008. 35 pages. Advisor: Bradley M. Hemminger.
    Web annotation has been a popular research topic since the appearance of the Internet and its supplementary technologies. This paper provides a literature review of the related research areas and introduces some currently available web annotation systems, comparing these systems in seven aspects. In addition, this paper incorporates a description of the ongoing NeoNote project and identifies limitations of this study for future work. Headings: Scholarly Communication; Web annotations; Information Systems/Design.
    WEB ANNOTATION SYSTEMS: A LITERATURE REVIEW AND CASE STUDY, by Peiwen Zhu. A Master's paper submitted to the faculty of the School of Information and Library Science of the University of North Carolina at Chapel Hill in partial fulfillment of the requirements for the degree of Master of Science in Information Science. Chapel Hill, North Carolina, April, 2008. Approved by: Bradley M. Hemminger.
    Table of Contents: Introduction; Literature Review; Current Systems; Case Study; Conclusion; References.
    Introduction. Web browsing plays an important role nowadays in people's daily life, study, and work. Since documents exist mostly in digital format on the web, people may spend a large part of their time browsing or searching on the web to look for useful information. However, this used to be a one-way interaction, with users having few options to mark text or to highlight important sections in a web document; what's more, it is difficult to add extra information as reference on web pages, which is useful for further reference or sharing with friends.
  • Microformats the Next (Small) Thing on the Semantic Web?
    Standards Editor: Jim Whitehead • [email protected]. Microformats: The Next (Small) Thing on the Semantic Web? Rohit Khare • CommerceNet. "Designed for humans first and machines second, microformats are a set of simple, open data formats built upon existing and widely adopted standards." — Microformats.org
    When we speak of the "evolution of the Web," it might actually be more appropriate to speak of "intelligent design" — we can actually point to a living, breathing, and actively involved Creator of the Web. We can even consult Tim Berners-Lee's stated goals for the "promised land," dubbed the Semantic Web. Few presume we could reach those objectives by randomly hacking existing Web standards and hoping that "natural selection" by authors, software developers, and readers would ensure powerful enough abstractions for it. Indeed, the elegant and painstakingly interlocked edifice of technologies, including RDF, XML, and query languages, is now growing powerful enough to attack massive information challenges.
    ... is precisely encoding the great variety of personal, professional, and genealogical relationships between people and organizations. By contrast, an accidental challenge is that any blogger with some knowledge of HTML can add microformat markup to a text-input form, but uploading an external file dedicated to machine-readable use remains forbiddingly complex with most blogging tools. So, although any intelligent designer ought to be able to rely on the long-established facility of file transfer to publish the "right" model of a social network, the path of least resistance might favor adding one of a handful of fixed tags to an existing indirect form — the "blogroll" of hyperlinks to other people's sites.
  • Directed Graph
    The Consistency and Conformance of Web Document Collection Based on Heterogeneous DAC Graph. Marek Kopel and Aleksander Zgrzywa. www.iis.pwr.wroc.pl, www.zsi.pwr.wroc.pl
    Outline: Background & Idea; Personal Web of Trust; User and Agent Trust; Local Document Ranking & Filtering; Example Scenario; Conclusions & Future Work.
    Relationships in WWW: the directed graph is the most common model of a Web document collection; the documents' hyperlinking relationships (edges) underlie PageRank and HITS. Tim Berners-Lee (in reference to the social aspect of Web 2.0): "I called this graph the Semantic Web, but maybe it should have been Giant Global Graph!"
    Relationships in WWW (2): There's more to a hyperlink than href. The HTML 4.01 attributes rel and rev, e.g. <a href="glos.html" rel="glossary">used definitions</a>, express navigation in a document collection (start, prev, next, contents, index), structure (chapter, section, subsection, appendix, glossary), and meta relations (copyright, help); XHTML 2.0 adds custom namespaces.
    Relationships in WWW (3): Popular relation ontologies include FOAF (<foaf:knows>); the XFN microformat, covering friendship (contact, acquaintance, friend), family (child, parent, sibling, spouse, kin), professional (co-worker, colleague), physical (met), geographical (co-resident, neighbor), and romantic (muse, crush, date, sweetheart) relations; and the rel-tag microformat for folksonomies.
    Heterogeneous DAC Graph: the DAC graph has nodes of three types (Document, Author, Concept); edges between nodes model the relationships; most of the relationships can be acquired directly from the Web data.
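The DAC graph described in the entry above is a heterogeneous directed graph with Document, Author, and Concept nodes and typed edges between them. A minimal Python sketch of such a structure follows (using networkx; the node names and relation labels are invented for illustration and are not taken from the paper).

```python
# Minimal sketch of a heterogeneous DAC graph: nodes of three types
# (Document, Author, Concept) with typed, directed edges between them.
# All names and relation labels are illustrative.

import networkx as nx

g = nx.DiGraph()

# Nodes, each carrying its type
g.add_node("doc1.html", kind="Document")
g.add_node("doc2.html", kind="Document")
g.add_node("Alice", kind="Author")
g.add_node("folksonomy", kind="Concept")

# Typed relationships between the nodes
g.add_edge("doc1.html", "doc2.html", rel="references")   # hyperlink between documents
g.add_edge("Alice", "doc1.html", rel="wrote")             # authorship
g.add_edge("doc1.html", "folksonomy", rel="is_about")     # document tagged with a concept
g.add_edge("Alice", "folksonomy", rel="uses_tag")         # author's tagging vocabulary

# List every relationship in the collection
for u, v, data in g.edges(data=True):
    print(f"{u} --{data['rel']}--> {v}")
```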
  • HTML5 Microdata and Schema.Org
    HTML5 Microdata and Schema.org. journal.code4lib.org/articles/6400. On June 2, 2011, Bing, Google, and Yahoo! announced the joint effort Schema.org. When the big search engines talk, Web site authors listen. This article is an introduction to Microdata and Schema.org. The first section describes what HTML5, Microdata and Schema.org are, and the problems they have been designed to solve. With this foundation in place, section 2 provides a practical tutorial of how to use Microdata and Schema.org using a real life example from the cultural heritage sector. Along the way some tools for implementers will also be introduced. Issues with applying these technologies to cultural heritage materials will crop up along with opportunities to improve the situation. By Jason Ronallo.
    Foundation: HTML5. The HTML5 standard, or (depending on who you ask) the HTML Living Standard, has brought a lot of changes to Web authoring. Amongst all the buzz about HTML5 is a new semantic markup syntax called Microdata. HTML elements have semantics. For example, an ol element is an ordered list, and by default gets rendered with numbers for the list items. HTML5 provides new semantic elements like header, nav, article, aside, section and footer that allow more expressiveness for page authors. A bunch of div elements with various class names is no longer the only way to mark up this content. These new HTML5 elements enable new tools and better services for the Web ecosystem. Browser plugins can more easily pull out the text of the article for a cleaner reading experience. Search engines can give more weight to the article content rather than the advertising in the sidebar.
  • Microformats?
    What are Microformats? "Designed for humans first and machines second, microformats are a set of simple, open data formats built upon existing and widely adopted standards. Instead of throwing away what works today, microformats intend to solve simpler problems first by adapting to current behaviors and usage patterns (e.g. XHTML, blogging)." – Microformats.org. Randy Schauer, CMSC 691M.
    Semantic Web vs. semantic web (from Harry Chen's Geospatial Semantic Web presentation on 3/27/2007). Philosophy: the Semantic Web builds a common data format for expressing the meaning of data, while the lowercase semantic web puts humans first, machines second. Use: the Semantic Web encodes existing Web content with common ontologies to help machines to understand web content, while the semantic web uses standard XHTML with a set of special tags (class-names). Language: RDF, RDFS, OWL versus Microformats. Format: well-formed RDF documents versus valid XHTML documents. Semantic model: defined by the underlying ontology versus loosely defined, with no formal semantic model.
    Advantages: encode explicit information to aid machine readability; they are freely available for anyone to use; simple to both describe and use; offer a way to go beyond just presentation in XHTML.
    Disadvantages: do not address implicit knowledge representation, ontological analysis or logical inference; since there is no equivalent to namespaces, name conflicts are destined to occur with increasing frequency; in order to try to reduce possible conflicts, there is a single repository for microformats, which is not scalable.
    Why use Microformats? In short, microformats are the convergence of a number of trends: a logical next step in the evolution of web design and information architecture; a way for people and organizations to publish richer information themselves, without having to rely upon centralized services; an acknowledgment that (outside of specialist areas)
  • Inconsistency Robustness for Logic Programs Carl Hewitt
    Inconsistency Robustness for Logic Programs. Carl Hewitt. To cite this version: Carl Hewitt. Inconsistency Robustness for Logic Programs. Inconsistency Robustness, 2015, 978-1-84890-159. HAL Id: hal-01148496, https://hal.archives-ouvertes.fr/hal-01148496v6. Submitted on 27 Apr 2016. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
    Abstract. This article explores the role of Inconsistency Robustness in the history and theory of Logic Programs. Inconsistency Robustness has been a continually recurring issue in Logic Programs from the beginning, including Church's system developed in the early 1930s based on partial functions (defined in the lambda calculus) that he thought would allow development of a general logic without the kind of paradoxes that had plagued earlier efforts by Frege, etc. Planner [Hewitt 1969, 1971] was a kind of hybrid between the procedural and logical paradigms in that it featured a procedural embedding of logical sentences: an implication of the form (p implies q) can be procedurally embedded in the following ways:
    Forward chaining: When asserted p, Assert q. When asserted ¬q, Assert ¬p.
    Backward chaining: When goal q, SetGoal p. When goal ¬p, SetGoal ¬q.
    Developments by different research groups in the fall of 1972 gave rise to a controversy over Logic Programs that persists to this day in the form of the following alternatives: 1.
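The Planner-style embedding quoted in the entry above turns an implication (p implies q) into procedures that run forward on assertions and backward on goals. The short Python sketch below illustrates only these two directions (it is not Hewitt's Planner, the contrapositive rules with negation are omitted, and the rule and fact names are invented).

```python
# A minimal, illustrative sketch of the two procedural embeddings of "p implies q":
# forward chaining on asserted facts and backward chaining on goals.
# Not Hewitt's Planner; contrapositive (negated) rules are omitted for brevity.

from collections import deque

RULES = [("rain", "wet_streets"), ("wet_streets", "slippery")]  # (p, q) pairs: p implies q

def forward_chain(facts: set[str]) -> set[str]:
    """When p is asserted, assert q; repeat until no new assertions are produced."""
    derived = set(facts)
    queue = deque(facts)
    while queue:
        fact = queue.popleft()
        for p, q in RULES:
            if p == fact and q not in derived:
                derived.add(q)
                queue.append(q)
    return derived

def backward_chain(goal: str, facts: set[str], seen: frozenset[str] = frozenset()) -> bool:
    """When q is a goal, set p as a subgoal; succeed once a subgoal is a known fact."""
    if goal in facts:
        return True
    if goal in seen:                      # guard against cyclic rule sets
        return False
    return any(backward_chain(p, facts, seen | {goal}) for p, q in RULES if q == goal)

print(forward_chain({"rain"}))              # {'rain', 'wet_streets', 'slippery'}
print(backward_chain("slippery", {"rain"})) # True
```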