Semantic Enrichment of Scientific Documents with Semantic Lenses – Developing Methodologies, Tools and Prototypes for Their Concrete Use

ALMA MATER STUDIORUM – UNIVERSITÀ DEGLI STUDI DI BOLOGNA
FACOLTÀ DI SCIENZE MATEMATICHE FISICHE E NATURALI
Dipartimento di Informatica: Scienza e Ingegneria
Corso di Laurea Specialistica in Informatica

Semantic Enrichment of Scientific Documents with Semantic Lenses – Developing methodologies, tools and prototypes for their concrete use

Supervisor: Prof. Fabio Vitali
Candidate: Jacopo Zingoni
Co-supervisor: Dr. Silvio Peroni
Academic Year 2011-2012, Session II

To Marta and Stefano, my dearly beloved and exemplary parents, extraordinary, sweet, and always ready for anything. To Giuliana and Tosca, unforgettable grandmothers, always close to my heart, whose lessons I hope to always treasure. Thank you for all the support, for every word of encouragement. And above all, thank you for all the love you have always given me.

Contents

Introduzione
1 Introduction and Aims of this Work – Applications for Semantic Lenses
2 Scientific Context and Related Works
  2.1 General introduction to Semantic Web and Semantic Publishing
  2.2 Semantic Web, Semantic Publishing and the Enrichment of Knowledge
  2.3 Other Related Works
3 Technologies and Ontologies
  3.1 RDF and Ontologies in General
    3.1.1 RDF/XML Syntax Linearization
    3.1.2 Terse RDF Triple Language – TURTLE
  3.2 EARMARK – Extremely Annotational RDF Markup
    3.2.1 Ghost Classes
    3.2.2 Shell Classes
    3.2.3 Handling Overlapping Markup with EARMARK
  3.3 Linguistic Acts Ontology
  3.4 SPAR Area Ontologies
  3.5 Toulmin Argument Model
4 The Semantic Lenses Model
  4.1 Introduction to Semantic Lenses
  4.2 Facets of a Document – Different Outlooks, Aspects and Layers
  4.3 Semantic Lenses in Detail
    4.3.1 Research Context Lens
    4.3.2 Contributions and Roles Lens
    4.3.3 The Publication Context Lens
    4.3.4 The Document Structure Lens
    4.3.5 The Rhetoric Organization Lens
    4.3.6 The Citation Network Lens
    4.3.7 The Argumentation Lens
    4.3.8 The Textual Semantics Lens
  4.5 Structural Patterns and the Pattern Ontology
  4.6 DOCO – The Document Components Ontology
  4.7 DEO – The Discourse Elements Ontology
  4.8 CiTO – The Citation Typing Ontology
  4.9 AMO – The Argument Model Ontology
5 SLM – Semantic Lenses Methodology
  5.1 Applying Semantic Lenses – Authorial or Editorial Activity?
  5.2 Proposing a general methodology
  5.3 The target paper – "Ontologies are Us"
  5.4 The target lenses
  5.5 Porting the target document to EARMARK
  5.6 The Application of the Document Structure Lens
  5.7 The Application of the Rhetoric Organization Lens
  5.8 The Application of the Citation Network Lens
  5.9 The Application of the Argumentation Lens
6 SLAM – Semantic Lenses Application & Manipulation
  6.1 Introduction – Tasks, Aims, Necessities and Priorities
    6.1.1 Jena API
    6.1.2 EARMARK API
  6.2 SLAM features and architecture
  6.3 Details on the code and on SLAM Classes
    6.3.1 Core Package
    6.3.2 Utilities Package
  6.4 From Theory to Practice – Mikalens sub-package and its structure
7 TAL – Through a Lens
  7.1 User Interaction with Lenses – The Focusing Activity
  7.2 Through a Lens – TAL
  7.3 TAL Features
    7.3.1 Explorable Argumentation Index
    7.3.2 Explorable Citation Index
    7.3.3 Rhetoric and Citation Tooltips
    7.3.4 Contextual Rhetoric Denotations
  7.4 Prototype generation from the test-bed through Java
  7.5 Other Implementation details
8 Evaluation of SLAM and TAL
  8.1 Enacting the methodology and using the framework to field test lenses
  8.2 Applying the annotations, lens by lens
    8.2.1 The Application of the Document Structure Lens
    8.2.2 The Application of the Rhetoric Organization Lens
    8.2.3 The Application of the Citation Network Lens
    8.2.4 The Application of the Argumentation Lens
  8.3 Overall statistics
  8.4 Lessons Learned and possible, recommended improvements
    8.4.1 Suggested changes in the SPAR Ontologies
  8.5 User Testing the TAL prototype
9 Conclusions
Conclusioni
Bibliography – Ontologies References
Bibliography

Introduzione

With this thesis I aim to illustrate the results of my research in the field of Semantic Publishing: the development of a set of methodologies, tools and prototypes, combined with the study of a concrete use case, aimed at the application and focusing of Semantic Lenses, an effective model for the semantic enrichment of scientific documents [PSV12a].

Semantic Publishing is a revolutionary approach to the way scientific documents, such as research articles, can be read, used, addressed and become the object of new forms of interaction. But what exactly is Semantic Publishing? In the words of its main proponent: "I define Semantic Publishing as anything that enhances the meaning of a published scientific article, facilitates its automated discovery, enables its linking to semantically related articles, provides access to the data within the article in actionable form, or facilitates the integration of data between documents. Among other things, it requires the enrichment of the article with appropriate metadata that are understandable, analysable and automatically processable, so as to improve the verifiability of the information contained in the publication, and to provide for its automatic summarisation or its automatic discovery by other agents." [SKM09]

The work I have carried out, and that I will discuss in the following pages, is therefore a comprehensive contribution to the field of Semantic Publishing. First of all, it shows the feasibility and the advantages of the Semantic Lenses model for an appropriate metadata enrichment, by proposing a detailed methodology for reaching this goal. It also indicates a possible way to overcome the challenges that may be encountered along this path, by developing the appropriate tools and workable solutions. And it is a practical demonstration of some of the new interactions and opportunities made possible by an interface prototype generated from a scientific document enriched through the appropriate annotation of the lenses upon it.

My demonstration is based on Semantic Lenses, a model for the semantic enrichment of scientific documents accompanied by a set of technologies recommended for its implementation. It is important to observe that the enrichment of a traditional scientific article is not a one-dimensional operation: beyond the mere act of adding semantically precise assertions about the textual content, many other facets are involved.
All these aspects, which coexist simultaneously in an article, can be identified in the concrete manifestation of a document's semantics through the application of specific filters, able to emphasise a precise set of significant information about one domain rather
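The kind of machine-processable metadata the introduction calls for can be made concrete with a small sketch. The snippet below uses Apache Jena (the thesis's SLAM framework is described as building on the Jena API, though its actual code is not reproduced here) to attach two lens-style statements to a document: a typed citation from the CiTO ontology and a rhetorical classification from DEO, both part of the SPAR family covered in chapter 4. The article URIs, the chosen properties and the modern org.apache.jena package names are illustrative assumptions, not the thesis's own data or code.

    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.rdf.model.Resource;
    import org.apache.jena.vocabulary.DCTerms;
    import org.apache.jena.vocabulary.RDF;

    public class LensAnnotationSketch {
        public static void main(String[] args) {
            Model model = ModelFactory.createDefaultModel();

            // Namespaces of two SPAR ontologies used by the lenses.
            String cito = "http://purl.org/spar/cito/";
            String deo  = "http://purl.org/spar/deo/";
            model.setNsPrefix("cito", cito);
            model.setNsPrefix("deo", deo);
            model.setNsPrefix("dcterms", DCTerms.NS);

            // Hypothetical URIs standing in for the enriched article and a cited one.
            Resource article = model.createResource("http://example.org/ontologies-are-us");
            Resource cited   = model.createResource("http://example.org/some-related-work");

            article.addProperty(DCTerms.creator, "Peter Mika");
            // Citation Network lens: state *why* the citation exists, not just that it does.
            article.addProperty(model.createProperty(cito, "extends"), cited);
            // Rhetoric Organization lens: classify a fragment as background discourse.
            Resource section = model.createResource(article.getURI() + "#section-1");
            section.addProperty(RDF.type, model.createResource(deo + "Background"));

            model.write(System.out, "TURTLE");
        }
    }

Printing the model in Turtle (the linearization discussed in section 3.1.2) makes the added statements directly visible and queryable by other agents.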
Recommended publications
  • A Personal Research Agent for Semantic Knowledge Management of Scientific Literature
    A Personal Research Agent for Semantic Knowledge Management of Scientific Literature. Bahar Sateli. A Thesis in the Department of Computer Science and Software Engineering. Presented in Partial Fulfillment of the Requirements For the Degree of Doctor of Philosophy (Computer Science) at Concordia University, Montréal, Québec, Canada. February 2018. © Bahar Sateli, 2018.

    Concordia University School of Graduate Studies. This is to certify that the thesis prepared By: Bahar Sateli, Entitled: A Personal Research Agent for Semantic Knowledge Management of Scientific Literature, and submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy (Computer Science) complies with the regulations of this University and meets the accepted standards with respect to originality and quality. Signed by the final examining committee: Chair: Dr. Georgios Vatistas; External Examiner: Dr. Guy Lapalme; Examiner: Dr. Ferhat Khendek; Examiner: Dr. Volker Haarslev; Examiner: Dr. Juergen Rilling; Supervisor: Dr. René Witte. Approved by Dr. Volker Haarslev, Graduate Program Director, 9 April 2018; Dr. Amir Asif, Dean, Faculty of Engineering and Computer Science.

    Abstract. A Personal Research Agent for Semantic Knowledge Management of Scientific Literature. Bahar Sateli, Ph.D., Concordia University, 2018. The unprecedented rate of scientific publications is a major threat to the productivity of knowledge workers, who rely on scrutinizing the latest scientific discoveries for their daily tasks. Online digital libraries, academic publishing databases and open access repositories grant access to a plethora of information that can overwhelm a researcher, who is looking to obtain fine-grained knowledge relevant for her task at hand. This overload of information has encouraged researchers from various disciplines to look for new approaches in extracting, organizing, and managing knowledge from the immense amount of available literature in ever-growing repositories.
  • Semantic Web Technologies and Legal Scholarly Publishing Law, Governance and Technology Series
    Semantic Web Technologies and Legal Scholarly Publishing. Law, Governance and Technology Series, VOLUME 15. Series Editors: POMPEU CASANOVAS, Institute of Law and Technology, UAB, Spain; GIOVANNI SARTOR, University of Bologna (Faculty of Law – CIRSFID) and European University Institute of Florence, Italy. Scientific Advisory Board: GIANMARIA AJANI, University of Turin, Italy; KEVIN ASHLEY, University of Pittsburgh, USA; KATIE ATKINSON, University of Liverpool, UK; TREVOR J.M. BENCH-CAPON, University of Liverpool, UK; V. RICHARDS BENJAMINS, Telefonica, Spain; GUIDO BOELLA, Università degli Studi di Torino, Italy; JOOST BREUKER, Universiteit van Amsterdam, The Netherlands; DANIÈLE BOURCIER, University of Paris 2-CERSA, France; TOM BRUCE, Cornell University, USA; NURIA CASELLAS, Institute of Law and Technology, UAB, Spain; CRISTIANO CASTELFRANCHI, ISTC-CNR, Italy; JACK G. CONRAD, Thomson Reuters, USA; ROSARIA CONTE, ISTC-CNR, Italy; FRANCESCO CONTINI, IRSIG-CNR, Italy; JESÚS CONTRERAS, iSOCO, Spain; JOHN DAVIES, British Telecommunications plc, UK; JOHN DOMINGUE, The Open University, UK; JAIME DELGADO, Universitat Politécnica de Catalunya, Spain; MARCO FABRI, IRSIG-CNR, Italy; DIETER FENSEL, University of Innsbruck, Austria; ENRICO FRANCESCONI, ITTIG-CNR, Italy; FERNANDO GALINDO, Universidad de Zaragoza, Spain; ALDO GANGEMI, ISTC-CNR, Italy; MICHAEL GENESERETH, Stanford University, USA; ASUNCIÓN GÓMEZ-PÉREZ, Universidad Politécnica de Madrid, Spain; THOMAS F. GORDON, Fraunhofer FOKUS, Germany; GUIDO GOVERNATORI, NICTA, Australia; GRAHAM
  • Proceedings of Balisage: the Markup Conference 2012
    Published in: Proceedings of Balisage: The Markup Conference 2012, at: http://www.balisage.net/Proceedings/vol6/print/Witt01/BalisageVol6-Witt01.html (downloaded on: 15.02.2017/14:19).

    A standards-related web-based information system. Maik Stührenberg, Institut für Deutsche Sprache (IDS) Mannheim / Universität Bielefeld <maik.Stuehrenberg@uni-bielefeld.de>; Oliver Schonefeld, Institut für Deutsche Sprache (IDS) Mannheim <[email protected]>; Andreas Witt, Institut für Deutsche Sprache (IDS) Mannheim <witt@ids-mannheim.de>. Balisage: The Markup Conference 2012, August 7-10, 2012. Copyright © 2012 by the authors. Used with permission. How to cite this paper: Stührenberg, Maik, Oliver Schonefeld and Andreas Witt. "A standards-related web-based information system." Presented at Balisage: The Markup Conference 2012, Montreal, Canada, August 7-10, 2012. In Proceedings of Balisage: The Markup Conference 2012. Balisage Series on Markup Technologies, vol. 8 (2012). DOI: 10.4242/BalisageVol8.Stuhrenberg01.

    Abstract: This late breaking proposal introduces the prototype of a web-based information system dealing with standards in the field of annotation. Table of Contents: Introduction; The problem with standards; Providing guidance; Information structure; Representation; Current state and future work.

    Introduction: This late breaking proposal is based on an ongoing effort started in the CLARIN project, which was presented briefly at the LREC 2012 Workshop on Collaborative Resource Development and Delivery. The initial point was the development of an easy-to-use information system for the description of standards developed in ISO/IEC TC37/SC4 Language Resources Management. Since these standards are heavily related to each other, it is usually not feasible to adopt only a single standard for one's work; instead, one has to dive into the standards jungle in full.
  • Towards the Unification of Formats for Overlapping Markup
    Towards the unification of formats for overlapping markup. Paolo Marinelli, Fabio Vitali, Stefano Zacchiroli. [email protected] [email protected] [email protected]

    Abstract: Overlapping markup refers to the issue of how to represent data structures more expressive than trees (for example, directed acyclic graphs) using markup (meta-)languages which have been designed with trees in mind (for example, XML). In this paper we observe that the state of the art in overlapping markup is far from being the widespread and consistent stack of standards and technologies readily available for XML, and develop a roadmap for closing the gap. In particular we present in the paper the design and implementation of what we believe to be the first needed step, namely a syntactic conversion framework among the plethora of overlapping markup serialization formats. The algorithms needed to perform the various conversions are presented in pseudo-code; they are meant to be used as blueprints for researchers and practitioners who need to write batch translation programs from one format to the other.

    1 Introduction: This paper is about markup, one of the key technological ingredients of hypertext. The particular aspects of markup we are concerned with are the limits of its expressivity. XML-based markup requires that the identified features of a document are organized hierarchically as a single tree, whereby each fragment of the content of the document is contained in one and only one XML element, each of which is contained within one and only one parent element, all the way up to the single root element at the top.
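    The core claim, that overlapping features form a directed acyclic graph rather than a tree, can be seen with a toy check in modern Java (this is not the paper's conversion framework; the range model, names and numbers below are invented for illustration): two annotated regions of the same text intersect without either one containing the other, so no single well-formed XML tree can hold both as ordinary elements.

        // Toy illustration of overlap: neither range nests inside the other.
        public class OverlapCheck {
            record Range(String name, int start, int end) {}

            // True when `outer` completely contains `inner` (proper XML nesting is possible).
            static boolean nests(Range outer, Range inner) {
                return outer.start() <= inner.start() && inner.end() <= outer.end();
            }

            // True when the ranges intersect but neither contains the other (overlap).
            static boolean overlaps(Range a, Range b) {
                boolean intersect = a.start() < b.end() && b.start() < a.end();
                return intersect && !nests(a, b) && !nests(b, a);
            }

            public static void main(String[] args) {
                Range verseLine = new Range("line", 0, 40);
                Range sentence  = new Range("sentence", 25, 70); // crosses the line boundary
                System.out.println(overlaps(verseLine, sentence)); // prints: true
            }
        }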
  • The MLCD Overlap Corpus (MOC)
    Digital Humanities 2010. The MLCD Overlap Corpus (MOC). Huitfeldt, Claus ([email protected]), Department of Philosophy, University of Bergen; Sperberg-McQueen, C. M. ([email protected]), Black Mesa Technologies LLC, USA; Marcoux, Yves ([email protected]), Université de Montréal, Canada.

    For some time, theorists and practitioners of descriptive markup have been aware that the strict hierarchical organization of elements provided by SGML and XML represents a potentially problematic abstraction. The nesting structures of SGML and XML capture an important property of real texts and represent a successful combination of expressive power and tractability. But not all textual phenomena appear in properly nested form, and for more [...] semantic openness of SGML and XML to supply non-hierarchical interpretation of what are often thought to be inescapably hierarchical notations.

    The SGML/XML-based approaches to overlap fall, roughly, into three groups: milestones, fragmentation-and-reconstitution, and stand-off annotation. Milestones (described as early as (Barnard et al. 1988), and used in (Sperberg-McQueen and Burnard 1990) and later versions of the TEI Guidelines) use empty elements to mark the boundaries of regions which cannot be marked simply as elements because they overlap the boundaries of other elements. More recently, approaches to milestone markup have been generalized in the Trojan Horse and CLIX markup idioms (DeRose 2004). Fragmentation is the technique of dividing a logical unit which overlaps other units into several smaller units, which do not; the consuming application can then re-aggregate the fragments. Stand-off annotation addresses the overlap problem by removing the markup from the main data stream of the document, at the same time adding pointers back into the base data.
  • XML Prague 2019
    XML Prague 2019 Conference Proceedings. University of Economics, Prague, Prague, Czech Republic, February 7–9, 2019. Copyright © 2019 Jiří Kosek. ISBN 978-80-906259-6-9 (pdf), ISBN 978-80-906259-7-6 (ePub).

    Table of Contents: General Information; Sponsors; Preface; Task Abstraction for XPath Derived Languages (Debbie Lockett and Adam Retter); A novel approach to XSLT-based Schematron validation (David Maus); Authoring DSLs in Spreadsheets Using XML Technologies (Alan Painter); How to configure an editor (Martin Middel); Discover the Power of SQF (Octavian Nadolu and Nico Kutscherauer); Tagdiff: a diffing tool for highlighting differences in text-oriented XML (Cyril Briquet); Merge and Graft: Two Twins That Need To Grow Apart (Robin La Fontaine and Nigel Whitaker)
  • Journal of the Text Encoding Initiative, Issue 8
    Journal of the Text Encoding Initiative, Issue 8 | December 2014 - December 2015. Selected Papers from the 2013 TEI Conference. Arianna Ciula and Fabio Ciotti (dir.). Electronic version URL: http://journals.openedition.org/jtei/1025 DOI: 10.4000/jtei.1025 ISSN: 2162-5603. Publisher: TEI Consortium. Electronic reference: Arianna Ciula and Fabio Ciotti (dir.), Journal of the Text Encoding Initiative, Issue 8 | December 2014 - December 2015, « Selected Papers from the 2013 TEI Conference » [Online], Online since 28 December 2014, connection on 23 September 2020. URL: http://journals.openedition.org/jtei/1025; DOI: https://doi.org/10.4000/jtei.1025. This text was automatically generated on 23 September 2020. TEI Consortium 2014 (Creative Commons Attribution-NoDerivs 3.0 Unported License).

    Table of Contents: Editorial Introduction to Issue 8 of the Journal of the Text Encoding Initiative (Arianna Ciula and Fabio Ciotti). TEI in Relation to Other Semantic and Modeling Formalisms: TEI, Ontologies, Linked Open Data: Geolat and Beyond (Fabio Ciotti, Maurizio Lana and Francesca Tomasi); The Linked Fragment: TEI and the Encoding of Text Reuses of Lost Authors (Monica Berti, Bridget Almas, David Dubin, Greta Franzini, Simona Stoyanova and Gregory R. Crane); TEI in LMNL: Implications for Modeling (Wendell Piez); Ontologies, Data Modeling, and TEI (Øyvind Eide). TEI to Express Domain Specific Text and Data Models: Texts and Documents: New Challenges for TEI Interchange and Lessons from the Shelley-Godwin Archive (Trevor Muñoz and Raffaele Viglianti); Manus OnLine and the Text
  • Standoff Properties As an Alternative to XML for Digital Historical Editions
    Standoff properties as an alternative to XML for digital historical editions. Desmond Schmidt, eResearch, ITEE, University of Queensland, Australia. [email protected]

    Abstract: In the past few years digital humanists have recognised that the development of tools for the collaborative creation of shareable, reusable and archivable transcriptions of historical documents is being held back by the complexity and lack of interoperability of customisable XML encoding standards. To fix this problem transcriptions must be split into their constituent parts: a plain text and individual sets of markup. The plain text is easily processable by text-analysis software or search engines, and may be divided into layers representing different states of each document. Each markup set may also represent a separate layer of information to enrich an underlying plain text, and may be freely combined with others. Unlike in XML, textual properties in markup sets can freely overlap. Plain text versions and their corresponding markup can also be reliably and efficiently converted into HTML, the language of the Web. Unlike previous attempts at separating markup from text, the proposed data representation is designed to be fully editable, and development of a suitable Web-based editor is at an advanced stage. To solve the problems we now face will require a radical rethink of how we mark up texts, and the adoption of globally interoperable standards to which these internal representations can be easily converted, such as HTML, plain text and RDFa.

    Standoff properties as an alternative to XML for digital historical editions. Introduction: Most digital humanists are familiar with markup, or the use of named tags to indicate properties that text should possess in addition to its basic character data.
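    A minimal sketch of this representation, a plain text plus a separate set of freely overlapping (name, start, end) properties rendered to HTML, is given below. It is a toy under assumed names in modern Java, not Schmidt's actual data format or editor code: overlapping property ranges are fragmented at their boundaries so that every emitted <span> nests properly even though the source properties do not.

        import java.util.List;
        import java.util.TreeSet;

        public class StandoffDemo {
            // A standoff property: a named range over the plain text (hypothetical structure).
            record Property(String name, int start, int end) {}

            public static void main(String[] args) {
                String text = "Standoff markup separates text from annotation.";
                List<Property> props = List.of(
                        new Property("em", 0, 15),    // overlaps the next property
                        new Property("term", 9, 25)); // neither range contains the other
                System.out.println(toHtml(text, props));
            }

            // Render overlapping properties as HTML by splitting the text at every range
            // boundary, so each emitted fragment carries a properly nested set of spans.
            static String toHtml(String text, List<Property> props) {
                TreeSet<Integer> cuts = new TreeSet<>(List.of(0, text.length()));
                for (Property p : props) { cuts.add(p.start()); cuts.add(p.end()); }

                StringBuilder html = new StringBuilder();
                Integer from = cuts.first();
                for (Integer to : cuts.tailSet(from, false)) {
                    final int a = from, b = to;
                    List<String> active = props.stream()
                            .filter(p -> p.start() <= a && b <= p.end())
                            .map(Property::name).toList();
                    for (String name : active) html.append("<span class=\"").append(name).append("\">");
                    html.append(text, a, b);
                    for (int i = active.size() - 1; i >= 0; i--) html.append("</span>");
                    from = to;
                }
                return html.toString();
            }
        }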
  • Risoluzione E Linearizzazione Di Overlap in Linguaggi Di Markup
    Alma Mater Studiorum - Università di Bologna, Facoltà di Scienze Matematiche, Fisiche e Naturali, Corso di Laurea Specialistica in Informatica. Subject: Web Technologies. Risoluzione e linearizzazione di overlap in linguaggi di markup (Resolution and linearisation of overlap in markup languages). Thesis by Francesco Poggi. Supervisor: Prof. Fabio Vitali. Co-supervisor: Dott. Silvio Peroni. Keywords: EARMARK, Overlapping markup, Document conversion, OWL, XML. Session I, Academic Year 2010-2011.

    Contents: Introduction; 2 Overlapping Markup; 2.1 The OHCO thesis and the tree model; 2.2 Overlapping markup; 2.2.1 Complex features related to overlap; 2.3 Current approaches to the overlap problem; 2.3.1 Solutions at the markup-language level; 2.3.1.1 TEI milestones; 2.3.1.2 Flat milestones
  • Towards a Formal Ontology for the Text Encoding Initiative
    Umanistica Digitale - ISSN: 2532-8816 - n. 3, 2018. F. Ciotti – Towards a Formal Ontology for the Text Encoding Initiative. DOI: http://doi.org/10.6092/issn.2532-8816/8174

    Towards a Formal Ontology for the Text Encoding Initiative. Fabio Ciotti, Università di Roma "Tor Vergata", [email protected]

    Abstract. This article presents a preliminary proposal of an ontology for the representation of the Text Encoding Initiative (TEI) schema. The motivations for and benefits of a semantic, machine-readable version of the TEI are manifold: (1) semantic interoperability between text encodings would facilitate cross-corpora querying and the integration of Linked Open Data; (2) the formalisation of the TEI abstract model would bring significant improvements in terms of consistency and soundness. Given the complexity of the TEI schema, a substantial but restricted subset of elements and attributes is considered here. Secondly, the EARMARK meta-ontology is examined. Finally, some implementation details are provided. This article presents the rationale and the proposal of a preliminary architecture of a formal ontology of the Text Encoding Initiative markup language. The reasons to have a formal and machine-readable semantics for TEI are manifold.
  • On the Use of TeX as an Authoring Language for HTML5
    278 TUGboat, Volume 32 (2011), No. 3. On the use of TeX as an authoring language for HTML5. S.K. Venkatesan

    Abstract: The TeX syntax has been fairly successful at marking up a variety of scientific and technical literature, making it an ideal authoring syntax. The brevity of the TeX syntax makes it difficult to create overlapping structures, which in the case of HTML has made life so difficult for XML purists. We discuss S-expressions, the TeX syntax and how it can help reduce the nightmare that HTML5 markup is going to create. Apart from this we implement a new syntax for marking up semantic information (microdata) in TeX.

    1 Introduction: The brevity of TeX syntax has made it fairly successful at marking up a variety of scientific and technical literature. [...]

    The difference between the above two S-expressions is that the former introduces a deliberate asymmetry between attributes and elements, whereas the latter treats attributes on a par with elements. However, both S-expressions can be considered as an improvement on XML as they allow further nesting within attributes. The corresponding XML code would be: <title lang="en">Title of a <italic>plain</italic> article</title> In both TeX and XML syntax, further nesting of structures is not possible within attributes, which makes TeX ideal for authoring XML or HTML5. There are further similarities between the TeX and SGML/HTML syntaxes. Attribute minimization used in HTML, like not quoting attribute values, is very much practiced in TeX syntax, more as a rule rather than the exception; e.g., \includegraphics[width=2cm]{myimage.gif} Unlike SGML/HTML, TeX typically uses a comma as the separator between attributes, instead of the
  • Proceedings of Extreme Markup Languages®
    Published in: Proceedings of Extreme Markup Languages 2007, Montreal, Canada. - Montreal, 2007.

    On the Lossless Transformation of Single-File, Multi-Layer Annotations into Multi-Rooted Trees. Andreas Witt, Oliver Schonefeld, Georg Rehm, Jonathan Khoo, Kilian Evang

    Abstract: The Generalised Architecture for Sustainability (GENAU) provides a framework for the transformation of single-file, multi-layer annotations into multi-rooted trees. By employing constraints expressed in XCONCUR-CL, this procedure can be performed losslessly, i.e., without losing information, especially with regard to the nesting of elements that belong to multiple annotation layers. This article describes how different types of linguistic corpora can be transformed using specialised tools, and how constraint rules can be applied to the resulting multi-rooted trees to add an additional level of validation. Keywords: Validating; Trees/Graphs; Concurrent Markup/Overlap

    Table of Contents: Introduction; Generalised Architecture for Sustainability (GENAU); Transforming Single Rooted Trees; Leveler Pipeline; The Leveler Web Application; Leveler: Step 1; Leveler: Step 2; Considerations and Error Detection; Transforming Annotation Graphs; XCONCUR Document Syntax; Validation; Basic constraint expressions; Common derived operators; Rule Evaluation; Compact Syntax; GENAU and XCONCUR; Conclusion

    Andreas Witt received his PhD in Computational Linguistics and Text Technology from the University of Bielefeld in 2002. After graduating in 1996, he started as a researcher and instructor in Computational Linguistics and Text Technology at Bielefeld University. He was heavily involved in the establishment of the minor subject Text Technology in Bielefeld University's Magister and B.A. program. In 2006 he moved to the University of Tübingen, where he is engaged in a project on Sustainability of Linguistic Resources.