Semantics Driven Agent Programming
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
FI-WARE User and Programmers Guide WP6 Front Page
Large-scale Integrated Project (IP) D.6.4.a: User and Programmers Guide. Project acronym: FI-WARE. Project full title: Future Internet Core Platform. Contract No.: 285248. Strategic Objective: FI.ICT-2011.1.7 Technology foundation: Future Internet Core Platform. Project Document Number: ICT-2011-FI-285248-WP6-D6.4a. Project Document Date: 30 June 2012. Deliverable Type and Security: Public. Author: FI-WARE Consortium. Contributors: FI-WARE Consortium.
Abstract: This deliverable contains the User and Programmers Guide for the Generic Enablers of the Data/Context Management chapter included in the first FI-WARE release. Keyword list: FI-WARE, Generic Enabler, Bigdata, Context, Events, Semantic Web, Multimedia.
Contents: BigData Analysis - User and Programmer Guide 2; CEP - User and Programmer Guide 89; Location Platform - User and Programmer Guide 90; Multimedia Analysis - User and Programmer Guide 95; Query Broker - User and Programmer Guide 105; Semantic Annotation - Users and Programmers Guide 110; Semantic Application Support - Users and Programmers Guide 113.
BigData Analysis - User and Programmer Guide. Introduction: This guide covers the user and development aspects of the SAMSON platform, version 0.6.1. The SAMSON platform has been designed for processing large amounts of data in continuous (streaming) mode, distributing tasks across a medium-size cluster of computers following the MapReduce paradigm (Jeffrey Dean and Sanjay Ghemawat, "MapReduce: Simplified data processing on large clusters" [1]). The platform provides the framework a developer needs to focus on solving analytical problems without having to know how jobs and data are distributed and synchronized.
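For readers unfamiliar with the MapReduce paradigm the SAMSON platform follows, a minimal word-count sketch in Python is shown below; it is a generic illustration of the map and reduce phases, not SAMSON's actual streaming API.

```python
# Generic MapReduce-style word count; illustrative only, not the SAMSON API.
from collections import defaultdict

def map_phase(documents):
    """Map step: emit (word, 1) pairs for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Reduce step: sum the counts emitted for each distinct key."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

if __name__ == "__main__":
    docs = ["the quick brown fox", "the lazy dog", "the fox"]
    print(reduce_phase(map_phase(docs)))  # e.g. {'the': 3, 'quick': 1, ...}
```

In a real cluster the map and reduce phases run on different machines and the pairs are shuffled by key between them; the platform handles that distribution so the developer only writes the two functions.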
INFOCOMP / DataSys 2017 International Expert Panel: Challenges on Web Semantic Mapping and Information Processing
June 28, 2017, Venice, Italy. The Seventh International Conference on Advanced Communications and Computation (INFOCOMP 2017), The Seventh International Conference on Advances in Information Mining and Management (IMMM 2017), and The Twelfth International Conference on Internet and Web Applications and Services (ICIW 2017); INFOCOMP/IMMM/ICIW (DataSys), June 25-29, 2017, Venice, Italy.
Panelists: Claus-Peter Rückemann (Moderator), Westfälische Wilhelms-Universität Münster (WWU) / Leibniz Universität Hannover / North-German Supercomputing Alliance (HLRN), Germany; Marc Jansen, University of Applied Sciences Ruhr West, Germany; Fahad Muhammad, CSTB, Sophia Antipolis, France; Kiyoshi Nagata, Daito Bunka University, Japan; Claus-Peter Rückemann, WWU Münster / Leibniz Universität Hannover / HLRN, Germany.
INFOCOMP 2017: http://www.iaria.org/conferences2017/INFOCOMP17.html Program: http://www.iaria.org/conferences2017/ProgramINFOCOMP17.html
A Question Answering System for Chemistry
Preprint, Cambridge Centre for Computational Chemical Engineering, ISSN 1473-4273. Xiaochi Zhou1, Daniel Nurkowski4, Sebastian Mosbach1,2, Jethro Akroyd1,2, Markus Kraft1,2,3. Released: January 25, 2021. 1 Department of Chemical Engineering and Biotechnology, University of Cambridge, Philippa Fawcett Drive, Cambridge, CB3 0AS, United Kingdom. 2 CARES, Cambridge Centre for Advanced Research and Education in Singapore, 1 Create Way, CREATE Tower, #05-05, Singapore, 138602. 3 School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, Singapore, 637459. 4 CMCL Innovations, Sheraton House, Castle Park, Castle Street, Cambridge, CB3 0AX, United Kingdom. Preprint No. 266. Keywords: Question Answering system, Natural language processing, Knowledge Graph. Edited by the Computational Modelling (CoMo) Group, Department of Chemical Engineering and Biotechnology, University of Cambridge, Philippa Fawcett Drive, Cambridge, CB3 0AS, United Kingdom. E-Mail: [email protected]. World Wide Web: https://como.ceb.cam.ac.uk/.
Abstract: This paper describes the implementation and evaluation of a proof-of-concept Question Answering system for accessing chemical data from knowledge graphs, which offer data ranging from chemical kinetics to chemical and physical properties of species. We trained a question type classification model and an entity extraction model to interpret chemistry questions of interest. The system has a novel design which applies a topic model to identify the question-to-ontology affiliation; the topic model helps the system provide more accurate answers. A new method that automatically generates training questions from ontologies is also implemented. The question set generated for training contains 80085 questions under 8 types.
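A minimal sketch of what template-based generation of training questions from ontology terms could look like; the templates, species names, and question-type labels below are invented for illustration and are not the paper's actual implementation.

```python
# Hypothetical template-based question generation from ontology terms;
# all terms, templates, and type labels are invented for this sketch.
from itertools import product

ontology_terms = {
    "species": ["methane", "benzene", "ethanol"],
    "property": ["boiling point", "molecular weight", "enthalpy of formation"],
}

templates = [
    ("what is the {property} of {species}", "property_lookup"),
    ("show the {property} for {species}", "property_lookup"),
]

def generate_questions(terms, templates):
    """Fill each template with every combination of ontology terms."""
    questions = []
    for (template, qtype), (species, prop) in product(
        templates, product(terms["species"], terms["property"])
    ):
        questions.append((template.format(species=species, property=prop), qtype))
    return questions

for text, qtype in generate_questions(ontology_terms, templates)[:3]:
    print(qtype, "->", text)
```

Generating questions this way yields a large, automatically labeled training set, which is the general motivation behind the paper's 80085 generated questions.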
Introduction to Linked Data: Discussion #1 | University of Texas at Austin | August 13, 2018
Facilitated by: Itza A. Carbajal, LLILAS Benson Latin American Metadata Librarian. Hello! My name is Itza A. Carbajal. I will be facilitating this discussion series on Linked Data. I am the Latin American Metadata Librarian for a Post Custodial project at LLILAS Benson. Have questions? Email me at: [email protected]. Like twitter? You can find me at: @archiviststan.
Housekeeping: ◎ Discussions are meant to highlight the collective wisdom of the group ◎ Attendance at all discussion meetings is not required, but recommended ◎ Take-home practices are not required, but encouraged to further an individual's understanding ◎ Readings are not required, but should be considered as a method for self-education or for sharing with others. Link to LIVE syllabus: http://bit.ly/SYLLABUSUTLD Link to reading materials: http://bit.ly/READUTLD
Discussion #1 topics: ◎ Semantic Web ◎ Linked data principles ◎ RDF and Triple statements. Related topics not covered in discussions: ◎ Linked Open Data ◎ Examples of linked open data sets ◎ Linked Data platform.
First, the Semantic Web. Semantic Web (project): ◎ Derives from the concept of semantics, defined as the study of meanings ◎ Extension of the current World Wide Web ◎ Also known as Web 3.0, where the web can now "read-write-execute" ◎ Changes the current web of information to a web of data, including data inside the web and outside ◎ Core functions include semantic markup (data interchange formats accessible to humans and machines) ◎ Focuses on adding meaning/context to information, giving machines and humans the ability to communicate and cooperate ◎ Relies on machine-readable metadata to express that meaning/context ◎ No formal definition; still ongoing and constantly maturing.
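To make the "RDF and triple statements" topic concrete, here is a small sketch using Python's rdflib; the example resources and the choice of FOAF properties are illustrative only.

```python
# Illustrative RDF triple statements with rdflib; example URIs are made up.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")

g = Graph()
g.bind("foaf", FOAF)

# One statement = subject, predicate, object.
alice = URIRef("http://example.org/alice")
g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice")))
g.add((alice, FOAF.knows, EX.bob))

# Serialize the graph as Turtle, a common linked-data exchange format.
print(g.serialize(format="turtle"))
```

Each `g.add(...)` call records one triple, and the serialized Turtle is what a linked-data publisher would expose on the web for machines to read.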
Constructing Reference Semantic Predictions from Biomedical Knowledge Sources
Demeke Ayele1, J. P. Chevallet2, Million Meshesha1, Getnet Kassie1. (1) Addis Ababa University, Addis Ababa, Ethiopia; (2) University of Grenoble, Grenoble, France. {demekeayele, meshe8, getnetmk}@gmail.com, jean-[email protected].
Abstract: Semantic tuples are a core component of text mining and knowledge extraction systems in biomedicine. The practical success of these systems significantly depends on the correctness and quality of the extracted semantic tuples. The quality and correctness of the semantic predictions can be measured against a benchmark semantic structure. In this article, we present an approach for constructing a reference semantic tuple structure based on existing biomedical knowledge sources; the evaluation is based on the UMLS knowledge sources. In the evaluation, 7400 semantic triples were extracted from the UMLS knowledge sources and the semantic predictions were constructed using the proposed approach. Among the semantic triples, 87 concepts were found to be redundantly classified, 207 pairs of semantic triples were hierarchically inconsistent, and 128 were non-taxonomically inconsistent. The quality of the semantic triples was also judged by expert evaluators. Cohen's kappa coefficient is used to measure the degree of agreement between the two evaluators, and the result is promising (0.9).
Résumé (in translation): Constructing reference semantic predictions from biomedical knowledge sources. Semantic tuples are an essential component of text mining and knowledge extraction systems in the biomedical domain. The practical success of systems exploiting this semantic information depends strongly on the correctness and quality of the semantic tuples.
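The agreement figure quoted above is Cohen's kappa; a small sketch of how it is computed from two annotators' judgments is shown below, with invented example ratings (not the paper's data).

```python
# Cohen's kappa for two annotators judging triple quality; example ratings
# are invented and only illustrate the computation.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under chance, from each rater's label distribution.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum((freq_a[l] / n) * (freq_b[l] / n) for l in labels)
    return (observed - expected) / (1 - expected)

a = ["correct", "correct", "incorrect", "correct", "incorrect"]
b = ["correct", "correct", "incorrect", "incorrect", "incorrect"]
print(round(cohens_kappa(a, b), 2))  # 0.62 for this toy example
```

A kappa of 0.9, as reported, indicates agreement well above what the two evaluators would reach by chance.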
Distributed Semantic Sensor Networks How to Use Semantics and Knowledge Distribution to Integrate Sensor Data of Disparate Data Sources
2007. In this research project we focused on sensor networks and how semantic web technologies and knowledge sharing can make integration of distributed sensor data possible. To achieve this, we first identified the main challenges to address to allow for data integration. A distributed semantic sensor network framework is proposed that uses semantic web techniques and knowledge sharing to address these challenges. We propose that knowledge sharing in a sensor network is a key aspect of data integration in a dynamic environment, since it allows the network to handle changes in the environment. This project is innovative in that it proposes new ways to handle semantic integration in a distributed environment, by using question federation and data conversion on semantically annotated data and questions. It contributes to current research in that our framework enables data integration of distributed sensor data.
Masters student: M.J. van der Veen, Department of Artificial Intelligence, Rijksuniversiteit Groningen, Groningen. Supervisors: Bart Verheij, Department of Artificial Intelligence, Rijksuniversiteit Groningen; Arnoud de Jong, TNO ICT & Telecommunicatie, Groningen.
People: Maarten van der Veen is a masters student at the Department of Artificial Intelligence at the Rijksuniversiteit Groningen in The Netherlands; this thesis is the final work in his masters studies. His main interests besides writing this thesis are doing sports, coaching, travelling and making music. A tenured lecturer/researcher (in Dutch: universitair docent) at the University of Groningen, Department of Artificial Intelligence and a member of the ALICE institute.
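A rough sketch of what question federation with data conversion over semantically annotated sensor sources might look like; the sensor nodes, unit conversion, and averaging strategy are hypothetical and are not the framework proposed in the thesis.

```python
# Hypothetical sketch: federate one question across several sensor nodes and
# convert heterogeneous answers to a common unit before merging.
from dataclasses import dataclass

@dataclass
class Answer:
    sensor_id: str
    value: float
    unit: str  # "C" or "F" in this toy example

def to_celsius(answer: Answer) -> float:
    """Data conversion step: normalize every answer to degrees Celsius."""
    return answer.value if answer.unit == "C" else (answer.value - 32) * 5 / 9

def federate(question, nodes):
    """Forward the same question to every node and merge the converted answers."""
    answers = [answer for node in nodes for answer in node(question)]
    values = [to_celsius(a) for a in answers]
    return sum(values) / len(values) if values else None

# Two toy sensor nodes answering a temperature question in different units.
node_a = lambda q: [Answer("a1", 21.0, "C")]
node_b = lambda q: [Answer("b1", 71.6, "F")]
print(federate("average temperature?", [node_a, node_b]))  # 21.5
```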
Applying the Semantic Web to Computational Chemistry
Neil S. Ostlund, Mirek Sopek, Chemical Semantics Inc., Gainesville, Florida, USA. {ostlund, sopek}@chemicalsemantics.com.
Abstract: Chemical Semantics is a new start-up devoted to bringing the Semantic Web to computational chemistry, with a future goal of covering chemical results of other kinds, including experimental. We have created a prototype test bed that enables computational chemists to publish the results of their computations (for example: structure, energies, and wave functions of ab initio calculations) on servers (powered by semantic triple stores) holding RDF data. In addition, scientists will be able to search across results produced by themselves and other researchers using SPARQL queries and faceted search built on top of them. The semantics of the computational-chemistry-oriented RDF data is defined by an ontology identified as "GC" (Gainesville Core). Gainesville Core defines concepts such as: theoretical models used in computations, molecular systems (as collections of atoms and molecules), molecular arrangements which describe system evolution in time or in phase space, and so on. The poster demonstrates the project during its early stage, including a description and demonstration of the prototype portal created by Chemical Semantics. This paper is a concise description of the poster content. Keywords: computational chemistry, semantic web, linked data, Gainesville Core, ontology.
Chemical Semantics portal main targets. The Chemical Semantics portal, now in its prototype version, has the following long-term goals: 1. Interoperable PUBLISHING of computational chemistry calculations using Linked Data principles; 2. Enhanced search across all published results, with basic reasoning enabled by the new ontology; 3. FEDERATION of published data with existing web-based chemical datasets; 4.
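A hypothetical sketch of the publish-and-query idea using rdflib: a calculation result is stored as RDF triples and retrieved with a SPARQL query. The namespace URI and property names below merely stand in for the Gainesville Core ontology and are not its real terms.

```python
# Publish a calculation result as RDF and query it with SPARQL via rdflib;
# the "gainesville-core" namespace and properties are invented placeholders.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF, XSD

GC = Namespace("http://example.org/gainesville-core#")

g = Graph()
calc = URIRef("http://example.org/calculations/water-hf-631g")
g.add((calc, RDF.type, GC.Calculation))
g.add((calc, GC.hasMolecule, Literal("H2O")))
g.add((calc, GC.levelOfTheory, Literal("HF/6-31G*")))
g.add((calc, GC.totalEnergy, Literal(-76.0107, datatype=XSD.double)))

# Faceted-style lookup: all calculations for a given molecule and their energies.
query = """
PREFIX gc: <http://example.org/gainesville-core#>
SELECT ?calc ?energy WHERE {
    ?calc gc:hasMolecule "H2O" ;
          gc:totalEnergy ?energy .
}
"""
for row in g.query(query):
    print(row.calc, float(row.energy))
```

In the portal the triples would live in a shared triple store rather than an in-memory graph, so the same SPARQL query could run across results published by many researchers.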
RelExt: Relation Extraction Using Deep Learning Approaches for Cybersecurity Knowledge Graph Improvement
Aditya Pingle, Aritran Piplai, Sudip Mittal, Anupam Joshi (University of Maryland, Baltimore County, Baltimore, MD 21250, USA); James Holt and Richard Zak (Laboratory for Physical Sciences). {adiping1, apiplai1, smittal1, [email protected]; {holt, [email protected].
Abstract: Security analysts that work in a 'Security Operations Center' (SoC) play a major role in ensuring the security of the organization. The amount of background knowledge they have about the evolving and new attacks makes a significant difference in their ability to detect attacks. Open source threat intelligence sources, like text descriptions about cyber-attacks, can be stored in a structured fashion in a cybersecurity knowledge graph. A cybersecurity knowledge graph can be paramount in aiding a security analyst to detect cyber threats because it stores a vast range of cyber threat information in the form of semantic triples which can be queried. A semantic triple contains two cybersecurity entities with a relationship between them. In this work, we propose a system to create semantic triples over cybersecurity text, using deep learning approaches to extract possible relationships.
[...] cybersecurity reports, CVE [21], CWE [22], National Vulnerability Datasets [28], and any cybersecurity text publicly available. Various SIEM systems fetch, process, and store much of the cyber threat intelligence available through OSINT sources. Knowledge Graphs (KG) specifically built to store cyber threat intelligence have been used by many cybersecurity informatics systems. These cybersecurity knowledge graphs can not only store but also retrieve data and answer complex queries asked by the security analysts. The dependence of various cybersecurity informatics systems on various knowledge representation schemes makes it imperative that we develop systems that improve these [...]
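A toy sketch of the triple-creation idea: classify the relation expressed between an entity pair and emit a (head, relation, tail) triple. The features, labels, tiny MLP, and example entities below are invented and do not reflect the RelExt architecture or its training data.

```python
# Toy relation classification over entity-pair contexts, emitting KG triples;
# the contexts, labels, and entity names are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Each training item: text around an entity pair, and the relation it expresses.
contexts = [
    "malware exploits the vulnerability in",
    "patch mitigates the vulnerability found in",
    "campaign uses malware to target",
    "update fixes the flaw affecting",
]
labels = ["exploits", "mitigates", "uses", "mitigates"]

model = make_pipeline(
    CountVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
)
model.fit(contexts, labels)

def extract_triple(head, tail, context):
    """Predict the relation for an entity pair and emit a (head, relation, tail) triple."""
    relation = model.predict([context])[0]
    return (head, relation, tail)

print(extract_triple("SampleMalware", "ExampleVulnerability",
                     "malware exploits the vulnerability in"))
```

Triples produced this way can then be asserted into the cybersecurity knowledge graph, which is the "improvement" loop the paper's title refers to.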
FI-WARE Product Vision Front Page Ref
Ref. Ares(2011)1227415 - 17/11/2011. Large-scale Integrated Project (IP) D2.2: FI-WARE High-level Description. Project acronym: FI-WARE. Project full title: Future Internet Core Platform. Contract No.: 285248. Strategic Objective: FI.ICT-2011.1.7 Technology foundation: Future Internet Core Platform. Project Document Number: ICT-2011-FI-285248-WP2-D2.2b. Project Document Date: 15 Nov. 2011. Deliverable Type and Security: Public. Author: FI-WARE Consortium. Contributors: FI-WARE Consortium.
Abstract: This deliverable provides a high-level description of FI-WARE which can help to understand its functional scope and approach towards materialization until a first release of the FI-WARE Architecture is officially released. Keyword list: FI-WARE, PPP, Future Internet Core Platform/Technology Foundation, Cloud, Service Delivery, Future Networks, Internet of Things, Internet of Services, Open APIs.
Contents (Articles): FI-WARE Product Vision front page 1; FI-WARE Product Vision 2; Overall FI-WARE Vision 3; FI-WARE Cloud Hosting 13; FI-WARE Data/Context Management 43; FI-WARE Internet of Things (IoT) Services Enablement 94; FI-WARE Applications/Services Ecosystem and Delivery Framework 117; FI-WARE Security 161; FI-WARE Interface to Networks and Devices (I2ND) 186; Materializing Cloud Hosting in FI-WARE 217; Crowbar 224; XCAT 225; OpenStack Nova 226; System Pools 227; ISAAC 228; RESERVOIR 229; Trusted Compute Pools 230; Open Cloud Computing Interface (OCCI) 231; OpenStack Glance 232; OpenStack Quantum 233; Open...
Thinking Outside the TBox
Multiparty Service Matchmaking as Information Retrieval. David J. Lambert, School of Informatics, University of Edinburgh. Doctor of Philosophy, Centre for Intelligent Systems and their Applications, School of Informatics, University of Edinburgh, 2009.
Abstract: Service-oriented computing is crucial to a large and growing number of computational undertakings. Central to its approach are the open and network-accessible services provided by many different organisations, which in turn enable the easy creation of composite workflows. This leads to an environment containing many thousands of services, in which a programmer or automated composition system must discover and select services appropriate for the task at hand. This discovery and selection process is known as matchmaking. Prior work in the field has conceived the problem as one of sufficiently describing individual services using formal, symbolic knowledge representation languages. We review the prior work, and present arguments for why it is optimistic to assume that this approach will be adequate by itself. With these issues in mind, we examine how, by reformulating the task and giving the matchmaker a record of prior service performance, we can alleviate some of the problems. Using two formalisms (the incidence calculus and the lightweight coordination calculus), along with algorithms inspired by information retrieval techniques, we evolve a series of simple matchmaking agents that learn from experience how to select those services which performed well in the past, while making minimal demands on the service users.
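A toy sketch of experience-based matchmaking: rank candidate services by a smoothed record of past success. This is an invented illustration and not the incidence-calculus or coordination-calculus machinery developed in the thesis.

```python
# Toy experience-based matchmaking: pick the service with the best track record.
from collections import defaultdict

history = defaultdict(lambda: {"success": 0, "failure": 0})

def record(service, succeeded):
    history[service]["success" if succeeded else "failure"] += 1

def score(service, prior=1.0):
    """Laplace-smoothed success rate, so unseen services get a neutral score."""
    h = history[service]
    return (h["success"] + prior) / (h["success"] + h["failure"] + 2 * prior)

def matchmake(candidates):
    """Select the candidate service with the highest smoothed success rate."""
    return max(candidates, key=score)

for outcome in [("svcA", True), ("svcA", True), ("svcB", False), ("svcA", False)]:
    record(*outcome)
print(matchmake(["svcA", "svcB", "svcC"]))  # svcA
```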
An Approach for Knowledge Graph Construction from Spanish Texts
ISSN 1870-4069. Andrea G. Garcia Perez1, Ana B. Rios Alvarado1, Tania Y. Guerrero Melendez1, Edgar Tello Leal1, Jose L. Martinez Rodriguez2. 1 Autonomous University of Tamaulipas, Mexico; 2 CINVESTAV Unidad Tamaulipas, Mexico. [email protected], {arios, tyguerre, [email protected], [email protected].
Abstract: Knowledge Graphs (KGs) represent valuable data sources for the Educational and Learning field, allowing computers and people to process and interpret information in an easier way. In addition, KGs are useful for obtaining information through software systems about any data resource (person, place, organization, among others). KGs are represented through triples composed of entities and semantic relations, commonly obtained from textual resources. However, extracting triples from text is complicated due to linguistic variations, and Spanish has been only slightly approached. This paper presents a method for constructing KGs from textual resources in Spanish. The method is composed of four stages: 1) obtaining documents, 2) identifying entities and their relations, 3) associating entities to linked data resources, and 4) constructing a schema for the KG. The experiments were performed over a set of documents in a general domain and a computer science domain. Our method showed encouraging results for the precision of the entities and semantic relationships identified for the KGs constructed from unstructured texts. Keywords: knowledge graph, construction of graph from texts.
1 Introduction: In recent years the Web has become a global repository that represents a source of knowledge in a wide range of domains and languages, where information is shared and stored in different formats.
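A rough sketch of stage 2 (identifying entities and relations) using a simple subject-verb-object heuristic over a spaCy dependency parse; this is not the paper's method, and it assumes the Spanish model es_core_news_sm is installed.

```python
# Simple subject-verb-object triple candidates from Spanish text with spaCy;
# illustrative heuristic only, not the paper's extraction method.
import spacy

nlp = spacy.load("es_core_news_sm")  # assumes this Spanish model is installed

def extract_triples(text):
    """Return (subject, verb lemma, object) candidates from Spanish sentences."""
    triples = []
    for sent in nlp(text).sents:
        for token in sent:
            if token.pos_ == "VERB":
                subjects = [c for c in token.children if c.dep_ == "nsubj"]
                objects = [c for c in token.children if c.dep_ in ("obj", "dobj")]
                for s in subjects:
                    for o in objects:
                        triples.append((s.text, token.lemma_, o.text))
    return triples

print(extract_triples("Cervantes escribió el Quijote en 1605."))
# e.g. [('Cervantes', 'escribir', 'Quijote')]
```

In a full pipeline such candidates would then be linked to external linked-data resources (stage 3) and organized under a schema (stage 4).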
The Resource Description Framework and Its Schema Fabien Gandon, Reto Krummenacher, Sung-Kook Han, Ioan Toma
To cite this version: Fabien Gandon, Reto Krummenacher, Sung-Kook Han, Ioan Toma. The Resource Description Framework and its Schema. Handbook of Semantic Web Technologies, 2011, 978-3-540-92912-3. hal-01171045. HAL Id: hal-01171045, https://hal.inria.fr/hal-01171045, submitted on 2 Jul 2015. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Fabien L. Gandon, INRIA Sophia Antipolis; Reto Krummenacher, STI Innsbruck; Sung-Kook Han, STI Innsbruck; Ioan Toma, STI Innsbruck.
1. Abstract: RDF is a framework to publish statements on the web about anything. It allows anyone to describe resources, in particular Web resources, such as the author, creation date, subject, and copyright of an image. Any information portal or data-based web site can be interested in using the graph model of RDF to open its silos of data about persons, documents, events, products, services, places, etc. RDF reuses the web approach to identify resources (URI) and to allow one to explicitly represent any relationship between two resources.
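Echoing the abstract's example, a small rdflib sketch describing the author, creation date, subject, and rights of an image; the image URI and values are made up, and Dublin Core terms are one common choice rather than something mandated by RDF.

```python
# RDF statements describing an image resource, using Dublin Core terms;
# the image URI and property values are invented for this illustration.
from datetime import date
from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DCTERMS, XSD

g = Graph()
img = URIRef("http://example.org/images/sunset.jpg")

g.add((img, DCTERMS.creator, Literal("A. Photographer")))
g.add((img, DCTERMS.created, Literal(date(2011, 5, 1), datatype=XSD.date)))
g.add((img, DCTERMS.subject, Literal("sunset over the sea")))
g.add((img, DCTERMS.rights, Literal("© A. Photographer, all rights reserved")))

print(g.serialize(format="turtle"))
```

Because the subject is identified by a URI, any other dataset on the web can add further statements about the same image, which is exactly the "open silos of data" idea the abstract describes.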