Linked Web APIs Dataset: Web APIs Meet Linked Data

Semantic Web 1 (2015) 1–5. IOS Press.

Editor(s): Rinke Hoekstra, Universiteit van Amsterdam, The Netherlands
Solicited review(s): Christoph Lange, University of Bonn, Germany; Enrico Daga, KMI, The Open University, UK; Tobias Kuhn, VU Amsterdam, The Netherlands

Milan Dojchinovski and Tomas Vitvar
Web Intelligence Research Group, Faculty of Information Technology, Czech Technical University in Prague
E-mail: {milan.dojchinovski,tomas.vitvar}@fit.cvut.cz

Abstract. Web APIs have enjoyed a significant increase in popularity and usage over the last decade. They have become the core technology for exposing functionalities and data. Nevertheless, due to the lack of semantic Web API descriptions, their discovery, sharing and integration, as well as the assessment of their quality and consumption, are limited. In this paper, we present the Linked Web APIs dataset, an RDF dataset with semantic descriptions of Web APIs. It provides semantic descriptions for 11,339 Web APIs, 7,415 mashups and 7,717 developer profiles, which makes it the largest available dataset from the Web APIs domain. The dataset captures provenance, temporal, technical, functional and non-functional aspects. In addition, we describe the Linked Web APIs Ontology, a minimal model which builds on top of several well-known ontologies. The dataset has been interlinked and published according to the Linked Data principles. Finally, we describe several possible usage scenarios for the dataset and show its potential.

Keywords: Web APIs, Linked Data, Web services, Linked Web APIs, ontology

1. Introduction

Web APIs have become first-class citizens on the Web and the core functionality of many Web applications. Targeting the developer audience, they lower the entry barriers for accessing valuable enterprise data and functionalities. Back in late 2008, ProgrammableWeb.com (http://www.programmableweb.com), the largest Web API and mashup directory, reported only 1,000 Web APIs. This count increased to 5,000 APIs in February 2014 and to over 13,000 APIs in June 2015. There are several benefits to having these Web API descriptions provided as Linked Data. The Web API descriptions are contextualized; they can be referenced, re-used and combined. The Web APIs data is linked, so API consumers can effectively discover new Web APIs. Last but not least, on the one hand developers can benefit from sophisticated queries for the discovery and selection of APIs, and on the other hand Web API providers can execute queries to gain better insight into, and analysis results from, the Web APIs ecosystem.

In order to achieve these goals, we have developed the Linked Web APIs dataset. It provides information about Web APIs, mashups which utilize Web APIs in compositions, and mashup developers. The primary source for the dataset is the ProgrammableWeb.com directory, which acts as a central repository for Web API descriptions. The dataset re-uses several well-known ontologies developed by the Semantic Web community. In order to conform to the Linked Data principles (http://www.w3.org/DesignIssues/LinkedData.html), we have also linked the dataset with four central LOD datasets: DBpedia (http://dbpedia.org/), Freebase (https://www.freebase.com/), LinkedGeoData (http://linkedgeodata.org/) and GeoNames (http://www.geonames.org/).

The remainder of this paper is structured as follows. Section 2 describes the source of information and how the data was collected. Section 3 describes the developed ontology for modelling relevant Web API information. The creation of the Linked Web APIs dataset and its technical details are described in Section 4. The approach for interlinking the dataset with other LOD datasets is described in Section 5. Section 6 discusses the quality of the ontology and the dataset. Section 7 presents selected use cases and the results from a survey on the potential and usefulness of the dataset. Section 8 discusses related vocabularies and potential data sources. Finally, Section 9 concludes the paper.

2. The Data Source

In our work, we have considered ProgrammableWeb as the primary source of information for creating the dataset. It adopts characteristics of a social Web platform where Web API providers can publish and share information about the Web APIs they offer and consequently increase their visibility. The API directory also allows developers to find appropriate APIs for their projects, or to learn from showcases of existing mashup applications.

The implemented knowledge extraction process consists of four steps: (1) parsing and extraction of valuable information from pages describing Web APIs, mashups and developers, (2) pre-processing, cleanup and consolidation of the information, (3) linking with LOD resources, and (4) lifting into RDF and publishing the data as Linked Data.

An example of a Web page which describes a Web API is the one describing the Twitter API (http://www.programmableweb.com/api/twitter). For each Web API we extracted its title, a short summary describing its functionality, the assigned tags and categories, technical information such as the supported formats and protocols, and non-functional properties such as its homepage, usage limits, usage fees, security, etc. Similarly, for each mashup we extracted its title, a short free-text description of its functionality, the assigned tags and the homepage. From each page which describes a developer, we extracted the username, homepage and a short bio. The city and country of residence, the given and family name and the gender were extracted only if this information was publicly available.

We also captured the relationships between the Web APIs, mashups and developers. In other words, for each mashup we extracted the list of Web APIs used by the mashup, and for each developer the list of mashups they created. The dataset also captures the temporal aspects: the creation time of the Web APIs and mashups and the time a user registered their profile.

In order to collect the data, we implemented a script which systematically browses and parses the relevant pages. The parsing mechanism has been implemented using the jsoup Java HTML parser (http://jsoup.org/). The crawler follows proper etiquette, with the crawl delay configured to one page every four seconds.
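To make the collection step concrete, the sketch below shows how a single ProgrammableWeb API page could be fetched and parsed with jsoup while respecting the four-second crawl delay. Only the use of jsoup, the example page and the delay are taken from the paper; the CSS selectors and extracted field names are hypothetical placeholders, since the actual page structure and the authors' crawler code are not given here.

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import java.util.List;

public class ProgrammableWebCrawlerSketch {

    // Crawl delay stated in the paper: one page every four seconds.
    private static final long CRAWL_DELAY_MS = 4000;

    public static void main(String[] args) throws Exception {
        // Example page mentioned in the text (the Twitter API description).
        List<String> pages = List.of("http://www.programmableweb.com/api/twitter");

        for (String url : pages) {
            Document doc = Jsoup.connect(url)
                    .userAgent("linked-web-apis-crawler-sketch")
                    .get();

            // Hypothetical selectors; the real ProgrammableWeb markup is not
            // documented in the paper and would need to be inspected.
            String title = doc.select("h1.node-title").text();
            String summary = doc.select("div.api-description").text();
            List<String> tags = doc.select("div.tags a").eachText();

            System.out.printf("%s | %s | %s%n", title, summary, tags);

            // Politeness: wait before requesting the next page.
            Thread.sleep(CRAWL_DELAY_MS);
        }
    }
}

A full crawler would additionally walk the directory's listing pages to enumerate all API, mashup and developer URLs before parsing them.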
3. The Ontology

The Linked Web APIs ontology (http://linked-web-apis.fit.cvut.cz/ns/core/index.html) is a minimal model that captures the most relevant information related to Web APIs and mashups. The ontology builds on top of existing, well-established ontologies and extends them where appropriate. The selection of the ontologies was driven by the following four crucial requirements, illustrated by the sketch that follows the list:

– Provenance: It is important to keep information about Who (developers) created What (mashups) and How (using which APIs). In addition, information about the APIs a provider offers needs to be captured as well.

– Functional and Non-functional Properties: What functionality a Web API or mashup offers is highly important, as are its usage limits and fees and the supported security and authentication mechanisms.

– Technical Properties: Information about the supported protocols, formats and the Web API endpoint location is equally important, as it allows a Web API consumer to search only for APIs with the preferred technical capabilities.

– Temporal Information: When a mashup or Web API was created is valuable information as well, for example to analyze recent trends in the API ecosystem or to discover the most recent Web APIs or mashups.
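As an illustration of how these four aspects can be expressed with the ontology, the following sketch uses Apache Jena to describe a fictitious Web API. The class and property names are taken from Fig. 1 below (the figure declares the lwapis prefix for the core namespace, while the running text uses lso for what appears to be the same vocabulary); the example.org resource IRIs, the literal values and the choice of a plain string for the usage limit are invented for this example.

import org.apache.jena.datatypes.xsd.XSDDatatype;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.vocabulary.DCTerms;
import org.apache.jena.vocabulary.RDF;

public class WebApiDescriptionSketch {

    static final String LWAPIS = "http://linked-web-apis.fit.cvut.cz/ns/core#";
    static final String PROV   = "http://www.w3.org/ns/prov#";

    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();
        m.setNsPrefix("lwapis", LWAPIS);
        m.setNsPrefix("prov", PROV);
        m.setNsPrefix("dcterms", DCTerms.NS);

        // All example.org IRIs and values below are invented for illustration.
        m.createResource("http://example.org/api/acme-maps")
                .addProperty(RDF.type, m.createResource(LWAPIS + "WebAPI"))
                // Functional and non-functional aspects.
                .addProperty(DCTerms.title, "ACME Maps API")
                .addProperty(m.createProperty(LWAPIS, "assignedCategory"),
                        m.createResource("http://example.org/category/Mapping"))
                .addProperty(m.createProperty(LWAPIS, "usageLimits"), "5000 calls per day")
                // Technical aspects.
                .addProperty(m.createProperty(LWAPIS, "supportedFormat"),
                        m.createResource("http://example.org/format/JSON"))
                .addProperty(m.createProperty(LWAPIS, "endpoint"),
                        m.createResource("http://api.example.org/maps"))
                // Temporal aspect.
                .addProperty(m.createProperty(PROV, "generatedAtTime"),
                        m.createTypedLiteral("2015-06-01T00:00:00Z", XSDDatatype.XSDdateTime))
                // Provenance aspect.
                .addProperty(m.createProperty(PROV, "wasAttributedTo"),
                        m.createResource("http://example.org/provider/acme"));

        m.write(System.out, "TURTLE");
    }
}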
[Fig. 1. The Linked Web APIs Ontology. The diagram relates the classes lwapis:WebAPI, lwapis:Mashup, Category, Tag, DataFormat, Protocol, prov:Activity and the prov/foaf Agent class through properties such as lwapis:assignedCategory, lwapis:assignedTag, lwapis:supportedFormat, lwapis:supportedProtocol, lwapis:usedAPI, lwapis:endpoint, lwapis:rating, lwapis:usageFees, lwapis:usageLimits, lwapis:requiresDeveloperKey, lwapis:requiresAccount, lwapis:requiresSSL, dcterms:title, dcterms:description, dcterms:modified, dcterms:creator, dcterms:publisher, prov:generatedAtTime, prov:wasGeneratedBy, prov:wasAssociatedWith and prov:wasAttributedTo. Mashup and WebAPI are subclasses of prov:Entity, WebAPI additionally of msm:Service, and DataFormat and Protocol of wl:NonFunctionalParameter; the declared prefixes are lwapis, dcterms, rdfs, foaf, prov, msm and wl.]

Figure 1 shows the overall Linked Web APIs ontology. The ontology contains three central classes: lso:WebAPI, which describes Web APIs; lso:Mashup, which describes mashup compositions utilizing one or more Web APIs; and lso:Agent, which represents all kinds of entities involved in the creation and/or consumption of Web APIs and mashups. […] hydra:ApiDocumentation class from the Hydra vocabulary. We introduce the lso:usedAPI property, which refines the semantics of the prov:used property so that it can be used to explicitly identify the usage of a Web API in a mashup creation. The temporal information about the creation time of a mashup or Web API is expressed using the prov:generatedAtTime property. It represents the time when the API documentation was generated within the snapshot. Since this is the first official snapshot of the dataset, the value of this property indicates the first version of each resource. Any changes in the following snapshots will be captured with the […]
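Finally, as a sketch of the kind of discovery and selection queries mentioned in the introduction, the example below loads a local copy of the dataset into Apache Jena and selects the Web APIs that support a given data format. The dump file name is a placeholder, the class and property IRIs follow Fig. 1, and the assumption that data formats carry plain rdfs:label values such as "JSON" is mine rather than the paper's.

import org.apache.jena.query.QueryExecution;
import org.apache.jena.query.QueryExecutionFactory;
import org.apache.jena.query.QuerySolution;
import org.apache.jena.query.ResultSet;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.RDFDataMgr;

public class ApiDiscoverySketch {
    public static void main(String[] args) {
        // Placeholder file name for a locally downloaded copy of the dataset.
        Model dataset = RDFDataMgr.loadModel("linked-web-apis-dump.ttl");

        String query =
            "PREFIX lwapis:  <http://linked-web-apis.fit.cvut.cz/ns/core#>\n" +
            "PREFIX dcterms: <http://purl.org/dc/terms/>\n" +
            "PREFIX rdfs:    <http://www.w3.org/2000/01/rdf-schema#>\n" +
            "SELECT ?api ?title WHERE {\n" +
            "  ?api a lwapis:WebAPI ;\n" +
            "       dcterms:title ?title ;\n" +
            "       lwapis:supportedFormat ?format .\n" +
            "  ?format rdfs:label \"JSON\" .\n" +   // assumed labelling convention
            "}";

        try (QueryExecution qe = QueryExecutionFactory.create(query, dataset)) {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.getResource("api") + "  " + row.getLiteral("title"));
            }
        }
    }
}

Analogous queries over lwapis:usedAPI and prov:generatedAtTime would support the mashup-composition and trend analyses that the provenance and temporal requirements call for.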