Data Quality Centric Application Framework for Big Data


ALLDATA 2016: The Second International Conference on Big Data, Small Data, Linked Data and Open Data (includes KESA 2016)

Venkat N. Gudivada∗, Dhana Rao†, and William I. Grosky‡
∗Department of Computer Science, East Carolina University, USA
†Department of Biology, East Carolina University, USA
‡Department of Computer and Information Science, University of Michigan - Dearborn, USA
email: [email protected], [email protected], and [email protected]

Abstract—Risks associated with data quality in Big Data have wide-ranging adverse implications. Current research in Big Data primarily focuses on the proverbial harvesting of low-hanging fruit, and applications are developed using the Hadoop Ecosystem. In this paper, we discuss the risks and attendant consequences emanating from data quality in Big Data. We propose a data quality centric framework for Big Data applications and describe an approach to implementing it.

Keywords—Big Data; Data Quality; Data Analytics; Application Framework.

I. Introduction

Big Data, which has emerged in the last five years, has wide-ranging implications for society at large as well as for individuals at a personal level. It has the potential to enable groundbreaking scientific discoveries, but also the power for misuse and violation of personal privacy. Big Data poses research challenges and exacerbates data quality problems [1][2].

Though it is difficult to define Big Data precisely, it is often described in terms of five Vs – Volume, Velocity, Variety, Veracity, and Value. Volume refers to the unprecedented scale of the data, and velocity refers to the speed at which it is generated. The heterogeneous nature of data – unstructured, semi-structured, and structured – and the associated data formats constitute the variety dimension. Typically, Big Data goes through several transformations from its inception before reaching its consumers. Veracity refers to data trustworthiness, and data provenance is one way to specify veracity. Finally, the value dimension refers to the unusual insights and actionable plans that are derived from Big Data through analytic processes.

A. Apache Hadoop Ecosystem

Currently, most Big Data research is focused on issues related to volume, velocity, and value. These investigations primarily use the Hadoop Ecosystem, which encompasses the Hadoop Distributed File System (HDFS), a high-performance parallel data processing engine called Hadoop MapReduce, and various tools for specific tasks. For example, Pig is a declarative language for ad hoc analysis. It is used to specify dataflows to extract, transform, and load (ETL), process, and analyze large datasets. Pig generates MapReduce jobs that perform the dataflows and thus provides a high-level abstract interface to MapReduce. Pig Latin, Pig's programming language extension, provides common data manipulation operations such as grouping, joining, and filtering. Hive is a tool for data summarization, ad hoc query execution, and analysis of large datasets stored in HDFS-compatible file systems. In other words, Hive serves as a SQL-based data warehouse for the Hadoop Ecosystem.
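The grouping, joining, and filtering dataflows that Pig Latin and HiveQL express can be illustrated with a small PySpark DataFrame job. The sketch below is not taken from the paper; it assumes a local Spark installation, and the click and user datasets, column names, and threshold are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-dataflow-sketch").getOrCreate()

# Hypothetical extracts standing in for raw HDFS-resident data.
clicks = spark.createDataFrame(
    [("u1", "/home", 120), ("u2", "/cart", 30), ("u1", "/cart", 45)],
    ["user_id", "page", "dwell_ms"])
users = spark.createDataFrame(
    [("u1", "US"), ("u2", "DE")], ["user_id", "country"])

# Filter, join, and group: the same relational-style operations that
# Pig Latin statements or HiveQL queries would run over the cluster.
summary = (clicks.filter(F.col("dwell_ms") > 40)
           .join(users, "user_id")
           .groupBy("country")
           .agg(F.count("*").alias("events"),
                F.avg("dwell_ms").alias("avg_dwell_ms")))

summary.show()
spark.stop()
```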
The other widely used tools in the Hadoop Ecosystem include Cascading, Scalding, and Cascalog. Cascading is a popular high-level Java API that hides many of the complexities of MapReduce programming. Scalding and Cascalog are even higher-level, more concise APIs to Cascading, accessible from Scala and Clojure, respectively. While Scalding enhances Cascading with matrix algebra libraries, Cascalog adds logic programming constructs.

The Hadoop Ecosystem tools for the velocity dimension are Storm and Spark. Storm provides a distributed computational framework for event stream processing. It features an array of spouts specialized for receiving streaming data from disparate data sources, incremental computation, and computing metrics on rolling data windows in real time. Like Storm, Spark supports real-time stream data processing, and it provides several additional libraries for database access, graph algorithms, and machine learning.

Amazon Web Services (AWS) Elastic MapReduce (EMR) is a cloud-hosted commercial offering of the Hadoop Ecosystem from Amazon. Microsoft's StreamInsight is a commercial product for stream data processing with a focus on complex event processing applications.
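As a concrete illustration of computing metrics on rolling data windows, the sketch below uses PySpark Structured Streaming, a newer Spark API than the Spark Streaming library the paper refers to. The built-in rate test source stands in for a real event stream such as sensor readings; the window and slide durations are arbitrary choices, not values from the paper.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, window

spark = SparkSession.builder.appName("rolling-window-sketch").getOrCreate()

# The built-in "rate" test source emits (timestamp, value) rows and stands in
# for a real stream of events arriving from spouts, sensors, or message queues.
events = spark.readStream.format("rate").option("rowsPerSecond", 10).load()

# Average of `value` over a 30-second window that slides every 10 seconds.
rolling = (events
           .groupBy(window(events.timestamp, "30 seconds", "10 seconds"))
           .agg(avg("value").alias("avg_value")))

query = (rolling.writeStream
         .outputMode("complete")   # re-emit all window aggregates on each trigger
         .format("console")
         .start())
query.awaitTermination()
```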
B. Industry Driven Big Data Research

Big Data research is primarily industry driven. Naturally, the focus is on the proverbial harvesting of the low-hanging fruit due to economic considerations. Research investigations in the variety (aka data heterogeneity) and veracity dimensions are in their initial phases, and tools are yet to emerge. However, data heterogeneity has been studied since the 1980s in the database context, where the focus has been on database schema integration and distributed query processing. These investigations assumed that the data is structured and that all the component databases use one of three data models – relational, network, or hierarchical. In the Big Data context, however, the data is predominantly unstructured, and semi-structured and structured data is derived using a processing framework such as Hadoop MapReduce.

Typically, Big Data is obtained from multiple vendor sources. Maintaining data provenance [3] – a record of the original data sources and the subsequent transformations applied to the data – plays a central role in data quality assurance. Veracity investigations are beginning to appear under the umbrella term data provenance [4][5].
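A provenance record of this kind can be as simple as the original source location plus an ordered chain of the transformations applied. The sketch below is a hypothetical illustration, not the paper's framework; the record structure, field names, and example values are assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class TransformationStep:
    """One transformation applied after acquisition (e.g., cleaning or joining)."""
    name: str
    tool: str
    executed_at: str

@dataclass
class ProvenanceRecord:
    """Original data source plus the ordered chain of applied transformations."""
    dataset_id: str
    source_uri: str
    acquired_at: str
    steps: List[TransformationStep] = field(default_factory=list)

    def add_step(self, name: str, tool: str) -> None:
        self.steps.append(TransformationStep(
            name=name, tool=tool,
            executed_at=datetime.now(timezone.utc).isoformat()))

# Hypothetical example: track where a raw feed came from and what was done to it.
record = ProvenanceRecord(
    dataset_id="sensor-feed-2016-03",
    source_uri="hdfs:///raw/sensors/2016/03",
    acquired_at="2016-03-01T00:00:00+00:00")
record.add_step("deduplicate", "Pig")
record.add_step("schema-normalize", "Hive")
print(record)
```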
C. Data Quality Issues in Big Data

The value facet of Big Data critically depends on the upstream data acquisition, cleaning, and transformation (ACT) tasks. Especially with the emerging Internet of Things (IoT) technologies, more and more data is machine generated. While some IoT data is generated under controlled conditions, most of it is created in environments that are subject to fluctuations, and the quality of the data can therefore vary in an unpredictable manner. For example, the operating environment of wireless sensor and smart camera networks is subject to the weather.

Data quality in ACT tasks has a direct bearing on the volume, velocity, variety, and veracity facets of Big Data. Though data quality has been studied for over two decades, those investigations focused on database and information systems. Data quality assurance is the biggest challenge for Big Data management. Currently, data quality assurance requires intensive manual cleaning efforts, which is neither feasible nor economically viable.

In this paper, we discuss data quality issues in the Big Data context. We propose a data quality centric reference framework for developing Big Data applications. We also describe how this framework can be implemented using open source tools.

The remainder of the paper is structured as follows. Risks and implications of data quality are discussed in Section II. The data quality centric application framework for Big Data is described in Section III. Considerations for implementing this framework are discussed in Section IV. Finally, Section V concludes the paper.

II. Risks and Implications of Data Quality

A. Data Science

Big Data is a double-edged sword. On the one hand, it enables scientists to overcome problems associated with small data samples. For example, it enables relaxing the assumptions of theoretical models, helps avoid over-fitting models to the training data, deals effectively with noisy training data, and provides ample test data to validate models.

Halevy, Norvig, and Pereira [6] argue that the accurate selection of a mathematical model becomes less important when it is compensated for by big enough data. This insight is particularly relevant for tasks that are ill-posed for mathematically precise algorithmic solutions. Such tasks abound in natural language processing, including language modeling, part-of-speech tagging, named entity recognition, and parsing. Ill-posed tasks also occur in applications such as computer vision, autonomous vehicle navigation, and image and video processing.

Big Data enables a new paradigm for solving ill-posed problems: managing the complexity of the problem domain by building simple but high-quality models that harness the power of massive data. For example, in the imaging domain, an image can be recovered by simply averaging successive image frames that are heavily corrupted by Gaussian (normally distributed) noise.
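A small numerical illustration of that frame-averaging example, assuming independent zero-mean Gaussian noise on each frame: averaging N frames shrinks the residual noise standard deviation by roughly a factor of sqrt(N). The synthetic image, noise level, and frame count below are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic ground-truth "image" with values in [0, 1]; stands in for a real frame.
truth = rng.random((64, 64))

n_frames, sigma = 100, 0.5  # heavy, independent, zero-mean Gaussian noise per frame
frames = truth + rng.normal(0.0, sigma, size=(n_frames, 64, 64))

recovered = frames.mean(axis=0)  # simple frame averaging

# Residual noise of the average is roughly sigma / sqrt(n_frames): 0.5 -> ~0.05.
print("single-frame RMSE:", np.sqrt(((frames[0] - truth) ** 2).mean()))
print("averaged RMSE:    ", np.sqrt(((recovered - truth) ** 2).mean()))
```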
B. Big Data Risks

On the flip side, Big Data poses several challenges for personal privacy. It provides opportunities for using data in ways that differ from the original intention [7]. Cases of Big Data misuse abound. González et al. [8] describe how individual human mobility patterns can be accurately predicted. Their study tracked the position history of 100,000 anonymized mobile phone users over a six-month period. Contrary to existing theories, the study found that human trajectories show a high degree of temporal and spatial regularity and that individual travel patterns collapse into a single spatial probability distribution. They conclude that, despite the diversity of people's travel histories, humans follow simple, reproducible travel patterns. In other words, there is a correlation between a person's spatio-temporal history and their identity. Another case in point is how the retailer Target used its customers' purchase history and other information to accurately