Bulletin of the Technical Committee on Data Engineering

June 2018, Vol. 41 No. 2
IEEE Computer Society

Letters
Letter from the Editor-in-Chief .......................................................... David Lomet 1
Letter from the Special Issue Editor ..................................................... Guoliang Li 2

Special Issue on Large-scale Data Integration
Data Integration: The Current Status and the Way Forward .......... Michael Stonebraker, Ihab F. Ilyas 3
BigGorilla: An Open-Source Ecosystem for Data Preparation and Integration
    .......... Chen Chen, Behzad Golshan, Alon Halevy, Wang-Chiew Tan, AnHai Doan 10
Self-Service Data Preparation: Research to Practice .......... Joseph M. Hellerstein, Jeffrey Heer, Sean Kandel 23
Toward a System Building Agenda for Data Integration (and Data Science)
    .......... AnHai Doan, Pradap Konda, Paul Suganthan G.C., Adel Ardalan, Jeffrey R. Ballard, Sanjib Das,
    Yash Govind, Han Li, Philip Martinkus, Sidharth Mudgal, Erik Paulson, Haojun Zhang 35
Explaining Data Integration .................................... Xiaolan Wang, Laura Haas, Alexandra Meliou 47
Making Open Data Transparent: Data Discovery on Open Data .......... Renée J. Miller, Fatemeh Nargesian,
    Erkang Zhu, Christina Christodoulakis, Ken Q. Pu, Periklis Andritsos 59
Big Data Integration for Product Specifications .......... Luciano Barbosa, Valter Crescenzi, Xin Luna Dong,
    Paolo Merialdo, Federico Piai, Disheng Qiu, Yanyan Shen, Divesh Srivastava 71
Integrated Querying of SQL Database Data and S3 Data in Amazon Redshift .......... Mengchu Cai, Martin Grund,
    Anurag Gupta, Fabian Nagel, Ippokratis Pandis, Yannis Papakonstantinou, Michalis Petropoulos 82
Robust Entity Resolution Using a CrowdOracle .......... Donatella Firmani, Sainyam Galhotra, Barna Saha,
    Divesh Srivastava 91
Human-in-the-loop Rule Learning for Data Integration ........................... Ju Fan, Guoliang Li 104

Conference and Journal Notices
ICDE 2019 Conference ........................................................................ 116
TCDE Membership Form ............................................................... back cover

Editorial Board

Editor-in-Chief: David B. Lomet, Microsoft Research, One Microsoft Way, Redmond, WA 98052, USA ([email protected])

Associate Editors:
Philippe Bonnet, Department of Computer Science, IT University of Copenhagen, 2300 Copenhagen, Denmark
Joseph Gonzalez, EECS at UC Berkeley, 773 Soda Hall, MC-1776, Berkeley, CA 94720-1776
Guoliang Li, Department of Computer Science, Tsinghua University, Beijing, China
Alexandra Meliou, College of Information & Computer Sciences, University of Massachusetts, Amherst, MA 01003

Distribution: Brookes Little, IEEE Computer Society, 10662 Los Vaqueros Circle, Los Alamitos, CA 90720 ([email protected])

TCDE Executive Committee

Chair: Xiaofang Zhou, The University of Queensland, Brisbane, QLD 4072, Australia ([email protected])
Executive Vice-Chair: Masaru Kitsuregawa, The University of Tokyo, Tokyo, Japan
Secretary/Treasurer: Thomas Risse, L3S Research Center, Hanover, Germany

Committee Members:
Amr El Abbadi, University of California, Santa Barbara, California 93106
Malu Castellanos, Teradata, Santa Clara, CA 95054
Xiaoyong Du, Renmin University of China, Beijing 100872, China
Wookey Lee, Inha University, Inchon, Korea
Renée J. Miller, University of Toronto, Toronto ON M5S 2E4, Canada
Erich Neuhold, University of Vienna, A 1080 Vienna, Austria
Kyu-Young Whang, Computer Science Dept., KAIST, Daejeon 305-701, Korea

Liaison to SIGMOD and VLDB: Ihab Ilyas, University of Waterloo, Waterloo, Canada N2L3G1

The TC on Data Engineering
Membership in the TC on Data Engineering is open to all current members of the IEEE Computer Society who are interested in database systems. The TCDE web page is http://tab.computer.org/tcde/index.html.

The Data Engineering Bulletin
The Bulletin of the Technical Committee on Data Engineering is published quarterly and is distributed to all TC members. Its scope includes the design, implementation, modelling, theory, and application of database systems and their technology. Letters, conference information, and news should be sent to the Editor-in-Chief. Papers for each issue are solicited by and should be sent to the Associate Editor responsible for the issue. Opinions expressed in contributions are those of the authors and do not necessarily reflect the positions of the TC on Data Engineering, the IEEE Computer Society, or the authors' organizations. The Data Engineering Bulletin web site is at http://tab.computer.org/tcde/bull_about.html.

Letter from the Editor-in-Chief

About the Bulletin
This June's issue marks the start of the editorial tenure of four new editors. I am both proud and humbled by my continued success in recruiting top-notch technical people when I invite them to be Bulletin editors. This success is, indeed, the "secret sauce" behind the Bulletin's continued success. With the above as prelude, I am proud to announce that the new Bulletin editors are, in alphabetical order, Philippe Bonnet of ITU Copenhagen, Joseph Gonzalez of UC Berkeley, Guoliang Li of Tsinghua University, and Alexandra Meliou of UMass Amherst. Their contact information appears on the inside cover of this issue. This talented team will be bringing you Bulletin issues over the next two years.

The Current Issue
There is a bias among some in the research community toward solvable problems: you create a program or system, and it either solves the problem or it does not. Meanwhile, in the real user community, technical people struggle with tasks that have to be done even though it is impractically difficult or impossible to provide a technical approach that simply solves "the problem". Data integration is such a problem. It is important because there are tens, hundreds, even thousands of data sources that all too frequently need to be "integrated" in some way to solve a real-world problem. This problem is sufficiently hard and costly that incremental improvements can be very important. For example, reducing the amount of human involvement in the data integration activity can produce important cost savings and even lead to better results.

Data integration is the subject of the current issue. A quick look at the table of contents will show you that data integration has become a "first class" problem for our field. Authors come from universities, large companies, former start-ups, etc., with the divisions between these homes in flux. This illustrates the excitement in the area. Guoliang Li has reached out to these communities in his effort to give you this survey of the field. The issue is a snapshot in time of this challenging area, where the state of the art is improving continually. I want to thank Guoliang for a very successful issue on an important topic.

ICDE 2019
The current issue includes an ICDE 2019 call for papers on page 116. ICDE is a premier database conference and the flagship conference of the Technical Committee on Data Engineering. This year's submission deadlines are earlier than in the past, and there is a two-round submission process.
The intent is to give authors earlier feedback and a better opportunity to publish their work earlier in the year. ICDE 2019 is in Macau, which has, via its adjacency to Hong Kong, great air connections to the rest of the world. I hope to see you there.

David Lomet
Microsoft Corporation

Letter from the Special Issue Editor

Data integration has been a long-standing problem for the past few decades. Since more than 80% of the time in a data science project is spent on data integration, it has become an indispensable part of data analysis. In recent years, tremendous progress on data integration has been made, from systems to algorithms. In this issue, we review the challenges of data integration, survey data integration systems (e.g., Tamr, BigGorilla, Trifacta, PyData), report recent progress (e.g., explaining data integration, data discovery on open data, a big data integration pipeline for product specifications, integrated querying of table data and S3 data, crowd-based entity resolution, and human-in-the-loop rule learning), and discuss the future of this field.

The first paper, "Data Integration: The Current Status and the Way Forward" by Michael Stonebraker and Ihab F. Ilyas, discusses scalable data integration challenges based on the experience at Tamr, and highlights future directions for building end-to-end scalable data integration systems, such as data discovery, human involvement, data exploration, and model reusability.

The second paper, "BigGorilla: An Open-Source Ecosystem for Data Preparation and Integration" by Alon Halevy et al., presents BigGorilla, an open-source resource for data preparation and integration. The paper describes four packages: an information extraction tool, a schema matching tool, and two entity matching tools.

The third paper, "Self-Service Data Preparation: Research to Practice" by Joseph M. Hellerstein et al., reviews self-service data preparation, which aims to enable the people who know the data best to prepare it. The paper discusses the key tasks in this problem and reviews how the Trifacta system handles these tasks.

The fourth paper, "Toward a System Building Agenda for Data Integration (and Data Science)" by AnHai Doan et al., advocates building data integration systems by extending the PyData ecosystem and developing more Python packages to solve data integration problems. The paper provides an integrated agenda of research, system building, education, and outreach, and describes ongoing work at Wisconsin.

The fifth paper, "Explaining Data Integration" by Laura Haas et al., reviews existing data integration systems with respect to their ability to derive explanations. The paper presents a new classification of data integration systems by their explainability and discusses the characteristics of systems within these classes. The authors also present a vision of the desired properties of future data integration systems with respect to explanations.

The sixth paper, "Making Open Data Transparent: Data Discovery on Open Data" by Renée J. Miller et al., discusses the problem of data discovery on open data, e.g., open government data. The paper considers three important data discovery problems: finding joinable tables, finding unionable tables, and creating an organization over a massive collection of tables.

The seventh paper, "Big Data Integration for Product Specifications" by Divesh Srivastava et al., presents an end-to-end big data integration pipeline for product specifications.
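The data discovery problems named in the sixth paper are often framed in terms of value overlap between columns: a candidate table is "joinable" with a query table if some candidate column contains most of the query column's values. As a rough illustration of that framing only — not the algorithms of the paper — here is a minimal Python sketch; the table layout, the names, and the 0.8 threshold are invented for the example.

```python
# Hedged sketch: score candidate columns by how much of a query column's
# value set they contain, a common notion of "joinability" in data discovery.
# All table/column names and the threshold are hypothetical.

def containment(query_col, candidate_col):
    """Fraction of the query column's distinct values found in the candidate column."""
    q = set(query_col)
    return len(q & set(candidate_col)) / len(q) if q else 0.0

def find_joinable(query_col, tables, threshold=0.8):
    """Return (table, column, score) triples whose column covers most of query_col."""
    matches = []
    for tname, columns in tables.items():          # tables: {name: {column: values}}
        for cname, values in columns.items():
            score = containment(query_col, values)
            if score >= threshold:
                matches.append((tname, cname, score))
    return sorted(matches, key=lambda m: -m[2])    # best-covered columns first

# Toy usage: two open datasets that share country codes.
tables = {
    "gdp_2018":   {"country":  ["US", "CA", "MX", "BR"], "gdp": [20.5, 1.7, 1.2, 1.9]},
    "population": {"iso_code": ["US", "CA", "MX"],       "pop": [327, 37, 126]},
}
print(find_joinable(["US", "CA", "MX"], tables))
```

Over massive table collections, exact set intersections like this are typically replaced by approximate sketches and indexes; the containment score above is the quantity such structures estimate.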
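Similarly, the crowd-based entity resolution work mentioned in the overview revolves around asking humans as few match questions as possible. The sketch below illustrates only the classic transitivity idea (if A matches B and B matches C, the A-C question never needs to be asked), not the CrowdOracle strategies themselves; the oracle here is simulated with a hypothetical ground-truth labelling.

```python
# Hedged sketch: pairwise entity resolution against a (simulated) crowd oracle,
# using union-find so questions implied by earlier answers are skipped.
from itertools import combinations

def crowd_er(records, oracle):
    """Cluster records by asking oracle(a, b) -> bool, skipping pairs whose
    answer is implied by confirmed matches or by a recorded non-match
    between the two current clusters."""
    parent = {r: r for r in records}

    def find(x):                         # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    non_match = set()                    # cluster-root pairs declared non-matching
    questions = 0
    for a, b in combinations(records, 2):
        ra, rb = find(a), find(b)
        if ra == rb or (ra, rb) in non_match or (rb, ra) in non_match:
            continue                     # answer already known, no question asked
        questions += 1
        if oracle(a, b):
            parent[ra] = rb              # merge the two clusters
        else:
            non_match.add((ra, rb))
    clusters = {}
    for r in records:
        clusters.setdefault(find(r), []).append(r)
    return list(clusters.values()), questions

# Hypothetical usage: 5 records form 10 pairs, but only 5 questions get asked.
truth = {"IBM": 1, "I.B.M.": 1, "Intl. Business Machines": 1,
         "Microsoft": 2, "MSFT": 2}
clusters, asked = crowd_er(list(truth), lambda a, b: truth[a] == truth[b])
print(clusters, asked)
```

Real crowd answers are noisy, so published methods layer answer redundancy and question-ordering strategies on top of this basic skeleton.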