Superset Documentation


Apache Superset Dev

May 12, 2020

CONTENTS

1 Superset Resources
2 Apache Software Foundation Resources
3 Overview
  3.1 Features
  3.2 Databases
  3.3 Screenshots
  3.4 Contents
  3.5 Indices and tables

Apache Superset (incubating) is a modern, enterprise-ready business intelligence web application.

Important: Disclaimer: Apache Superset is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.

Note: Apache Superset, Superset, Apache, the Apache feather logo, and the Apache Superset project logo are either registered trademarks or trademarks of The Apache Software Foundation in the United States and other countries.

CHAPTER ONE: SUPERSET RESOURCES

• Versioned releases of this documentation: https://readthedocs.org/projects/apache-superset/
• Superset's GitHub repository; note that we use GitHub for issue tracking
• Superset's contribution guidelines and code of conduct, also on GitHub
• Our mailing list archives; to subscribe, send an email to [email protected]
• Join our Slack

CHAPTER TWO: APACHE SOFTWARE FOUNDATION RESOURCES

• The Apache Software Foundation website
• Current events
• License
• Thanks to the ASF's sponsors
• Sponsor Apache!

CHAPTER THREE: OVERVIEW

3.1 Features

• A rich set of data visualizations
• An easy-to-use interface for exploring and visualizing data
• Create and share dashboards
• Enterprise-ready authentication with integration with major authentication providers (database, OpenID, LDAP, OAuth, and REMOTE_USER through Flask AppBuilder)
• An extensible, high-granularity security/permission model allowing intricate rules on who can access individual features and datasets
• A simple semantic layer, allowing users to control how data sources are displayed in the UI by defining which fields should show up in which drop-down and which aggregations and function metrics are made available to the user
• Integration with most SQL-speaking RDBMS through SQLAlchemy
• Deep integration with Druid.io

3.2 Databases

The following RDBMS are currently supported:

• Amazon Athena
• Amazon Redshift
• Apache Drill
• Apache Druid
• Apache Hive
• Apache Impala
• Apache Kylin
• Apache Pinot
• Apache Spark SQL
• BigQuery
• ClickHouse
• CockroachDB
• Dremio
• Elasticsearch
• Exasol
• Google Sheets
• Greenplum
• IBM Db2
• MySQL
• Oracle
• PostgreSQL
• Presto
• Snowflake
• SQLite
• SQL Server
• Teradata
• Vertica
• Hana

Other database engines with a proper DB-API driver and SQLAlchemy dialect should be supported as well, as sketched below.
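Since connectivity goes through SQLAlchemy, adding a database typically means installing its DB-API driver in Superset's environment and registering a SQLAlchemy URI for it. A minimal sketch of the URI shapes involved; the hosts, credentials, and database names here are placeholders, not values from this documentation:

    # generic form: dialect+driver://username:password@host:port/database
    postgresql+psycopg2://user:secret@dbhost:5432/analytics
    mysql+pymysql://user:secret@dbhost:3306/analytics
    # SQLite only needs a file path:
    sqlite:////path/to/file.db

The same URI format is used both for Superset's own metadata database and for the data sources it queries.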
3.3 Screenshots

[Screenshots in the original document: a bank dashboard, the Explore view, SQL Lab, and a deck.gl dashboard.]

3.4 Contents

3.4.1 Installation & Configuration

Getting Started

Superset has deprecated support for Python 2.* and supports only ~=3.6 to take advantage of newer Python features and to reduce the burden of supporting previous versions. We run our test suite against 3.6, but 3.7 is fully supported as well.

Cloud-native!

Superset is designed to be highly available. It is "cloud-native" in that it has been designed to scale out in large, distributed environments and works well inside containers. While you can easily test drive Superset on a modest setup or simply on your laptop, there's virtually no limit to scaling out the platform.

Superset is also cloud-native in the sense that it is flexible and lets you choose your web server (Gunicorn, Nginx, Apache), your metadata database engine (MySQL, Postgres, MariaDB, ...), your message queue (Redis, RabbitMQ, SQS, ...), your results backend (S3, Redis, Memcached, ...), and your caching layer (Memcached, Redis, ...). It works well with services like NewRelic, StatsD and DataDog, and has the ability to run analytic workloads against most popular database technologies. A configuration sketch wiring up these choices follows the Docker section below.

Superset is battle tested in large environments with hundreds of concurrent users. Airbnb's production environment runs inside Kubernetes and serves 600+ daily active users viewing over 100K charts a day.

The Superset web server and the Superset Celery workers (optional) are stateless, so you can scale out by running on as many servers as needed.

Start with Docker

Note: The Docker-related files and documentation are actively maintained and managed by the core committers working on the project. Help and contributions around Docker are welcomed!

If you know Docker, you're in luck: there is a shortcut for initializing a development environment:

    git clone https://github.com/apache/incubator-superset/
    cd incubator-superset
    # you can run this command every time you need to start Superset now:
    docker-compose up

After several minutes for Superset initialization to finish, you can open a browser at http://localhost:8088 to start your journey. By default the system configures an admin user with the username admin and the password admin; if you are in a non-local environment, it is highly recommended to change this username and password at your earliest convenience.

From there, the container server will reload on modification of the Superset Python and JavaScript source code. Don't forget to reload the page to take the new frontend into account, though. See also CONTRIBUTING.md#building for an alternative way of serving the frontend.

It is currently not recommended to run docker-compose in production.

If you are attempting to build on a Mac and it exits with 137, you need to increase your Docker resources. OSX instructions: https://docs.docker.com/docker-for-mac/#advanced (search for memory).

Or, if you're curious and want to install Superset from the bottom up, then go ahead. See also docker/README.md.
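As noted in the Cloud-native section above, the metadata database, caching layer, and results backend are all pluggable, and those choices are wired up in superset_config.py. A minimal sketch, assuming a Postgres metadata database and a Redis instance reachable as db and redis; the hostnames, credentials, and key prefixes below are placeholders, not values from this documentation:

    # superset_config.py -- a minimal sketch; hostnames and credentials are placeholders
    from cachelib.redis import RedisCache

    # Metadata database: any SQLAlchemy-compatible engine works.
    SQLALCHEMY_DATABASE_URI = "postgresql+psycopg2://superset:superset@db:5432/superset"

    # Caching layer (Flask-Caching configuration).
    CACHE_CONFIG = {
        "CACHE_TYPE": "redis",
        "CACHE_DEFAULT_TIMEOUT": 300,
        "CACHE_KEY_PREFIX": "superset_",
        "CACHE_REDIS_URL": "redis://redis:6379/0",
    }

    # Results backend for asynchronous SQL Lab queries.
    RESULTS_BACKEND = RedisCache(host="redis", port=6379, key_prefix="superset_results")

Superset picks up superset_config.py from the PYTHONPATH at startup, so swapping backends is a matter of editing this module rather than the application code.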
OS dependencies

Superset stores database connection information in its metadata database. For that purpose, we use the cryptography Python library to encrypt connection passwords. Unfortunately, this library has OS-level dependencies.

You may want to attempt the next step ("Superset installation and initialization") and come back to this step if you encounter an error. Here's how to install them:

For Debian and Ubuntu, the following command will ensure that the required dependencies are installed:

    sudo apt-get install build-essential libssl-dev libffi-dev python-dev python-pip libsasl2-dev libldap2-dev

Ubuntu 18.04: If you have Python 3.6 installed alongside Python 2.7, as is the default on Ubuntu 18.04 LTS, also run this command, otherwise the build of cryptography fails:

    sudo apt-get install build-essential libssl-dev libffi-dev python3.6-dev python-pip libsasl2-dev libldap2-dev

For Fedora and RHEL-derivatives, the following commands will ensure that the required dependencies are installed:

    sudo yum upgrade python-setuptools
    sudo yum install gcc gcc-c++ libffi-devel python-devel python-pip python-wheel openssl-devel cyrus-sasl-devel openldap-devel

Mac OS X: If possible, you should upgrade to the latest version of OS X, as issues are more likely to be resolved for that version. You will likely need the latest version of Xcode available for your installed version of OS X. You should also install the Xcode command line tools:

    xcode-select --install

System Python is not recommended. Homebrew's Python also ships with pip:

    brew install pkg-config libffi openssl python
    env LDFLAGS="-L$(brew --prefix openssl)/lib" CFLAGS="-I$(brew --prefix openssl)/include" pip install cryptography==2.4.2

Windows isn't officially supported at this point, but if you want to attempt it, download get-pip.py and run python get-pip.py, which may need admin access. Then run the following:

    C:\> pip install cryptography
    # You may also have to create C:\Temp
    C:\> md C:\Temp

Python virtualenv

It is recommended to install Superset inside a virtualenv. Python 3 already ships virtualenv. But if it's not installed in your environment for some reason, you can install it via the package manager for your operating system, or from pip:

    pip install virtualenv

You can create and activate a virtualenv by:

    # virtualenv is shipped in Python 3.6+ as venv instead of pyvenv.
    # See https://docs.python.org/3.6/library/venv.html
    python3 -m venv venv
    . venv/bin/activate

On Windows the syntax for activating it is a bit different:

    venv\Scripts\activate

Once you have activated your virtualenv, everything you do is confined inside it. To exit a virtualenv, just type deactivate.

Python's setup tools and pip

Put all the chances on your side by getting the very latest pip and setuptools libraries:

    pip install --upgrade setuptools pip

Superset installation and initialization
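The section body is cut off here; as a sketch of what this step looks like, assuming the apache-superset package on PyPI and the CLI subcommands documented for Superset releases of this era (verify against your installed version):

    # a sketch of the documented install-and-initialize sequence, circa this release
    pip install apache-superset
    superset db upgrade                # create/upgrade the metadata database schema
    export FLASK_APP=superset
    superset fab create-admin          # create an admin user (prompts for credentials)
    superset load_examples             # optional: load example charts and dashboards
    superset init                      # create default roles and permissions
    superset run -p 8088 --with-threads --reload --debugger   # start a development web server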