Superset Documentation

Apache Superset Dev

Dec 05, 2019

CONTENTS

1 Superset Resources

2 Apache Software Foundation Resources

3 Overview
  3.1 Features
  3.2 Databases
  3.3 Screenshots
  3.4 Contents
  3.5 Indices and tables


Apache Superset (incubating) is a modern, enterprise-ready business intelligence web application.

Important: Disclaimer: Apache Superset is an effort undergoing incubation at The Apache Software Foundation (ASF), sponsored by the Apache Incubator. Incubation is required of all newly accepted projects until a further review indicates that the infrastructure, communications, and decision making process have stabilized in a manner consistent with other successful ASF projects. While incubation status is not necessarily a reflection of the completeness or stability of the code, it does indicate that the project has yet to be fully endorsed by the ASF.

Note: Apache Superset, Superset, Apache, the Apache feather logo, and the Apache Superset project logo are either registered trademarks or trademarks of The Apache Software Foundation in the United States and other countries.


CHAPTER ONE

SUPERSET RESOURCES

• Superset's Github; note that we use Github for issue tracking
• Superset's contribution guidelines and code of conduct on Github
• Our mailing list archives. To subscribe, send an email to [email protected]
• Join our Slack


CHAPTER TWO

APACHE SOFTWARE FOUNDATION RESOURCES

• The Apache Software Foundation Website
• Current Events
• License
• Thanks to the ASF's sponsors
• Sponsor Apache!


CHAPTER THREE

OVERVIEW

3.1 Features

• A rich set of data visualizations
• An easy-to-use interface for exploring and visualizing data
• Create and share dashboards
• Enterprise-ready authentication with integration with major authentication providers (database, OpenID, LDAP, OAuth & REMOTE_USER through Flask AppBuilder)
• An extensible, high-granularity security/permission model allowing intricate rules on who can access individual features and the dataset
• A simple semantic layer, allowing users to control how data sources are displayed in the UI by defining which fields should show up in which drop-down and which aggregation and function metrics are made available to the user
• Integration with most SQL-speaking RDBMS through SQLAlchemy
• Deep integration with Druid.io

3.2 Databases

The following RDBMS are currently supported:

• Amazon Athena
• Amazon Redshift
• Apache Druid
• Apache Pinot
• Apache Spark SQL
• BigQuery
• ClickHouse
• Exasol
• Google Sheets
• Greenplum
• IBM Db2
• MySQL
• Oracle
• PostgreSQL
• Presto
• Snowflake
• SQLite
• SQL Server
• Teradata
• Vertica

Other database engines with a proper DB-API driver and SQLAlchemy dialect should be supported as well.


3.3 Screenshots




3.4 Contents

3.4.1 Installation & Configuration

Getting Started

Superset has deprecated support for Python 2.* and supports only Python ~=3.6, to take advantage of newer Python features and reduce the burden of supporting previous versions. We run our test suite against 3.6, but 3.7 is fully supported as well.

Cloud-native!

Superset is designed to be highly available. It is "cloud-native" in that it has been designed to scale out in large, distributed environments and works well inside containers. While you can easily test drive Superset on a modest setup or simply on your laptop, there's virtually no limit to scaling out the platform. Superset is also cloud-native in the sense that it is flexible and lets you choose your web server (Gunicorn, Nginx, Apache), your metadata database engine (MySQL, Postgres, MariaDB, ...), your message queue (Redis, RabbitMQ, SQS, ...), your results backend (S3, Redis, Memcached, ...), and your caching layer (Memcached, Redis, ...). It works well with services like NewRelic, StatsD and DataDog, and it can run analytic workloads against most popular database technologies. Superset is battle tested in large environments with hundreds of concurrent users. Airbnb's production environment runs inside Kubernetes and serves 600+ daily active users viewing over 100K charts a day. The Superset web server and the Superset Celery workers (optional) are stateless, so you can scale out by running on as many servers as needed.

Start with Docker

Note: The Docker-related files and documentation have been community-contributed and are not actively maintained or managed by the core committers working on the project. Some issues have been reported as of 2019-01. Help and contributions around Docker are welcomed!

If you know Docker, then you're in luck: there is a shortcut to initialize a development environment:

git clone https://github.com/apache/incubator-superset/
cd incubator-superset/contrib/docker
# prefix with SUPERSET_LOAD_EXAMPLES=yes to load examples:
docker-compose run --rm superset ./docker-init.sh
# you can run this command every time you need to start superset now:
docker-compose up

After several minutes for the Superset initialization to finish, you can open a browser and view http://localhost:8088 to start your journey. From there, the container server will reload on modification of the Superset Python and JavaScript source code. Don't forget to reload the page to take the new frontend into account, though. See also CONTRIBUTING.md#building for an alternative way of serving the frontend. It is also possible to run Superset in non-development mode: in the docker-compose.yml file, remove the volumes needed for development and change the variable SUPERSET_ENV to production. If you are attempting to build on a Mac and it exits with 137, you need to increase your Docker resources. OSX instructions: https://docs.docker.com/docker-for-mac/#advanced (search for memory).


Or, if you're curious and want to install Superset from the bottom up, then go ahead. See also contrib/docker/README.md.

OS dependencies

Superset stores database connection information in its metadata database. For that purpose, we use the cryptography Python library to encrypt connection passwords. Unfortunately, this library has OS-level dependencies. You may want to attempt the next step ("Superset installation and initialization") and come back to this step if you encounter an error. Here's how to install them: For Debian and Ubuntu, the following command will ensure that the required dependencies are installed:

sudo apt-get install build-essential libssl-dev libffi-dev python-dev python-pip libsasl2-dev libldap2-dev

Ubuntu 18.04

If you have python3.6 installed alongside python2.7, as is the default on Ubuntu 18.04 LTS, also run this command:

sudo apt-get install build-essential libssl-dev libffi-dev python3.6-dev python-pip libsasl2-dev libldap2-dev

otherwise the build for cryptography fails. For Fedora and RHEL-derivatives, the following command will ensure that the required dependencies are installed:

sudo yum upgrade python-setuptools
sudo yum install gcc gcc-c++ libffi-devel python-devel python-pip python-wheel openssl-devel libsasl2-devel openldap-devel

Mac OS X

If possible, you should upgrade to the latest version of OS X as issues are more likely to be resolved for that version. You will likely need the latest version of XCode available for your installed version of OS X. You should also install the XCode command line tools:

xcode-select --install

System python is not recommended. Homebrew’s python also ships with pip:

brew install pkg-config libffi openssl python
env LDFLAGS="-L$(brew --prefix openssl)/lib" CFLAGS="-I$(brew --prefix openssl)/include" pip install cryptography==2.4.2

Windows isn’t officially supported at this point, but if you want to attempt it, download get-pip.py, and run python get-pip.py which may need admin access. Then run the following:

C:\> pip install cryptography

# You may also have to create C:\Temp
C:\> md C:\Temp


Python virtualenv

It is recommended to install Superset inside a virtualenv. Python 3 already ships with virtualenv. But if it's not installed in your environment for some reason, you can install it via the package for your operating system, or from pip:

pip install virtualenv

You can create and activate a virtualenv by:

# virtualenv is shipped in Python 3.6+ as venv instead of pyvenv.
# See https://docs.python.org/3.6/library/venv.html
python3 -m venv venv
. venv/bin/activate

On Windows the syntax for activating it is a bit different:

venv\Scripts\activate

Once you have activated your virtualenv, everything you do is confined inside the virtualenv. To exit a virtualenv, just type deactivate.

Python’s setup tools and pip

Put all the chances on your side by getting the very latest pip and setuptools libraries:

pip install --upgrade setuptools pip

Superset installation and initialization

Follow these few simple steps to install Superset:

# Install superset
pip install apache-superset

# Initialize the database
superset db upgrade

# Create an admin user (you will be prompted to set a username, first and last name before setting a password)
export FLASK_APP=superset
flask fab create-admin

# Load some data to play with
superset load_examples

# Create default roles and permissions
superset init

# To start a development web server on port 8088, use -p to bind to another port
superset run -p 8088 --with-threads --reload --debugger

After installation, you should be able to point your browser to the right hostname:port (http://localhost:8088 by default), log in using the credentials you entered while creating the admin account, and navigate to Menu -> Admin -> Refresh Metadata. This action should bring in all of your datasources for Superset to be aware of, and they should show up in Menu -> Datasources, from where you can start playing with your data!


A proper WSGI HTTP Server

While you can set up Superset to run on Nginx or Apache, many use Gunicorn, preferably in async mode, which allows for impressive concurrency and is fairly easy to install and configure. Please refer to the documentation of your preferred technology to set up this Flask WSGI application in a way that works well in your environment. Here's an async setup known to work well in production:

gunicorn \
    -w 10 \
    -k gevent \
    --timeout 120 \
    -b 0.0.0.0:6666 \
    --limit-request-line 0 \
    --limit-request-field_size 0 \
    --statsd-host localhost:8125 \
    superset:app

Refer to the Gunicorn documentation for more information. Note that the development web server (superset run or flask run) is not intended for production use. If not using gunicorn, you may want to disable the use of flask-compress by setting ENABLE_FLASK_COMPRESS = False in your superset_config.py

Flask-AppBuilder Permissions

By default, every time the Flask-AppBuilder (FAB) app is initialized, the permissions and views are added automatically to the backend and associated with the 'Admin' role. The issue, however, is that when you are running multiple concurrent workers, this creates a lot of contention and race conditions when defining permissions and views. To alleviate this issue, the automatic updating of permissions can be disabled by setting FAB_UPDATE_PERMS = False (defaults to True). In a production environment, initialization could take on the following form:

superset init
gunicorn -w 10 . . . superset:app
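A minimal superset_config.py sketch of the corresponding flag:

# Disable automatic permission syncing; run `superset init` once (as above)
# before starting the workers so permissions are created by a single process.
FAB_UPDATE_PERMS = False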

Configuration behind a load balancer

If you are running Superset behind a load balancer or reverse proxy (e.g. NGINX or ELB on AWS), you may need to utilise a healthcheck endpoint so that your load balancer knows if your Superset instance is running. This is provided at /health, which will return a 200 response containing "OK" if the webserver is running. If the load balancer is inserting X-Forwarded-For/X-Forwarded-Proto headers, you should set ENABLE_PROXY_FIX = True in the Superset config file to extract and use the headers. If the reverse proxy is used to provide SSL encryption, an explicit definition of the X-Forwarded-Proto header may be required. For the Apache webserver this can be set as follows:

RequestHeader set X-Forwarded-Proto "https"
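On the Superset side, the ENABLE_PROXY_FIX flag mentioned above is a one-line setting in superset_config.py:

# Trust the X-Forwarded-For / X-Forwarded-Proto headers set by the proxy
ENABLE_PROXY_FIX = True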

Configuration

To configure your application, you need to create a file (module) superset_config.py and make sure it is in your PYTHONPATH. Here are some of the parameters you can copy / paste in that configuration module:


#---------------------------------------------------------
# Superset specific config
#---------------------------------------------------------
ROW_LIMIT = 5000

SUPERSET_WEBSERVER_PORT = 8088
#---------------------------------------------------------

#---------------------------------------------------------
# Flask App Builder configuration
#---------------------------------------------------------
# Your App secret key
SECRET_KEY = '\2\1thisismyscretkey\1\2\e\y\y\h'

# The SQLAlchemy connection string to your database backend
# This connection defines the path to the database that stores your
# superset metadata (slices, connections, tables, dashboards, ...).
# Note that the connection information to connect to the datasources
# you want to explore are managed directly in the web UI
SQLALCHEMY_DATABASE_URI = 'sqlite:////path/to/superset.db'

# Flask-WTF flag for CSRF
WTF_CSRF_ENABLED = True
# Add endpoints that need to be exempt from CSRF protection
WTF_CSRF_EXEMPT_LIST = []
# A CSRF token that expires in 1 year
WTF_CSRF_TIME_LIMIT = 60 * 60 * 24 * 365

# Set this API key to enable Mapbox visualizations
MAPBOX_API_KEY = ''

All the parameters and default values defined in https://github.com/apache/incubator-superset/blob/master/superset/config.py can be altered in your local superset_config.py. Administrators will want to read through the file to understand what can be configured locally as well as the default values in place. Since superset_config.py acts as a Flask configuration module, it can be used to alter the settings of Flask itself, as well as Flask extensions like flask-wtf, flask-cache, flask-migrate, and flask-appbuilder. Flask App Builder, the web framework used by Superset, offers many configuration settings. Please consult the Flask App Builder Documentation for more information on how to configure it. Make sure to change:
• SQLALCHEMY_DATABASE_URI: by default the metadata database is stored at ~/.superset/superset.db
• SECRET_KEY: set it to a long random string
In case you need to exempt endpoints from CSRF, e.g. you are running a custom auth postback endpoint, you can add them to WTF_CSRF_EXEMPT_LIST:

WTF_CSRF_EXEMPT_LIST = ['']
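For illustration, a hypothetical custom auth postback view could be exempted like this (the view name below is a placeholder, not a real Superset endpoint):

WTF_CSRF_EXEMPT_LIST = ['myapp.views.auth_postback']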

Database dependencies

Superset does not ship bundled with connectivity to databases, except for Sqlite, which is part of the Python standard library. You’ll need to install the required packages for the database you want to use as your metadata database as well as the packages needed to connect to the databases you want to access through Superset. Here’s a list of some of the recommended packages.


Note that many other databases are supported, the main criterion being the existence of a functional SQLAlchemy dialect and Python driver. Googling the keyword sqlalchemy in addition to a keyword that describes the database you want to connect to should get you to the right place.

(AWS) Athena

The connection string for Athena looks like this:

awsathena+jdbc://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}&...

Where you need to escape/encode at least the s3_staging_dir, i.e., s3://... -> s3%3A//...
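A small Python sketch of that encoding step, using only the standard library (the bucket and prefix below are placeholders):

from urllib.parse import quote_plus

# URL-encode the staging directory before embedding it in the connection string
s3_staging_dir = quote_plus("s3://my-athena-results/staging/")
# -> 's3%3A%2F%2Fmy-athena-results%2Fstaging%2F'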

You can also use the PyAthena library (no Java required) like this:

awsathena+rest://{aws_access_key_id}:{aws_secret_access_key}@athena.{region_name}.amazonaws.com/{schema_name}?s3_staging_dir={s3_staging_dir}&...

See PyAthena.

(Google) BigQuery

The connection string for BigQuery looks like this:

bigquery://{project_id}

To be able to upload data, e.g. sample data, the python library pandas_gbq is required.

Snowflake

The connection string for Snowflake looks like this:

snowflake://{user}:{password}@{account}.{region}/{database}?role={role}&warehouse={warehouse}

The schema is not necessary in the connection string, as it is defined per table/query. The role and warehouse can be omitted if defaults are defined for the user, i.e.

snowflake://{user}:{password}@{account}.{region}/{database}

Make sure the user has privileges to access and use all required databases/schemas/tables/views/warehouses, as the Snowflake SQLAlchemy engine does not test for user rights during engine creation. See Snowflake SQLAlchemy.

Teradata

The connection string for Teradata looks like this:

teradata://{user}:{password}@{host}


Note: It's required to have the Teradata ODBC drivers installed and environment variables configured for the SQLAlchemy dialect to work properly. Teradata ODBC drivers are available here: https://downloads.teradata.com/download/connectivity/odbc-driver/linux Required environment variables:

export ODBCINI=/.../teradata/client/ODBC_64/odbc.ini
export ODBCINST=/.../teradata/client/ODBC_64/odbcinst.ini

See Teradata SQLAlchemy.

Apache Drill

At the time of writing, the SQLAlchemy dialect is not available on PyPI and must be downloaded here: SQLAlchemy Drill. Alternatively, you can install it completely from the command line as follows:

git clone https://github.com/JohnOmernik/sqlalchemy-drill
cd sqlalchemy-drill
python3 setup.py install

Once that is done, you can connect to Drill in two ways, either via the REST interface or by JDBC. If you are connecting via JDBC, you must have the Drill JDBC Driver installed. The basic connection string for Drill looks like this

drill+sadrill://{username}:{password}@{host}:{port}/{storage_plugin}?use_ssl=True

If you are using JDBC to connect to Drill, the connection string looks like this:

drill+jdbc://{username}:{password}@{host}:{port}/{storage_plugin}

For a complete tutorial about how to use Apache Drill with Superset, see this tutorial: Visualize Anything with Superset and Drill

Caching

Superset uses Flask-Cache for caching purposes. Configuring your caching backend is as easy as providing a CACHE_CONFIG constant in your superset_config.py that complies with the Flask-Cache specifications. Flask-Cache supports multiple caching backends (Redis, Memcached, SimpleCache (in-memory), or the local filesystem). If you are going to use Memcached, please use the pylibmc client library, as python-memcached does not handle storing binary data correctly. If you use Redis, please install the redis Python package:

pip install redis

For setting your timeouts, this is done in the Superset metadata and goes up the “timeout searchpath”, from your slice configuration, to your data source’s configuration, to your database’s and ultimately falls back into your global default defined in CACHE_CONFIG.

CACHE_CONFIG = {
    'CACHE_TYPE': 'redis',
    'CACHE_DEFAULT_TIMEOUT': 60 * 60 * 24,  # 1 day default (in secs)
    'CACHE_KEY_PREFIX': 'superset_results',
    'CACHE_REDIS_URL': 'redis://localhost:6379/0',
}


It is also possible to pass a custom cache initialization function in the config to handle additional caching use cases. The function must return an object that is compatible with the Flask-Cache API.

from custom_caching import CustomCache

def init_cache(app):
    """Takes an app instance and returns a custom cache backend"""
    config = {
        'CACHE_DEFAULT_TIMEOUT': 60 * 60 * 24,  # 1 day default (in secs)
        'CACHE_KEY_PREFIX': 'superset_results',
    }
    return CustomCache(app, config)

CACHE_CONFIG = init_cache

Superset has a Celery task that will periodically warm up the cache based on different strategies. To use it, add the following to the CELERYBEAT_SCHEDULE section in config.py:

CELERYBEAT_SCHEDULE = {
    'cache-warmup-hourly': {
        'task': 'cache-warmup',
        'schedule': crontab(minute=0, hour='*'),  # hourly
        'kwargs': {
            'strategy_name': 'top_n_dashboards',
            'top_n': 5,
            'since': '7 days ago',
        },
    },
}

This will cache all the charts in the top 5 most popular dashboards every hour. For other strategies, check the superset/tasks/cache.py file.

Deeper SQLAlchemy integration

It is possible to tweak the database connection information using the parameters exposed by SQLAlchemy. In the Database edit view, you will find an extra field as a JSON blob.

This JSON string contains extra configuration elements. The engine_params object gets unpacked into the sqlalchemy.create_engine call, while the metadata_params get unpacked into the sqlalchemy.MetaData call. Refer to the SQLAlchemy docs for more information.

Note: If you're using CTAS on SQL Lab and PostgreSQL, take a look at Create Table As (CTAS) for specific engine_params.

Schemas (Postgres & Redshift)

Postgres and Redshift, as well as other databases, use the concept of schema as a logical entity on top of the database. For Superset to connect to a specific schema, there’s a schema parameter you can set in the table form.

External Password store for SQLAlchemy connections

It is possible to use an external store for your database passwords. This is useful if you are running a custom secret distribution framework and do not wish to store secrets in Superset's meta database. Example: Write a function that takes a single argument of type sqla.engine.url and returns the password for the given connection string. Then set SQLALCHEMY_CUSTOM_PASSWORD_STORE in your config file to point to that function.

def example_lookup_password(url):
    secret = <>  # placeholder: fetch the secret for this connection URL from your store
    return 'secret'

SQLALCHEMY_CUSTOM_PASSWORD_STORE = example_lookup_password

A common pattern is to use environment variables to make secrets available. SQLALCHEMY_CUSTOM_PASSWORD_STORE can also be used for that purpose.

def example_password_as_env_var(url):
    # assuming the uri looks like
    # ://localhost?superset_user:{SUPERSET_PASSWORD}
    return url.password.format(os.environ)

SQLALCHEMY_CUSTOM_PASSWORD_STORE = example_password_as_env_var

SSL Access to databases

This example worked with a MySQL database that requires SSL. The configuration may differ with other backends. This is what was put in the extra parameter

{ "metadata_params": {}, "engine_params":{ "connect_args":{ "sslmode":"require", "sslrootcert":"/path/to/my/pem" } } }


Druid

• From the UI, enter the information about your clusters in the Sources -> Druid Clusters menu by hitting the + sign.
• Once the Druid cluster connection information is entered, hit the Sources -> Refresh Druid Metadata menu item to populate.
• Navigate to your datasources.

Note that you can run the superset refresh_druid command to refresh the metadata from your Druid cluster(s).

Presto

By default Superset assumes the most recent version of Presto is being used when querying the datasource. If you’re using an older version of presto, you can configure it in the extra parameter:

{ "version":"0.123" }

Exasol

The connection string for Exasol looks like this

exa+pyodbc://{user}:{password}@{host}

Note: It's required to have the Exasol ODBC drivers installed for the SQLAlchemy dialect to work properly. Exasol ODBC drivers are available here: https://www.exasol.com/portal/display/DOWNLOAD/Exasol+Download+Section Example config (odbcinst.ini can be left empty):

$ cat $/.../path/to/odbc.ini
[EXAODBC]
DRIVER = /.../path/to/driver/EXASOL_driver.so
EXAHOST = host:8563
EXASCHEMA = main

See SQLAlchemy for Exasol.

CORS

The extra CORS dependency must be installed:

superset[cors]

The following keys in superset_config.py can be specified to configure CORS:
• ENABLE_CORS: must be set to True in order to enable CORS
• CORS_OPTIONS: options passed to Flask-CORS (documentation)
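A minimal superset_config.py sketch; the origin listed is a placeholder for your own front-end domain, and the option names are standard Flask-CORS options:

ENABLE_CORS = True
CORS_OPTIONS = {
    'supports_credentials': True,
    'origins': ['https://dashboards.example.com'],  # placeholder origin
}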


DOMAIN SHARDING

Chrome allows up to 6 open connections per domain at a time. When there are more than 6 slices in a dashboard, many fetch requests are queued up, waiting for the next available socket. PR 5039 adds domain sharding to Superset; this feature is enabled by configuration only (by default Superset doesn't allow cross-domain requests).
• SUPERSET_WEBSERVER_DOMAINS: list of allowed hostnames for the domain sharding feature (default: None), for example as shown below.
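The hostnames in this sketch are placeholders and must all point at the same Superset deployment:

SUPERSET_WEBSERVER_DOMAINS = [
    'superset-1.example.com',
    'superset-2.example.com',
    'superset-3.example.com',
]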

MIDDLEWARE

Superset allows you to add your own middleware. To add your own middleware, update the ADDITIONAL_MIDDLEWARE key in your superset_config.py. ADDITIONAL_MIDDLEWARE should be a list of your additional middleware classes. For example, to use AUTH_REMOTE_USER from behind a proxy server like nginx, you have to add a simple middleware class to add the value of HTTP_X_PROXY_REMOTE_USER (or any other custom header from the proxy) to Gunicorn's REMOTE_USER environment variable:

class RemoteUserMiddleware(object):
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        user = environ.pop('HTTP_X_PROXY_REMOTE_USER', None)
        environ['REMOTE_USER'] = user
        return self.app(environ, start_response)

ADDITIONAL_MIDDLEWARE = [RemoteUserMiddleware, ]

Adapted from http://flask.pocoo.org/snippets/69/

Event Logging

Superset by default logs special action events in its database. These logs can be accessed in the UI by navigating to "Security" -> "Action Log". You can freely customize these logs by implementing your own event log class. Example of a simple JSON-to-stdout class:

class JSONStdOutEventLogger(AbstractEventLogger):

    def log(self, user_id, action, *args, **kwargs):
        records = kwargs.get('records', list())
        dashboard_id = kwargs.get('dashboard_id')
        slice_id = kwargs.get('slice_id')
        duration_ms = kwargs.get('duration_ms')
        referrer = kwargs.get('referrer')

        for record in records:
            log = dict(
                action=action,
                json=record,
                dashboard_id=dashboard_id,
                slice_id=slice_id,
                duration_ms=duration_ms,
                referrer=referrer,
                user_id=user_id,
            )
            print(json.dumps(log))

Then in Superset's config, pass an instance of the logger type you want to use:

EVENT_LOGGER = JSONStdOutEventLogger()

Upgrading

Upgrading should be as straightforward as running:

pip install apache-superset --upgrade
superset db upgrade
superset init

We recommend following standard best practices when upgrading Superset, such as taking a database backup prior to the upgrade, upgrading a staging environment prior to upgrading production, and upgrading production while fewer users are active on the platform.

Note: Some upgrades may contain backward-incompatible changes or require scheduling downtime. When that is the case, contributors attach notes in UPDATING.md in the repository. It's recommended to review this file prior to running an upgrade.

Celery Tasks

On large analytic databases, it's common to run queries that execute for minutes or hours. To enable support for long-running queries that execute beyond the typical web request's timeout (30-60 seconds), it is necessary to configure an asynchronous backend for Superset, which consists of:
• one or many Superset workers (implemented as Celery workers), which can be started with the celery worker command; run celery worker --help to view the related options
• a celery broker (message queue), for which we recommend using Redis or RabbitMQ
• a results backend that defines where the worker will persist the query results
Configuring Celery requires defining a CELERY_CONFIG in your superset_config.py. Both the worker and web server processes should have the same configuration.

class CeleryConfig(object):
    BROKER_URL = 'redis://localhost:6379/0'
    CELERY_IMPORTS = (
        'superset.sql_lab',
        'superset.tasks',
    )
    CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
    CELERYD_LOG_LEVEL = 'DEBUG'
    CELERYD_PREFETCH_MULTIPLIER = 10
    CELERY_ACKS_LATE = True
    CELERY_ANNOTATIONS = {
        'sql_lab.get_sql_results': {
            'rate_limit': '100/s',
        },
        'email_reports.send': {
            'rate_limit': '1/s',
            'time_limit': 120,
            'soft_time_limit': 150,
            'ignore_result': True,
        },
    }
    CELERYBEAT_SCHEDULE = {
        'email_reports.schedule_hourly': {
            'task': 'email_reports.schedule_hourly',
            'schedule': crontab(minute=1, hour='*'),
        },
    }

CELERY_CONFIG = CeleryConfig

• To start a Celery worker to leverage the configuration run:

celery worker --app=superset.tasks.celery_app:app --pool=prefork -O fair -c 4

• To start a job which schedules periodic background jobs, run

celery beat --app=superset.tasks.celery_app:app

To set up a results backend, you need to pass an instance of a derivative of werkzeug.contrib.cache.BaseCache to the RESULTS_BACKEND configuration key in your superset_config.py. It's possible to use Memcached, Redis, S3 (https://pypi.python.org/pypi/s3werkzeugcache), memory or the file system (in a single-server setup or for testing), or to write your own caching interface. Your superset_config.py may look something like:

# On S3
from s3cache.s3cache import S3Cache
S3_CACHE_BUCKET = 'foobar-superset'
S3_CACHE_KEY_PREFIX = 'sql_lab_result'
RESULTS_BACKEND = S3Cache(S3_CACHE_BUCKET, S3_CACHE_KEY_PREFIX)

# On Redis
from werkzeug.contrib.cache import RedisCache
RESULTS_BACKEND = RedisCache(
    host='localhost', port=6379, key_prefix='superset_results')

For performance gains, MessagePack and PyArrow are now used for results serialization. This can be disabled by setting RESULTS_BACKEND_USE_MSGPACK = False in your configuration, should any issues arise. Please clear your existing results cache store when upgrading an existing environment.

Important notes
• It is important that all the worker nodes and web servers in the Superset cluster share a common metadata database. This means that SQLite will not work in this context since it has limited support for concurrency and typically lives on the local file system.
• There should only be one instance of celery beat running in your entire setup. If not, background jobs can get scheduled multiple times, resulting in weird behaviors like duplicate delivery of reports, higher than expected load / traffic, etc.


Email Reports

Email reports allow users to schedule email reports for
• slice and dashboard visualization (attachment or inline)
• slice data (CSV attachment or inline table)

Schedules are defined in crontab format and each schedule can have a list of recipients (all of them can receive a single mail, or separate mails). For audit purposes, all outgoing mails can have a mandatory bcc.

Requirements
• A selenium-compatible driver & headless browser
  – geckodriver and Firefox is preferred
  – chromedriver is a good option too
• Run celery worker and celery beat as follows

celery worker --app=superset.tasks.celery_app:app --pool=prefork -O fair -c 4
celery beat --app=superset.tasks.celery_app:app

Important notes
• Be mindful of the concurrency setting for celery (using -c 4). Selenium/webdriver instances can consume a lot of CPU / memory on your servers.
• In some cases, if you notice a lot of leaked geckodriver processes, try running your celery processes with

celery worker --pool=prefork --max-tasks-per-child=128 ...

• It is recommended to run separate workers for sql_lab and email_reports tasks. This can be done by using the queue field in CELERY_ANNOTATIONS, as sketched below.
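A rough sketch of that routing; the task names and the queue option are assumptions about your setup, so verify them against superset/tasks before relying on this:

class CeleryConfig(object):
    # ... keep the rest of your existing CeleryConfig ...
    CELERY_ANNOTATIONS = {
        'sql_lab.get_sql_results': {'queue': 'sql_lab'},
        'email_reports.send': {'queue': 'email_reports'},
    }

A dedicated worker can then be pointed at a single queue with celery worker -Q email_reports.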

SQL Lab

SQL Lab is a powerful SQL IDE that works with all SQLAlchemy-compatible databases. By default, queries are executed in the scope of a web request, so they may eventually time out as queries exceed the maximum duration of a web request in your environment, whether it's a reverse proxy or the Superset server itself. In such cases, it is preferred to use celery to run the queries in the background. Please follow the examples/notes mentioned above to get your celery setup working. Also note that SQL Lab supports Jinja templating in queries and that it's possible to overload the default Jinja context in your environment by defining the JINJA_CONTEXT_ADDONS in your superset configuration. Objects referenced in this dictionary are made available for users to use in their SQL.

JINJA_CONTEXT_ADDONS = {
    'my_crazy_macro': lambda x: x * 2,
}

SQL Lab also includes a live query validation feature with pluggable backends. You can configure which validation implementation is used with which database engine by adding a block like the following to your config.py:

FEATURE_FLAGS = {
    'SQL_VALIDATORS_BY_ENGINE': {
        'presto': 'PrestoDBSQLValidator',
    }
}


The available validators and names can be found in sql_validators/.

Scheduling queries

You can optionally allow your users to schedule queries directly in SQL Lab. This is done by adding extra metadata to saved queries, which are then picked up by an external scheduler (like Apache Airflow, https://airflow.apache.org/). To allow scheduled queries, add the following to your config.py:

FEATURE_FLAGS = {
    # Configuration for scheduling queries from SQL Lab. This information is
    # collected when the user clicks "Schedule query", and saved into the `extra`
    # field of saved queries.
    # See: https://github.com/mozilla-services/react-jsonschema-form
    'SCHEDULED_QUERIES': {
        'JSONSCHEMA': {
            'title': 'Schedule',
            'description': (
                'In order to schedule a query, you need to specify when it '
                'should start running, when it should stop running, and how '
                'often it should run. You can also optionally specify '
                'dependencies that should be met before the query is '
                'executed. Please read the documentation for best practices '
                'and more information on how to specify dependencies.'
            ),
            'type': 'object',
            'properties': {
                'output_table': {
                    'type': 'string',
                    'title': 'Output table name',
                },
                'start_date': {
                    'type': 'string',
                    'title': 'Start date',
                    # date-time is parsed using the chrono library, see
                    # https://www.npmjs.com/package/chrono-node#usage
                    'format': 'date-time',
                    'default': 'tomorrow at 9am',
                },
                'end_date': {
                    'type': 'string',
                    'title': 'End date',
                    # date-time is parsed using the chrono library, see
                    # https://www.npmjs.com/package/chrono-node#usage
                    'format': 'date-time',
                    'default': '9am in 30 days',
                },
                'schedule_interval': {
                    'type': 'string',
                    'title': 'Schedule interval',
                },
                'dependencies': {
                    'type': 'array',
                    'title': 'Dependencies',
                    'items': {
                        'type': 'string',
                    },
                },
            },
        },
        'UISCHEMA': {
            'schedule_interval': {
                'ui:placeholder': '@daily, @weekly, etc.',
            },
            'dependencies': {
                'ui:help': (
                    'Check the documentation for the correct format when '
                    'defining dependencies.'
                ),
            },
        },
        'VALIDATION': [
            # ensure that start_date <= end_date
            {
                'name': 'less_equal',
                'arguments': ['start_date', 'end_date'],
                'message': 'End date cannot be before start date',
                # this is where the error message is shown
                'container': 'end_date',
            },
        ],
        # link to the scheduler; this example links to an Airflow pipeline
        # that uses the query id and the output table as its name
        'linkback': (
            'https://airflow.example.com/admin/airflow/tree?'
            'dag_id=query_${id}_${extra_json.schedule_info.output_table}'
        ),
    },
}

This feature flag is based on react-jsonschema-form (https://github.com/mozilla-services/react-jsonschema-form) and will add a button called "Schedule Query" to SQL Lab. When the button is clicked, a modal will show up where the user can add the metadata required for scheduling the query. This information can then be retrieved from the endpoint /savedqueryviewapi/api/read and used to schedule the queries that have scheduled_queries in their JSON metadata. For schedulers other than Airflow, additional fields can be easily added to the configuration file above.
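As a rough illustration of the external-scheduler side, the sketch below polls that endpoint and keeps only saved queries that carry scheduling metadata. The authentication approach and the response field names (result, extra_json, schedule_info) are assumptions, not a documented contract; adapt them to your deployment.

import json
import requests

def fetch_scheduled_queries(base_url, session_cookie):
    """Return saved queries whose extra metadata contains schedule info."""
    resp = requests.get(
        base_url + "/savedqueryviewapi/api/read",
        cookies={"session": session_cookie},  # assumes cookie-based auth
    )
    resp.raise_for_status()
    scheduled = []
    for row in resp.json().get("result", []):  # field name is an assumption
        extra = json.loads(row.get("extra_json") or "{}")
        if "schedule_info" in extra:
            scheduled.append(row)
    return scheduled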

Celery Flower

Flower is a web-based tool for monitoring the Celery cluster, which you can install from pip:

pip install flower

and run via:

celery flower --app=superset.tasks.celery_app:app

Building from source

More advanced users may want to build Superset from source. That would be the case if you fork the project to add features specific to your environment. See CONTRIBUTING.md#setup-local-environment-for-development.


Blueprints

Blueprints are Flask’s reusable apps. Superset allows you to specify an array of Blueprints in your superset_config module. Here’s an example of how this can work with a simple Blueprint. By doing so, you can expect Superset to serve a page that says “OK” at the /simple_page url. This can allow you to run other things such as custom data visualization applications alongside Superset, on the same server.

from flask import Blueprint

simple_page = Blueprint('simple_page', __name__, template_folder='templates')

@simple_page.route('/', defaults={'page': 'index'})
@simple_page.route('/<page>')
def show(page):
    return "Ok"

BLUEPRINTS = [simple_page]

StatsD logging

Superset is instrumented to log events to StatsD if desired. Most endpoints hit are logged, as well as key events like query start and end in SQL Lab. To set up StatsD logging, it's a matter of configuring the logger in your superset_config.py.

from superset.stats_logger import StatsdStatsLogger
STATS_LOGGER = StatsdStatsLogger(host='localhost', port=8125, prefix='superset')

Note that it's also possible to implement your own logger by deriving superset.stats_logger.BaseStatsLogger.
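As a minimal sketch of such a custom logger (the incr/decr/timing/gauge method names mirror the StatsD logger above and are assumptions about the base-class interface):

from superset.stats_logger import BaseStatsLogger

class PrintStatsLogger(BaseStatsLogger):
    """Write every metric to stdout -- only useful for local debugging."""

    def incr(self, key):
        print('[stats] incr {}'.format(key))

    def decr(self, key):
        print('[stats] decr {}'.format(key))

    def timing(self, key, value):
        print('[stats] timing {}={}'.format(key, value))

    def gauge(self, key):
        print('[stats] gauge {}'.format(key))

STATS_LOGGER = PrintStatsLogger()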

Install Superset with helm in Kubernetes

You can install Superset into Kubernetes with Helm. The chart is located in install/helm. To install Superset into your Kubernetes cluster:

helm upgrade --install superset ./install/helm/superset

Note that the above command will install Superset into the default namespace of your Kubernetes cluster.

Custom OAuth2 configuration

Beyond the FAB-supported providers (github, twitter, linkedin, google, azure), it's easy to connect Superset with other OAuth2 Authorization Server implementations that support "code" authorization. The first step: configure authorization in Superset's superset_config.py.

AUTH_TYPE = AUTH_OAUTH
OAUTH_PROVIDERS = [
    {
        'name': 'egaSSO',
        'token_key': 'access_token',  # Name of the token in the response of access_token_url
        'icon': 'fa-address-card',    # Icon for the provider
        'remote_app': {
            'consumer_key': 'myClientId',   # Client Id (Identify Superset application)
            'consumer_secret': 'MySecret',  # Secret for this Client Id (Identify Superset application)
            'request_token_params': {
                'scope': 'read'             # Scope for the Authorization
            },
            'access_token_method': 'POST',  # HTTP Method to call access_token_url
            'access_token_params': {        # Additional parameters for calls to access_token_url
                'client_id': 'myClientId'
            },
            'access_token_headers': {       # Additional headers for calls to access_token_url
                'Authorization': 'Basic Base64EncodedClientIdAndSecret'
            },
            'base_url': 'https://myAuthorizationServer/oauth2AuthorizationServer/',
            'access_token_url': 'https://myAuthorizationServer/oauth2AuthorizationServer/token',
            'authorize_url': 'https://myAuthorizationServer/oauth2AuthorizationServer/authorize'
        }
    }
]

# Will allow user self registration, allowing to create Flask users from Authorized User
AUTH_USER_REGISTRATION = True

# The default user self registration role
AUTH_USER_REGISTRATION_ROLE = "Public"

Second step: Create a CustomSsoSecurityManager that extends SupersetSecurityManager and overrides oauth_user_info:

from superset.security import SupersetSecurityManager

class CustomSsoSecurityManager(SupersetSecurityManager):

    def oauth_user_info(self, provider, response=None):
        logging.debug("Oauth2 provider: {0}.".format(provider))
        if provider == 'egaSSO':
            # As an example, this line requests a GET to base_url + '/userDetails' with Bearer
            # Authentication, and expects the authorization server to check the token and
            # respond with the user details
            me = self.appbuilder.sm.oauth_remotes[provider].get('userDetails').data
            logging.debug("user_data: {0}".format(me))
            return {'name': me['name'], 'email': me['email'], 'id': me['user_name'],
                    'username': me['user_name'], 'first_name': '', 'last_name': ''}
        ...

This file must be located in the same directory as superset_config.py, with the name custom_sso_security_manager.py. Then we can add these two lines to superset_config.py:


from custom_sso_security_manager import CustomSsoSecurityManager
CUSTOM_SECURITY_MANAGER = CustomSsoSecurityManager

Feature Flags

Because of its wide variety of users, Superset has some features that are not enabled by default. For example, some users have stronger security restrictions, while others may not. So Superset allows users to enable or disable some features by config. Feature owners can add optional functionality to Superset that will only affect a subset of users. You can enable or disable features with flags in superset_config.py:

DEFAULT_FEATURE_FLAGS = {
    'CLIENT_CACHE': False,
    'ENABLE_EXPLORE_JSON_CSRF_PROTECTION': False,
    'PRESTO_EXPAND_DATA': False,
}

Here is a list of flags and descriptions:
• ENABLE_EXPLORE_JSON_CSRF_PROTECTION
  – For some security concerns, you may need to enforce CSRF protection on all query requests to the explore_json endpoint. In Superset, we use flask-csrf to add CSRF protection for all POST requests, but this protection doesn't apply to the GET method.
  – When ENABLE_EXPLORE_JSON_CSRF_PROTECTION is set to true, your users cannot make GET requests to explore_json. The default value for this feature is False (current behavior): explore_json accepts both GET and POST requests. See PR 7935 for more details.
• PRESTO_EXPAND_DATA
  – When this feature is enabled, nested types in Presto will be expanded into extra columns and/or arrays. This is experimental and doesn't work with all nested types.

3.4.2 Tutorial - Creating your first dashboard

This tutorial targets someone who wants to create charts and dashboards in Superset. We’ll show you how to connect Superset to a new database and configure a table in that database for analysis. You’ll also explore the data you’ve exposed and add a visualization to a dashboard so that you get a feel for the end-to-end user experience.

Connecting to a new database

We assume you already have a database configured and can connect to it from the instance on which you’re running Superset. If you’re just testing Superset and want to explore sample data, you can load some sample PostgreSQL datasets into a fresh DB, or configure the example weather data we use here. Under the Sources menu, select the Databases option:


On the resulting page, click on the green plus sign, near the top right:

You can configure a number of advanced options on this page, but for this walkthrough, you’ll only need to do two things: 1. Name your database connection:

2. Provide the SQLAlchemy Connection URI and test the connection:

This example shows the connection string for our test weather database. As noted in the text below the URI, you should refer to the SQLAlchemy documentation on creating new connection URIs for your target database. Click the Test Connection button to confirm things work end to end. Once Superset can successfully connect and authenticate, you should see a popup like this:


Moreover, you should also see the list of tables Superset can read from the schema you’re connected to, at the bottom of the page:

If the connection looks good, save the configuration by clicking the Save button at the bottom of the page:

Adding a new table

Now that you’ve configured a database, you’ll need to add specific tables to Superset that you’d like to query. Under the Sources menu, select the Tables option:

On the resulting page, click on the green plus sign, near the top left:

You only need a few pieces of information to add a new table to Superset:


• The name of the table

• The target database from the Database drop-down menu (i.e. the one you just added above)

• Optionally, the database schema. If the table exists in the "default" schema (e.g. the public schema in PostgreSQL or Redshift), you can leave the schema field blank.

Click on the Save button to save the configuration:

When redirected back to the list of tables, you should see a message indicating that your table was created:

This message also directs you to edit the table configuration. We’ll edit a limited portion of the configuration now - just to get you started - and leave the rest for a more advanced tutorial. Click on the edit button next to the table you’ve created:

On the resulting page, click on the List Table Column tab. Here, you'll define the way you can use specific columns of your table when exploring your data. We'll run through these options to describe their purpose:
• If you want users to group metrics by a specific field, mark it as Groupable.
• If you need to filter on a specific field, mark it as Filterable.
• Is this field something you'd like to get the distinct count of? Check the Count Distinct box.
• Is this a metric you want to sum, or get basic summary statistics for? The Sum, Min, and Max columns will help.


• The is temporal field should be checked for any date or time fields. We'll cover how this manifests itself in analyses in a moment.

Here's how we've configured fields for the weather data. Even for measures like the weather measurements (precipitation, snowfall, etc.), it's ideal to group and filter by these values:

As with the configurations above, click the Save button to save these settings.

Exploring your data

To start exploring your data, simply click on the table name you just created in the list of available tables:

By default, you’ll be presented with a Table View:

Let’s walk through a basic query to get the count of all records in our table. First, we’ll need to change the Since filter to capture the range of our data. You can use simple phrases to apply these filters, like “3 years ago”:


The upper limit for time, the Until filter, defaults to “now”, which may or may not be what you want. Look for the Metrics section under the GROUP BY header, and start typing “Count” - you’ll see a list of metrics matching what you type:

Select the COUNT(*) metric, then click the green Query button near the top of the explore:

You’ll see your results in the table:

Let’s group this by the weather_description field to get the count of records by the type of weather recorded by adding it to the Group by section:

and run the query:


Let’s find a more useful data point: the top 10 times and places that recorded the highest temperature in 2015. We replace weather_description with latitude, longitude and measurement_date in the Group by section:

And replace COUNT(*) with max__measurement_flag:

The max__measurement_flag metric was created when we checked the box under Max and next to the measurement_flag field, indicating that this field was numeric and that we wanted to find its maximum value when grouped by specific fields. In our case, measurement_flag is the value of the measurement taken, which clearly depends on the type of measurement (the researchers recorded different values for precipitation and temperature). Therefore, we must filter our query only on records where the weather_description is equal to "Maximum temperature", which we do in the Filters section at the bottom of the explore:

Finally, since we only care about the top 10 measurements, we limit our results to 10 records using the Row limit option under the Options header:

We click Query and get the following results:


In this dataset, the maximum temperature is recorded in tenths of a degree Celsius. The top value of 1370, measured in the middle of Nevada, is equal to 137 C, or roughly 278 degrees F. It's unlikely this value was correctly recorded. We've already been able to investigate some outliers with Superset, but this just scratches the surface of what we can do. You may want to do a couple more things with this measure:
• The default formatting shows values like 1.37k, which may be difficult for some users to read. It's likely you may want to see the full, comma-separated value. You can change the formatting of any measure by editing its config (Edit Table Config > List Sql Metric > Edit Metric > D3Format)
• Moreover, you may want to see the temperature measurements in plain degrees C, not tenths of a degree. Or you may want to convert the temperature to degrees Fahrenheit. You can change the SQL that gets executed against the database, baking the logic into the measure itself (Edit Table Config > List Sql Metric > Edit Metric > SQL Expression)

For now, though, let's create a better visualization of these data and add it to a dashboard. We change the Chart Type to "Distribution - Bar Chart":

Our filter on Maximum temperature measurements was retained, but the query and formatting options are dependent on the chart type, so you’ll have to set the values again:


You should note the extensive formatting options for this chart: the ability to set axis labels, margins, ticks, etc. To make the data presentable to a broad audience, you’ll want to apply many of these to slices that end up in dashboards. For now, though, we run our query and get the following chart:

Creating a slice and dashboard

This view might be interesting to researchers, so let’s save it. In Superset, a saved query is called a Slice. To create a slice, click the Save as button near the top-left of the explore:


A popup should appear, asking you to name the slice, and optionally add it to a dashboard. Since we haven’t yet created any dashboards, we can create one and immediately add our slice to it. Let’s do it:

Click Save, which will direct you back to your original query. We see that our slice and dashboard were successfully created:

Let’s check out our new dashboard. We click on the Dashboards menu:

and find the dashboard we just created:


Things seem to have worked - our slice is here!

But it’s a bit smaller than we might like. Luckily, you can adjust the size of slices in a dashboard by clicking, holding and dragging the bottom-right corner to your desired dimensions:

After adjusting the size, you’ll be asked to click on the icon near the top-right of the dashboard to save the new configuration. Congrats! You’ve successfully linked, analyzed, and visualized data in Superset. There are a wealth of other table configuration and visualization options, so please start exploring and creating slices and dashboards of your own.

3.4.3 Security

Security in Superset is handled by Flask AppBuilder (FAB). FAB is a “Simple and rapid application development framework, built on top of Flask.”. FAB provides authentication, user management, permissions and roles. Please read its Security documentation.


Provided Roles

Superset ships with a set of roles that are handled by Superset itself. You can assume that these roles will stay up-to-date as Superset evolves. Even though it's possible for Admin users to do so, it is not recommended that you alter these roles in any way by removing or adding permissions to them, as these roles will be re-synchronized to their original values as you run your next superset init command. Since it's not recommended to alter the roles described here, it's right to assume that your security strategy should be to compose user access based on these base roles and roles that you create. For instance, you could create a role Financial Analyst that would be made of a set of permissions to a set of data sources (tables) and/or databases. Users would then be granted Gamma, Financial Analyst, and perhaps sql_lab.

Admin

Admins have all possible rights, including granting or revoking rights from other users and altering other people’s slices and dashboards.

Alpha

Alpha users have access to all data sources, but they cannot grant or revoke access from other users. They are also limited to altering the objects that they own. Alpha users can add and alter data sources.

Gamma

Gamma users have limited access. They can only consume data coming from data sources they have been given access to through another complementary role. They only have access to view the slices and dashboards made from data sources that they have access to. Currently Gamma users are not able to alter or add data sources. We assume that they are mostly content consumers, though they can create slices and dashboards. Also note that when Gamma users look at the dashboards and slices list view, they will only see the objects that they have access to.

sql_lab

The sql_lab role grants access to SQL Lab. Note that while Admin users have access to all databases by default, both Alpha and Gamma users need to be given access on a per database basis.

Public

It’s possible to allow logged out users to access some Superset features. By setting PUBLIC_ROLE_LIKE_GAMMA = True in your superset_config.py, you grant public role the same set of permissions as for the GAMMA role. This is useful if one wants to enable anonymous users to view dashboards. Explicit grant on specific datasets is still required, meaning that you need to edit the Public role and add the Public data sources to the role manually.
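In superset_config.py this is a single setting:

# Give anonymous (logged out) users the same permissions as the Gamma role
PUBLIC_ROLE_LIKE_GAMMA = True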


Managing Gamma per data source access

Here’s how to provide users access to only specific datasets. First make sure the users with limited access have [only] the Gamma role assigned to them. Second, create a new role (Menu -> Security -> List Roles) and click the + sign.

This new window allows you to give this new role a name, attribute it to users and select the tables in the Permissions dropdown. To select the data sources you want to associate with this role, simply click on the dropdown and use the typeahead to search for your table names. You can then confirm with your Gamma users that they see the objects (dashboards and slices) associated with the tables related to their roles.

Customizing

The permissions exposed by FAB are very granular and allow for a great level of customization. FAB creates many permissions automagically for each model that is created (can_add, can_delete, can_show, can_edit, . . . ) as well as for each view. On top of that, Superset can expose more granular permissions like all_datasource_access. We do not recommend altering the 3 base roles as there are a set of assumptions that Superset is built upon. It is possible though for you to create your own roles, and union them to existing ones.

Permissions

Roles are composed of a set of permissions, and Superset has many categories of permissions. Here are the different categories of permissions:
• Model & action: models are entities like Dashboard, Slice, or User. Each model has a fixed set of permissions, like can_edit, can_show, can_delete, can_list, can_add, and so on. By adding can_delete on Dashboard to a role, and granting that role to a user, this user will be able to delete dashboards.
• Views: views are individual web pages, like the explore view or the SQL Lab view. When granted to a user, he/she will see that view in its menu items, and be able to load that page.
• Data source: For each data source, a permission is created. If the user does not have the all_datasource_access permission granted, the user will only be able to see Slices or explore the data sources that are granted to them.
• Database: Granting access to a database allows the user to access all data sources within that database, and will enable the user to query that database in SQL Lab, provided that the SQL Lab specific permissions have been granted to the user.


Restricting access to a subset of data sources

The best way to go is probably to give users Gamma plus one or many other roles that add access to specific data sources. We recommend that you create individual roles for each access profile. Say people in your finance department might have access to a set of databases and data sources; these permissions can be consolidated in a single role. Users with this profile then need to be attributed Gamma as a foundation to the models and views they can access, and that Finance role that is a collection of permissions to data objects. One user can have many roles, so a finance executive could be granted Gamma, Finance, and perhaps another Executive role that gathers a set of data sources that power dashboards only made available to executives. When looking at their dashboard list, this user will only see the list of dashboards they have access to, based on the roles and permissions that were attributed.

3.4.4 SQL Lab

SQL Lab is a modern, feature-rich SQL IDE written in React.

Feature Overview

• Connects to just about any database backend
• A multi-tab environment to work on multiple queries at a time
• A smooth flow to visualize your query results using Superset’s rich visualization capabilities
• Browse database metadata: tables, columns, indexes, partitions
• Support for long-running queries
  – uses the Celery distributed queue to dispatch query handling to workers
  – supports defining a “results backend” to persist query results
• A search engine to find queries executed in the past
• Supports templating using the Jinja templating language which allows for using macros in your SQL code

Extra features

• Hit alt + enter as a keyboard shortcut to run your query

Templating with Jinja

SELECT *
FROM some_table
WHERE partition_key = '{{ presto.first_latest_partition('some_table') }}'

Templating unleashes the power and capabilities of a programming language within your SQL code. Templates can also be used to write generic queries that are parameterized so they can be re-used easily.
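To illustrate what templating does conceptually, here is a small standalone Python sketch using plain Jinja (this is not Superset’s internal rendering code; the table and parameter names are made up):

from jinja2 import Template

# A parameterized query similar to what you might write in SQL Lab.
sql = Template("SELECT * FROM logs WHERE ds = '{{ ds }}' LIMIT {{ row_limit }}")
print(sql.render(ds='2019-12-05', row_limit=100))
# -> SELECT * FROM logs WHERE ds = '2019-12-05' LIMIT 100

In SQL Lab, the rendering happens server-side before the query is sent to your database.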

Available macros

We expose certain modules from Python’s standard library in Superset’s Jinja context:
• time: time
• datetime: datetime.datetime
• uuid: uuid
• random: random
• relativedelta: dateutil.relativedelta.relativedelta
Jinja’s builtin filters can also be applied where needed.

Extending macros

As mentioned in the Installation & Configuration documentation, it’s possible for administrators to expose additional macros in their environment using the configuration variable JINJA_CONTEXT_ADDONS. All objects referenced in this dictionary will become available for users to integrate in their queries in SQL Lab.
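As a rough sketch, a superset_config.py could expose an extra macro like the following (JINJA_CONTEXT_ADDONS is the variable mentioned above; the helper function and its name are purely illustrative):

# superset_config.py
def price_threshold(region):
    # Hypothetical business logic used only for illustration.
    return 100 if region == 'EMEA' else 50

JINJA_CONTEXT_ADDONS = {
    'price_threshold': price_threshold,
}

A SQL Lab query could then reference {{ price_threshold('EMEA') }} and have it rendered before execution.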

Query cost estimation

Some databases support EXPLAIN queries that allow users to estimate the cost of a query before executing it. Currently, Presto is supported in SQL Lab. To enable query cost estimation, add the following keys to the "Extra" field in the database configuration:

{ "version": "0.319", "cost_estimate_enabled": true, ... }


Here, “version” should be the version of your Presto cluster. Support for this functionality was introduced in Presto 0.319.

Create Table As (CTAS)

You can use CREATE TABLE AS SELECT ... statements in SQL Lab. This feature can be toggled on and off at the database configuration level. Note that CREATE TABLE ... belongs to the SQL DDL category. On PostgreSQL specifically, DDL is transactional, which means that to properly use this feature you have to set autocommit to true in your engine parameters:

{ ... "engine_params": {"isolation_level":"AUTOCOMMIT"}, ... }

3.4.5 Visualizations Gallery

(Gallery of screenshots showcasing Superset’s visualization types.)

3.4.6 Druid

Superset has a native connector to Druid and a majority of Druid’s features are accessible through Superset.

Note: Druid now supports SQL and can be accessed through Superset’s SQLAlchemy connector. The long-term vision is to deprecate the Druid native REST connector and query Druid exclusively through the SQL interface.

Aggregations

Common aggregations or Druid metrics can be defined and used in Superset. The first and simplest use case is to use the checkbox matrix exposed in your datasource’s edit view (Sources -> Druid Datasources -> [your datasource] -> Edit -> [tab] List Druid Column). Clicking the GroupBy and Filterable checkboxes will make the column appear in the related dropdowns while in explore view. Checking Count Distinct, Min, Max or Sum will result in creating new metrics that will appear in the List Druid Metric tab upon saving the datasource. By editing these metrics, you’ll notice that their json element corresponds to the Druid aggregation definition. You can create your own aggregations manually from the List Druid Metric tab following the Druid documentation.


Post-Aggregations

Druid supports post aggregation and this works in Superset. All you have to do is create a metric, much like you would create an aggregation manually, but specify postagg as a Metric Type. You then have to provide a valid json post-aggregation definition (as specified in the Druid docs) in the Json field.

Unsupported Features

Note: Unclear at this point, this section of the documentation could use some input.

3.4.7 Misc

Visualization Tools

The data is visualized via slices. These slices are visual components built with D3.js. Some components accept optional or required inputs.

Country Map Tools

This tool is used in slices to visualize numeric or string values by region, province, or department of a country. To use it, you need the ISO 3166-2 codes of the regions, provinces, or departments.


ISO 3166-2 is part of the ISO 3166 standard published by the International Organization for Standardization (ISO), and defines codes for identifying the principal subdivisions (e.g., provinces or states) of all countries coded in ISO 3166-1. The purpose of ISO 3166-2 is to establish an international standard of short and unique alphanumeric codes to represent the relevant administrative divisions and dependent territories of all countries in a more convenient and less ambiguous form than their full names. Each complete ISO 3166-2 code consists of two parts, separated by a hyphen: the first part is the ISO 3166-1 alpha-2 code of the country; the second part is a string of up to three alphanumeric characters, which is usually obtained from national sources and stems from coding systems already in use in the country concerned, but may also be developed by the ISO itself.

List of Countries

• Belgium

ISO Name of region BE-BRU Bruxelles BE-VAN Antwerpen BE-VLI Limburg BE-VOV Oost-Vlaanderen BE-VBR Vlaams Brabant BE-VWV West-Vlaanderen BE-WBR Brabant Wallon BE-WHT Hainaut BE-WLG Liège BE-VLI Limburg BE-WLX Luxembourg BE-WNA Namur

• Brazil


ISO Name of region BR-AC Acre BR-AL Alagoas BR-AP Amapá BR-AM Amazonas BR-BA Bahia BR-CE Ceará BR-DF Distrito Federal BR-ES Espírito Santo BR-GO Goiás BR-MA Maranhão BR-MS Mato Grosso do Sul BR-MT Mato Grosso BR-MG Minas Gerais BR-PA Pará BR-PB Paraíba BR-PR Paraná BR-PE Pernambuco BR-PI Piauí BR-RJ Rio de Janeiro BR-RN Rio Grande do Norte BR-RS Rio Grande do Sul BR-RO Rondônia BR-RR Roraima BR-SP São Paulo BR-SC Santa Catarina BR-SE Sergipe BR-TO Tocantins

• China

ISO Name of region CN-34 Anhui CN-11 Beijing CN-50 Chongqing CN-35 Fujian CN-62 Gansu CN-44 Guangdong CN-45 Guangxi CN-52 Guizhou CN-46 Hainan CN-13 Hebei CN-23 Heilongjiang CN-41 Henan CN-42 Hubei CN-43 Hunan CN-32 Jiangsu CN-36 Jiangxi CN-22 Jilin CN-21 Liaoning CN-15 Nei Mongol Continued on next page


Table 1 – continued from previous page ISO Name of region CN-64 Ningxia Hui CN-63 Qinghai CN-61 Shaanxi CN-37 Shandong CN-31 Shanghai CN-14 Shanxi CN-51 Sichuan CN-12 Tianjin CN-65 Xinjiang Uygur CN-54 Xizang CN-53 Yunnan CN-33 Zhejiang CN-71 Taiwan CN-91 Hong Kong CN-92 Macao

• Egypt

ISO Name of region EG-DK Ad Daqahliyah EG-BA Al Bahr al Ahmar EG-BH Al Buhayrah EG-FYM Al Fayyum EG-GH Al Gharbiyah EG-ALX Al Iskandariyah EG-IS Al Isma iliyah EG-GZ Al Jizah EG-MNF Al Minufiyah EG-MN Al Minya EG-C Al Qahirah EG-KB Al Qalyubiyah EG-LX Al Uqsur EG-WAD Al Wadi al Jadid EG-SUZ As Suways EG-SHR Ash Sharqiyah EG-ASN Aswan EG-AST Asyut EG-BNS Bani Suwayf EG-PTS Bur Sa id EG-DT Dumyat EG-JS Janub Sina’ EG-KFS Kafr ash Shaykh EG-MT Matrouh EG-KN Qina EG-SIN Shamal Sina’ EG-SHG Suhaj

• France


ISO Name of region FR-67 Bas-Rhin FR-68 Haut-Rhin FR-24 Dordogne FR-33 Gironde FR-40 Landes FR-47 Lot-et-Garonne FR-64 Pyrénées-Atlantiques FR-03 Allier FR-15 Cantal FR-43 Haute-Loire FR-63 Puy-de-Dôme FR-91 Essonne FR-92 Hauts-de-Seine FR-75 Paris FR-77 Seine-et-Marne FR-93 Seine-Saint-Denis FR-95 Val-d’Oise FR-94 Val-de-Marne FR-78 Yvelines FR-14 Calvados FR-50 Manche FR-61 Orne FR-21 Côte-d’Or FR-58 Nièvre FR-71 Saône-et-Loire FR-89 Yonne FR-22 Côtes-d’Armor FR-29 Finistère FR-35 Ille-et-Vilaine FR-56 Morbihan FR-18 Cher FR-28 Eure-et-Loir FR-37 Indre-et-Loire FR-36 Indre FR-41 Loir-et-Cher FR-45 Loiret FR-08 Ardennes FR-10 Aube FR-52 Haute-Marne FR-51 Marne FR-2A Corse-du-Sud FR-2B Haute-Corse FR-25 Doubs FR-70 Haute-Saône FR-39 Jura FR-90 Territoire de Belfort FR-27 Eure FR-76 Seine-Maritime FR-11 Aude FR-30 Gard Continued on next page


Table 2 – continued from previous page ISO Name of region FR-34 Hérault FR-48 Lozère FR-66 Pyrénées-Orientales FR-19 Corrèze FR-23 Creuse FR-87 Haute-Vienne FR-54 Meurthe-et-Moselle FR-55 Meuse FR-57 Moselle FR-88 Vosges FR-09 Ariège FR-12 Aveyron FR-32 Gers FR-31 Haute-Garonne FR-65 Hautes-Pyrénées FR-46 Lot FR-82 Tarn-et-Garonne FR-81 Tarn FR-59 Nord FR-62 Pas-de-Calais FR-44 Loire-Atlantique FR-49 Maine-et-Loire FR-53 Mayenne FR-72 Sarthe FR-85 Vendée FR-02 Aisne FR-60 Oise FR-80 Somme FR-17 Charente-Maritime FR-16 Charente FR-79 Deux-Sèvres FR-86 Vienne FR-04 Alpes-de-Haute-Provence FR-06 Alpes-Maritimes FR-13 Bouches-du-Rhône FR-05 Hautes-Alpes FR-83 Var FR-84 Vaucluse FR-01 Ain FR-07 Ardèche FR-26 Drôme FR-74 Haute-Savoie FR-38 Isère FR-42 Loire FR-69 Rhône FR-73 Savoie

• Germany


ISO Name of region DE-BW Baden-Württemberg DE-BY Bayern DE-BE Berlin DE-BB Brandenburg DE-HB Bremen DE-HH Hamburg DE-HE Hessen DE-MV Mecklenburg-Vorpommern DE-NI Niedersachsen DE-NW Nordrhein-Westfalen DE-RP Rheinland-Pfalz DE-SL Saarland DE-ST Sachsen-Anhalt DE-SN Sachsen DE-SH Schleswig-Holstein DE-TH Thüringen

• Italy

ISO Name of region IT-CH Chieti IT-AQ L’Aquila IT-PE Pescara IT-TE Teramo IT-BA Bari IT-BT Barletta-Andria-Trani IT-BR Brindisi IT-FG Foggia IT-LE Lecce IT-TA Taranto IT-MT Matera IT-PZ Potenza IT-CZ Catanzaro IT-CS Cosenza IT-KR Crotone IT-RC Reggio Di Calabria IT-VV Vibo Valentia IT-AV Avellino IT-BN Benevento IT-CE Caserta IT-NA Napoli IT-SA Salerno IT-BO Bologna IT-FE Ferrara IT-FC Forli’ - Cesena IT-MO Modena IT-PR Parma IT-PC Piacenza IT-RA Ravenna IT-RE Reggio Nell’Emilia Continued on next page


Table 3 – continued from previous page ISO Name of region IT-RN Rimini IT-GO Gorizia IT-PN Pordenone IT-TS Trieste IT-UD Udine IT-FR Frosinone IT-LT Latina IT-RI Rieti IT-RM Roma IT-VT Viterbo IT-GE Genova IT-IM Imperia IT-SP La Spezia IT-SV Savona IT-BG Bergamo IT-BS Brescia IT-CO Como IT-CR Cremona IT-LC Lecco IT-LO Lodi IT-MN Mantua IT-MI Milano IT-MB Monza and Brianza IT-PV Pavia IT-SO Sondrio IT-VA Varese IT-AN Ancona IT-AP Ascoli Piceno IT-FM Fermo IT-MC Macerata IT-PU Pesaro E Urbino IT-CB Campobasso IT-IS Isernia IT-AL Alessandria IT-AT Asti IT-BI Biella IT-CN Cuneo IT-NO Novara IT-TO Torino IT-VB Verbano-Cusio-Ossola IT-VC Vercelli IT-CA Cagliari IT-CI Carbonia-Iglesias IT-VS Medio Campidano IT-NU Nuoro IT-OG Ogliastra IT-OT Olbia-Tempio IT-OR Oristano IT-SS Sassari Continued on next page


Table 3 – continued from previous page ISO Name of region IT-AG Agrigento IT-CL Caltanissetta IT-CT Catania IT-EN Enna IT-ME Messina IT-PA Palermo IT-RG Ragusa IT-SR Syracuse IT-TP Trapani IT-AR Arezzo IT-FI Florence IT-GR Grosseto IT-LI Livorno IT-LU Lucca IT-MS Massa Carrara IT-PI Pisa IT-PT Pistoia IT-PO Prato IT-SI Siena IT-BZ Bolzano IT-TN Trento IT-PG Perugia IT-TR Terni IT-AO Aosta IT-BL Belluno IT-PD Padua IT-RO Rovigo IT-TV Treviso IT-VE Venezia IT-VR Verona IT-VI Vicenza

• Japan

ISO Name of region JP-01 Hokkaido JP-02 Aomori JP-03 Iwate JP-04 Miyagi JP-05 Akita JP-06 Yamagata JP-07 Fukushima JP-08 Ibaraki JP-09 Tochigi JP-10 Gunma JP-11 Saitama JP-12 Chiba JP-13 Tokyo JP-14 Kanagawa Continued on next page


Table 4 – continued from previous page ISO Name of region JP-15 Niigata JP-16 Toyama JP-17 Ishikawa JP-18 Fukui JP-19 Yamanashi JP-20 Nagano JP-21 Gifu JP-22 Shizuoka JP-23 Aichi JP-24 Mie JP-25 Shiga JP-26 Kyoto JP-27 Osaka JP-28 Hyogo JP-29 Nara JP-30 Wakayama JP-31 Tottori JP-32 Shimane JP-33 Okayama JP-34 Hiroshima JP-35 Yamaguchi JP-36 Tokushima JP-37 Kagawa JP-38 Ehime JP-39 Kochi JP-40 Fukuoka JP-41 Saga JP-42 Nagasaki JP-43 Kumamoto JP-44 Oita JP-45 Miyazaki JP-46 Kagoshima JP-47 Okinawa

• Morocco

ISO Name of region MA-BES Ben Slimane MA-KHO Khouribga MA-SET Settat MA-JDI El Jadida MA-SAF Safi MA-BOM Boulemane MA-FES Fès MA-SEF Sefrou MA-MOU Zouagha-Moulay Yacoub MA-KEN Kénitra MA-SIK Sidi Kacem MA-CAS Casablanca Continued on next page


Table 5 – continued from previous page ISO Name of region MA-MOH Mohammedia MA-ASZ Assa-Zag MA-GUE Guelmim MA-TNT Tan-Tan MA-TAT Tata MA-LAA Laâyoune MA-HAO Al Haouz MA-CHI Chichaoua MA-KES El Kelaâ des Sraghna MA-ESI Essaouira MA-MMD Marrakech MA-HAJ El Hajeb MA-ERR Errachidia MA-IFR Ifrane MA-KHN Khénifra MA-MEK Meknès MA-BER Berkane Taourirt MA-FIG Figuig MA-JRA Jerada MA-NAD Nador MA-OUJ Oujda Angad MA-KHE Khémisset MA-RAB Rabat MA-SAL Salé MA-SKH Skhirate-Témara MA-AGD Agadir-Ida ou Tanane MA-CHT Chtouka-Aït Baha MA-INE Inezgane-Aït Melloul MA-OUA Ouarzazate MA-TAR Taroudannt MA-TIZ Tiznit MA-ZAG Zagora MA-AZI Azilal MA-BEM Béni Mellal MA-CHE Chefchaouen MA-FAH Fahs Anjra MA-LAR Larache MA-TET Tétouan MA-TNG Tanger-Assilah MA-HOC Al Hoceïma MA-TAO Taounate MA-TAZ Taza

• Netherlands


ISO Name of region NL-DR Drenthe NL-FL Flevoland NL-FR Friesland NL-GE Gelderland NL-GR Groningen NL-YS IJsselmeer NL-LI Limburg NL-NB Noord-Brabant NL-NH Noord-Holland NL-OV Overijssel NL-UT Utrecht NL-ZE Zeeland NL-ZM Zeeuwse meren NL-ZH Zuid-Holland

• Russia

ISO Name of region RU-AD Adygey RU-ALT Altay RU-AMU Amur RU-ARK Arkhangel’sk RU-AST Astrakhan’ RU-BA Bashkortostan RU-BEL Belgorod RU-BRY Bryansk RU-BU Buryat RU-CE Chechnya RU-CHE Chelyabinsk RU-CHU Chukot RU-CU Chuvash RU-SPE City of St. Petersburg RU-DA Dagestan RU-AL Gorno-Altay RU-IN Ingush RU-IRK Irkutsk RU-IVA Ivanovo RU-KB Kabardin-Balkar RU-KGD Kaliningrad RU-KL Kalmyk RU-KLU Kaluga RU-KAM Kamchatka RU-KC Karachay-Cherkess RU-KR Karelia RU-KEM Kemerovo RU-KHA Khabarovsk RU-KK Khakass RU-KHM Khanty-Mansiy RU-KIR Kirov RU-KO Komi Continued on next page


Table 6 – continued from previous page ISO Name of region RU-KOS Kostroma RU-KDA Krasnodar RU-KYA Krasnoyarsk RU-KGN Kurgan RU-KRS Kursk RU-LEN Leningrad RU-LIP Lipetsk RU-MAG Maga Buryatdan RU-ME Mariy-El RU-MO Mordovia RU-MOW Moscow City RU-MOS Moskva RU-MUR Murmansk RU-NEN Nenets RU-NIZ Nizhegorod RU-SE North Ossetia RU-NGR Novgorod RU-NVS Novosibirsk RU-OMS Omsk RU-ORL Orel RU-ORE Orenburg RU-PNZ Penza RU-PER Perm’ RU-PRI Primor’ye RU-PSK Pskov RU-ROS Rostov RU-RYA Ryazan’ RU-SAK Sakhalin RU-SA Sakha RU-SAM Samara RU-SAR Saratov RU-SMO Smolensk RU-STA Stavropol’ RU-SVE Sverdlovsk RU-TAM Tambov RU-TA Tatarstan RU-TOM Tomsk RU-TUL Tula RU-TY Tuva RU-TVE Tver’ RU-TYU Tyumen’ RU-UD Udmurt RU-ULY Ul’yanovsk RU-VLA Vladimir RU-VGG Volgograd RU-VLG Vologda RU-VOR Voronezh RU-YAN Yamal-Nenets RU-YAR Yaroslavl’ Continued on next page


Table 6 – continued from previous page ISO Name of region RU-YEV Yevrey RU-ZAB Zabaykal’ye

• Singapore

Id Name of region 205 Singapore

• Spain

ISO Name of region ES-AL Almería ES-CA Cádiz ES-CO Córdoba ES-GR Granada ES-H Huelva ES-J Jaén ES-MA Málaga ES-SE Sevilla ES-HU Huesca ES-TE Teruel ES-Z Zaragoza ES-S3 Cantabria ES-AB Albacete ES-CR Ciudad Real ES-CU Cuenca ES-GU Guadalajara ES-TO Toledo ES-AV Ávila ES-BU Burgos ES-LE León ES-P Palencia ES-SA Salamanca ES-SG Segovia ES-SO Soria ES-VA Valladolid ES-ZA Zamora ES-B Barcelona ES-GI Girona ES-L Lleida ES-T Tarragona ES-CE Ceuta ES-ML Melilla ES-M5 Madrid ES-NA7 Navarra ES-A Alicante ES-CS Castellón ES-V Valencia ES-BA Badajoz ES-CC Cáceres Continued on next page


Table 7 – continued from previous page ISO Name of region ES-C A Coruña ES-LU Lugo ES-OR Ourense ES-PO Pontevedra ES-PM Baleares ES-GC Las Palmas ES-TF Santa Cruz de Tenerife ES-LO4 La Rioja ES-VI Álava ES-SS Guipúzcoa ES-BI Vizcaya ES-O2 Asturias ES-MU6 Murcia

• Uk

ISO Name of region GB-BDG Barking and Dagenham GB-BAS Bath and North East Somerset GB-BDF Bedfordshire GB-WBK Berkshire GB-BEX Bexley GB-BBD Blackburn with Darwen GB-BMH Bournemouth GB-BEN Brent GB-BNH Brighton and Hove GB-BST Bristol GB-BRY Bromley GB-BKM Buckinghamshire GB-CAM Cambridgeshire GB-CMD Camden GB-CHS Cheshire GB-CON Cornwall GB-CRY Croydon GB-CMA Cumbria GB-DAL Darlington GB-DBY Derbyshire GB-DER Derby GB-DEV Devon GB-DOR Dorset GB-DUR Durham GB-EAL Ealing GB-ERY East Riding of Yorkshire GB-ESX East Sussex GB-ENF Enfield GB-ESS Essex GB-GLS Gloucestershire GB-GRE Greenwich GB-HCK Hackney Continued on next page


Table 8 – continued from previous page ISO Name of region GB-HAL Halton GB-HMF Hammersmith and Fulham GB-HAM Hampshire GB-HRY Haringey GB-HRW Harrow GB-HPL Hartlepool GB-HAV Havering GB-HRT Herefordshire GB-HEF Hertfordshire GB-HIL Hillingdon GB-HNS Hounslow GB-IOW Isle of Wight GB-ISL Islington GB-KEC Kensington and Chelsea GB-KEN Kent GB-KHL Kingston upon Hull GB-KTT Kingston upon Thames GB-LBH Lambeth GB-LAN Lancashire GB-LEC Leicestershire GB-LCE Leicester GB-LEW Lewisham GB-LIN Lincolnshire GB-LND London GB-LUT Luton GB-MAN Manchester GB-MDW Medway GB-MER Merseyside GB-MRT Merton GB-MDB Middlesbrough GB-MIK Milton Keynes GB-NWM Newham GB-NFK Norfolk GB-NEL North East Lincolnshire GB-NLN North Lincolnshire GB-NSM North Somerset GB-NYK North Yorkshire GB-NTH Northamptonshire GB-NBL Northumberland GB-NTT Nottinghamshire GB-NGM Nottingham GB-OXF Oxfordshire GB-PTE Peterborough GB-PLY Plymouth GB-POL Poole GB-POR Portsmouth GB-RDB Redbridge GB-RCC Redcar and Cleveland GB-RIC Richmond upon Thames Continued on next page


Table 8 – continued from previous page ISO Name of region GB-RUT Rutland GB-SHR Shropshire GB-SOM Somerset GB-SGC South Gloucestershire GB-SY South Yorkshire GB-STH Southampton GB-SOS Southend-on-Sea GB-SWK Southwark GB-STS Staffordshire GB-STT Stockton-on-Tees GB-STE Stoke-on-Trent GB-SFK Suffolk GB-SRY Surrey GB-STN Sutton GB-SWD Swindon GB-TFW Telford and Wrekin GB-THR Thurrock GB-TOB Torbay GB-TWH Tower Hamlets GB-TAW Tyne and Wear GB-WFT Waltham Forest GB-WND Wandsworth GB-WRT Warrington GB-WAR Warwickshire GB-WM West Midlands GB-WSX West Sussex GB-WY West Yorkshire GB-WSM Westminster GB-WIL Wiltshire GB-WOR Worcestershire GB-YOR York GB-ANT Antrim GB-ARD Ards GB-ARM Armagh GB-BLA Ballymena GB-BLY Ballymoney GB-BNB Banbridge GB-BFS Belfast GB-CKF Carrickfergus GB-CSR Castlereagh GB-CLR Coleraine GB-CKT Cookstown GB-CGV Craigavon GB-DRY Derry GB-DOW Down GB-DGN Dungannon GB-FER Fermanagh GB-LRN Larne GB-LMV Limavady Continued on next page


Table 8 – continued from previous page ISO Name of region GB-LSB Lisburn GB-MFT Magherafelt GB-MYL Moyle GB-NYM Newry and Mourne GB-NTA Newtownabbey GB-NDN North Down GB-OMH Omagh GB-STB Strabane GB-ABD Aberdeenshire GB-ABE Aberdeen GB-ANS Angus GB-AGB Argyll and Bute GB-CLK Clackmannanshire GB-DGY Dumfries and Galloway GB-DND Dundee GB-EAY East Ayrshire GB-EDU East Dunbartonshire GB-ELN East Lothian GB-ERW East Renfrewshire GB-EDH Edinburgh GB-ELS Eilean Siar GB-FAL Falkirk GB-FIF Fife GB-GLG Glasgow GB-HLD Highland GB-IVC Inverclyde GB-MLN Midlothian GB-MRY Moray GB-NAY North Ayrshire GB-NLK North Lanarkshire GB-ORK Orkney Islands GB-PKN Perthshire and Kinross GB-RFW Renfrewshire GB-SCB Scottish Borders GB-ZET Shetland Islands GB-SAY South Ayrshire GB-SLK South Lanarkshire GB-STG Stirling GB-WDU West Dunbartonshire GB-WLN West Lothian GB-AGY Anglesey GB-BGW Blaenau Gwent GB-BGE Bridgend GB-CAY Caerphilly GB-CRF Cardiff GB-CMN Carmarthenshire GB-CGN Ceredigion GB-CWY Conwy GB-DEN Denbighshire Continued on next page


Table 8 – continued from previous page ISO Name of region GB-FLN Flintshire GB-GWN Gwynedd GB-MTY Merthyr Tydfil GB-MON Monmouthshire GB-NTL Neath Port Talbot GB-NWP Newport GB-PEM Pembrokeshire GB-POW Powys GB-RCT Rhondda GB-SWA Swansea GB-TOF Torfaen GB-VGL Vale of Glamorgan GB-WRX Wrexham

• Ukraine

ISO Name of region UA-71 Cherkasy UA-74 Chernihiv UA-77 Chernivtsi UA-43 Crimea UA-12 Dnipropetrovs’k UA-14 Donets’k UA-26 Ivano-Frankivs’k UA-63 Kharkiv UA-65 Kherson UA-68 Khmel’nyts’kyy UA-30 Kiev City UA-32 Kiev UA-35 Kirovohrad UA-46 L’viv UA-09 Luhans’k UA-48 Mykolayiv UA-51 Odessa UA-53 Poltava UA-56 Rivne UA-40 Sevastopol’ UA-59 Sumy UA-61 Ternopil’ UA-21 Transcarpathia UA-05 Vinnytsya UA-07 Volyn UA-23 Zaporizhzhya UA-18 Zhytomyr

• Usa

ISO Name of region US-AL Alabama US-AK Alaska Continued on next page


Table 9 – continued from previous page ISO Name of region US-AK Alaska US-AZ Arizona US-AR Arkansas US-CA California US-CO Colorado US-CT Connecticut US-DE Delaware US-DC District of Columbia US-FL Florida US-GA Georgia US-HI Hawaii US-ID Idaho US-IL Illinois US-IN Indiana US-IA Iowa US-KS Kansas US-KY Kentucky US-LA Louisiana US-ME Maine US-MD Maryland US-MA Massachusetts US-MI Michigan US-MN Minnesota US-MS Mississippi US-MO Missouri US-MT Montana US-NE Nebraska US-NV Nevada US-NH New Hampshire US-NJ New Jersey US-NM New Mexico US-NY New York US-NC North Carolina US-ND North Dakota US-OH Ohio US-OK Oklahoma US-OR Oregon US-PA Pennsylvania US-RI Rhode Island US-SC South Carolina US-SD South Dakota US-TN Tennessee US-TX Texas US-UT Utah US-VT Vermont US-VA Virginia US-WA Washington US-WV West Virginia US-WI Wisconsin Continued on next page


Table 9 – continued from previous page ISO Name of region US-WY Wyoming

Need to add a new Country?

To add a new country to the country map tool, follow these steps:
1. You need shapefiles that contain the data for your map. You can get these files from this site: https://www.diva-gis.org/gdata
2. You need to add the ISO 3166-2 code, in a column named ISO, for every record in your file. This is important because it is the key used to map your data to the geojson file.
3. You need to convert the shapefile to a geojson file. This can be done with the ogr2ogr tool: https://www.gdal.org/ogr2ogr.html
4. Put your geojson file in the folder superset/assets/src/visualizations/CountryMap/countries with the name nameofyourcountries.geojson
5. You can reduce the size of the geojson file using this site: https://mapshaper.org/
6. Go to the file superset/assets/src/explore/controls.jsx
7. Add your country to the select_country component. Example:

select_country: {
  type: 'SelectControl',
  label: 'Country Name Type',
  default: 'France',
  choices: [
    'Belgium',
    'Brazil',
    'China',
    'Egypt',
    'France',
    'Germany',
    'Italy',
    'Japan',
    'Morocco',
    'Netherlands',
    'Russia',
    'Singapore',
    'Spain',
    'Uk',
    'Usa',
  ].map(s => [s, s]),
  description: 'The name of country that Superset should display',
},

Videos

Note: This section of the documentation has yet to be filled in.


Importing and Exporting Datasources

The superset CLI allows you to import and export datasources from and to YAML. Datasources include both databases and Druid clusters. The data is expected to be organized in the following hierarchy:

.
databases
| database_1
| | table_1
| | | columns
| | | | column_1
| | | | column_2
| | | | ... (more columns)
| | | metrics
| | | metric_1
| | | metric_2
| | | ... (more metrics)
| | ... (more tables)
| ... (more databases)
druid_clusters
cluster_1
| datasource_1
| | columns
| | | column_1
| | | column_2
| | | ... (more columns)
| | metrics
| | metric_1
| | metric_2
| | ... (more metrics)
| ... (more datasources)
... (more clusters)

Exporting Datasources to YAML

You can print your current datasources to stdout by running:

superset export_datasources

To save your datasources to a file, run:

superset export_datasources -f <filename>

By default, default (null) values will be omitted. Use the -d flag to include them. If you want back references to be included (e.g. a column to include the table id it belongs to), use the -b flag.
Alternatively, you can export datasources using the UI:
1. Open Sources -> Databases to export all tables associated with a single or multiple databases. (Tables for one or more tables, Druid Clusters for clusters, Druid Datasources for datasources)
2. Select the items you would like to export
3. Click Actions -> Export to YAML
4. If you want to import an item that you exported through the UI, you will need to nest it inside its parent element, e.g. a database needs to be nested under databases, a table needs to be nested inside a database element.


Exporting the complete supported YAML schema

In order to obtain an exhaustive list of all fields you can import via the YAML import, run:

superset export_datasource_schema

Again, you can use the -b flag to include back references.

Importing Datasources from YAML

In order to import datasources from one or more YAML files, run:

superset import_datasources -p <path / filename>

If you supply a path, all files ending with *.yaml or *.yml will be parsed. You can apply additional flags, e.g.:

superset import_datasources -p <path> -r

This will search the supplied path recursively. The sync flag -s takes parameters in order to sync the supplied elements with your file. Be careful, this can delete the contents of your meta database. Example:

superset import_datasources -p <path / filename> -s columns,metrics

This will sync all metrics and columns for all datasources found in the Superset meta database. This means columns and metrics not specified in YAML will be deleted. If you were to add tables to columns,metrics, those would be synchronized as well. If you don’t supply the sync flag (-s), importing will only add and update (override) fields. E.g. you can add a verbose_name to the column ds in the table random_time_series from the example datasets by saving the following YAML to a file and then running the import_datasources command.

databases:
- database_name: main
  tables:
  - table_name: random_time_series
    columns:
    - column_name: ds
      verbose_name: datetime

3.4.8 FAQ

Can I query/join multiple tables at one time?

Not directly, no. A Superset SQLAlchemy datasource can only be a single table or a view. When working with tables, the solution would be to materialize a table that contains all the fields needed for your analysis, most likely through some scheduled batch process. A view is a simple logical layer that abstracts an arbitrary SQL query as a virtual table. This can allow you to join and union multiple tables, and to apply some transformations using arbitrary SQL expressions. The limitation there is your database performance, as Superset effectively will run a query on top of your query (view). A good practice may be to limit yourself to joining your main large table to one or many small tables only, and to avoid using GROUP BY where possible, as Superset will do its own GROUP BY and doing the work twice might slow down performance.


Whether you use a table or a view, the important factor is whether your database is fast enough to serve it in an interactive fashion to provide a good user experience in Superset.

How BIG can my data source be?

It can be gigantic! As mentioned above, the main criterion is whether your database can execute queries and return results in a time frame that is acceptable to your users. Many distributed databases out there can execute queries that scan through terabytes in an interactive fashion.

How do I create my own visualization?

We are planning on making it easier to add new visualizations to the framework. In the meantime, we’ve tagged a few pull requests with the example label to show how new visualizations can be contributed: https://github.com/airbnb/superset/issues?q=label%3Aexample+is%3Aclosed

Can I upload and visualize csv data?

Yes, using the Upload a CSV button under the Sources menu item. This brings up a form that allows you to specify the required information. After creating the table from the CSV, it can then be loaded like any other on the Sources -> Tables page.

Why are my queries timing out?

There are many reasons that may cause a long query to time out.
• For long-running queries in SQL Lab, by default Superset allows them to run for up to 6 hours before they are killed by Celery. If you want to increase the time allowed for a query, you can specify the timeout in the configuration, for example: SQLLAB_ASYNC_TIME_LIMIT_SEC = 60 * 60 * 6
• Superset runs on the Gunicorn web server, which may time out web requests. If you want to increase the default (50), you can specify the timeout when starting the web server with the -t flag, which is expressed in seconds: superset runserver -t 300
• If you are seeing timeouts (504 Gateway Time-out) when loading a dashboard or exploring a slice, you are probably behind a gateway or proxy server (such as Nginx). If the proxy does not receive a timely response from the Superset server (which is processing long queries), it will send a 504 status code to the client directly. Superset has a client-side timeout to address this issue: if a query doesn’t come back within the client-side timeout (60 seconds by default), Superset will display a warning message to avoid the gateway timeout message. If you have a longer gateway timeout limit, you can change the timeout settings in superset_config.py: SUPERSET_WEBSERVER_TIMEOUT = 60
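Pulling the configuration keys mentioned above into one place, a superset_config.py sketch could look like this (the values are illustrative):

# superset_config.py
# Maximum run time for asynchronous SQL Lab queries, in seconds (here: 6 hours).
SQLLAB_ASYNC_TIME_LIMIT_SEC = 60 * 60 * 6
# Client-side timeout (seconds) used to warn users before an upstream gateway times out.
SUPERSET_WEBSERVER_TIMEOUT = 60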

Why is the map not visible in the mapbox visualization?

You need to register at mapbox.com, get an API key, and configure it as MAPBOX_API_KEY in superset_config.py.
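For example (the key below is a placeholder, not a real token):

# superset_config.py
MAPBOX_API_KEY = 'pk.your-mapbox-api-key'  # placeholder value obtained from mapbox.com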


How to add dynamic filters to a dashboard?

It’s easy: use the Filter Box widget, build a slice, and add it to your dashboard. The Filter Box widget allows you to define a query to populate dropdowns that can be used for filtering. To build the list of distinct values, we run a query and sort the result by the metric you provide, sorting descending. The widget also has a Date Filter checkbox, which enables time filtering capabilities on your dashboard. After checking the box and refreshing, you’ll see a from and a to dropdown show up. By default, the filtering will be applied to all the slices that are built on top of a datasource that shares the column name that the filter is based on. It’s also a requirement for that column to be checked as “filterable” in the column tab of the table editor. But what if you don’t want certain widgets to get filtered on your dashboard? You can do that by editing your dashboard, and in the form, editing the JSON Metadata field, more specifically the filter_immune_slices key, which receives an array of sliceIds that should never be affected by any dashboard level filtering.

{ "filter_immune_slices":[324, 65, 92], "expanded_slices": {}, "filter_immune_slice_fields":{ "177":["country_name", "__time_range"], "32":["__time_range"] }, "timed_refresh_immune_slices":[324] }

In the JSON blob above, slices 324, 65 and 92 won’t be affected by any dashboard level filtering. Now note the filter_immune_slice_fields key. This one allows you to be more specific and define, for a specific slice_id, which filter fields should be disregarded. Note the use of the __time_range keyword, which is reserved for dealing with the time boundary filtering mentioned above. But what happens with filtering when dealing with slices coming from different tables or databases? If the column name is shared, the filter will be applied; it’s as simple as that.

How to limit the timed refresh on a dashboard?

By default, the dashboard timed refresh feature allows you to automatically re-query every slice on a dashboard according to a set schedule. Sometimes, however, you won’t want all of the slices to be refreshed, especially if some data is slow-moving or runs heavy queries. To exclude specific slices from the timed refresh process, add the timed_refresh_immune_slices key to the dashboard JSON Metadata field:

{ "filter_immune_slices": [], "expanded_slices": {}, "filter_immune_slice_fields": {}, "timed_refresh_immune_slices":[324] }

In the example above, if a timed refresh is set for the dashboard, then every slice except 324 will be automatically re-queried on schedule. Slice refreshes will also be staggered over the specified period. You can turn off this staggering by setting stagger_refresh to false and modify the stagger period by setting stagger_time to a value in milliseconds in the JSON Metadata field:


{ "stagger_refresh": false, "stagger_time": 2500 }

Here, the entire dashboard will refresh at once if periodic refresh is on. The stagger time of 2.5 seconds is ignored.

Why does ‘flask fab’ or superset freeze/hang/not respond when started (my home directory is NFS mounted)?

By default, Superset creates and uses a SQLite database at ~/.superset/superset.db. SQLite is known not to work well on NFS due to a broken file locking implementation on NFS. You can override this path using the SUPERSET_HOME environment variable. Another workaround is to change where Superset stores the SQLite database by adding SQLALCHEMY_DATABASE_URI = 'sqlite:////new/location/superset.db' in superset_config.py (create the file if needed), then adding the directory where superset_config.py lives to the PYTHONPATH environment variable (e.g. export PYTHONPATH=/opt/logs/sandbox/airbnb/).

What if the table schema changed?

Table schemas evolve, and Superset needs to reflect that. It’s pretty common in the life cycle of a dashboard to want to add a new dimension or metric. To get Superset to discover your new columns, all you have to do is go to Menu -> Sources -> Tables, click the edit icon next to the table whose schema has changed, and hit Save from the Detail tab. Behind the scenes, the new columns will get merged in. Following this, you may want to re-edit the table to configure the Column tab, check the appropriate boxes, and save again.

How do I go about developing a new visualization type?

Here’s an example as a Github PR with comments that describe what the different sections of the code do: https://github.com/airbnb/superset/pull/3013

What database engine can I use as a backend for Superset?

To clarify, the database backend is an OLTP database used by Superset to store its internal information like your list of users, slices, and dashboard definitions. Superset is tested using MySQL, PostgreSQL, and SQLite as its backend. It’s recommended that you install Superset on one of these database servers for production. Using a column-store, non-OLTP database like Vertica, Redshift or Presto as a database backend simply won’t work, as these databases are not designed for this type of workload. Installation on Oracle, Microsoft SQL Server, or other OLTP databases may work but isn’t tested. Please note that pretty much any database that has a SQLAlchemy integration should work perfectly fine as a datasource for Superset, just not as the OLTP backend.
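As a sketch, pointing the metadata database at PostgreSQL in superset_config.py could look like this (host, credentials and database name are placeholders):

# superset_config.py
# Metadata (OLTP) database used by Superset itself (placeholder connection string).
SQLALCHEMY_DATABASE_URI = 'postgresql+psycopg2://superset:superset@localhost:5432/superset'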

How can I configure OAuth authentication and authorization?

You can take a look at this Flask-AppBuilder configuration example.
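As a very rough sketch of what such a configuration can look like in superset_config.py (the provider details are placeholders, and the exact keys under remote_app depend on your Flask-AppBuilder and OAuth client versions; refer to the linked example for an authoritative reference):

# superset_config.py
from flask_appbuilder.security.manager import AUTH_OAUTH

AUTH_TYPE = AUTH_OAUTH
OAUTH_PROVIDERS = [
    {
        'name': 'google',
        'icon': 'fa-google',
        'token_key': 'access_token',
        'remote_app': {
            # Placeholder credentials and endpoints.
            'consumer_key': 'GOOGLE_CLIENT_ID',
            'consumer_secret': 'GOOGLE_CLIENT_SECRET',
            'base_url': 'https://www.googleapis.com/oauth2/v2/',
            'request_token_params': {'scope': 'email profile'},
            'request_token_url': None,
            'access_token_url': 'https://accounts.google.com/o/oauth2/token',
            'authorize_url': 'https://accounts.google.com/o/oauth2/auth',
        },
    },
]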


How can I set a default filter on my dashboard?

Easy. Simply apply the filter and save the dashboard while the filter is active.

How do I get Superset to refresh the schema of my table?

When adding columns to a table, you can have Superset detect and merge the new columns in by using the “Refresh Metadata” action in the Source -> Tables page. Simply check the box next to the tables you want the schema refreshed, and click Actions -> Refresh Metadata.

Is there a way to force the use of specific colors?

It is possible on a per-dashboard basis by providing a mapping of labels to colors in the JSON Metadata attribute using the label_colors key.

{ "label_colors":{ "Girls": "#FF69B4", "Boys": "#ADD8E6" } }

Does Superset work with [insert database engine here]?

The community over time has curated a list of databases that work well with Superset in the Database dependencies section of the docs. Database engines not listed on this page may work too. We rely on the community to contribute to this knowledge base.
For a database engine to be supported in Superset through the SQLAlchemy connector, it requires having a Python compliant SQLAlchemy dialect as well as a DBAPI driver defined. Databases that have limited SQL support may work as well. For instance, it’s possible to connect to Druid through the SQLAlchemy connector even though Druid does not support joins and subqueries.
Another key element for a database to be supported is the Superset Database Engine Specification interface. This interface allows for defining database-specific configurations and logic that go beyond the SQLAlchemy and DBAPI scope. This includes features like:
• date-related SQL functions that allow Superset to fetch different time granularities when running time-series queries
• whether the engine supports subqueries. If false, Superset may run 2-phase queries to compensate for the limitation
• methods around processing logs and inferring the percentage of completion of a query
• technicalities as to how to handle cursors and connections if the driver is not standard DBAPI
• more; read the code for more details
Beyond the SQLAlchemy connector, it’s also possible, though much more involved, to extend Superset and write your own connector. The only example of this at the moment is the Druid connector, which is getting superseded by Druid’s growing SQL support and the recent availability of a DBAPI and SQLAlchemy driver. If the database you are considering integrating has any kind of SQL support, it’s probably preferable to go the SQLAlchemy route. Note that for a native connector to be possible, the database needs to have support for running OLAP-type queries and should be able to do things that are typical in basic SQL:
• aggregate data
• apply filters (==, !=, >, <, >=, <=, IN, ...)
• apply HAVING-type filters
• be schema-aware, expose columns and types
