Alinex Data Store

Read, work with and write data structures to different stores

Alexander Schilling

Copyright © 2019 - 2021 Alexander Schilling

Table of contents

1. Home
  1.1 Alinex Data Store
    1.1.1 Usage
    1.1.2 Debugging
    1.1.3 Module Usage
    1.1.4 Chapters
    1.1.5 Support
  1.2 Command Line Usage
    1.2.1 Input
    1.2.2 Output
    1.2.3 Transform Files
    1.2.4 Using Definition
    1.2.5 Examples
  1.3 Last Changes
    1.3.1 Version 1.16.0 - (12.05.2021)
    1.3.2 Version 1.15.0 - (02.01.2021)
    1.3.3 Version 1.13.0 - (16.06.2020)
    1.3.4 Version 1.12.0 - (27.01.2020)
    1.3.5 Version 1.11.0 - (13.01.2020)
    1.3.6 Version 1.10.0 - (22.11.2019)
    1.3.7 Version 1.9.1 - (13.11.2019)
    1.3.8 Version 1.8.0 - (31.10.2019)
    1.3.9 Version 1.7.0 - (13.10.2019)
    1.3.10 Version 1.6.0 - (01.10.2019)
    1.3.11 Version 1.5.0 - (28.08.2019)
    1.3.12 Version 1.4.0 - (15.08.2019)
    1.3.13 Version 1.3.0 - (6.08.2019)
    1.3.14 Version 1.2.0 - (22.06.2019)
    1.3.15 Version 1.1.0 - (17.05.2019)
    1.3.16 Version 1.0.0 - (12.05.2019)
    1.3.17 Version 0.7.0 (29.04.2019)
    1.3.18 Version 0.6.0 (26.04.2019)
    1.3.19 Version 0.5.0 (19.04.2019)
    1.3.20 Version 0.4.0 (17.04.2019)
    1.3.21 Version 0.3.0 (15.04.2019)


    1.3.22 Version 0.2.0 (12.04.2019)
    1.3.23 Version 0.1.0 (0t.04.019)
  1.4 Roadmap
    1.4.1 Add Protocols
    1.4.2 Multiple sources
  1.5 Privacy statement
2. API
  2.1 Application Programming Interface
    2.1.1 Basic Usage
    2.1.2 TypeScript
  2.2 Loading from Persistent Store
    2.2.1 Initialization
    2.2.2 Options
    2.2.3 Load
    2.2.4 Save
    2.2.5 Reload
  2.3 Transform
    2.3.1 Parse
    2.3.2 Format
  2.4 Access and Manipulate Data
    2.4.1 Has Element
    2.4.2 Get Element
    2.4.3 Filter
    2.4.4 Coalesce
    2.4.5 Set Element
    2.4.6 Insert into Array
    2.4.7 Push to Array
    2.4.8 Empty Element
    2.4.9 Delete Element
3. Specifications
  3.1 Specifications
    3.1.1 URI
  3.2 Protocols
    3.2.1 File Protocol Handler
    3.2.2 SSH File Transfer Protocol Handler
    3.2.3 FTP and FTPS Handler
    3.2.4 HTTP and HTTPS Handler
    3.2.5 PostgreSQL Database Handler


    3.2.6 Shell Execution Protocol Handler
  3.3 Compression
    3.3.1 GZip Compression
    3.3.2 Raw Deflate Compression
    3.3.3 Brotli Compression
    3.3.4 BZip2 Compression
    3.3.5 LZMA Compression
    3.3.6 TAR Archive
    3.3.7 TAR + GZip Archives
    3.3.8 TAR + BZip2 Archives
    3.3.9 TAR + LZMA Archives
    3.3.10 ZIP Archives
  3.4 Formats
    3.4.1 JSON Format
    3.4.2 BSON Format
    3.4.3 MessagePack Format
    3.4.4 JS Format
    3.4.5 CSON Format
    3.4.6 CoffeeScript Format
    3.4.7 YAML Format
    3.4.8 INI Format
    3.4.9 Properties Format
    3.4.10 TOML Format
    3.4.11 XML Format
    3.4.12 HTML Format
    3.4.13 CSV Format
    3.4.14 Excel 97-2004 Workbook Format
    3.4.15 Excel 2007+ XML Format
    3.4.16 Plain Text Format
    3.4.17 Binary Format
4. Internal
  4.1 Extending this Module
    4.1.1 Structure
    4.1.2 Processing
    4.1.3 DataStore
    4.1.4 Statistics
    4.1.5 Download Documentation
    4.1.6 License


  4.2 Test
    4.2.1 SFTP
    4.2.2 FTP


1. Home

1.1 Alinex Data Store

The data store is an in-memory store for small to medium-sized data structures. It allows you to read, work with and write data in different formats and therefore also to transform between them.

Big Features:

• read, write and transform data structures

• methods to access and work with data structures

• 9 protocols supported (file, sftp, http, https, ...)

• 17 file formats supported

• 10 compression or archive formats are also supported

• integrated access filter language

• multi-source loading into a single structure

See more on the specification page.

1.1.1 Usage

It can be used from the command line but also as a module in other Node.js scripts.

The main use cases are:

• reading configuration files

• pre-processing configuration into another format for easy import later

• communicating with REST services

• data provider for the Validator module since version 3

Personally I seldom use it standalone; most of the time I use it with the additional checking from the Validator. See also the examples there.

1.1.2 Debugging

To see more information you may enable debug logging using the environment variable DEBUG=datastore* . The following finer-grained debug scopes can be used instead:

• datastore:protocol:* - output uri accessed by specific handler

• datastore:details - output a lot of verbose information

• datastore:compression - output size changes

• datastore:compression:* - output uri processed through specific handler

• datastore:format - output parsed data

• datastore:format:* - output uri processed through specific handler

• datastore - output general problems, like a source being ignored because it was not found

• datastore* - output all

A lot more may be available using the additional core debugging of Node.js via NODE_DEBUG=xxx . If something is available there, the datastore:details logging will inform you about it.


1.1.3 Module Usage

Because of the long list of supported alternatives this module depends on a lot of other modules, but only the ones really used are loaded, on demand.

Only the following are always loaded:

• @alinex/core

• @alinex/data

• debug

• object-path

All others are loaded only when the specific format is used. Installing this package will, however, pull in a larger collection of modules so that all formats are ready to use.
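The on-demand loading described above can be pictured roughly like this; the handler table and handlerFor are illustrative names, not the module's internals:

```javascript
// Rough sketch of on-demand loading: a handler is created (in the real
// module: require()d) only the first time its format is requested, then
// cached. Names here are illustrative, not the module's internals.
const cache = {};

function handlerFor(format) {
  if (!cache[format]) {
    // a real implementation would do something like:
    // cache[format] = require(`./format/${format}`)
    cache[format] = { format, loadedAt: Date.now() };
  }
  return cache[format];
}

// the second call returns the cached instance instead of loading again
const a = handlerFor('json');
const b = handlerFor('json');
console.log(a === b); // true
```

This keeps startup fast even though many format modules are installed.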

1.1.4 Chapters

Read all about the DataStore in the chapters:

• API - How to use it in your own script.

• Specifications - Detailed format, protocol and file specifications.

• Extending - Structure of this module for those extending it.

1.1.5 Support

I don't offer paid support, but you may create GitLab issues:

• Bug reports or feature requests: please give as much information as possible and explain how, in your opinion, the system should behave.

• Code fixes or extensions: if you help develop this package I am thankful. Develop in a fork and make a merge request when done. I will have a look at it.

• Should anybody be willing to join the core team feel free to ask, too.

• Any other comment or discussion, as far as it concerns this package, is also kindly accepted.

Please use the GitLab issues for all of these; it only requires registering a free account.


1.2 Command Line Usage

The CLI works in two major modes and can also generate a bash completion script:

• working on a single input source

• use input as definition to allow multiple sources

datastore [options]             # work with single input
datastore definition [options]  # use input as definition (allow multiple sources)
datastore bashrc-script         # generate completion script

The general options are:

--version    Show version number  [boolean]
--quiet, -q  only output result   [boolean]
--help       Show help            [boolean]

1.2.1 Input

The input can be a given file; if not specified, STDIN is used, as for pipes. If no input file or URI is given, --input-format is always needed because the filename is unknown to the DataStore. The other options specify how to read this input:

--input, -i          input path or URI to read from [string]
--input-format       format used to parse input [choices: "bson", "coffee", "cson", "csv", "ini", "js", "json", "msgpack", "properties", "text", "toml", "xml", "yaml"]
--input-compression  compression method to be used [choices: "gzip", "brotli", "bzip2", "lzma", "tar", "tgz", "tbz2", "tlz", "zip"]
--records            flag to always read CSV data as records [boolean]

Protocol specific options:

--proxy            proxy server and port to use [string]
--http-method      HTTP method to use for call [choices: "get", "delete", "head", "options", "post", "put", "patch"]
--http-header      send additional HTTP header [array]
--http-data        send additional HTTP POST or PUT data [string]
--ignore-error     ignore HTTP error response codes [boolean]
--sftp-privatekey  private key file [string]
--sftp-passphrase  passphrase for private key, if needed [string]

Info

See the specifications for details on the supported protocols, compressions, archives and file formats.

Finally, some filter rules may be used to reduce the input result:

--filter, -f filter rule to modify data structure [string]

1.2.2 Output

The same goes for the output: if no file or URI is specified, STDOUT is used. If no output file or URI is given and no --output-format is defined, JSON is used as default. The same protocol-specific options are available for the output:

--proxy            proxy server and port to use [string]
--http-method      HTTP method to use for call [choices: "get", "delete", "head", "options", "post", "put", "patch"]
--http-header      send additional HTTP header [array]
--http-data        send additional HTTP POST or PUT data [string]
--ignore-error     ignore HTTP error response codes [boolean]
--sftp-privatekey  private key file [string]
--sftp-passphrase  passphrase for private key, if needed [string]

With the filter rule a specific part can be selected or the data can be slightly manipulated.


datastore -i test.json -f 'person'
datastore -i test.json -f 'person.age + " years"'

1.2.3 Transform Files

To transform files you simply need the input and output parameters; the DataStore will convert between any of the supported formats.

datastore -i test.json -o test.yml

1.2.4 Using Definition

As noted at the top of the page, you can specify multiple sources to be merged together into one resulting data structure. As this needs a more complex definition than is possible with command arguments alone, it is done in two steps:

1. the given single source will be loaded as definition -> this makes it the real loading definition

2. the real loading and merge will take place based on the above definition

datastore definition -i loader.json -o test.yml

Here the loader.json may look like:

[
  { source: '/etc/myapp/base.json' },
  { source: '/etc/myapp/base.yaml', alternate: ['/etc/myapp/base.json'] },
  { source: '/etc/myapp/base.ini', alternate: ['/etc/myapp/base.json', '/etc/myapp/base.yaml'] },
  { source: '/etc/myapp/email.yaml', base: 'email', depend: ['/etc/myapp/base.json', '/etc/myapp/base.yaml'] },
  { source: '~/.myapp/base.yaml', object: 'replace' },
  { source: '~/.myapp/email.yaml', base: 'email', depend: '~/.myapp/base.yaml' },
]

Effectively this is the same as if this definition were given as parameters to the JavaScript API.

A simple call using pipes may look like:

echo '[{"source": "test.yml"}]' | datastore definition --input-format json

1.2.5 Examples

Get Token from REST Server

datastore -q \
  --input http://localhost:3030/authentication \
  --http-method post \
  --http-header 'Content-Type: application/json' \
  --http-data '{ "strategy": "local", "email": "[email protected]", "password": "demo123" }' \
  --input-format json \
  --filter accessToken \
  --output-format text


1.3 Last Changes

1.3.1 Version 1.16.0 - (12.05.2021)

• upgrade to ES2019, Node >= 12

• enhanced debug mode for ftp

• update packages: bson, msgpack, json, zip, csv, pg, xlsx

• replace outdated moment.js with luxon

• refactor format casting

• HotFix 1.16.1 - (18.06.2021) - fix url decoding

1.3.2 Version 1.15.0 - (02.01.2021)

• new doc theme

• update packages for ssh, ini, html

• move man page creation into core

• update and fix zip

1.3.3 Version 1.13.0 - (16.06.2020)

• Excel xls and xlsx to read (one table only) and write

• Update 1.13.1 - (11.08.2020)

• also allow pattern in text parser which match no line

• add error.data on parse error

• update ftp, cson, csv, tar, pg, bson, xlsx packages

• HotFix 1.13.2 - (10.09.2020) - update packages

• Update 1.13.3 - (11.09.2020) - allow source description for preset data with preset:....

• HotFix 1.13.4 - (12.09.2020) - allow ignoreError in shell

• Update 1.13.5 - (29.09.2020) - fix storing error.data on all string sources

• Update 1.13.6 - (09.10.2020) - add time to meta data on loading

• Update 1.13.7 - (23.10.2020) - update packages and doc structure

1.3.4 Version 1.12.0 - (27.01.2020)

• csv now with delimiter and quote option support

• html tree parser

Update 1.12.1 (15.02.2020) - update package management and ssh, ftp

Update 1.12.2 (02.06.2020)

• changed default output format to JSON

• add search path option for postgres protocol

• upgrade 3rd party modules including new major postgres driver

Update 1.12.3 (05.06.2020) - fix line ending in CSV export

Update 1.12.4 (09.06.2020) - fix date parsing bug occurring for numerical strings

Update 1.12.5 (15.06.2020) - fix to also work with empty result sets in postgres


1.3.5 Version 1.11.0 - (13.01.2020)

• file tail optimized

• adding meta.raw with buffer data

• adding deflate compression support

1.3.6 Version 1.10.0 - (22.11.2019)

• Add binary format returning byte array.

1.3.7 Version 1.9.1 - (13.11.2019)

• Add shell protocol for local and remote execution

• Add tail option for file protocol

• support meta data in file protocol

• Bug fix: return content also if error (with ignoreError)

Bug Fix Version 1.9.2 - (19.11.2019) - switch to request lib to work better on complex rest calls

Bug Fix Version 1.9.3 - (21.11.2019) - allow to change HTTP method, again

1.3.8 Version 1.8.0 - (31.10.2019)

• Store HTTP meta data from server under ds.meta

• Added ds.map to acquire mapping information if enabled in DataSource settings

• Added support for pattern matching in text formatter

• fixed bug in not closing sftp connection after retrieval

1.3.9 Version 1.7.0 - (13.10.2019)

• Breaking Change: Option ignoreResponseCode was renamed to ignoreError

• Added HTTP options: httpHeader , httpData

• Made all HTTP options available through CLI

1.3.10 Version 1.6.0 - (01.10.2019)

• Added httpMethod to be set for http protocol

• Added ignoreResponseCode option for http protocol

• Fixed bug of not using options if input is defined using the CLI

• Update packages: data (filter), ftp, sftp, csv

Version 1.6.1 - (08.10.2019)

• Fix: Add merge result as debug message

• Update: CSV parser

BugFix 1.6.2 - (11.10.2019)

• Fix: Not recognizing --output-format

• Update: FTP protocol handler


1.3.11 Version 1.5.0 - (28.08.2019)

New shortcut calls:

• DataStore.load() to easily initialize and load data

• DataStore.url() to easily initialize and load single sources in one step

• DataStore.data() to initialize with preset data structure

Fixes:

• load of data only stopped because no real loading necessary

Fix Version 1.5.1 (01.09.2019)

• Update packages data filter, protocols sftp, postgres and xml format

Fix Version 1.5.2 (16.09.2019)

• Update packages data filter, protocols sftp+ftp and xml format

Fix Version 1.5.3 (17.09.2019)

• Added option proxy for http/https handler

1.3.12 Version 1.4.0 - (15.08.2019)

Extension:

• updated data module to get depend and alternate support in merging

• added support for multi file in CLI using definition data structures

Fixes:

• don't fail if one of multiple files is not found

1.3.13 Version 1.3.0 - (6.08.2019)

The biggest change is the added support for multiple sources.

Breaking Changes:

• constructor now only allows one or multiple DataSource structures which may be a simple source string, options or alternatively a direct data structure

• allow to set merge definitions beside the data source

• same changes for load and save methods

• set/get source is no longer available

Fixes:

• ds.load() without source will only load if not already done

• method filter() was extracted into data module

Hotfix 1.3.1 - fix type DataSource export


1.3.14 Version 1.2.0 - (22.06.2019)

New:

• added ds.filter(string) which will work like get() but return a new DataStore with the results instead of directly returning them

• added postgres protocol to read from database

• added plain text formatter

Extended:

• new DataStore(data) can now directly create a datastore with a preset data structure

• the http handler will auto detect format based on mime-type if not defined

Hotfix 1.2.1 - (23.06.2019)

• new DataStore(data, true) now uses the second parameter to work with any data structure

Update 1.2.2 - (13.07.2019)

• Internal restructure using retry from core module

• replace request package with axios

• export formats and compressions definition

1.3.15 Version 1.1.0 - (17.05.2019)

• add time range in reload() call

Hotfix 1.1.1 - (19.05.2019)

• fix get() on empty string in root returned undefined

Hotfix 1.1.2 - (27.05.2019)

• fix usage of paths in brackets like '[xxx],yyy[zzz]' in change methods like set()

• update serialize-to-js with internal restructuring

1.3.16 Version 1.0.0 - (12.05.2019)

• add new rule engine in get() with filter

• filter add negative selections

• filter add combine and alternative syntax

• filter add function handling

• filter add list concat, mathematical operators and grouping

• remove coalesce() method and use filter in has()

• allow filter within cli call

Hotfix 1.0.1 - (13.05.2019)

• allow get() and has() without arguments

1.3.17 Version 0.7.0 (29.04.2019)

• support FTP and FTPS

• added archive support using tar

• support for .tar.gz archives


• all options available through CLI

• add support for zip compression

• hash part not needed for single-file archives when reading

Hotfix 0.7.1 (30.04.2019)

• fix cli calls

• support for bzip2 and tar-bzip2 compression

• support for lzip and tar-lzip compression

Hotfix 0.7.2 (01.05.2019)

• reduce symbol imports

• update code style

• add support for .mjs and .ts extensions

Hotfix 0.7.3 (03.05.2019)

• fix access methods with empty path

• fix reload if changed using access methods

• add inline doc tags

1.3.18 Version 0.6.0 (26.04.2019)

• support HTTP and HTTPS retrieval

• fixes in formatters with Date handling

• full support in all formatters, also for arrays

• optimized debugging

1.3.19 Version 0.5.0 (19.04.2019)

• add sftp support

• add compression support using gzip and brotli

1.3.20 Version 0.4.0 (17.04.2019)

• CLI usage

• Options: format

1.3.21 Version 0.3.0 (15.04.2019)

• adding binary bson, msgpack formats

• adding js, cson, coffee, ini, properties, toml, xml formatter

• adding csv formatter

1.3.22 Version 0.2.0 (12.04.2019)

• extract by path

• access and manipulate structure

1.3.23 Version 0.1.0 (0t.04.019)

• class based access


• load json

• write json


1.4 Roadmap

This is not a firm roadmap, but rather a list of possible extensions divided into thematic groups. If you need a specific extension or change, contact me or send me a pull request.

ftp problems [2021-05-11]

sftp problems [2021-05-11]

replace moment.js [2021-05-12]

support csv with delimiter ':' (for apache status) [2021-05-12]

remove type definition for Blob (needed for js-zip V3.4.0/3.5.0)

1.4.1 Add Protocols

• epub extraction

Web Services:

• REST support with store + test

Databases:

• mysql like postgres

• mongodb https://www.iana.org/assignments/uri-schemes/prov/mongodb

• redis https://www.iana.org/assignments/uri-schemes/prov/redis

• postgres: save as list or record - how to delete old data???

Code Repositories:

• git https://www.iana.org/assignments/uri-schemes/prov/git

• svn https://www.iana.org/assignments/uri-schemes/prov/svn

Without mounts:

• smb https://www.iana.org/assignments/uri-schemes/prov/smb

• nfs https://github.com/scality/node-nfsc

1.4.2 Multiple sources

• store multi using merge.map and diff


1.5 Privacy statement

This documentation is part of the alinex.gitlab.io site; please have a look at the site's privacy statement there.


2. API

2.1 Application Programming Interface

First install it as a module:

npm install @alinex/datastore --save

2.1.1 Basic Usage

The DataStore has a class-based interface. To use it, create an instance and load or save it as you need:

1. load and save or transform between different protocols and formats.

2. access methods to read or modify data structures

A complete process may look like:

TypeScript

import DataStore from '@alinex/datastore';

async function config(): Promise<any> {
  const ds = new DataStore({ source: 'file:/etc/my-app.json' });
  await ds.load(); // load data
  return ds.data;
}

config().then(conf => {
  // work with it
});

JavaScript

const DataStore = require('@alinex/datastore').default;

async function config() {
  const ds = new DataStore({ source: 'file:/etc/my-app.json' });
  await ds.load(); // load data
  return ds.data;
}

config().then(conf => {
  // work with it
});

Read in the next pages about all the options and see more examples.

2.1.2 TypeScript

This module is written in TypeScript and comes with all definitions included. You can import it directly in TypeScript and get the benefit of static type checking and auto-completion within your IDE.


2.2 Loading from Persistent Store

2.2.1 Initialization

If you want to load or save the DataStore, it has to be initialized. For this, a URL and maybe some options defining the concrete handling are needed. But you can also set a list of data sources which will be merged together.

Definition

interface DataSource {
  // URL to load this structure
  source?: string;
  options?: Options;
  // alternatively directly give data
  data?: any;
  // set base path in final structure
  base?: string;
  // relations
  depend?: string[];
  alternate?: string[];
  // default combine methods
  object?: 'combine' | 'replace' | 'missing' | 'alternate'; // default: 'combine'
  array?: 'concat' | 'replace' | 'overwrite' | 'combine'; // default: 'concat'
  // overwrite per position
  pathObject?: { [key: string]: 'combine' | 'replace' | 'missing' | 'alternate' };
  pathArray?: { [key: string]: 'concat' | 'replace' | 'overwrite' | 'combine' };
  // general settings
  clone?: boolean;
  allowFlags?: boolean; // default: true
  map?: boolean; // default: false
  // automatically added on loading
  meta?: any;
}

interface Options {
  // http or https protocol
  proxy?: string;
  httpMethod?: string;
  httpHeader?: string[];
  httpData?: string;
  ignoreError?: boolean;
  // security
  privateKey?: Buffer | string;
  passphrase?: string;
  // file protocol
  tail?: number;
  // compression
  compression?: 'gzip' | 'brotli' | 'bzip2' | 'lzma' | 'tar' | 'tgz' | 'tbz2' | 'tlz' | 'zip';
  compressionLevel?: number;
  // formatting
  format?: 'bson' | 'coffee' | 'cson' | 'csv' | 'ini' | 'js' | 'json' | 'msgpack' | 'properties' | 'toml' | 'xml' | 'yaml';
  pattern?: RegExp;
  records?: boolean;
  module?: boolean;
  rootName?: string;
}

This can be used to initialize a new DataStore:

// initialize new store
DataStore.constructor(...input?: DataSource[]): DataStore

Additionally, three static methods allow to initialize and load a DataSource which afterwards may directly be read.


Definition

Create a new data store and load it directly:

DataStore.load(...input: DataSource[]): Promise

From single source this can be called with:

• source URL to load from

• options options for loading

Or with a predefined data structure.

DataStore.url(source: string, options?: Options): Promise

And to setup with predefined data:

DataStore.data(data: any, source?: string): Promise

The first two attributes of DataSource are used for loading; the others are described in more detail in the merge function.

You may also set or change the sources on each load() or save() method.

Single source

The simplest use case is a source defined by a URI string, with possible options specifying how to read it:

const ds = new DataStore({ source: "file:config.yml" });
const ds = new DataStore({ source: "http://my-site/config", options: { format: "json" } });

All option settings are defined below.

For single sources you can also initialize and load a DataSource in one step:

const ds = await DataStore.url("file:config.yml");

Combine multiple sources

It is also possible to load the data structure from multiple sources. This allows splitting a big configuration into multiple files, or keeping base settings which can be overwritten with specific ones at another location.

Each DataStore can either be used as a multi-source loader or with single-source load and save.

const ds = new DataStore(
  { source: '/etc/myapp/base.json' },
  { source: '/etc/myapp/base.yaml', alternate: ['/etc/myapp/base.json'] },
  { source: '/etc/myapp/base.ini', alternate: ['/etc/myapp/base.json', '/etc/myapp/base.yaml'] },
  { source: '/etc/myapp/email.yaml', base: 'email', depend: ['/etc/myapp/base.json', '/etc/myapp/base.yaml'] },
  { source: '~/.myapp/base.yaml', object: 'replace' },
  { source: '~/.myapp/email.yaml', base: 'email', depend: '~/.myapp/base.yaml' },
);

This will load /etc/myapp/base.json (2) or, if not found, the YAML (3) or INI (4) variant of it. The /etc/myapp/email.yaml (5) is only loaded if one of the base configs in the same directory was loaded. If a local config like ~/.myapp/base.yaml (7) is found, it will replace the global config and also has a dependent email.yaml (8).
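The alternate/depend rules just described can be illustrated with a small stand-alone sketch; it mimics only the source-selection logic, not the module's own code or its merging:

```javascript
// Stand-alone illustration of the alternate/depend rules described above:
// a source is skipped when one of its `alternate` sources already loaded,
// and a source with `depend` is loaded only if one of its dependencies
// loaded. This mimics the selection logic only, not the module's code.
function selectSources(definitions, existing) {
  const loaded = [];
  for (const def of definitions) {
    const alternates = def.alternate || [];
    if (alternates.some(src => loaded.includes(src))) continue;
    const depends = [].concat(def.depend || []);
    if (depends.length && !depends.some(src => loaded.includes(src))) continue;
    if (existing.has(def.source)) loaded.push(def.source);
  }
  return loaded;
}

// files present on this hypothetical system
const existing = new Set([
  '/etc/myapp/base.json',
  '/etc/myapp/email.yaml',
  '~/.myapp/base.yaml',
]);

const result = selectSources([
  { source: '/etc/myapp/base.json' },
  { source: '/etc/myapp/base.yaml', alternate: ['/etc/myapp/base.json'] },
  { source: '/etc/myapp/email.yaml', depend: ['/etc/myapp/base.json', '/etc/myapp/base.yaml'] },
  { source: '~/.myapp/base.yaml' },
  { source: '~/.myapp/email.yaml', depend: ['~/.myapp/base.yaml'] },
], existing);
// base.yaml is skipped (its alternate already loaded); the local email
// config is skipped because that file does not exist
console.log(result.length); // 3
```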

Preset Data Structure

If you already have the data loaded and want to directly create a DataStore with it as the result data structure, this is possible using:

const ds = new DataStore();
ds.data = { name: "preset data structure" };
ds.source = 'preset:config'; // optional to set where the data comes from
// no loading is necessary


The same is possible in one async call using:

const ds = await DataStore.data([1, 2, 3], 'config');

But you can also initialize and load it in two steps:

const ds = new DataStore({ data: { name: "preset data structure" }, source: 'preset:config' });
await ds.load();

You can also add a source URL to one of these examples later and store it, if you want.

But really useful is this in combination with a list of DataSources. This allows using default values or overwriting specific elements with values calculated in code.

const ds = new DataStore(
  { source: "file:config.yml" },
  { data: { vehicle: "car", model: "BMW" } }
);
await ds.load();
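How a later data source overwrites values from a file can be pictured with a plain deep merge. This is only a sketch of the default 'combine' behavior, not the module's actual merge code, which supports more modes (replace, missing, alternate):

```javascript
// Sketch of an object 'combine' merge: nested objects are merged key by
// key, while plain values from the later source win. Not the module's
// own implementation.
function combine(base, extra) {
  const out = { ...base };
  for (const [key, value] of Object.entries(extra)) {
    const nested =
      value && typeof value === 'object' && !Array.isArray(value) &&
      out[key] && typeof out[key] === 'object' && !Array.isArray(out[key]);
    out[key] = nested ? combine(out[key], value) : value;
  }
  return out;
}

const fromFile = { vehicle: 'bike', engine: { power: 0 } };
const preset = { vehicle: 'car', model: 'BMW' };
console.log(combine(fromFile, preset));
// { vehicle: 'car', engine: { power: 0 }, model: 'BMW' }
```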

If you try to load a DataStore, which has no real sources defined, it will throw an error.

2.2.2 Options

Options can be given directly as parameters to the load() or save() methods, or set beforehand as described here. The following options may be specified:

• proxy: string - a proxy server and port string like 'myserver.de:80'; an IP address is also possible. You may also give the real server here to bypass normal name resolution.

• httpMethod: string - one of: get, delete, head, options, post, put, patch (default: get)

• httpHeader: string[] - add specific HTTP/HTTPS header lines

• httpData: string - send additional data with HTTP/HTTPS POST or PUT

• ignoreError: boolean - to ignore response code not equal 2xx for HTTP/HTTPS protocol

• privateKey: Buffer | string - used for SFTP and SHELL/SSH protocol

• passphrase: string - used for SFTP and SHELL/SSH protocol

• search: string - defines the search path for postgres to use like my_schema, public

• compression: string - compression method to be used (default is detected by file extension)

• compressionLevel: number - gzip compression level 0 (no compression) to 9 (best compression, default)

• format: string - set a specific format handler to be used (default is detected by file extension)

• pattern: RegExp - for text format to be split into pattern matches (Using named groups) for event logs

• module: boolean - use module format in storing as JavaScript, so you can load it using normal require or import

• rootName: string - define the root element name in formatting as XML

• records: boolean - flag to always read CSV, XLS/XLSX or table data as records (only needed with fewer than 3 columns)

• delimiter: string - character to be used as field separator in CSV data (default: , ).

• quote: string - character used to quote field values in CSV data (default: " ).
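The pattern option above can be pictured with plain RegExp named groups; the log format below is invented for illustration:

```javascript
// Illustration of splitting a plain-text event log with a named-group
// pattern, similar to what the `pattern` option does for the text format.
// The log format here is made up for the example.
const pattern = /^(?<time>\S+) (?<level>\w+) (?<message>.*)$/;
const text = [
  '2019-10-01T10:00:00 INFO service started',
  '2019-10-01T10:00:05 WARN disk space low',
].join('\n');

const records = text
  .split('\n')
  .map(line => pattern.exec(line))
  .filter(Boolean)
  .map(match => ({ ...match.groups }));

console.log(records[1].level); // WARN
```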


To set the options on the DataStore directly use:

TypeScript

import DataStore from "@alinex/datastore";

async function config() {
  const ds = new DataStore({
    source: "file:/etc/my-app.json",
    options: { format: "json" }
  });
  await ds.load();
  // go on using the datastore
}

JavaScript

const DataStore = require("@alinex/datastore").default;

async function config() {
  const ds = new DataStore({
    source: "file:/etc/my-app.json",
    options: { format: "json" }
  });
  await ds.load();
  // go on using the datastore
}

2.2.3 Load

Load a data structure from the defined source URL.

Definition

.load(...input: DataSource[]): Promise

As described above you may load a data source if it is already defined using:

const ds = new DataStore({ source: "file:/etc/my-app.json" });
await ds.load();
console.log(ds.data);

Calling load without parameters will only load the DataStore if this has not already been done. So you may call it even if you don't know whether it is already loaded, and directly use the returned data structure. To load again use the reload() method below.

But you can also set them as needed in the call to load :

const ds = new DataStore();
await ds.load({ source: "http://my-server.de/config", options: { format: "json" } });
console.log(ds.data);

The above example needs to define the format in the options because it is not given as a file extension in the URL.

Meta Data

Some of the protocols, like HTTP and file, will also store meta data gathered during data retrieval beside the data structure:

ds.meta = {
  status: 200,
  statusText: 'OK',
  headers: {
    server: 'nginx',
    date: 'Sat, 26 Oct 2019 19:47:18 GMT',
    'content-type': 'text/plain; charset=utf-8',
    'content-length': '317',
    connection: 'close',
    'cache-control': 'max-age=60, public',
    'content-disposition': 'inline',
    etag: 'W/"24d4aa47151123702a7a3b4e92ff8320"',
    'referrer-policy': 'strict-origin-when-cross-origin, strict-origin-when-cross-origin',
    'set-cookie': [
      'experimentation_subject_id=Ijg4Nzg4MTAzLTFiM2EtNDZhNi05MDY5LWE5NTg5YzlhYzZhNCI%3D--f9891296ec3948ca54191d83b2fbaec0403f52d4; domain=.gitlab.com; path=/; expires=Wed, 26 Oct 2039 19:47:18 -0000; secure'
    ],
    'x-content-type-options': 'nosniff',
    'x-download-options': 'noopen',
    'x-frame-options': 'DENY',
    'x-permitted-cross-domain-policies': 'none',
    'x-request-id': 'mfxS9d29uS1',
    'x-runtime': '0.063831',
    'x-ua-compatible': 'IE=edge',
    'x-xss-protection': '1; mode=block',
    'strict-transport-security': 'max-age=31536000',
    'gitlab-lb': 'fe-16-lb-gprd',
    'gitlab-sv': 'web-23-sv-gprd'
  },
  raw:
};

If multiple sources are used you will get a list of meta structures equivalent to the list of sources.

Mapping Information

Only available if multiple sources are used and the mapping flag is set to true:

ds.map = {
  __map: ["x.yml", ""],
  name: { __map: ["y.yml", "name"] },
  hair: {
    __map: ["x.yml", "hair"],
    color: { __map: ["x.yml", "hair.color"] },
    cut: { __map: ["y.yml", "hair.cut"] }
  },
  family: {
    "0": { __map: ["x.yml", "family.0"] },
    "1": { __map: ["x.yml", "family.1"] },
    "2": { __map: ["y.yml", "family.0"] },
    __map: ["x.yml", "family"]
  },
  gender: { __map: ["y.yml", "gender"] }
};

This mapping has the same structure as ds.data but contains the additional __map information with the source URL and the path within the source from which the value comes.
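As an illustration (not part of the DataStore API), a small hypothetical helper may walk such a map structure to find where a given path came from, falling back to the nearest parent's __map entry:

```javascript
// Hypothetical helper: given a map structure as produced with the mapping
// flag, return the [source, path] pair recorded for a dotted path.
function originOf(map, path) {
  let node = map;
  let origin = map.__map;
  for (const key of path.split('.')) {
    if (!node[key]) break; // no deeper entry: keep the parent's origin
    node = node[key];
    if (node.__map) origin = node.__map;
  }
  return origin;
}

const map = {
  __map: ['x.yml', ''],
  name: { __map: ['y.yml', 'name'] },
  hair: {
    __map: ['x.yml', 'hair'],
    color: { __map: ['x.yml', 'hair.color'] },
    cut: { __map: ['y.yml', 'hair.cut'] },
  },
};

originOf(map, 'hair.cut'); // -> [ 'y.yml', 'hair.cut' ]
```

This shows how nested __map entries override the parent entry for deeper paths.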

2.2.4 Save

Store the current data structure to the defined source URL.

Definition

.save(output?: DataSource): Promise

If you want to transfer to another format or you changed something, you may save the data structure again. If it should go to another URI, you may give it directly here or set it beforehand.

await ds.save();
console.log(ds.data);

Like for loading you can also give a source and options here.

2.2.5 Reload

Reload the current data structure if it has been changed on disk.

Definition

.reload(time?: number): Promise

If you want to reload the DataStore later, you can simply call the ds.reload() method. This will reload the data only if the source has changed, returning true if a reload took place and false otherwise. If a time (number of seconds) is given, no reload will take place within this range.


Apart from the optional time value there are no parameters here, because it is meant to update the same source only. But you can also call load() a second time to load it again - this will always load, even if the source wasn't changed.
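The time-based throttling described above can be sketched generically (this is an assumed illustration of the semantics, not the library's implementation):

```javascript
// Conceptual sketch: remember when the data was last loaded and skip any
// reload that falls within the given number of seconds.
function makeThrottledReloader(loadFn) {
  let lastLoad = 0;
  return async function reload(time) {
    const now = Date.now();
    if (time && now - lastLoad < time * 1000) return false; // too soon, skip
    lastLoad = now;
    return loadFn(); // should resolve true if the source had changed
  };
}
```

With this shape, a second reload(60) right after the first resolves to false because it falls inside the 60-second window.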


2.3 Transform

Instead of loading from a persistent store or saving to one, you can also use only the parsing and formatting. This may be necessary if the source is already loaded.

2.3.1 Parse

Definition

public async .parse(uri: string, buffer: Buffer | string, options?: Options): Promise

The URI specifying where the data comes from is necessary to read the format from its file extension if no format option is given.

2.3.2 Format

Definition

public async .format(uri: string, options?: Options): Promise

As with the parse() method, the uri and options.format are used for specifying the correct format.
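The format-selection rule can be sketched like this (a minimal illustration of the described behavior, covering JSON only; the function names are hypothetical, not the library API):

```javascript
// Pick the format from the URI's file extension unless options.format is
// given explicitly.
function detectFormat(uri, options = {}) {
  if (options.format) return options.format;
  const match = /\.([a-z0-9]+)$/i.exec(uri);
  return match ? match[1].toLowerCase() : undefined;
}

// A tiny parse sketch using that rule, for JSON only.
function parse(uri, buffer, options) {
  if (detectFormat(uri, options) === 'json') return JSON.parse(String(buffer));
  throw new Error('unsupported format');
}

parse('http://my-server.de/config', '{"a":1}', { format: 'json' }); // -> { a: 1 }
```

Note how the URL without a file extension only works because options.format is given, matching the load example earlier.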


2.4 Access and Manipulate Data

The examples below use this data structure:

{
  name: 'example',
  null: null,
  boolean: true,
  string: 'test',
  number: 5.6,
  date: '2016-05-10T19:06:36.909Z',
  list: [ 1, 2, 3 ],
  person: { name: 'Alexander Schilling', job: 'Developer' },
  complex: [ { name: 'Egon' }, { name: 'Janina' } ]
}

Info

All of the following methods take an element path as their first parameter.

The modifying methods need a path, which is the full way to the selected element, concatenating the object keys or array index numbers using dots. Alternatively names may be given in square brackets. In case of arrays, the index may also be negative to specify an element from the end.

The accessing methods get() and has() support a greatly extended filter string, which is a superset of this path syntax, so each path can also be used. See the filter description for all the possibilities.
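The path rules above (dots, square brackets, negative array indices) can be sketched in a few lines. This is an illustrative resolver only, not the library's implementation:

```javascript
// Resolve a dotted path with optional [name] segments and negative array
// indices against a plain data structure.
function getByPath(data, path) {
  const keys = String(path)
    .replace(/\[([^\]]+)\]/g, '.$1') // a[b] -> a.b
    .split('.')
    .filter((k) => k !== '');
  let node = data;
  for (let key of keys) {
    if (node == null) return undefined;
    if (Array.isArray(node) && Number(key) < 0)
      key = node.length + Number(key); // negative index counts from the end
    node = node[key];
  }
  return node;
}

const data = { list: [1, 2, 3], person: { name: 'Alexander Schilling' } };
getByPath(data, 'person.name');  // -> 'Alexander Schilling'
getByPath(data, 'list.-1');      // -> 3
getByPath(data, 'person[name]'); // -> 'Alexander Schilling'
```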

2.4.1 Has Element

Check if the given element path is defined.

Definition

public .has( path: string | number | (string | number)[] ): boolean

This method will check the given path and return a boolean value. It's true if this element is defined and false if not.

ds.has('person.name'); // true
ds.has(['person', 'age']); // false

See the filter documentation for all the possibilities here.

2.4.2 Get Element

Get a specific element by path.

Definition

public .get( path: string | number | (string | number)[], fallback?: any ): any

Use the name or index number (array) to specify the element to retrieve.

ds.get(); // return all data
ds.get('name'); // returns "example"
ds.get('complex.0.name'); // returns "Egon"


See the filter documentation for all the possibilities here.

2.4.3 Filter

This is like get() but instead of returning the filtered data, it will create a new DataStore containing this data. This can then be used further on.

Definition

public .filter( path: string | number | (string | number)[], fallback?: any ): DataStore

So it is also possible to concatenate multiple DataStores with the filter method together.

ds.get(); // return all data
// instead of ds.get('complex.0.name') use
ds.filter('complex.0').get('name'); // returns "Egon"
// or in steps
const parts = ds.filter('complex.0');
parts.get('name'); // returns "Egon"

See the filter documentation for all the possibilities here.

2.4.4 Coalesce

Get the first defined element from the given path definitions.

Definition

public .coalesce( paths: string | number | (string | number)[] | (string | number | (string | number)[])[], fallback?: any ): any

This allows you to define multiple paths, from which the value of the first defined path will be returned:

ds.coalesce(['alias', 'person.alias', ['person', 'name']], 'name');

Also a default value can be given at the end to be used if nothing matched.
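The coalesce idea can be sketched with a few lines of plain JavaScript (assumed semantics for illustration, not the library code):

```javascript
// Return the value of the first path that resolves to a defined value,
// or the fallback if none matched. Paths may be dotted strings or arrays
// of keys, as in the coalesce() signature above.
function coalesce(data, paths, fallback) {
  for (const path of paths) {
    const keys = Array.isArray(path) ? path : String(path).split('.');
    let node = data;
    for (const key of keys) node = node == null ? undefined : node[key];
    if (node !== undefined) return node;
  }
  return fallback;
}

const data = { person: { name: 'Alexander Schilling' } };
coalesce(data, ['alias', 'person.alias', ['person', 'name']], 'unknown');
// -> 'Alexander Schilling'
```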

2.4.5 Set Element

Set the value of an element by path specification.

Definition

public .set( path: string | number | (string | number)[], value: any, doNotReplace?: boolean ): any

As the get methods return references, you may always change values through such a reference. But a clearer way is to use the set method, which will also create missing values automatically.

ds.set('person.sex', 'm');
ds.get('person.sex'); // returns "m"

If elements on the path are missing they will also be created:


ds.set('person.address.street', 'Allee 5');

If the doNotReplace flag is added it will only set the value if it is not already defined.

ds.set('person.name', 'unknown', true);

2.4.6 Insert into Array

Definition

public .insert( path: string | number | (string | number)[], value: any, pos?: number ): any

To insert elements into an array the following method can be used (a position may be specified):

ds.insert('list', 'm', 1);

If the specified element isn't an array, it will be replaced with one and the value inserted as first element.

Negative positions can also be used to count from the back.

[ 0, 1, 2, 3, 4 ]
  ↑     ↑     ↑   ↑
pos: 0  2    -2   5..999
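The position handling in the diagram above can be sketched with array splicing (an assumed illustration of the semantics, not the library code):

```javascript
// Insert a value at the given position; negative positions count from the
// end, positions beyond the end append.
function insertAt(list, value, pos = 0) {
  if (pos < 0) pos = Math.max(list.length + pos, 0); // count from the end
  list.splice(Math.min(pos, list.length), 0, value);
  return list;
}

insertAt([0, 1, 2, 3, 4], 'x', 2);   // -> [0, 1, 'x', 2, 3, 4]
insertAt([0, 1, 2, 3, 4], 'x', -2);  // -> [0, 1, 2, 'x', 3, 4]
insertAt([0, 1, 2, 3, 4], 'x', 999); // appended at the end
```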

2.4.7 Push to Array

Definition

public .push( path: string | number | (string | number)[], ...values: any[] ): any

Like insert this will work on arrays only and adds all values to the end of the array.

ds.push('list', 4);
ds.push('list', 4, 5, 6);

As seen above you can push multiple values at once.


2.4.8 Empty Element

Definition

public .empty( path: string | number | (string | number)[] ): any

This will keep the specified element but reset it to an empty value depending on its type.

ds.empty('name'); // string is now ''
ds.empty('list'); // array is now []
ds.empty('person'); // object is now {}


The following values will be set:

Type Empty Value

boolean false

number 0

string ''

array []

object {}
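The empty-value table above can be sketched as a small type switch (an illustration only, not the library code):

```javascript
// Return the empty value matching the type of the given value, as in the
// table above.
function emptyValue(value) {
  if (Array.isArray(value)) return [];
  if (value === null) return null; // null stays null
  switch (typeof value) {
    case 'boolean': return false;
    case 'number': return 0;
    case 'string': return '';
    case 'object': return {};
    default: return undefined;
  }
}

emptyValue('test');    // -> ''
emptyValue([1, 2, 3]); // -> []
```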

2.4.9 Delete Element

Definition

public .delete( path: string | number | (string | number)[] ): any

This removes an element from the data structure; it will be undefined afterwards.

ds.delete('list.3');


3. Specifications

3.1 Specifications

The following protocols can be used to access data:

Protocol Handler   Protocol       Load   Save             Meta
file               file           yes    yes              yes
sftp               sftp           yes    yes              yes
ftp                ftp, ftps      yes    yes              yes
http               http, https    yes    yes (untested)   yes
postgres           postgres       yes    not yet          no
shell              shell, ssh     yes    yes              yes

Supported compression formats (optional):

Compression Handler   Extension         Load   Save   Dir
gzip                  .gzip .gz         yes    yes    no
deflate               .deflate          yes    yes    no
brotli                .br               yes    yes    no
bzip2                 .bz2              yes    yes    no
lzma                  .lzma             yes    yes    no
tar                   .tar              yes    yes    yes
tgz                   .tgz .tar.gz      yes    yes    yes
tbz2                  .tbz2 .tar.bz2    yes    yes    yes
tlz                   .tlz .tar.lzma    yes    yes    yes
zip                   .zip              yes    yes    yes

At the moment archive formats which store multiple files are not supported.


And these formats are parsable:

Format Handler   Extension      Load   Save
text             .txt           yes    yes
binary                          yes    yes
json             .json          yes    yes
bson             .bson          yes    yes
msgpack          .msgpack       yes    yes
js               .js            yes    yes
cson             .cson          yes    yes
coffee           .coffee        yes    as CSON
yaml             .yml .yaml     yes    yes
ini              .ini           yes    yes
properties       .properties    yes    yes
toml             .toml          yes    yes
xml              .xml           yes    yes
html             .htm .html     yes    yes
csv              .csv           yes    yes
xls              .xls           yes    yes
xlsx             .xlsx          yes    yes

Which protocol and format to use is specified through a URI which defines both. Comments are possible in some formats, but will not be imported or written on save.

3.1.1 URI

The URI is specific to the protocol and file format.

Example

URI: <protocol>://<path>.<extension>#<sub-path>

The protocol specifies how to access the persistent storage and the extension is used to decide which format to use. Compression and data format are both detected from the extensions, which may also be a concatenation of multiple extensions like .yml.gz .

The sub-path is used to locate the file within archives. If it carries a file extension, that is used for format detection, too.
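Reading concatenated extensions like .yml.gz off a URI can be sketched as follows (an illustration of the described detection, not the library code):

```javascript
// Split the trailing extensions off the main path of a URI, e.g.
// "config.yml.gz" -> ["yml", "gz"] (format first, then compression).
function extensions(uri) {
  const path = uri.split('#')[0]; // look only at the main path, not the sub-path
  const name = path.split('/').pop();
  return name.split('.').slice(1);
}

extensions('file:///etc/my-app/config.yml.gz'); // -> [ 'yml', 'gz' ]
```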

Some protocols have specialties; see for example the psql database access.


3.2 Protocols

3.2.1 File Protocol Handler

The file protocol is the simplest. It allows reading and writing from local or locally mounted devices using a file path.

This handler is used for the following protocols: file

Example

Pattern: file://<path>
Linux: file:///etc/my-app/config.json

The path has to be absolute like shown above.

The following options are possible:

• tail - only read the given number of lines from the end of a text file

Meta Data

The file and file stats are stored as meta data ds.meta .

{
  path: '/home/alex/code/node-datastore/test/data/example.log',
  stats: Stats {
    dev: 2051,
    mode: 33204,
    nlink: 1,
    uid: 1000,
    gid: 1000,
    rdev: 0,
    blksize: 4096,
    ino: 16417975,
    size: 17,
    blocks: 8,
    atimeMs: 1572794838988.7334,
    mtimeMs: 1572794798644.053,
    ctimeMs: 1572794798644.053,
    birthtimeMs: 1572794787399.8635,
    atime: 2019-11-03T15:27:18.989Z,
    mtime: 2019-11-03T15:26:38.644Z,
    ctime: 2019-11-03T15:26:38.644Z,
    birthtime: 2019-11-03T15:26:27.400Z
  }
}


3.2.2 SSH File Transfer Protocol Handler

The sftp protocol is based on an SSH connection to a host and allows accessing files after login using username and password or a key file.

This handler is used for the following protocols: sftp

Example

Pattern: sftp://<user>:<password>@<host>:<port><path>
With password: sftp://alex:mypass4u73@localhost/etc/my-app/config.json
With key: sftp://alex@localhost/etc/my-app/config.json

The path has to be absolute like shown above.

The following options are possible:

• privateKey - Buffer or string that contains a private key for either key-based or host-based user authentication

• passphrase - for an encrypted private key, this is the passphrase used to decrypt it

Meta Data

The file and file stats are stored as meta data ds.meta .

{
  path: 'sftp://user:pass@localhost/home/alex/code/node-datastore/test/data/example.log',
  stats: {
    mode: 33279,               // integer representing type and permissions
    uid: 1000,                 // user ID
    gid: 985,                  // group ID
    size: 5,                   // file size
    accessTime: 1566868566000, // last access time in milliseconds
    modifyTime: 1566868566000, // last modify time in milliseconds
    isDirectory: false,        // true if object is a directory
    isFile: true,              // true if object is a file
    isBlockDevice: false,      // true if object is a block device
    isCharacterDevice: false,  // true if object is a character device
    isSymbolicLink: false,     // true if object is a symbolic link
    isFIFO: false,             // true if object is a FIFO
    isSocket: false            // true if object is a socket
  }
};

Debugging

If something goes wrong you may get more information by showing the verbose protocol transfer using DEBUG=datastore:* :

datastore:protocol:sftp loading sftp://alex@localhost/home/alex/code/node-datastore/test/data/example.json +224ms
datastore:details DEBUG: Local ident: 'SSH-2.0-ssh2js0.4.2' +0ms
datastore:details DEBUG: Client: Trying localhost on port 22 ... +0ms
datastore:details DEBUG: Client: Connected +1ms
datastore:details DEBUG: Parser: IN_INIT +5ms
datastore:details DEBUG: Parser: IN_GREETING +0ms
datastore:details DEBUG: Parser: IN_HEADER +0ms
datastore:details DEBUG: Remote ident: 'SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3' +0ms
datastore:details DEBUG: Outgoing: Writing KEXINIT +0ms
datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 8) +1ms
datastore:details DEBUG: Parser: IN_PACKET +0ms
datastore:details DEBUG: Parser: pktLen:1076,padLen:6,remainLen:1072 +0ms
datastore:details DEBUG: Parser: IN_PACKETDATA +0ms
datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: KEXINIT +0ms
datastore:details DEBUG: Comparing KEXINITs ... +0ms
datastore:details DEBUG: (local) KEX algorithms: ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1 +0ms
datastore:details DEBUG: (remote) KEX algorithms: curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256,diffie-hellman-group14-sha1 +0ms
datastore:details DEBUG: KEX algorithm: ecdh-sha2-nistp256 +0ms
datastore:details DEBUG: (local) Host key formats: ssh-rsa,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384,ecdsa-sha2-nistp521 +0ms
datastore:details DEBUG: (remote) Host key formats: ssh-rsa,rsa-sha2-512,rsa-sha2-256,ecdsa-sha2-nistp256,ssh-ed25519 +0ms
datastore:details DEBUG: Host key format: ssh-rsa +0ms
datastore:details DEBUG: (local) Client->Server ciphers: aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm,[email protected],aes256-gcm,[email protected] +0ms


datastore:details DEBUG: (remote) Client->Server ciphers: [email protected],aes128-ctr, aes192-ctr,aes256-ctr,[email protected],aes256- [email protected] +0ms datastore:details DEBUG: Client->Server Cipher: aes128-ctr +0ms datastore:details DEBUG: (local) Server->Client ciphers: aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm, [email protected],aes256-gcm,aes256- [email protected] +0ms datastore:details DEBUG: (remote) Server->Client ciphers: [email protected],aes128-ctr, aes192-ctr,aes256-ctr,[email protected],aes256- [email protected] +0ms datastore:details DEBUG: Server->Client Cipher: aes128-ctr +0ms datastore:details DEBUG: (local) Client->Server HMAC algorithms: hmac-sha2-256,hmac-sha2-512,hmac-sha1 +0ms datastore:details DEBUG: (remote) Client->Server HMAC algorithms: [email protected], [email protected],[email protected],hmac- [email protected], [email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512, hmac-sha1 +0ms datastore:details DEBUG: Client->Server HMAC algorithm: hmac-sha2-256 +0ms datastore:details DEBUG: (local) Server->Client HMAC algorithms: hmac-sha2-256,hmac-sha2-512,hmac-sha1 +0ms datastore:details DEBUG: (remote) Server->Client HMAC algorithms: [email protected], [email protected],[email protected],hmac- [email protected], [email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512, hmac-sha1 +0ms datastore:details DEBUG: Server->Client HMAC algorithm: hmac-sha2-256 +0ms datastore:details DEBUG: (local) Client->Server compression algorithms: none,[email protected],zlib +0ms datastore:details DEBUG: (remote) Client->Server compression algorithms: none,[email protected] +0ms datastore:details DEBUG: Client->Server compression algorithm: none +0ms datastore:details DEBUG: (local) Server->Client compression algorithms: none,[email protected],zlib +0ms datastore:details DEBUG: (remote) Server->Client compression algorithms: none,[email protected] +0ms datastore:details DEBUG: Server->Client compression 
algorithm: none +0ms datastore:details DEBUG: Outgoing: Writing KEXECDH_INIT +1ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 8) +0ms datastore:details DEBUG: Parser: IN_PACKET +1ms datastore:details DEBUG: Parser: pktLen:636,padLen:7,remainLen:632 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: KEXECDH_REPLY +0ms datastore:details DEBUG: Checking host key format +0ms datastore:details DEBUG: Checking signature format +0ms datastore:details DEBUG: Verifying host fingerprint +0ms datastore:details DEBUG: Host accepted by default (no verification) +0ms datastore:details DEBUG: Verifying signature +1ms datastore:details DEBUG: Outgoing: Writing NEWKEYS +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 8) +0ms datastore:details DEBUG: Parser: IN_PACKET +0ms datastore:details DEBUG: Parser: pktLen:12,padLen:10,remainLen:8 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: NEWKEYS +0ms datastore:details DEBUG: Outgoing: Writing SERVICE_REQUEST (ssh-userauth) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +1ms datastore:details DEBUG: Parser: IN_PACKET +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:28,padLen:10,remainLen:16 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: SERVICE_ACCEPT +0ms datastore:details DEBUG: Outgoing: Writing USERAUTH_REQUEST (none) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG: Parser: IN_PACKET +1ms datastore:details 
DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:44,padLen:19,remainLen:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: USERAUTH_FAILURE +0ms datastore:details DEBUG: Client: none auth failed +0ms datastore:details DEBUG: Outgoing: Writing USERAUTH_REQUEST (password) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG: Parser: IN_PACKET +7ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:12,padLen:10,remainLen:0 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: USERAUTH_SUCCESS +0ms datastore:details DEBUG: Outgoing: Writing CHANNEL_OPEN (0, session) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +1ms datastore:details DEBUG: Parser: IN_PACKET +183ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:492,padLen:16,remainLen:480 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: GLOBAL_REQUEST ([email protected]) 
+0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG: Parser: IN_PACKET +43ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:28,padLen:10,remainLen:16 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms


datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +1ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_OPEN_CONFIRMATION +0ms datastore:details DEBUG: Outgoing: Writing CHANNEL_REQUEST (0, subsystem: sftp) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG: Parser: IN_PACKET +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:28,padLen:18,remainLen:16 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +1ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_WINDOW_ADJUST (0, 2097152) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG: Parser: IN_PACKET +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:12,padLen:6,remainLen:0 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_SUCCESS (0) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG: Outgoing: Writing CHANNEL_DATA (0) +0ms datastore:details DEBUG: Parser: IN_PACKET +3ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:172,padLen:8,remainLen:160 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms 
datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_DATA (0) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG[SFTP]: Outgoing: Writing OPEN +0ms datastore:details DEBUG: Outgoing: Writing CHANNEL_DATA (0) +1ms datastore:details DEBUG: Parser: IN_PACKET +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:44,padLen:17,remainLen:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_DATA (0) +0ms datastore:details DEBUG[SFTP]: Parser: Response: HANDLE +0ms datastore:details DEBUG[SFTP]: Outgoing: Writing READ +0ms datastore:details DEBUG: Outgoing: Writing CHANNEL_DATA (0) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +1ms datastore:details DEBUG: Parser: IN_PACKET +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:348,padLen:8,remainLen:336 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_DATA (0) +0ms datastore:details 
DEBUG[SFTP]: Parser: Response: DATA +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG[SFTP]: Outgoing: Writing READ +0ms datastore:details DEBUG: Outgoing: Writing CHANNEL_DATA (0) +0ms datastore:details DEBUG: Parser: IN_PACKET +1ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:60,padLen:18,remainLen:48 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_DATA (0) +0ms datastore:details DEBUG[SFTP]: Parser: Response: STATUS +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG[SFTP]: Outgoing: Writing CLOSE +0ms datastore:details DEBUG: Outgoing: Writing CHANNEL_DATA (0) +0ms datastore:protocol:sftp storing to sftp://alex@localhost/home/alex/code/node-datastore/test/data/ example.json +255ms datastore:details DEBUG: Local ident: 'SSH-2.0-ssh2js0.4.2' +1ms datastore:details DEBUG: Client: Trying localhost on port 22 ... +0ms datastore:details DEBUG: Parser: IN_PACKET +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:44,padLen:6,remainLen:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms


datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_DATA (0) +0ms datastore:details DEBUG[SFTP]: Parser: Response: STATUS +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG: Client: Connected +0ms datastore:details DEBUG: Parser: IN_INIT +6ms datastore:details DEBUG: Parser: IN_GREETING +0ms datastore:details DEBUG: Parser: IN_HEADER +0ms datastore:details DEBUG: Remote ident: 'SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3' +0ms datastore:details DEBUG: Outgoing: Writing KEXINIT +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 8) +1ms datastore:details DEBUG: Parser: IN_PACKET +0ms datastore:details DEBUG: Parser: pktLen:1076,padLen:6,remainLen:1072 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: KEXINIT +0ms datastore:details DEBUG: Comparing KEXINITs ... 
+0ms datastore:details DEBUG: (local) KEX algorithms: ecdh-sha2-nistp256,ecdh-sha2-nistp384, ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie- hellman-group14-sha1 +0ms datastore:details DEBUG: (remote) KEX algorithms: curve25519-sha256,[email protected], ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2- nistp521,diffie-hellman-group-exchange-sha256, diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group14-sha256, diffie-hellman- group14-sha1 +0ms datastore:details DEBUG: KEX algorithm: ecdh-sha2-nistp256 +0ms datastore:details DEBUG: (local) Host key formats: ssh-rsa,ecdsa-sha2-nistp256,ecdsa-sha2-nistp384, ecdsa-sha2-nistp521 +0ms datastore:details DEBUG: (remote) Host key formats: ssh-rsa,rsa-sha2-512,rsa-sha2-256, ecdsa-sha2-nistp256,ssh-ed25519 +0ms datastore:details DEBUG: Host key format: ssh-rsa +0ms datastore:details DEBUG: (local) Client->Server ciphers: aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm, [email protected],aes256-gcm,aes256- [email protected] +0ms datastore:details DEBUG: (remote) Client->Server ciphers: [email protected],aes128-ctr, aes192-ctr,aes256-ctr,[email protected],aes256- [email protected] +0ms datastore:details DEBUG: Client->Server Cipher: aes128-ctr +0ms datastore:details DEBUG: (local) Server->Client ciphers: aes128-ctr,aes192-ctr,aes256-ctr,aes128-gcm, [email protected],aes256-gcm,aes256- [email protected] +0ms datastore:details DEBUG: (remote) Server->Client ciphers: [email protected],aes128-ctr, aes192-ctr,aes256-ctr,[email protected],aes256- [email protected] +0ms datastore:details DEBUG: Server->Client Cipher: aes128-ctr +0ms datastore:details DEBUG: (local) Client->Server HMAC algorithms: hmac-sha2-256,hmac-sha2-512,hmac-sha1 +0ms datastore:details DEBUG: (remote) Client->Server HMAC algorithms: [email protected], [email protected],[email protected],hmac- [email protected], [email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512, hmac-sha1 +0ms datastore:details 
DEBUG: Client->Server HMAC algorithm: hmac-sha2-256 +0ms datastore:details DEBUG: (local) Server->Client HMAC algorithms: hmac-sha2-256,hmac-sha2-512,hmac-sha1 +0ms datastore:details DEBUG: (remote) Server->Client HMAC algorithms: [email protected], [email protected],[email protected],hmac- [email protected], [email protected],[email protected],[email protected],hmac-sha2-256,hmac-sha2-512, hmac-sha1 +0ms datastore:details DEBUG: Server->Client HMAC algorithm: hmac-sha2-256 +0ms datastore:details DEBUG: (local) Client->Server compression algorithms: none,[email protected],zlib +0ms datastore:details DEBUG: (remote) Client->Server compression algorithms: none,[email protected] +0ms datastore:details DEBUG: Client->Server compression algorithm: none +0ms datastore:details DEBUG: (local) Server->Client compression algorithms: none,[email protected],zlib +0ms datastore:details DEBUG: (remote) Server->Client compression algorithms: none,[email protected] +0ms datastore:details DEBUG: Server->Client compression algorithm: none +0ms datastore:details DEBUG: Outgoing: Writing KEXECDH_INIT +1ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 8) +0ms datastore:details DEBUG: Parser: IN_PACKET +1ms datastore:details DEBUG: Parser: pktLen:636,padLen:7,remainLen:632 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: KEXECDH_REPLY +0ms datastore:details DEBUG: Checking host key format +0ms datastore:details DEBUG: Checking signature format +0ms datastore:details DEBUG: Verifying host fingerprint +0ms datastore:details DEBUG: Host accepted by default (no verification) +0ms datastore:details DEBUG: Verifying signature +1ms datastore:details DEBUG: Outgoing: Writing NEWKEYS +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 8) +0ms datastore:details DEBUG: Parser: IN_PACKET +0ms datastore:details DEBUG: Parser: pktLen:12,padLen:10,remainLen:8 +0ms datastore:details DEBUG: Parser: 
IN_PACKETDATA +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: NEWKEYS +0ms datastore:details DEBUG: Outgoing: Writing SERVICE_REQUEST (ssh-userauth) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG: Parser: IN_PACKET +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:28,padLen:10,remainLen:16 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +1ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: SERVICE_ACCEPT +0ms datastore:details DEBUG: Outgoing: Writing USERAUTH_REQUEST (none) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG: Parser: IN_PACKET +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:44,padLen:19,remainLen:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: USERAUTH_FAILURE +0ms datastore:details DEBUG: Client: none auth failed +0ms datastore:details DEBUG: Outgoing: Writing USERAUTH_REQUEST (password) +1ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms

- 36/80 - Copyright © 2019 - 2021 Alexander Schilling 3.2.2 SSH File Transfer Protocol Handler

datastore:details DEBUG: Parser: IN_PACKET +6ms datastore:details DEBUG: Parser: Decrypting +1ms datastore:details DEBUG: Parser: pktLen:12,padLen:10,remainLen:0 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: USERAUTH_SUCCESS +0ms datastore:details DEBUG: Outgoing: Writing CHANNEL_OPEN (0, session) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG: Parser: IN_PACKET +156ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:492,padLen:16,remainLen:480 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +1ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: GLOBAL_REQUEST ([email protected]) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG: Parser: IN_PACKET +43ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:28,padLen:10,remainLen:16 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_OPEN_CONFIRMATION +0ms datastore:details DEBUG: Outgoing: Writing 
CHANNEL_REQUEST (0, subsystem: sftp) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +1ms datastore:details DEBUG: Parser: IN_PACKET +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:28,padLen:18,remainLen:16 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_WINDOW_ADJUST (0, 2097152) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG: Parser: IN_PACKET +0ms datastore:details DEBUG: Parser: Decrypting +1ms datastore:details DEBUG: Parser: pktLen:12,padLen:6,remainLen:0 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_SUCCESS (0) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG: Outgoing: Writing CHANNEL_DATA (0) +0ms datastore:details DEBUG: Parser: IN_PACKET +1ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:172,padLen:8,remainLen:160 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +1ms datastore:details DEBUG: Parser: 
IN_PACKETDATAAFTER, packet: CHANNEL_DATA (0) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG[SFTP]: Outgoing: Writing OPEN +0ms datastore:details DEBUG: Outgoing: Writing CHANNEL_DATA (0) +1ms datastore:details DEBUG: Parser: IN_PACKET +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:44,padLen:17,remainLen:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_DATA (0) +0ms datastore:details DEBUG[SFTP]: Parser: Response: HANDLE +0ms datastore:details DEBUG[SFTP]: Outgoing: Writing FSETSTAT +1ms datastore:details DEBUG: Outgoing: Writing CHANNEL_DATA (0) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG: Parser: IN_PACKET +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:44,padLen:6,remainLen:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +1ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_DATA (0) +0ms datastore:details DEBUG[SFTP]: Parser: Response: STATUS +0ms


datastore:details DEBUG[SFTP]: Outgoing: Writing WRITE +0ms datastore:details DEBUG: Outgoing: Writing CHANNEL_DATA (0) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +0ms datastore:details DEBUG: Parser: IN_PACKET +1ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: pktLen:44,padLen:6,remainLen:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATA +0ms datastore:details DEBUG: Parser: Decrypting +0ms datastore:details DEBUG: Parser: HMAC size:32 +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY +0ms datastore:details DEBUG: Parser: Verifying MAC +0ms datastore:details DEBUG: Parser: IN_PACKETDATAVERIFY (Valid HMAC) +0ms datastore:details DEBUG: Parser: IN_PACKETDATAAFTER, packet: CHANNEL_DATA (0) +0ms datastore:details DEBUG[SFTP]: Parser: Response: STATUS +0ms datastore:details DEBUG[SFTP]: Outgoing: Writing CLOSE +0ms datastore:details DEBUG: Outgoing: Writing CHANNEL_DATA (0) +0ms datastore:details DEBUG: Parser: IN_PACKETBEFORE (expecting 16) +1ms


3.2.3 FTP and FTPS Handler

This handler supports explicit FTPS over TLS and passive mode over IPv6.

Info

Prefer alternative transfer protocols like HTTPS or SFTP (SSH). Use this protocol only if you have no other choice and need to use FTP. Try to use FTPS whenever possible; plain FTP does not provide any security.

This handler is used for the following protocols: ftp , ftps

Example

Pattern: [ftp|ftps]://[<user>:<password>@]<host>[:<port>]/<path> Example: ftp://ftp.my-server/my-app/config.json Basic Auth: ftp://alex:test123@my-server/my-app/config.json

Meta Data

The file path and file stats are stored as meta data ds.meta .

{ path: 'ftp://user:pass@localhost/home/alex/code/node-datastore/test/data/example.log', stats: { modifyTime: 1566868566000 } };

Debugging

If something goes wrong, you can get more information by showing the verbose protocol transfer using DEBUG=datastore:* :

datastore:protocol:ftp loading ftp://alex@localhost/home/alex/code/node-datastore/test/data/ example.json +49ms datastore:details Connected to 127.0.0.1:21 +3ms datastore:details < 220 ProFTPD 1.3.5e Server (Debian) [::ffff:127.0.0.1] +184ms datastore:details Login security: No encryption +0ms datastore:details > USER alex +0ms datastore:details < 331 Password required for alex +1ms datastore:details > PASS ### +0ms datastore:details < 230 User alex logged in +8ms datastore:details > TYPE I +0ms datastore:details < 200 Type set to I +0ms datastore:details > STRU F +0ms datastore:details < 200 Structure set to F +1ms datastore:details Trying to find optimal transfer strategy... +0ms datastore:details > EPSV +0ms datastore:details < 229 Entering Extended Passive Mode (|||10345|) +0ms datastore:details Optimal transfer strategy found. +1ms datastore:details > RETR /home/alex/code/node-datastore/test/data/example.json +0ms datastore:details < 150 Opening BINARY mode data connection for /home/alex/code/node-datastore/test/ data/example.json (317 bytes) +0ms datastore:details Downloading from 127.0.0.1:10345 (No encryption) +0ms datastore:details < 226 Transfer complete +1ms

datastore:protocol:ftp storing to ftp://alex@localhost/home/alex/code/node-datastore/test/data/ example.json +197ms datastore:details Connected to 127.0.0.1:21 +3ms datastore:details < 220 ProFTPD 1.3.5e Server (Debian) [::ffff:127.0.0.1] +200ms datastore:details Login security: No encryption +0ms datastore:details > USER alex +0ms datastore:details < 331 Password required for alex +1ms datastore:details > PASS ### +0ms datastore:details < 230 User alex logged in +8ms datastore:details > TYPE I +0ms datastore:details < 200 Type set to I +0ms datastore:details > STRU F +0ms datastore:details < 200 Structure set to F +1ms datastore:details Trying to find optimal transfer strategy... +0ms datastore:details > EPSV +0ms datastore:details < 229 Entering Extended Passive Mode (|||10232|) +1ms datastore:details Optimal transfer strategy found. +0ms datastore:details > STOR /home/alex/code/node-datastore/test/data/example.json +0ms datastore:details < 150 Opening BINARY mode data connection for /home/alex/code/node-datastore/test/ data/example.json +1ms


datastore:details Uploading to 127.0.0.1:10232 (No encryption) +0ms datastore:details < 226 Transfer complete +3ms


3.2.4 HTTP and HTTPS Handler

This allows you to get the data structure from a web server or REST service.

This handler is used for the following protocols: http , https

Example

Pattern: [http|https]://[<user>:<password>@]<host>[:<port>]/<path>[?<query>] Example: https://my-server/my-app/config.json Basic Auth: https://alex:test123@my-server/my-app/config.json
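The Basic Auth form above is an ordinary URL with embedded credentials. As an illustration (not the handler's actual code), Node's built-in WHATWG URL parser splits it into its parts:

```javascript
// Parse the Basic Auth example URL into its components.
const url = new URL('https://alex:test123@my-server/my-app/config.json');

console.log(url.protocol); // 'https:'
console.log(url.username); // 'alex'
console.log(url.password); // 'test123'
console.log(url.hostname); // 'my-server'
console.log(url.pathname); // '/my-app/config.json'
```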

If no format is defined in the options, it will be auto-detected based on the returned MIME type.

The following specific options are possible:

• proxy: string - a proxy server and port like 'myserver.de:80'; an IP address is also possible. You may also give the real server here to bypass the normal name resolution.

• httpMethod: string - one of: get, delete, head, options, post, put, patch (default: get)

• httpHeader: string[] - add specific HTTP/HTTPS header lines

• httpData: string - additional data to send with an HTTP/HTTPS POST or PUT request

• ignoreError: boolean - ignore response codes other than 2xx for the HTTP/HTTPS protocol

Meta Data

The status and server response headers are stored as meta data ds.meta .

{ status: 200, statusText: 'OK', headers: { server: 'nginx', date: 'Sat, 26 Oct 2019 19:47:18 GMT', 'content-type': 'text/plain; charset=utf-8', 'content-length': '317', connection: 'close', 'cache-control': 'max-age=60, public', 'content-disposition': 'inline', etag: 'W/"24d4aa47151123702a7a3b4e92ff8320"', 'referrer-policy': 'strict-origin-when-cross-origin, strict-origin-when-cross-origin', 'set-cookie': [ 'experimentation_subject_id=Ijg4Nzg4MTAzLTFiM2EtNDZhNi05MDY5LWE5NTg5YzlhYzZhNCI%3D--f9891296ec3948ca54191d83b2fbaec0403f52d4; domain=.gitlab.com; path=/; expires=Wed, 26 Oct 2039 19:47:18 -0000; secure' ], 'x-content-type-options': 'nosniff', 'x-download-options': 'noopen', 'x-frame-options': 'DENY', 'x-permitted-cross-domain-policies': 'none', 'x-request-id': 'mfxS9d29uS1', 'x-runtime': '0.063831', 'x-ua-compatible': 'IE=edge', 'x-xss-protection': '1; mode=block', 'strict-transport-security': 'max-age=31536000', 'gitlab-lb': 'fe-16-lb-gprd', 'gitlab-sv': 'web-23-sv-gprd' } }

Debugging

If something goes wrong, you can get more information by showing the verbose protocol transfer using DEBUG=datastore:* :

datastore:protocol:http loading /alinex/node-datastore/raw/master/test/data/example.json +2s REQUEST { method: 'GET', url: 'https://gitlab.com/alinex/node-datastore/raw/master/test/data/example.json', headers: { accept: 'application/json;q=0.9, */*;q=0.8' }, callback: [Function] } REQUEST make request https://gitlab.com/alinex/node-datastore/raw/master/test/data/example.json REQUEST onRequestResponse https://gitlab.com/alinex/node-datastore/raw/master/test/data/example.json 200 { server: 'nginx',


date: 'Mon, 29 Apr 2019 19:24:41 GMT', 'content-type': 'text/plain; charset=utf-8', 'content-length': '317', connection: 'close', 'cache-control': 'max-age=60, public', 'content-disposition': 'inline', etag: 'W/"24d4aa47151123702a7a3b4e92ff8320"', 'x-content-type-options': 'nosniff', 'x-frame-options': 'DENY', 'x-request-id': 'qPpHwQKGem3', 'x-runtime': '0.078344', 'x-ua-compatible': 'IE=edge', 'x-xss-protection': '1; mode=block', 'strict-transport-security': 'max-age=31536000', 'referrer-policy': 'strict-origin-when-cross-origin', 'content-security-policy': "object-src 'none'; worker-src https://assets.gitlab-static.net https://gl-canary.freetls.fastly.net https://gitlab.com blob:; script-src 'self' 'unsafe- inline' 'unsafe-eval' https://assets.gitlab-static.net https://gl-canary.freetls.fastly.net https://www.google.com/recaptcha/ https://www.recaptcha.net/ https://www.gstatic.com/recaptcha/ https://apis.google.com; style-src 'self' 'unsafe-inline' https://assets.gitlab-static.net https://gl- canary.freetls.fastly.net; img-src * data: blob:; frame-src 'self' https://www.google.com/recaptcha/ https://www.recaptcha.net/ https:// content.googleapis.com https://content-compute.googleapis.com https://content-cloudbilling.googleapis.com https://content-cloudresourcemanager.googleapis.com https://*.codesandbox.io; frame-ancestors 'self'; connect-src 'self' https://assets.gitlab-static.net https://gl-canary.freetls.fastly.net wss://gitlab.com https://sentry.gitlab.net https://customers.gitlab.com https://snowplow.trx.gitlab.net" } REQUEST reading response's body REQUEST finish init function https://gitlab.com/alinex/node-datastore/raw/master/test/data/example.json REQUEST response end https://gitlab.com/alinex/node-datastore/raw/master/test/data/example.json 200 { server: 'nginx', date: 'Mon, 29 Apr 2019 19:24:41 GMT', 'content-type': 'text/plain; charset=utf-8', 'content-length': '317', connection: 'close', 'cache-control': 'max-age=60, public', 'content-disposition': 
'inline', etag: 'W/"24d4aa47151123702a7a3b4e92ff8320"', 'x-content-type-options': 'nosniff', 'x-frame-options': 'DENY', 'x-request-id': 'qPpHwQKGem3', 'x-runtime': '0.078344', 'x-ua-compatible': 'IE=edge', 'x-xss-protection': '1; mode=block', 'strict-transport-security': 'max-age=31536000', 'referrer-policy': 'strict-origin-when-cross-origin', 'content-security-policy': "object-src 'none'; worker-src https://assets.gitlab-static.net https://gl-canary.freetls.fastly.net https://gitlab.com blob:; script-src 'self' 'unsafe- inline' 'unsafe-eval' https://assets.gitlab-static.net https://gl-canary.freetls.fastly.net https://www.google.com/recaptcha/ https://www.recaptcha.net/ https://www.gstatic.com/recaptcha/ https://apis.google.com; style-src 'self' 'unsafe-inline' https://assets.gitlab-static.net https://gl- canary.freetls.fastly.net; img-src * data: blob:; frame-src 'self' https://www.google.com/recaptcha/ https://www.recaptcha.net/ https:// content.googleapis.com https://content-compute.googleapis.com https://content-cloudbilling.googleapis.com https://content-cloudresourcemanager.googleapis.com https://*.codesandbox.io; frame-ancestors 'self'; connect-src 'self' https://assets.gitlab-static.net https://gl-canary.freetls.fastly.net wss://gitlab.com https://sentry.gitlab.net https://customers.gitlab.com https://snowplow.trx.gitlab.net" } REQUEST end event https://gitlab.com/alinex/node-datastore/raw/master/test/data/example.json REQUEST has body https://gitlab.com/alinex/node-datastore/raw/master/test/data/example.json 317 REQUEST emitting complete https://gitlab.com/alinex/node-datastore/raw/master/test/data/example.json

To go one step deeper, set the environment variable NODE_DEBUG=http,https .


3.2.5 PostgreSQL Database Handler

This protocol loads data directly from a PostgreSQL database. As the data is tabular, the same operations take place as for the CSV format, but here the parsing and reformatting of the values is done directly in the protocol handler.

Example

Pattern: postgres://<user>:<password>@<host>:<port>/<database>#<query> With password: postgres://alex:mypass4u73@localhost:5432/test#SELECT * FROM address

The following options are possible:

• search: string - defines the search path to use, e.g. my_schema, public

• records: boolean - set to true for record format, see below

The result is already formatted by the database and passed through the formatter as JSON without any change. So no compressor or formatter can be selected with this protocol.

Hint

Complex Database Statements

More complex queries with joins, subselects, or dblink calls are possible as well. But to keep it simple and easy to maintain, better create a database view or function that holds the complexity and include only the simple call to it in the DataStore source.

For stability reasons the handler will try to establish the database connection up to 5 times within 2.5 seconds before giving up.

The parsing of the table works like the CSV format, with a list and a record format. A table with two columns is interpreted as a name/value list and formatted as one structure, with the name used as the path in the structure and the value stored there. If more columns are present, or the records option is set, each row is represented as an individual object within a list.
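The two-column name/value interpretation described above can be sketched as follows. This is a simplified illustration, not the actual implementation; the helper name and the example rows are made up:

```javascript
// Build a nested structure from two-column rows, using the name as a path.
function rowsToStructure(rows) {
  const result = {};
  for (const { name, value } of rows) {
    const parts = name.split('.');
    let node = result;
    // Walk/create intermediate objects for all but the last path part.
    for (const part of parts.slice(0, -1)) {
      node = node[part] = node[part] || {};
    }
    node[parts[parts.length - 1]] = value;
  }
  return result;
}

// Hypothetical result of: SELECT name, value FROM config
const rows = [
  { name: 'server.host', value: 'localhost' },
  { name: 'server.port', value: 5432 },
];
console.log(rowsToStructure(rows));
// → { server: { host: 'localhost', port: 5432 } }
```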


3.2.6 Shell Execution Protocol Handler

This protocol will execute a command locally or remotely (using ssh from the local shell) and read the data from STDOUT .

This handler is used for the following protocols: shell , ssh

Example

Pattern: [shell|ssh]://[<user>:<password>@<host>]<path>#<command> Local: shell:///etc/my-app#cat config.json Remote: ssh://alex@my-server/etc/my-app#cat config.json

The path has to be absolute, as shown above.

The following options are possible:

• privateKey - Buffer or string that contains a private key for either key-based or host-based user authentication

• passphrase - for an encrypted private key, this is the passphrase used to decrypt it

Meta Data

The command stores additional information as meta data ds.meta (only on local calls).

{ command: 'ssh alex@localhost cd /home/alex/code/node-datastore/test/mocha/protocol \\&\\& cat ../../data/example.json', exitCode: 0, stderr: '', all: undefined, failed: false, timedOut: false, isCanceled: false, killed: false }

Writing

For writing, the data is sent as a string through STDIN to the process.

const load1 = `shell://${__dirname}#cat ../../data/example.json`; const store = `shell://${__dirname}#cat > ../../data/example.json`;

The above example shows how to read and store.


3.3 Compression

3.3.1 GZip Compression

Gzip is based on the DEFLATE algorithm, which is a combination of LZ77 and Huffman coding; that's the same algorithm as used in ZIP archives. But Gzip compresses just a single file. Compressed archives are typically created by assembling collections of files into a single tar archive and then compressing them. See the tgz compression.

Definition

Format: gzip File extension: .gz

The following options are possible:

• compressionLevel: number - gzip compression level 0 (no compression) to 9 (best compression, default)

An example (base.js) compressed will look like:

00000000: 1f8b 0800 0000 0000 0003 554f 3d0b c230 ...... UO=..0 00000010: 10dd fd15 4726 855a d28a 8566 13ea e2aa ....G&.Z...f.... 00000020: 93d2 216d 0fad 5c13 49e3 0714 ffbb 972a ..!m..\.I...... * 00000030: 4297 70f7 debd 8f0c 3300 a33b 5420 f0a5 B.p.....3..;T .. 00000040: bb1b a188 0274 2752 e31b b6ca 5a42 6d14 .....t'R....ZBm. 00000050: 7877 c700 f4de b5e6 cc1a 8fbd ff09 ba0a xw...... 00000060: 9d82 759c 85b5 d19e 2d0d 3ea1 e069 2e52 ..u.....-.>..i.R 00000070: 9964 4bb9 5e26 f290 e44a 666a 95c5 b9cc .dK.^&...Jfj.... 00000080: 8f62 11ae a9ed bd82 5312 411a c1aa 0cd0 .b...... S.A..... 00000090: 0d5d 6f39 71e0 f9df 7043 dcd1 34e8 605f .]o9q...pC..4.`_ 000000a0: 5f5a 22ee 3086 035c 6dc5 7c81 0f24 cb52 _Z".0..\m.|..$.R 000000b0: c1e0 3b30 b50d 5f7a b1f9 c468 7bb6 e67b ..;0.._z...h{..{ 000000c0: 330d d869 d31a 3d32 e5ec fd01 c5c7 e3f1 3..i..=2...... 000000d0: 1c01 0000 ....

It consists of:

• a 10-byte header, containing a magic number ( 1f 8b ), the compression ID (08 for DEFLATE), file flags, a 32-bit timestamp, compression flags and the operating system ID.

• optional extra headers denoted by file flags, such as the original filename

• a body, containing a DEFLATE-compressed payload

• an 8-byte footer, containing a CRC-32 checksum and the length of the original uncompressed data, modulo 2^32


3.3.2 Raw Deflate Compression

Raw deflate has no defined file extension, but .deflate is supported here.

Definition

Format: deflate File extension: .deflate

The following options are possible:

• compressionLevel: number - compression level 0 (no compression) to 9 (best compression, default)

An example (base.js) compressed will look like:

00000000: 01111000 10011100 01010101 01001111 00111101 00001011 x.UO=. 00000006: 11000010 00110000 00010000 11011101 11111101 00010101 .0.... 0000000c: 01000111 00100110 10000101 01011010 11010010 10001010 G&.Z.. 00000012: 10000101 01100110 00010011 11101010 11100010 10101010 .f.... 00000018: 10010011 11010010 00100001 01101101 00001111 10101101 ..!m.. 0000001e: 01011100 00010011 01001001 11100011 00000111 00010100 \.I... 00000024: 11111111 10111011 10010111 00101010 01000010 10010111 ...*B. 0000002a: 01110000 11110111 11011110 10111101 10001111 00001100 p..... 00000030: 00110011 00000000 10100011 00111011 01010100 00100000 3..;T 00000036: 11110000 10100101 10111011 00011011 10100001 10001000 ...... 0000003c: 00000010 01110100 00100111 01010010 11100011 00011011 .t'R.. 00000042: 10110110 11001010 01011010 01000010 01101101 00010100 ..ZBm. 00000048: 01111000 01110111 11000111 00000000 11110100 11011110 xw.... 0000004e: 10110101 11100110 11001100 00011010 10001111 10111101 ...... 00000054: 11111111 00001001 10111010 00001010 10011101 10000010 ...... 0000005a: 01110101 10011100 10000101 10110101 11010001 10011110 u..... 00000060: 00101101 00001101 00111110 10100001 11100000 01101001 -.>..i 00000066: 00101110 01010010 10011001 01100100 01001011 10111001 .R.dK. 0000006c: 01011110 00100110 11110010 10010000 11100100 01001010 ^&...J 00000072: 01100110 01101010 10010101 11000101 10111001 11001100 fj.... 00000078: 10001111 01100010 00010001 10101110 10101001 11101101 .b.... 0000007e: 10111101 10000010 01010011 00010010 01000001 00011010 ..S.A. 00000084: 11000001 10101010 00001100 11010000 00001101 01011101 .....] 0000008a: 01101111 00111001 01110001 11100000 11111001 11011111 o9q... 00000090: 01110000 01000011 11011100 11010001 00110100 11101000 pC..4. 00000096: 01100000 01011111 01011111 01011010 00100010 11101110 `__Z". 0000009c: 00110000 10000110 00000011 01011100 01101101 11000101 0..\m. 
000000a2: 01111100 10000001 00001111 00100100 11001011 01010010 |..$.R 000000a8: 11000001 11100000 00111011 00110000 10110101 00001101 ..;0.. 000000ae: 01011111 01111010 10110001 11111001 11000100 01101000 _z...h 000000b4: 01111011 10110110 11100110 01111011 00110011 00001101 {..{3. 000000ba: 11011000 01101001 11010011 00011010 00111101 00110010 .i..=2 000000c0: 11100101 11101100 11111101 00000001 10000000 10110011 ...... 000000c6: 01001110 10110000 N.


3.3.3 Brotli Compression

Brotli is based on the same LZ77 and Huffman coding as DEFLATE (used by Gzip), but additionally uses a predefined dictionary to achieve better compression at high speed. It was designed by Google specifically for web transfer.

Definition

Format: brotli File extension: .br

An example (base.js) compressed will look like:

00000000: 1b1b 0120 2c0e 7813 eb94 f0aa 86a6 7e9e ... ,.x...... ~. 00000010: 0860 5c96 2953 7b0d d2a7 2656 ddf4 7a4b .`\.)S{...&V..zK 00000020: 6e95 3a51 30d7 39f1 7063 8830 da2a 526a n.:Q0.9.pc.0.*Rj 00000030: 1005 9698 9637 21e7 f7f6 1b9b 201e ca4f .....7!...... O 00000040: fac8 1445 562b a7db 0c8e 66fb d51d c450 ...EV+....f....P 00000050: 5e0f cd4e 785e c3c2 0689 a0c0 3ef7 ff5b ^..Nx^...... >..[ 00000060: 66f3 1672 82a0 ab3f c357 1bd5 d426 9592 f..r...?.W...&.. 00000070: d9e7 e588 6f38 45ad 31ab 2bbf da58 5aea ....o8E.1.+..XZ. 00000080: da6f 58c4 d19d e341 ae20 6546 9955 a112 .oX....A. eF.U.. 00000090: 05a2 03a4 d70e 7a6c 635c dc1f 4263 2934 ...... zlc\..Bc)4 000000a0: 4a06 1f80 0657 c1b0 ddb6 bd52 cb65 c716 J....W.....R.e.. 000000b0: 300b 0.

Its compressed size is mostly smaller than Gzip's, and it is also fast to compress and decompress. The reason is that it uses a dictionary of common keywords and phrases on both client and server side and thus achieves a better compression ratio. It is supported by all major browsers.

For file compression, however, it is rarely used today.


3.3.4 BZip2 Compression

The bzip2 compression uses the Burrows–Wheeler algorithm. BZip2 compresses just a single file. Compressed archives are typically created by assembling collections of files into a single tar archive and then compressing them. See the tar.bz2 compression.

Definition

Format: bzip2 File extension: .bz2

An example (base.js) compressed will look like:

00000000: 425a 6839 3141 5926 5359 de0d 114d 0000 BZh91AY&SY...M.. 00000010: 72df 8044 1040 e7fb 3226 100c 1a3f f7df [email protected]&...?.. 00000020: ea30 013a 081a 048d 1a93 6123 614f 4d4f .0.:...... a#aOMO 00000030: 5347 9469 9a86 9e28 61a1 a680 1a00 0d00 SG.i...(a...... 00000040: 00c8 6812 9a9a a7a7 aa3c 927a 10c1 a086 ..h...... <.z.... 00000050: 8018 80ce 0c40 1c39 2e5f de5c e73a c59f [email protected]._.\.:.. 00000060: f3e1 4066 6c60 c6b1 a056 feda e767 adc9 ..@fl`...V...g.. 00000070: 10c1 98c9 c9b2 ed38 7b8e 6ebd baf7 6824 ...... 8{.n...h$ 00000080: d00f 93f6 3c5a 9a84 b2c9 d9f4 d975 6096 ........ 000000f0: d4b8 870d 25d7 edd1 37a0 94d5 0bfa 72d1 ....%...7.....r. 00000100: 4a8a b784 1ab3 c5b1 16da 5345 ce23 0f85 J...... SE.#.. 00000110: 6884 9463 a147 98f2 624a 43a4 9ebb 1db6 h..c.G..bJC..... 00000120: ac25 4bef cc61 b334 5713 7765 3026 cec8 .%K..a.4W.we0&.. 00000130: 94b3 abba e983 2909 ca2c 8d17 026d 17c5 ...... )..,...m.. 00000140: 66d1 4a3e 41f6 e21a c2c1 2625 23dd 68ea f.J>A.....&%#.h. 00000150: 91d0 5dc9 14e1 4243 7834 4534 ..]...BCx4E4


3.3.5 LZMA Compression

The LZMA compression uses the Lempel–Ziv–Markov chain algorithm. LZMA compresses just a single file. Compressed archives are typically created by assembling collections of files into a single tar archive and then compressing them. See the tar.lzma compression.

Definition

Format: lzma File extension: .lzma

The following options are possible:

• compressionLevel: number - compression level 0 (no compression) to 9 (best compression, default)

An example (base.js) compressed will look like:

00000000: 5d00 0000 021c 0100 0000 0000 0000 3d82 ]...... =. 00000010: 8017 6817 1424 a1c6 43a6 dd4f d57f 32d9 ..h..$..C..O..2. 00000020: 0fa4 0a98 756f 05b1 badb 5635 8534 7ad2 ....uo....V5.4z. 00000030: ee77 9b91 cbbd 4a11 5aa3 811c 0e6c 61bb .w....J.Z....la. 00000040: 9e6b af85 8a28 ab5f 56b0 d51b 8037 51c5 .k...(._V....7Q. 00000050: 9a46 e85c bcf2 13c9 d1c2 dc8f 0c52 7782 .F.\...... Rw. 00000060: 28bd 467e 50f9 0e25 4651 8e76 0f6d a39a (.F~P..%FQ.v.m.. 00000070: dc86 8b3b ff8b 8c6d 99af 7f2c 2523 8935 ...;...m...,%#.5 00000080: d6bb f04a 9d62 c274 e036 4533 27bc c071 ...J.b.t.6E3'..q 00000090: d5aa 1a6f 09df 1312 b6f0 1311 12c9 6e19 ...o...... n. 000000a0: bb3d b415 a6ef 507b abcd 5aa1 7ad8 e073 .=....P{..Z.z..s 000000b0: 86ba 1121 062c 9b2f 07da 8d84 30f4 f93d ...!.,./....0..= 000000c0: 5ef1 ebc6 7637 527d befe bc53 58b0 05cd ^...v7R}...SX... 000000d0: fd65 3690 ffee 5231 7e .e6...R1~


3.3.6 TAR Archive

The TAR (tape archive) format has no compression and only collects multiple files together in one file. Often this combined file is compressed using different algorithms. See more at TAR GZip.

Definition

Format: tar#<file> File extension: .tar

If no hash is given on reading, the first file of the archive is used. So if the archive only contains one file, the hash part of the URI is not needed on reading.
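The fields visible in the dump below follow the fixed ustar header layout: the file name occupies the first 100 bytes and the file size is stored as an octal ASCII number in the 12 bytes at offset 124. A small sketch (not the actual implementation) building one header block and reading those fields back:

```javascript
// Build one 512-byte ustar header block with the two fields of interest.
const header = Buffer.alloc(512);
header.write('base.js', 0);        // name field: offset 0, 100 bytes, NUL-padded
header.write('00000001040 ', 124); // size field: offset 124, 12 bytes, octal text

// Read the fields back as a tar reader would.
const name = header.toString('utf8', 0, 100).replace(/\0.*$/, '');
const size = parseInt(header.toString('utf8', 124, 136), 8);
console.log(name, size); // base.js 544
```

The octal size 1040 equals 544 bytes, which matches the file data region of the dump below (0x200 to 0x420).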

An example (base.js) as single file archive will look like:

00000000: 6261 7365 2e6a 7300 0000 0000 0000 0000 base.js...... 00000010: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000020: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000030: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000040: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000050: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000060: 0000 0000 3030 3036 3634 2000 3030 3137 ....000664 .0017 00000070: 3530 2000 3030 3137 3530 2000 3030 3030 50 .001750 .0000 00000080: 3030 3031 3034 3020 3133 3435 3437 3033 0001040 13454703 00000090: 3733 3620 3031 3232 3633 0020 3000 0000 736 012263. 0... 000000a0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000000b0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000000c0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000000d0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000000e0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000000f0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000100: 0075 7374 6172 0030 3061 6c65 7800 0000 .ustar.00alex... 00000110: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000120: 0000 0000 0000 0000 0061 6c65 7800 0000 ...... alex... 00000130: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000140: 0000 0000 0000 0000 0030 3030 3030 3020 ...... 000000 00000150: 0030 3030 3030 3020 0000 0000 0000 0000 .000000 ...... 00000160: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000170: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000180: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000190: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000001a0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000001b0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000001c0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000001d0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000001e0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000001f0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 
00000200: 2f2f 2075 7365 2061 6e20 6f62 6a65 6374 // use an object 00000210: 0a6d 6f64 756c 652e 6578 706f 7274 7320 .module.exports 00000220: 3d20 7b0a 2020 2020 6e61 6d65 3a20 2765 = {. name: 'e 00000230: 7861 6d70 6c65 272c 0a20 2020 202f 2f20 xample',. // 00000240: 6e75 6c6c 2076 616c 7565 0a20 2020 206e null value. n 00000250: 756c 6c3a 206e 756c 6c2c 0a20 2020 202f ull: null,. / 00000260: 2f20 626f 6f6c 6561 6e20 7365 7474 696e / boolean settin 00000270: 670a 2020 2020 626f 6f6c 6561 6e3a 2074 g. boolean: t 00000280: 7275 652c 0a20 2020 202f 2f20 696e 636c rue,. // incl 00000290: 7564 6520 6120 7374 7269 6e67 0a20 2020 ude a string. 000002a0: 2073 7472 696e 673a 2027 7465 7374 272c string: 'test', 000002b0: 0a20 2020 202f 2f20 616e 7920 696e 7465 . // any inte 000002c0: 6765 7220 6f72 2066 6c6f 6174 206e 756d ger or float num 000002d0: 6265 720a 2020 2020 6e75 6d62 6572 3a20 ber. number: 000002e0: 352e 362c 0a20 2020 202f 2f20 6120 6461 5.6,. // a da 000002f0: 7465 2061 7320 7374 7269 6e67 0a20 2020 te as string. 00000300: 2064 6174 653a 206e 6577 2044 6174 6528 date: new Date( 00000310: 2732 3031 362d 3035 2d31 3054 3139 3a30 '2016-05-10T19:0 00000320: 363a 3336 2e39 3039 5a27 292c 0a20 2020 6:36.909Z'),. 00000330: 202f 2f20 616e 6420 6120 6c69 7374 206f // and a list o 00000340: 6620 6e75 6d62 6572 730a 2020 2020 6c69 f numbers. li 00000350: 7374 3a20 5b31 2c20 322c 2033 5d2c 0a20 st: [1, 2, 3],. 00000360: 2020 202f 2f20 6164 6420 6120 7375 6220 // add a sub 00000370: 6f62 6a65 6374 0a20 2020 2070 6572 736f object. perso 00000380: 6e3a 207b 0a20 2020 2020 2020 206e 616d n: {. nam 00000390: 653a 2027 416c 6578 616e 6465 7220 5363 e: 'Alexander Sc 000003a0: 6869 6c6c 696e 6727 2c0a 2020 2020 2020 hilling',. 000003b0: 2020 6a6f 623a 2027 4465 7665 6c6f 7065 job: 'Develope 000003c0: 7227 0a20 2020 207d 2c0a 2020 2020 2f2f r'. },. // 000003d0: 2063 6f6d 706c 6578 206c 6973 7420 7769 complex list wi 000003e0: 7468 206f 626a 6563 740a 2020 2020 636f th object. 
co 000003f0: 6d70 6c65 783a 205b 7b20 6e61 6d65 3a20 mplex: [{ name: 00000400: 2745 676f 6e27 207d 2c20 7b20 6e61 6d65 'Egon' }, { name 00000410: 3a20 274a 616e 696e 6127 207d 5d0a 7d0a : 'Janina' }].}. 00000420: 0000 0000 0000 0000 0000 0000 0000 0000 ......

- 50/80 - Copyright © 2019 - 2021 Alexander Schilling 3.3.6 TAR Archive

00000430: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000440: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000450: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000460: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000470: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000480: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000490: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000004a0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000004b0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000004c0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000004d0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000004e0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000004f0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000500: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000510: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000520: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000530: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000540: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000550: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000560: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000570: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000580: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000590: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000005a0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000005b0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000005c0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000005d0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000005e0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000005f0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000600: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000610: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000620: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000630: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000640: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000650: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 
00000660: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000670: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000680: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000690: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000006a0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000006b0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000006c0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000006d0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000006e0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000006f0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000700: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000710: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000720: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000730: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000740: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000750: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000760: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000770: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000780: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000790: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000007a0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000007b0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000007c0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000007d0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000007e0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000007f0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000800: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000810: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000820: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000830: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000840: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000850: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000860: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000870: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000880: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 
00000890: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000008a0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000008b0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000008c0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000008d0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000008e0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000008f0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000900: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000910: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000920: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000930: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000940: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000950: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000960: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000970: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000980: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 00000990: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000009a0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000009b0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000009c0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000009d0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000009e0: 0000 0000 0000 0000 0000 0000 0000 0000 ...... 000009f0: 0000 0000 0000 0000 0000 0000 0000 0000 ......


A tar file is a continuous stream of files without an index. To find an entry, the whole file has to be read through. It contains the file content directly together with its file attributes.


3.3.7 TAR + GZip Archives

Based on a TAR archive, which can also be used standalone, this format adds compression.

Definition

Format: tgz#
File extension: .tgz or .tar.gz

The following options are possible:

• compressionLevel: number - gzip compression level 0 (no compression) to 9 (best compression, default)

If no hash is given on reading, the first file of the archive is used. So if the archive contains only one file, the hash part of the URI is not needed on reading.

An example (base.js) compressed archive will look like:

00000000: 1f8b 0800 3e51 c55c 0003 ed52 bb6e 8330 ....>Q.\...R.n.0 00000010: 14cd cc57 dc8d 56ca c33c 4214 a40e 95d2 ...W..V...q. 00000040: 2f8e b8c6 e95e 8fee 09c6 5810 f840 abb3 /....^....X..@.. 00000050: 98b3 fe5a c361 3e03 c7f3 e7fe 8279 0b2f ...Z.a>...... y./ 00000060: 00e6 b86e e08d 80dd d555 8b42 1b9e 9315 ...n.....U.B.... 00000070: 2eb0 fc29 efb7 7873 17e8 d67f 82d9 0c0a ...)..xs...... 00000080: 8dc0 25a8 688f b1b1 3295 1402 a758 1e54 ..%.h...2....X.T 00000090: 6e34 3cc1 d902 82e4 1986 6063 c9b3 8340 n4<...... `c...@ 000000a0: 7b5c 9354 2c0b 21e0 c845 814d 1a1d c3fa {\.T,.!..E.M.... 000000b0: db65 444a 09a4 fe1a 8d49 e5b6 a65b 2e04 .eDJ.....I...[.. 000000c0: 9317 d865 a632 1645 4266 409b fc9a da6c [email protected] 000000d0: 49da a036 375d 2ebf 28df e016 7350 396c I..67]..(...sP9l 000000e0: 84e2 8674 b308 f3d6 48b5 0d61 3e0d 6e35 ...t....H..a>.n5 000000f0: 9070 43ed 75bf 7f45 9163 3cc1 8a76 0fb6 .pC.u..E.c<..v.. 00000100: cb9c 60c2 e613 87bd 3bcb 9005 a117 4c97 ..`.....;.....L. 00000110: 6cf9 613f f6b4 13ea 2552 6d40 6d5a 255d l.a?....%Rm@mZ%] 00000120: 072b 2e84 4f67 0cee 18bc f5ad 22a9 2a74 .+..Og...... ".*t 00000130: 115d a75c f107 aa52 3483 66c0 bd21 3fd3 .].\...R4.f..!?. 00000140: 5323 0dba d95b bc4b 8520 abed c52b ec55 S#...[.K. ...+.U 00000150: 4439 2b3c a250 d4c2 ae03 974e 2a56 d50f D9+<.P.....N*V.. 00000160: 2a1b 7ba7 d4ec fa92 6d90 2c9e af62 2f5b *.{.....m.,..b/[ 00000170: 256d aa87 8e79 e532 959c b8b5 75b1 fefa %m...y.2....u... 00000180: 7d0e 1830 60c0 80fb e01b eb93 51c1 000a }..0`...... Q... 00000190: 0000


3.3.8 TAR + BZip2 Archives

Based on a TAR archive, which can also be used standalone, this format adds compression.

Definition

Format: tbz2#
File extension: .tbz2 or .tar.bz2

If no hash is given on reading, the first file of the archive is used. So if the archive contains only one file, the hash part of the URI is not needed on reading.

An example (base.js) compressed archive will look like:

00000000: 425a 6839 3141 5926 5359 385b fbdc 0000 BZh91AY&SY8[.... 00000010: d77f 91dc 1000 4040 e7ff b226 100c 3a7f ...... @@...&..:. 00000020: f7df ea04 0020 0000 0830 0179 5b56 a1a2 ...... 0.y[V.. 00000030: 1a99 1a9a 6645 1e26 a646 8000 3688 c8f4 ....fE.&.F..6... 00000040: d211 1880 a7a9 ea19 0034 0003 4000 004a ...... [email protected] 00000050: 6a6a 3499 a44f 29ea 01b2 8680 681a 0681 jj4..O).....h... 00000060: 89a7 e1a6 ca76 b14c 406d e981 be5c 80c2 .....v.L@m...\.. 00000070: 668b 40d9 681a 3ed4 b67a 553c f751 19bc [email protected].>..zU<.Q.. 00000080: 2b2d ac05 8cbd c186 b018 cc34 79a3 226b +-...... 4y."k 00000090: 35d8 e61e b728 4109 1a11 5c57 7561 c875 5....(A...\Wua.u 000000a0: e1b7 0e0c 7aa8 7874 bbdb e8c8 de26 9a5c ....z.xt.....&.\ 000000b0: 1d1d 3db7 825c b76a ee5d 3d52 39dd 9e6c ..=..\.j.]=R9..l 000000c0: 5de2 89be c1ce 07f6 3ccd 41d3 01a7 cebe ]...... <.A..... 000000d0: 10ea 35b9 e593 c5ef f399 f4ce b272 51d0 ..5...... rQ. 000000e0: 6b32 8551 841a c50c f209 ac24 56f7 0bdc k2.Q...... $V... 000000f0: d059 a816 93af 9528 9b15 fd27 a7b0 5b9e .Y.....(...'..[. 00000100: 1599 91e6 b916 3541 8b01 c7d0 dc98 04e9 ...... 5A...... 00000110: 3835 6e55 071c 7df7 1e4b e591 0a19 4fcc 85nU..}..K....O. 00000120: 48a2 8271 9ec7 a9a2 bdd1 b493 1280 0190 H..q...... 00000130: 7d40 cb51 b77f 1025 4e96 8bc0 5e7d 3445 }@.Q...%N...^}4E 00000140: 0855 8cd9 6d49 19ac 4c6b 516f a4b1 9d85 .U..mI..LkQo.... 00000150: 0b28 1685 8e29 30d0 d2e5 c2ba aa9d 906b .(...)0...... k 00000160: 5588 4951 92a2 a79c 7831 0501 d24f 2bdd U.IQ....x1...O+. 00000170: b058 8919 72dc 3a48 b5f2 96bc d008 6561 .X..r.:H...... ea 00000180: 1852 b3d8 a4a8 dec8 071a 303c e04d a481 .R...... 0<.M.. 00000190: 95b7 b9e8 d44a 4f68 f852 3476 aa04 999e .....JOh.R4v.... 000001a0: 07b2 e2dc 8099 bfc5 dc91 4e14 240e 16fe ...... N.$... 000001b0: f700


3.3.9 TAR + LZMA Archives

Based on a TAR archive, which can also be used standalone, this format adds compression.

Definition

Format: tlz#
File extension: .tlz or .tar.lzma

The following options are possible:

• compressionLevel: number - compression level 0 (no compression) to 9 (best compression, default)

If no hash is given on reading, the first file of the archive is used. So if the archive contains only one file, the hash part of the URI is not needed on reading.

An example (base.js) compressed archive will look like:

00000000: 5d00 0000 0200 0800 0000 0000 0000 3118 ]...... 1. 00000010: 4acc eca6 897a f175 50ae 9fc2 7578 4bcb J....z.uP...uxK. 00000020: 7a73 cc15 7c64 d27c c4bd 6ca0 882f a1f0 zs..|d.|..l../.. 00000030: 0385 20cb 6118 3d7c 730b f091 0e7d 929e .. .a.=|s....}.. 00000040: aa06 e45c 98bc 942f 96d8 516a 7810 ab59 ...\.../..Qjx..Y 00000050: 5595 cc9a b3c0 413f 1519 6cfb fdcf a1c8 U.....A?..l..... 00000060: da4d afc7 a629 061f 3dc1 8db9 1545 662b .M...)..=....Ef+ 00000070: 7655 d408 9a05 45b4 3adf 459f 55da 854f vU....E.:.E.U..O 00000080: 42b6 7bcb 4d89 4b96 44ee ee94 6145 c23a B.{.M.K.D...aE.: 00000090: ed68 cc4a 748d 6c56 cfef ea75 c327 e7d2 .h.Jt.lV...u.'.. 000000a0: 6bc2 0a44 3324 a354 5bda ce10 ff65 48d5 k..D3$.T[....eH. 000000b0: ef01 0940 2dff 6e65 adab 1224 d319 2f66 [email protected]...$../f 000000c0: 65b8 a46c 0253 cb30 8cc0 5730 943b 7e69 e..l.S.0..W0.;~i 000000d0: 0094 639c af64 bc80 67f2 9b08 4a9b 0191 ..c..d..g...J... 000000e0: 2c85 e253 00ef 88cc 6921 14b4 d097 e499 ,..S....i!...... 000000f0: a734 3eb7 3c4d 7a57 297b 8427 9654 1781 .4>.


3.3.10 ZIP Archives

ZIP is based on the DEFLATE algorithm, which is a combination of LZ77 and Huffman coding. That's the same algorithm as used in GZip archives. But while GZip compresses just a single file, ZIP files are archives which may contain multiple files.

Definition

Format: zip#
File extension: .zip

The following options are possible:

• compressionLevel: number - deflate compression level 0 (no compression) to 9 (best compression, default)

This format will auto detect the filename if only one file is given. Therefore the hash part within the URL is not needed for single file ZIP archives on reading.

An example (base.js) compressed archive will look like:

00000000: 504b 0304 1403 0000 0800 7daa 8e4e f964 PK...... }..N.d 00000010: 397a 2e01 0000 2002 0000 0700 0000 6261 9z...... ba 00000020: 7365 2e6a 734d 51cb 6ec2 3010 bce7 2bf6 se.jsMQ.n.0...+. 00000030: 9656 e291 8088 84a5 1e2a d14b afed a915 .V...... *.K.... 00000040: 0727 5920 c8b1 51bc 8654 887f efda 9810 .'Y ..Q..T...... 00000050: 1fa2 cdec ccce 783d 9f83 b308 5283 298f ...... x=....R.). 00000060: 5851 d29a da29 9c61 7f32 1d59 7883 6b02 XQ...).a.2.Yx.k. 00000070: 7cb4 6c51 408a bd6c 4f0a d349 0059 ac9d |[email protected].. 00000080: 5270 96ca a147 c2af 08df 8151 1aa3 90e7 Rp...G.....Q.... 00000090: 5b24 6af4 dec3 0f4c 0075 0e07 66a3 2be5 [$j....L.u..f.+. 000000a0: 6a0e 0396 ba48 8d25 5b13 5a1a 7c39 f01f j....H.%[.Z.|9.. 000000b0: f309 f7d8 81e9 60a7 8c24 f66d 4bec 3c25 ...... `..$.mK.<% 000000c0: 9602 56b3 e2a9 815a 128f b7a3 f901 e2c4 ..V....Z...... 000000d0: 7881 0d57 2fe9 22cb 8b69 b69a e6d9 77be x..W/."..i....w. 000000e0: 1659 2196 c56c 9dad 7fd2 d791 77cd b354 .Y!..l...... w..T 000000f0: 6309 cc2e 3a59 df0c 9880 df7c 028b 092c c...:Y.....|..., 00000100: b74f 45ed 15d6 9571 cb01 3fb1 ca68 1117 .OE....q..?..h.. 00000110: 3c5a f2bb c29e 3df8 665f d5a1 518a a3fa .n.dl. 00000150: dc92 7f50 4b01 023f 0314 0300 0008 007d ...PK..?...... } 00000160: aa8e 4ef9 6439 7a2e 0100 0020 0200 0007 ..N.d9z...... 00000170: 0024 0000 0000 0000 0020 80b4 8100 0000 .$...... 00000180: 0062 6173 652e 6a73 0a00 2000 0000 0000 .base.js...... 00000190: 0100 1800 0083 6e0c f7f2 d401 809a 8d05 ...... n...... 000001a0: 80f6 d401 0083 6e0c f7f2 d401 504b 0506 ...... n.....PK.. 000001b0: 0000 0000 0100 0100 5900 0000 5301 0000 ...... Y...S... 000001c0: 0000 ..


3.4 Formats

3.4.1 JSON Format

This format uses the JavaScript Object Notation, a human readable structure. It is widely used in different languages, not only in JavaScript. See the description at Wikipedia.

Definition

Format: json
File extension: .json
Mimetype: application/json

An example may look like:

{
  "name": "example",
  "null": null,
  "boolean": true,
  "string": "test",
  "number": 5.6,
  "date": "2016-05-10T19:06:36.909Z",
  "list": [1, 2, 3],
  "person": {
    "name": "Alexander Schilling",
    "job": "Developer"
  },
  "complex": [{ "name": "Egon" }, { "name": "Janina" }]
}

JSON's basic data types are:

• Number: a signed decimal number that may contain a fractional part and may use exponential E notation, but cannot include non-numbers like NaN .

• String: a sequence of zero or more Unicode characters. Strings are delimited with double-quotation marks and support a backslash escaping syntax.

• Boolean: either of the values true or false

• Array: an ordered list of zero or more values, each of which may be of any type. Arrays use square bracket notation with elements being comma-separated.

• Object: an unordered collection of name/value pairs where the keys are strings. Objects are delimited with curly brackets and use commas to separate each pair, while within each pair the colon ':' character separates the key or name from its value.

• Date: not a native JSON type; it will be formatted as an ISO string and stays a string after plain JSON parsing (but see Transformations below)

• null: An empty value, using the word null

Whitespace (space, horizontal tab, line feed, and carriage return) is allowed and ignored around or between syntactic elements.

JSON doesn't allow comments, but you may use JavaScript-style comments with // and /*...*/ if you use the JavaScript parsing described below.

Transformations

Date will be stored in its ISO string representation. Therefore the parser will also cast strings which are valid dates back to Date objects.
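This cast can be sketched with a JSON.parse reviver (assumed logic for illustration, not the library's exact implementation):

```javascript
// A reviver that casts ISO date strings back to Date objects on parsing.
const isoDate = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}(\.\d+)?Z$/;

const text = JSON.stringify({ date: new Date('2016-05-10T19:06:36.909Z') });
const parsed = JSON.parse(text, (key, value) =>
  typeof value === 'string' && isoDate.test(value) ? new Date(value) : value
);
console.log(parsed.date instanceof Date); // true
```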


3.4.2 BSON Format

Binary JSON is a more compact version of JSON, but not human readable because it's a binary format. It is mainly used in the MongoDB database.

Definition

Format: bson
File extension: .bson
Mimetype: application/bson

An example binary may look like:

00000000: 0701 0000 026e 616d 6500 0800 0000 6578 .....name.....ex 00000010: 616d 706c 6500 0a6e 756c 6c00 0862 6f6f ample..null..boo 00000020: 6c65 616e 0001 0273 7472 696e 6700 0500 lean...string... 00000030: 0000 7465 7374 0001 6e75 6d62 6572 0066 ..test..number.f 00000040: 6666 6666 6616 4002 6461 7465 0019 0000 [email protected].... 00000050: 0032 3031 362d 3035 2d31 3054 3139 3a30 .2016-05-10T19:0 00000060: 363a 3336 2e39 3039 5a00 046c 6973 7400 6:36.909Z..list. 00000070: 1a00 0000 1030 0001 0000 0010 3100 0200 .....0...... 1... 00000080: 0000 1032 0003 0000 0000 0370 6572 736f ...2...... perso 00000090: 6e00 3600 0000 026e 616d 6500 1400 0000 n.6....name..... 000000a0: 416c 6578 616e 6465 7220 5363 6869 6c6c Alexander Schill 000000b0: 696e 6700 026a 6f62 000a 0000 0044 6576 ing..job.....Dev 000000c0: 656c 6f70 6572 0000 0463 6f6d 706c 6578 eloper...complex 000000d0: 0035 0000 0003 3000 1400 0000 026e 616d .5....0...... nam 000000e0: 6500 0500 0000 4567 6f6e 0000 0331 0016 e.....Egon...1.. 000000f0: 0000 0002 6e61 6d65 0007 0000 004a 616e ....name.....Jan 00000100: 696e 6100 0000 00 ina....

Compared to JSON, BSON is designed to be efficient both in storage space and scan-speed. Large elements in a BSON document are prefixed with a length field to facilitate scanning.

But in spite of its name, BSON is not fully compatible with JSON: BSON has special types like "ObjectId", "Min key", "UUID" or "MD5" (coming from MongoDB). These types are not compatible with JSON. That means some type information can be lost when you convert objects from BSON to JSON, but of course only when these special types are in the BSON source. It can be a disadvantage to use both JSON and BSON in a single service.

Transformations

The BSON spec doesn't allow a top-level array. Therefore a top-level array is converted into an object with string-numbered keys. The deserialization in this module will detect such top-level objects and convert them back.

[
  {
    company: 'example 1',
    chief: { name: 'Alexander Schilling', job: 'Developer' },
    partner: [ [Object], [Object] ]
  },
  {
    company: 'example 2',
    chief: { name: 'Glen Campaign', job: 'Developer' },
    partner: [ [Object], [Object] ]
  },
  {
    company: 'example 3',
    chief: { name: 'Marie Mc Fling', job: 'Architect' },
    partner: [ [Object], [Object] ]
  }
]

This is stored in binary format like:

{
  '0': {
    company: 'example 1',
    chief: { name: 'Alexander Schilling', job: 'Developer' },
    partner: [ [Object], [Object] ]
  },
  '1': {
    company: 'example 2',
    chief: { name: 'Glen Campaign', job: 'Developer' },
    partner: [ [Object], [Object] ]
  },
  '2': {
    company: 'example 3',
    chief: { name: 'Marie Mc Fling', job: 'Architect' },
    partner: [ [Object], [Object] ]
  }
}

So this is how other tools may display loaded BSON top-level arrays.
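The wrap and unwrap steps can be sketched like this (assumed logic for illustration, not the library's exact code):

```javascript
// Wrap a top-level array into an object with string-numbered keys,
// and detect/unwrap such objects again after parsing.
function wrap(list) {
  return Object.fromEntries(list.map((value, i) => [String(i), value]));
}

function unwrap(obj) {
  const keys = Object.keys(obj);
  return keys.every((key, i) => key === String(i)) ? keys.map(k => obj[k]) : obj;
}

console.log(wrap(['a', 'b'])); // { '0': 'a', '1': 'b' }
console.log(unwrap(wrap(['a', 'b']))); // [ 'a', 'b' ]
```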


3.4.3 MessagePack Format

MessagePack is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON. But it's faster and smaller.

Definition

Format: msgpack
File extension: .msgpack
Mimetype: application/x-msgpack

An example binary may look like:

00000000: 89a4 6e61 6d65 a765 7861 6d70 6c65 a46e ..name.example.n 00000010: 756c 6cc0 a762 6f6f 6c65 616e c3a6 7374 ull..boolean..st 00000020: 7269 6e67 a474 6573 74a6 6e75 6d62 6572 ring.test.number 00000030: cb40 1666 6666 6666 66a4 6461 7465 b832 [email protected] 00000040: 3031 362d 3035 2d31 3054 3139 3a30 363a 016-05-10T19:06: 00000050: 3336 2e39 3039 5aa4 6c69 7374 9301 0203 36.909Z.list.... 00000060: a670 6572 736f 6e82 a46e 616d 65b3 416c .person..name.Al 00000070: 6578 616e 6465 7220 5363 6869 6c6c 696e exander Schillin 00000080: 67a3 6a6f 62a9 4465 7665 6c6f 7065 72a7 g.job.Developer. 00000090: 636f 6d70 6c65 7892 81a4 6e61 6d65 a445 complex...name.E 000000a0: 676f 6e81 a46e 616d 65a6 4a61 6e69 6e61 gon..name.Janina

It is fully compatible with JSON and one of the smallest serialization formats.

Specification


3.4.4 JS Format

Also allowed are normal JavaScript files. In comparison to the JSON format it is defined more loosely: you may use single quotes, keys don't need quotes at all, and you may even use calculations.

But to import them, the files have to be run in a sandbox, which makes execution a bit slower.

Definition

Format: js or javascript
File extension: .js , .mjs or .ts
Mimetype: application/javascript

The following options are possible:

• module: boolean - use module format in storing as JavaScript, so you can load it using normal require or import

An example may look like:

simple

```js
// use an object
{
    name: 'example',
    // null value
    null: null,
    // boolean setting
    boolean: true,
    // include a string
    string: 'test',
    // any integer or float number
    number: 5.6,
    // a date as string
    date: new Date("2016-05-10T19:06:36.909Z"),
    // and a list of numbers
    list: [1, 2, 3],
    // add a sub object
    person: {
        name: "Alexander Schilling",
        job: "Developer"
    },
    // complex list with object
    complex: [
        { name: 'Egon' },
        { name: 'Janina' }
    ],
    // calculate session timeout in milliseconds
    calc: 15 * 60 * 1000,
    math: Math.sqrt(16)
}
```

module

```js
// use an object
module.exports = {
    name: 'example',
    // null value
    null: null,
    // boolean setting
    boolean: true,
    // include a string
    string: 'test',
    // any integer or float number
    number: 5.6,
    // a date as string
    date: new Date('2016-05-10T19:06:36.000Z'),
    // and a list of numbers
    list: [1, 2, 3],
    // add a sub object
    person: {
        name: 'Alexander Schilling',
        job: 'Developer'
    },
    // complex list with object
    complex: [{ name: 'Egon' }, { name: 'Janina' }]
};
```

Both of the above formats can be read, but the module format is only written if the file extension is .mjs or .ts , or if the option is set:

• module - set to true to write in module format so you can load it using normal require or import

As comments are ignored on reading, they will get lost on writing the data structure.

XSS Protection

A primary feature of this package is to serialize data to a string of literal JavaScript which can be embedded in an HTML document as the contents of a script element. To prevent breaking out of that element, a closing script tag inside string data is escaped on formatting: </script> becomes \u003C\u002Fscript\u003E.

If you don't want this, you may use JSON format.
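The escaping can be sketched like this (illustrative only, not the library's exact code):

```javascript
// Replace the closing-tag sequence so the serialized string cannot
// terminate the surrounding script element.
const serialized = JSON.stringify({ html: '</script><script>alert(1)</script>' });
const safe = serialized.replace(/<\/script/gi, '\\u003C\\u002Fscript');
console.log(safe.includes('</script')); // false
```

Because \u003C and \u002F are just the escaped forms of < and /, the string is unchanged when later parsed as JavaScript.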


3.4.5 CSON Format

Like JSON, but here the object is defined using CoffeeScript instead of JavaScript, which makes it more readable.

Definition

Format: cson
File extension: .cson
Mimetype: text/x-coffeescript

An example may look like:

name: 'example'
# null value
null: null
# boolean values
boolean: true
# include a string
string: 'test'
date: '2016-05-10T19:06:36.909Z'
# numbers
numberInt: -8
numberFloat: 5.6
# and a list of numbers
list: [1, 2, 3]
list2: [
  1
  2
  3
]
# add a sub object
person:
  name: 'Alexander Schilling'
  job: 'Developer'
# complex list with object
complex: [
  name: 'Egon'
,
  name: 'Janina'
]
# Multi-Line Strings! Without Quote Escaping!
emissions: '''
  Livestock and their byproducts account for at least 32,000 million
  tons of carbon dioxide (CO2) per year, or 51% of all worldwide
  greenhouse gas emissions.
  Goodland, R Anhang, J. “Livestock and Climate Change: What if the key
  actors in climate change were pigs, chickens and cows?” WorldWatch,
  November/December 2009. Worldwatch Institute, Washington, DC, USA.
  Pp. 10–19. http://www.worldwatch.org/node/6294
  '''

CSON solves several major problems with hand-writing JSON by providing:

• the ability to use both single-quoted and double-quoted strings

• the ability to write multi-line strings over multiple lines

• the ability to write redundant (trailing) commas

• comments starting with # are allowed

Besides these facts it's the same as JSON and has the same types.

Transformations

Date will be stored in its ISO string representation. Therefore the parser will also cast strings which are valid dates back to Date objects.


3.4.6 CoffeeScript Format

The CoffeeScript format is nearly the same as CSON, but because of the changed parser it may also contain calculations.

Definition

Format: coffee
File extension: .coffee
Mimetype: text/x-coffeescript

An example may look like:

name: 'example'
null: null
boolean: true
# include a string
string: 'test'
number: 5.6
date: '2016-05-10T19:06:36.909Z'
# and a list of numbers
list: [
  1
  2
  3
]
# add a sub object
person:
  name: 'Alexander Schilling'
  job: 'Developer'
# complex structure
complex: [
  {name: 'Egon'}
  {name: 'Janina'}
]
# Multi-Line Strings! Without Quote Escaping!
emissions: '''
  Livestock and their byproducts account for at least 32,000 million
  tons of carbon dioxide (CO2) per year, or 51% of all worldwide
  greenhouse gas emissions.
  Goodland, R Anhang, J. “Livestock and Climate Change: What if the key
  actors in climate change were pigs, chickens and cows?” WorldWatch,
  November/December 2009. Worldwatch Institute, Washington, DC, USA.
  Pp. 10–19. http://www.worldwatch.org/node/6294
  '''
# calculate session timeout in milliseconds
calc: 15*60*1000
math: Math.sqrt 16

Transformations

If you store this data it will be written in CSON format, so Date objects are written in their ISO string representation. Therefore the parser will also cast strings which are valid dates back to Date objects.


3.4.7 YAML Format

This is a simplified and very human readable language to write structured information. See some examples at Wikipedia.

Definition

Format: yaml
File extension: .yml
Mimetype: text/x-yaml

An example may look like:

name: example
# null value
null: null
# boolean values
boolean: true
# include a string
string: test
unicode: "Sosa did fine.\u263A"
control: "\b1998\t1999\t2000\n"
hex esc: "\x0d\x0a is \r\n"
single: '"Howdy!" he cried.'
quoted: " # Not a 'comment'."
# date support
date: 2016-05-10T19:06:36.909Z
# numbers
numberInt: -8
numberFloat: 5.6
octal: 0o14
hexadecimal: 0xC
exponential: 12.3015e+02
fixed: 1230.15
negative infinity: -.inf
not a number: .NaN
# and a list of numbers
list: [one, two, three]
list2:
  - one
  - two
  - three
# add a sub object
person:
  name: Alexander Schilling
  job: Developer
# complex list with object
complex:
  - name: Egon
  - { name: Janina }
# multiline support
multiline: This text
  will be read as one line
  without linebreaks.
multilineQuoted: 'This text
  will be read as one line
  without linebreaks.'
lineBreaks: |
  This text will keep as it is
  and all line breaks will be kept.
lineSingle: >
  This text will be read as one
  line without linebreaks.
lineBreak: >
  The empty line

  will be a line break.
# use references
address1: &adr001
  city: Stuttgart
address2: *adr001
# specific type casts
numberString: '123'
numberString2: !!str 123
#numberFloat: !!float 123
# regular expression
re: !!js/regexp /\d+/
# binary type
picture: !!binary |
  R0lGODdhDQAIAIAAAAAAANn
  Z2SwAAAAADQAIAAACF4SDGQ
  ar3xxbJ9p0qa7R0YxwzaFME
  1IAADs=
# complex mapping key
? - Detroit Tigers
  - Chicago cubs
: 2001-07-23

The YAML syntax is very powerful but also easy to write in its basics:

• comments allowed starting with #

• dates are allowed as ISO string, too

• different multiline text entries

• special number values

• references with *xxx to the defined &xxx anchor

See the example above.


3.4.8 INI Format

This is one of the oldest formats used for configurations. It is very simple but also allows complex objects through extended groups.

Definition

Format: ini
File extension: .ini
Mimetype: text/x-ini

An example may look like:

name = example

; simple text
string = test

; add a simple list
list[] = 1
list[] = 2
list[] = 3

; add a group
[person]
name = Alexander Schilling
job = Developer

; add a subgroup
[city.address]
name = Stuttgart

Spaces around equal signs are avoided on writing to stay compatible with older parsers.

Comments start with semicolon and groups or sections are marked by square brackets. The group name defines the object to add the properties to.

Transformations

Date will be stored in its ISO string representation. Therefore the parser will also cast strings which are valid dates back to Date objects.

Arrays of objects will be converted to objects of objects on write, to be represented as sections (using numbers as names). On parsing, objects with numbered keys are converted back to arrays.


3.4.9 Properties Format

Properties files are mainly used in the Java world to set up configuration values. The format has no support for arrays; you may only use objects with numbered keys.

Definition

Format: properties
File extension: .properties
Mimetype: text/x-java-properties

An example may look like:

name = example

# strings
string = test
other text
multiline This text \
  goes over multiple lines.

# numbers
integer = 15
float: -4.6

! add a simple list
list.0 = one
list.1 = two
list.2 = three

! add a sub object
person.name: Alexander Schilling
person.job: Developer

# references
ref = ${string}

# add a section
[section]
name = Alex
same = ${section|name}

This format supports:

• key and value may be divided by = or : (with optional spaces) or by only a space

• comments start with ! or #

• structure data using sections with square brackets like in INI files

• structure data using namespaces in the key using dot as separator

• references to other values with ${key}

• references also work as section names or reference names
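The ${key} reference resolution described above might be sketched like this (illustrative only; the real parser may resolve sectioned keys differently):

```javascript
// Resolve ${name} placeholders against the other keys of the parsed data.
function resolve(data) {
  const out = { ...data };
  for (const [key, value] of Object.entries(out)) {
    if (typeof value === 'string') {
      // replace each ${name} with the referenced value, if present
      out[key] = value.replace(/\$\{([^}]+)\}/g, (match, ref) => out[ref] ?? match);
    }
  }
  return out;
}

console.log(resolve({ string: 'test', ref: '${string}' })); // { string: 'test', ref: 'test' }
```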

Transformations

Date will be stored in its ISO string representation. Therefore the parser will also cast strings which are valid dates back to Date objects.


3.4.10 TOML Format

TOML (Tom's Obvious, Minimal Language) aims to be a minimal configuration file format that's easy to read due to obvious semantics. TOML is designed to map unambiguously to a hash table.

Definition

Format: toml
File extension: .toml
Mimetype: text/x-toml

An example may look like:

name = "example"

# simple text
string = "test"

# add a simple list
list = [1, 2, 3]

# add a group
[person]
name = "Alexander Schilling"
job = "Developer"

# add a subgroup
[city]
[city.address]
name = "Stuttgart"

Read more about the format.

Attention

NULL values cannot be stored; the whole entry will be removed, which should be equivalent for further processing.
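Dropping the null entries before writing can be sketched like this (assumed logic matching the description, not the library's exact code):

```javascript
// Remove entries with a null value, recursively for nested objects.
function stripNulls(obj) {
  return Object.fromEntries(
    Object.entries(obj)
      .filter(([, value]) => value !== null)
      .map(([key, value]) =>
        typeof value === 'object' && !Array.isArray(value)
          ? [key, stripNulls(value)]
          : [key, value]
      )
  );
}

console.log(stripNulls({ name: 'example', empty: null })); // { name: 'example' }
```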

Transformations

As TOML doesn't allow the top level element to be an array, such an array will be converted to an object for storing and transformed back on parsing.


3.4.11 XML Format

For normal data structures the XML format should only use tags and values, but no attributes. Attributes and text combined with sub-tags are possible but lead to a more specific data structure.

Definition

Format: xml
File extension: .xml
Mimetype: application/xml

The following options are possible:

• rootName: string - define the root element name in formatting as XML

• records: boolean - flag to always read the table data as records (only needed with less than 3 columns)

An example may look like:

<data>
  <name>example</name>
  <null/>
  <boolean>true</boolean>
  <string>test</string>
  <number>5.6</number>
  <list>1</list>
  <list>2</list>
  <list>3</list>
  <person>
    <name>Alexander Schilling</name>
    <job>Developer</job>
  </person>
  <cdata><![CDATA[!]]></cdata>
  <text>Hello all together <b>And specially you!</b></text>
</data>

If the top level element is an array, this will look like (the entry tag is a fixed name here):

<data>
  <entry>
    <company>example 1</company>
    <chief>
      <name>Alexander Schilling</name>
      <job>Developer</job>
    </chief>
    <partner><name>Egon</name></partner>
    <partner><name>Janina</name></partner>
  </entry>
  <entry>
    <company>example 2</company>
    <chief>
      <name>Glen Campaign</name>
      <job>Developer</job>
    </chief>
    <partner><name>Marc</name></partner>
    <partner><name>Dennon</name></partner>
  </entry>
  <entry>
    <company>example 3</company>
    <chief>
      <name>Marie Mc Fling</name>
      <job>Architect</job>
    </chief>
    <partner><name>Anton</name></partner>
    <partner><name>Anina</name></partner>
  </entry>
</data>

Attributes are stored in the $ element, while text is stored in the _ element if sub-tags exist.

Transformations

As all values are stored as text, numbers and Date values will be automatically converted back on parsing.

The root element name in formatting may be defined using the rootName option; the default is 'data'.

If the data structure is based on a top-level array, it is converted to an object containing the array under the fixed name entry for storing, and removed again on parsing.
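That wrapping can be sketched like this (illustrative only; the module performs this internally during format and parse):

```javascript
// Sketch of the top-level array handling for XML: wrap an array into
// an object under the fixed name "entry" before formatting, and unwrap
// it again after parsing.
function wrapTopLevelArray(data) {
  return Array.isArray(data) ? { entry: data } : data;
}

function unwrapTopLevelArray(data) {
  const keys = Object.keys(data);
  return keys.length === 1 && keys[0] === "entry" ? data.entry : data;
}
```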


3.4.12 HTML Format

This format converts the hypertext document into a structured definition using objects and arrays.

Definition

Format: html
File extension: .htm, .html
Mimetype: text/html

An example may look like:

<!DOCTYPE html>
<html>
  <body>
    <h1>My First Heading</h1>
    <p class="line">My first paragraph.</p>
  </body>
</html>

This will make a structure like:

[
  { "!": "!DOCTYPE html" },
  { _: "\n" },
  {
    html: [
      { _: "\n " },
      {
        body: [
          { _: "\n " },
          { h1: [{ _: "My First Heading" }] },
          { _: "\n " },
          { p: [{ $: { class: "line" } }, { _: "My first paragraph." }] },
          { _: "\n " }
        ]
      },
      { _: "\n" }
    ]
  },
  { _: "\n" }
];

Attributes are stored in $ and declarations in !, while text is stored in the _ element if sub-tags exist.


3.4.13 CSV Format

The CSV format is a flat, two-dimensional format, so nested structures will be stored in a flattened form.

Definition

Format: csv
File extension: .csv
Mimetype: text/csv

The following options are possible:

• records - set to true for record format, see below

• delimiter - character to be used as field separator in CSV data (default: , ).

• quote - character used to quote field values in CSV data (default: " ).

An example may look like:

"name","example"
"null",
"boolean",true
"string","test"
"number",5.6
"date","2016-05-10T19:06:36.000Z"
"list.0",1
"list.1",2
"list.2",3
"person.name","Alexander Schilling"
"person.job","Developer"
"complex.0.name","Egon"
"complex.1.name","Janina"

If the top level element is an array the format will be more like normal CSV:

"company","chief.name","chief.job","partner.0.name","partner.1.name"
"example 1","Alexander Schilling","Developer","Egon","Janina"
"example 2","Glen Campaign","Developer","Marc","Dennon"
"example 3","Marie Mc Fling","Architect","Anton","Anina"

Transformations

The CSV will have no header row because it only contains name and value columns. It uses the defaults, i.e. , as field separator and " to quote strings. All strings will be quoted.

Unquoted values will be converted to number, boolean or null if possible. Dates are represented as ISO strings and will also be converted back to Date. Structures of any depth will be stored as path and value lines.

All of this is converted back on parse.
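The path flattening used above can be sketched as follows (a minimal illustration, not the module's actual code):

```javascript
// Sketch: flatten a nested structure into dotted paths, as used for
// the name/value CSV layout. Arrays get numeric path segments.
function flatten(data, prefix = "", out = {}) {
  for (const [key, value] of Object.entries(data)) {
    const path = prefix ? prefix + "." + key : key;
    if (value !== null && typeof value === "object" && !(value instanceof Date)) {
      flatten(value, path, out); // recurse into objects and arrays
    } else {
      out[path] = value; // leaf value: store under its full path
    }
  }
  return out;
}
```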

Custom Format

To import a custom format like:

name: example
null:
boolean: true
string: test
number: 5.6
date: 2016-05-10T19:06:36.000Z
list.0: 1
list.1: 2
list.2: 3
person.name: Alexander Schilling
person.job: Developer
complex.0.name: Egon
complex.1.name: Janina
with spaces: possible


You can do so with the options:

• delimiter: ': '

• quote: ''


3.4.14 Excel 97-2004 Workbook Format

The XLS format module allows handling spreadsheets with a single worksheet only. The content is structured like in the CSV format. This format is mainly preferable for Excel because CSV is not as easy to import there.

Definition

Format: xls
File extension: .xls
Mimetype: application/vnd.ms-excel

The following options are possible:

• records - set to true for record format, see below

An example may look like (shown as CSV):

"name","example"
"null",
"boolean",true
"string","test"
"number",5.6
"date","2016-05-10T19:06:36.000Z"
"list.0",1
"list.1",2
"list.2",3
"person.name","Alexander Schilling"
"person.job","Developer"
"complex.0.name","Egon"
"complex.1.name","Janina"

If the top level element is an array, the format will be more like a normal table:

"company","chief.name","chief.job","partner.0.name","partner.1.name"
"example 1","Alexander Schilling","Developer","Egon","Janina"
"example 2","Glen Campaign","Developer","Marc","Dennon"
"example 3","Marie Mc Fling","Architect","Anton","Anina"


3.4.15 Excel 2007+ XML Format

The XLSX format module allows handling spreadsheets with a single worksheet only. The content is structured like in the CSV format. This format is mainly preferable for Excel because CSV is not as easy to import there.

Definition

Format: xlsx
File extension: .xlsx
Mimetype: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet

The following options are possible:

• records - set to true for record format, see below

An example may look like (shown as CSV):

"name","example"
"null",
"boolean",true
"string","test"
"number",5.6
"date","2016-05-10T19:06:36.000Z"
"list.0",1
"list.1",2
"list.2",3
"person.name","Alexander Schilling"
"person.job","Developer"
"complex.0.name","Egon"
"complex.1.name","Janina"

If the top level element is an array, the format will be more like a normal table:

"company","chief.name","chief.job","partner.0.name","partner.1.name"
"example 1","Alexander Schilling","Developer","Egon","Janina"
"example 2","Glen Campaign","Developer","Marc","Dennon"
"example 3","Marie Mc Fling","Architect","Anton","Anina"


3.4.16 Plain Text Format

This is not really a specific format but works more like a simple pass-through.

Definition

Format: txt
File extension: .txt, .log
Mimetype: text/plain

Everything is returned as a simple string with the complete content.

But if data structures are given to store, they will be formatted as a human-readable structure using the util.inspect method.

The following options are possible:

• pattern: RegExp - pattern the text will be split into matches with (using named groups), e.g. for event logs

RegExp Matching

The pattern matching is used to match event log entries into a data structure:

2019-10-30 09:04:36,829 ( ) [WARN ] - SQL Error: 0, SQLState: null
2019-10-30 09:04:36,829 ( ) [ERROR] - [ajp-bio-8093-exec-2336] Timeout: Pool empty. Unable to fetch a connection in 30 seconds, none available [size:4; busy:4; idle:0; lastwait:30000].
2019-10-30 09:04:36,830 ( ) [WARN ] - Handler execution resulted in exception: Could not open JPA EntityManager for transaction; nested exception is org.hibernate.exception.GenericJDBCException: Could not open connection
2019-10-30 09:04:36,883 ( ) [WARN ] - SQL Error: 0, SQLState: null
2019-10-30 09:04:36,884 ( ) [ERROR] - [ajp-bio-8093-exec-2236] Timeout: Pool empty. Unable to fetch a connection in 30 seconds, none available [size:4; busy:4; idle:0; lastwait:30000].
2019-10-30 09:04:36,884 ( ) [WARN ] - Handler execution resulted in exception: Could not open JPA EntityManager for transaction; nested exception is org.hibernate.exception.GenericJDBCException: Could not open connection

Such a text from log could be split up with the following expression:

pattern: /(?<date>\d+-\d+-\d+ \d+:\d+:\d+,\d+)\s+.*?\[(?<level>[A-Z+]+)\s*\] - (\[(?<code>.*?)\] )?(?<message>.*)/g;

This gives you the following data structure:

[
  {
    date: '2019-10-30 09:04:36,829',
    level: 'WARN',
    code: undefined,
    message: 'SQL Error: 0, SQLState: null'
  },
  {
    date: '2019-10-30 09:04:36,829',
    level: 'ERROR',
    code: 'ajp-bio-8093-exec-2336',
    message: 'Timeout: Pool empty. Unable to fetch a connection in 30 seconds, none available[size:4; busy:4; idle:0; lastwait:30000].'
  },
  {
    date: '2019-10-30 09:04:36,830',
    level: 'WARN',
    code: undefined,
    message: 'Handler execution resulted in exception: Could not open JPA EntityManager for transaction; nested exception is org.hibernate.exception.GenericJDBCException: Could not open connection'
  },
  {
    date: '2019-10-30 09:04:36,883',
    level: 'WARN',
    code: undefined,
    message: 'SQL Error: 0, SQLState: null'
  },
  {
    date: '2019-10-30 09:04:36,884',
    level: 'ERROR',
    code: 'ajp-bio-8093-exec-2236',
    message: 'Timeout: Pool empty. Unable to fetch a connection in 30 seconds, none available[size:4; busy:4; idle:0; lastwait:30000].'
  },
  {
    date: '2019-10-30 09:04:36,884',
    level: 'WARN',
    code: undefined,
    message: 'Handler execution resulted in exception: Could not open JPA EntityManager for transaction; nested exception is org.hibernate.exception.GenericJDBCException: Could not open connection'
  }
];
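In plain JavaScript such a pattern with named groups can be applied like this (a standalone sketch, independent of the module's internals):

```javascript
// Apply a log pattern with named capture groups to a text and collect
// one record per match - the same principle the txt format uses.
const pattern =
  /(?<date>\d+-\d+-\d+ \d+:\d+:\d+,\d+)\s+.*?\[(?<level>[A-Z+]+)\s*\] - (\[(?<code>.*?)\] )?(?<message>.*)/g;

const text = "2019-10-30 09:04:36,829 ( ) [WARN ] - SQL Error: 0, SQLState: null";
const records = [...text.matchAll(pattern)].map((m) => ({ ...m.groups }));
// records[0].level is 'WARN'; records[0].code stays undefined because
// the optional bracket group did not match
```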


3.4.17 Binary Format

This is not really a specific format but works more like a simple pass-through.

Definition

Format: binary
File extension: -

Everything is returned as a byte array.

The response may look like:

Uint8Array [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ... 32131 more items ] +803ms


4. Internal

4.1 Extending this Module

Warning

The following information is only needed to develop within this library. If you only want to use this module it is not necessary to know this.

The development is based on the alinex core rules.

4.1.1 Structure

Diagram: the DataStore API (load, save, get / set / ...) connects protocol handlers (file / sftp / ftp / ftps / http / https), compression handlers (gzip / brotli / bz2 / tar / tgz / tbz2 / zip) and format handlers (js / json / bson / cson / yaml / xml / csv / ini / ...) to parse a Buffer into the data structure and format it back.

To make it easy to extend with other formats or protocols the functionality is split in parts within separate directories:

• protocol - different access protocols to retrieve or store a Buffer which may be a string or binary

• compression - compression and packaging support (coming later)

• format - different data formats used to retrieve or store the data structure

Within each directory the index.ts contains convenient access methods which select the correct implementation, as well as the type definitions for this type of handler.


4.1.2 Processing

Load

1. load into Buffer using correct protocol handler

2. uncompress and extract if defined

3. parse Buffer into data structure using correct format handler

Save

1. format data structure using correct format handler into Buffer

2. compress and include into archive if defined

3. save Buffer to persistent store using correct protocol handler
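The two pipelines above can be sketched with hypothetical handler interfaces (the real handler APIs live in the protocol/, compression/ and format/ directories and may differ in detail):

```javascript
// Conceptual sketch of the processing pipelines. The handler shapes
// (protocol.load/save, compression.compress/uncompress,
// format.parse/format) are assumptions for illustration.
async function load(protocol, compression, format, source) {
  let buffer = await protocol.load(source);                       // 1. load into Buffer
  if (compression) buffer = await compression.uncompress(buffer); // 2. uncompress if defined
  return format.parse(buffer);                                    // 3. parse into data structure
}

async function save(protocol, compression, format, source, data) {
  let buffer = format.format(data);                               // 1. format into Buffer
  if (compression) buffer = await compression.compress(buffer);   // 2. compress if defined
  await protocol.save(source, buffer);                            // 3. save to persistent store
}
```

With stub handlers (an in-memory protocol and a JSON format) a save followed by a load round-trips the data structure.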

4.1.3 DataStore

The DataStore itself contains the following fields:

• source - full URL path to get/set

• _source - URL object to access parts of it

4.1.4 Statistics

The following statistics will give you a better understanding of its complexity. The documentation has 80 pages (in A4 PDF format) and contains 145737 characters. And this package has 3187 lines of code.

The latest version in the repository is Version 1.16.2. It has a size of 233.81 KiB in 166 files. The development setup contains 48 (resolved) packages, 3 of which are from the Alinex project.

4.1.5 Download Documentation

If you want to have an offline access to the documentation, feel free to download the 80 pages PDF Documentation.

4.1.6 License

Alinex Data Store

Copyright 2019 - 2021 Alexander Schilling (https://gitlab.com/alinex/node-datastore)

Apache License, Version 2.0

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.


4.2 Test

This module contains a suite of tests for all of its features. But for some protocols you have to set up your environment first for them to work correctly. The systems will be started automatically so you don't have to enable all the server components.

The tests will write some files under /tmp/datastore-test-... . These may be deleted at any time.

To run the tests you have to give your password as environment variable:

PASSWORD=mypassword npm run test

4.2.1 SFTP

This uses the openssh server:

Arch Linux

sudo pacman -S openssh

Debian/Ubuntu

sudo apt-get install openssh-server -y

4.2.2 FTP

To test this you need a local ftp server. To install such run:

Arch Linux

sudo pacman -S vsftpd

Debian/Ubuntu

sudo apt-get install vsftpd -y

To also allow writing enable:

$ sudo vi /etc/vsftpd.conf
# set: write_enable=YES

4.2.3 Postgres

To test this you need a local postgres server running.

Arch Linux

sudo pacman -S postgresql

Debian/Ubuntu

sudo apt-get install postgresql -y

Then you have to initialize your database:

$ sudo -u postgres initdb --locale $LANG -E UTF8 -D '/var/lib/postgres/data/'
$ sudo -u postgres createuser $USER
$ sudo -u postgres createdb $USER
$ sudo -u postgres psql -c "alter user $USER with encrypted password '';"
$ sudo -u postgres psql -c "grant all privileges on database $USER to $USER;"
$ psql < test/data/example.sql
