
NAME

parallel_alternatives - Alternatives to GNU parallel

DIFFERENCES BETWEEN GNU Parallel AND ALTERNATIVES

There are a lot of programs that share functionality with GNU parallel.
Some of these are specialized tools, and while GNU parallel can emulate
many of them, a specialized tool can be better at a given task. GNU
parallel strives to include the best of the general functionality
without sacrificing ease of use.

parallel has existed since 2002-01-06 and as GNU parallel since 2010. A
lot of the alternatives have not had the vitality to survive that long,
but have come and gone during that time.

GNU parallel is actively maintained with a new release every month since
2010. Most other alternatives are fleeting interests of the developers
with irregular releases and only maintained for a few years.

SUMMARY LEGEND

The following features are in some of the comparable tools:

Inputs

  I1. Arguments can be read from stdin
  I2. Arguments can be read from a file
  I3. Arguments can be read from multiple files
  I4. Arguments can be read from command line
  I5. Arguments can be read from a table
  I6. Arguments can be read from the same file using #! (shebang)
  I7. Line oriented input as default (quoting of special chars not needed)

Manipulation of input

  M1. Composed command
  M2. Multiple arguments can fill up an execution line
  M3. Arguments can be put anywhere in the execution line
  M4. Multiple arguments can be put anywhere in the execution line
  M5. Arguments can be replaced with context
  M6. Input can be treated as the complete command line

Outputs

  O1. Grouping output so output from different jobs do not mix
  O2. Send stderr (standard error) to stderr (standard error)
  O3. Send stdout (standard output) to stdout (standard output)
  O4. Order of output can be same as order of input
  O5. Stdout only contains stdout (standard output) from the command
  O6. Stderr only contains stderr (standard error) from the command
  O7. Buffering on disk
  O8. Cleanup of temporary files if killed
  O9. Test if disk runs full during run
  O10. Output of a line bigger than 4 GB

Execution

  E1. Running jobs in parallel
  E2. List running jobs
  E3. Finish running jobs, but do not start new jobs
  E4. Number of running jobs can depend on number of CPUs
  E5. Finish running jobs, but do not start new jobs after first failure
  E6. Number of running jobs can be adjusted while running
  E7. Only spawn new jobs if load is less than a limit

Remote execution

  R1. Jobs can be run on remote computers
  R2. Basefiles can be transferred
  R3. Argument files can be transferred
  R4. Result files can be transferred
  R5. Cleanup of transferred files
  R6. No config files needed
  R7. Do not run more than SSHD's MaxStartups can handle
  R8. Configurable SSH command
  R9. Retry if connection breaks occasionally

Semaphore

  S1. Possibility to work as a mutex
  S2. Possibility to work as a counting semaphore

Legend

  -  = no
  x  = not applicable
  ID = yes

As every new version of the programs is not tested, the table may be
outdated. Please file a bug report if you find errors (see REPORTING
BUGS).

parallel:
  I1 I2 I3 I4 I5 I6 I7
  M1 M2 M3 M4 M5 M6
  O1 O2 O3 O4 O5 O6 O7 O8 O9 O10
  E1 E2 E3 E4 E5 E6 E7
  R1 R2 R3 R4 R5 R6 R7 R8 R9
  S1 S2

DIFFERENCES BETWEEN xargs AND GNU Parallel

Summary (see legend above):
  I1 I2 - - - - - - M2 M3 - - - - O2 O3 - O5 O6 E1 - - - - - -
  - - - - - x - - - - -

xargs offers some of the same possibilities as GNU parallel.
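As a quick illustration of the overlap (a sketch added here, not taken
from either manual; gzip and the job counts are arbitrary placeholders),
both tools can read file names from stdin and run a fixed number of jobs
at once:

  # Compress every file in the current directory, 4 jobs at a time
  ls | xargs -d "\n" -n1 -P4 gzip   # xargs needs -d "\n" for line input
  ls | parallel -j4 gzip            # GNU parallel treats lines as arguments by default

The differences described below concern what happens beyond this simple
case.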
xargs deals badly with special characters (such as space, \, ' and ").
To see the problem try this:

  touch important_file
  touch 'not important_file'
  ls not* | xargs rm
  mkdir -p "My brother's 12\" records"
  ls | xargs rmdir
  touch 'c:\windows\system32\clfs.sys'
  echo 'c:\windows\system32\clfs.sys' | xargs ls -l

You can specify -0, but many input generators are not optimized for
using NUL as separator but are optimized for newline as separator.
E.g. awk, ls, echo, tar -v, head (requires using -z), tail (requires
using -z), sed (requires using -z), perl (-0 and \0 instead of \n),
locate (requires using -0), find (requires using -print0), grep
(requires using -z or -Z), sort (requires using -z).

GNU parallel's newline separation can be emulated with:

  cat | xargs -d "\n" -n1 command

xargs can run a given number of jobs in parallel, but has no support
for running number-of-cpu-cores jobs in parallel.

xargs has no support for grouping the output, therefore output may run
together, e.g. the first half of a line is from one process and the
last half of the line is from another process. The example Parallel
grep cannot be done reliably with xargs because of this. To see this in
action try:

  parallel perl -e '\$a=\"1\".\"{}\"x10000000\;print\ \$a,\"\\n\"' \
    '>' {} ::: a b c d e f g h
  # Serial = no mixing = the wanted result
  # 'tr -s a-z' squeezes repeating letters into a single letter
  echo a b c d e f g h | xargs -P1 -n1 grep 1 | tr -s a-z
  # Compare to 8 jobs in parallel
  parallel -kP8 -n1 grep 1 ::: a b c d e f g h | tr -s a-z
  echo a b c d e f g h | xargs -P8 -n1 grep 1 | tr -s a-z
  echo a b c d e f g h | xargs -P8 -n1 grep --line-buffered 1 | \
    tr -s a-z

Or try this:

  slow_seq() {
    echo Count to "$@"
    seq "$@" |
      perl -ne '$|=1; for(split//){ print; select($a,$a,$a,0.100);}'
  }
  export -f slow_seq
  # Serial = no mixing = the wanted result
  seq 8 | xargs -n1 -P1 -I {} bash -c 'slow_seq {}'
  # Compare to 8 jobs in parallel
  seq 8 | parallel -P8 slow_seq {}
  seq 8 | xargs -n1 -P8 -I {} bash -c 'slow_seq {}'

xargs has no support for keeping the order of the output, therefore if
running jobs in parallel using xargs the output of the second job
cannot be postponed till the first job is done.

xargs has no support for running jobs on remote computers.

xargs has no support for context replace, so you will have to create
the arguments.

If you use a replace string in xargs (-I) you cannot force xargs to use
more than one argument.

Quoting in xargs works like -q in GNU parallel. This means composed
commands and redirection require using bash -c.

  ls | parallel "wc {} >{}.wc"
  ls | parallel "echo {}; ls {}|wc"

becomes (assuming you have 8 cores and that none of the filenames
contain space, " or '):

  ls | xargs -d "\n" -P8 -I {} bash -c "wc {} >{}.wc"
  ls | xargs -d "\n" -P8 -I {} bash -c "echo {}; ls {}|wc"

A more extreme example can be found on:
https://unix.stackexchange.com/q/405552/

https://www.gnu.org/software/findutils/

DIFFERENCES BETWEEN find -exec AND GNU Parallel

Summary (see legend above):
  - - - x - x - - M2 M3 - - - - - O2 O3 O4 O5 O6 - - - - - -
  - - - - - - - - - - x x

find -exec offers some of the same possibilities as GNU parallel.

find -exec only works on files. Processing other input (such as hosts
or URLs) will require creating these inputs as files. find -exec has no
support for running commands in parallel.
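As a minimal sketch of that difference (added here for illustration;
the *.log pattern and gzip are placeholders), find -exec runs one
command at a time while piping the matches to GNU parallel runs them
concurrently:

  # find -exec: strictly one gzip at a time
  find . -name '*.log' -exec gzip {} \;

  # GNU parallel: one gzip per CPU core by default
  find . -name '*.log' -print0 | parallel -0 gzip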
https://www.gnu.org/software/findutils/ (Last checked: 2019-01)

DIFFERENCES BETWEEN make -j AND GNU Parallel

Summary (see legend above):
  - - - - - - - - - - - - - O1 O2 O3 - x O6 E1 - - - E5 - -
  - - - - - - - - - - -

make -j can run jobs in parallel, but requires a crafted Makefile to do
this. That results in extra quoting to get filenames containing
newlines to work correctly.

make -j computes a dependency graph before running jobs. Jobs run by
GNU parallel do not depend on each other.

(Very early versions of GNU parallel were coincidentally implemented
using make -j.)

https://www.gnu.org/software/make/ (Last checked: 2019-01)

DIFFERENCES BETWEEN ppss AND GNU Parallel

Summary (see legend above):
  I1 I2 - - - - I7 M1 - M3 - - M6 O1 - - x - - E1 E2 ?E3 E4 - - -
  R1 R2 R3 R4 - - ?R7 ? ? - -

ppss is also a tool for running jobs in parallel.

The output of ppss is status information and thus not useful for using
as input for another command. The output from the jobs is put into
files.

The argument replace string ($ITEM) cannot be changed. Arguments must
be quoted - thus arguments containing special characters (space '"&!*)
may cause problems. More than one argument is not supported. Filenames
containing newlines are not processed correctly. When reading input
from a file, null cannot be used as a terminator. ppss needs to read
the whole input file before starting any jobs.

Output and status information is stored in ppss_dir and thus requires
cleanup when completed. If the dir is not removed before running ppss
again, it may cause nothing to happen as ppss thinks the task is
already done. GNU parallel will normally not need cleaning up if
running locally and will only need cleaning up if stopped abnormally
and running remote (--cleanup may not complete if stopped abnormally).
The example Parallel grep would require extra postprocessing if written
using ppss.

For remote systems PPSS requires 3 steps: config, deploy, and start.
GNU parallel only requires one step.

EXAMPLES FROM ppss MANUAL

Here are the examples from ppss's manual page with the equivalent using
GNU parallel:

  1$ ./ppss.sh standalone -d /path/to/files -c 'gzip '

  1$ find /path/to/files -type f | parallel gzip

  2$ ./ppss.sh standalone -d /path/to/files -c 'cp "$ITEM" /destination/dir '

  2$ find /path/to/files -type f | parallel cp {} /destination/dir

  3$ ./ppss.sh standalone -f list-of-urls.txt -c 'wget -q '

  3$ parallel -a list-of-urls.txt wget -q

  4$ ./ppss.sh standalone -f list-of-urls.txt -c 'wget -q "$ITEM"'

  4$ parallel -a list-of-urls.txt wget -q {}

  5$ ./ppss config -C config.cfg -c 'encode.sh ' -d /source/dir \
       -m 192.168.1.100 -u ppss -k ppss-key.key -S ./encode.sh \
       -n nodes.txt -o /some/output/dir --upload --download;
     ./ppss deploy -C config.cfg
     ./ppss start -C config

  5$ # parallel does not use configs.
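For the remote case in example 5, a rough sketch of the single GNU
parallel step could look like the following. This is an illustration,
not a drop-in replacement for the ppss config above: it assumes
nodes.txt lists the remote hosts one per line, that encode.sh writes
its result to stdout, and the {}.out result name is a placeholder.

  5$ # sketch only: transfer encode.sh and each input file, run remotely,
     # fetch the result back and clean up the transferred files
     find /source/dir -type f |
       parallel --sshloginfile nodes.txt --basefile encode.sh \
         --transfer --return {}.out --cleanup ./encode.sh {} '>' {}.out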