Python for Data Science Cheat Sheet Scikit-Learn

Total Pages: 16

File Type: PDF, Size: 1020 KB

Python For Data Science Cheat Sheet: Scikit-Learn
Learn Python for data science interactively at www.DataCamp.com

Scikit-learn
Scikit-learn is an open source Python library that implements a range of machine learning, preprocessing, cross-validation and visualization algorithms using a unified interface.

A Basic Example
>>> from sklearn import neighbors, datasets, preprocessing
>>> from sklearn.model_selection import train_test_split
>>> from sklearn.metrics import accuracy_score
>>> iris = datasets.load_iris()
>>> X, y = iris.data[:, :2], iris.target
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=33)
>>> scaler = preprocessing.StandardScaler().fit(X_train)
>>> X_train = scaler.transform(X_train)
>>> X_test = scaler.transform(X_test)
>>> knn = neighbors.KNeighborsClassifier(n_neighbors=5)
>>> knn.fit(X_train, y_train)
>>> y_pred = knn.predict(X_test)
>>> accuracy_score(y_test, y_pred)

Loading The Data (also see NumPy & Pandas)
Your data needs to be numeric and stored as NumPy arrays or SciPy sparse matrices. Other types that are convertible to numeric arrays, such as a Pandas DataFrame, are also acceptable.
>>> import numpy as np
>>> X = np.random.random((10, 5))
>>> y = np.array(['M', 'M', 'F', 'F', 'M', 'F', 'M', 'M', 'F', 'F'])   # one label per row of X
>>> X[X < 0.7] = 0

Training And Test Data
>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

Preprocessing The Data

Standardization
>>> from sklearn.preprocessing import StandardScaler
>>> scaler = StandardScaler().fit(X_train)
>>> standardized_X = scaler.transform(X_train)
>>> standardized_X_test = scaler.transform(X_test)

Normalization
>>> from sklearn.preprocessing import Normalizer
>>> scaler = Normalizer().fit(X_train)
>>> normalized_X = scaler.transform(X_train)
>>> normalized_X_test = scaler.transform(X_test)

Binarization
>>> from sklearn.preprocessing import Binarizer
>>> binarizer = Binarizer(threshold=0.0).fit(X)
>>> binary_X = binarizer.transform(X)

Encoding Categorical Features
>>> from sklearn.preprocessing import LabelEncoder
>>> enc = LabelEncoder()
>>> y = enc.fit_transform(y)

Imputing Missing Values
>>> from sklearn.preprocessing import Imputer
>>> imp = Imputer(missing_values=0, strategy='mean', axis=0)
>>> imp.fit_transform(X_train)

Generating Polynomial Features
>>> from sklearn.preprocessing import PolynomialFeatures
>>> poly = PolynomialFeatures(5)
>>> poly.fit_transform(X)

Create Your Model

Supervised Learning Estimators

Linear Regression
>>> from sklearn.linear_model import LinearRegression
>>> lr = LinearRegression(normalize=True)

Support Vector Machines (SVM)
>>> from sklearn.svm import SVC
>>> svc = SVC(kernel='linear')

Naive Bayes
>>> from sklearn.naive_bayes import GaussianNB
>>> gnb = GaussianNB()

KNN
>>> from sklearn import neighbors
>>> knn = neighbors.KNeighborsClassifier(n_neighbors=5)

Unsupervised Learning Estimators

Principal Component Analysis (PCA)
>>> from sklearn.decomposition import PCA
>>> pca = PCA(n_components=0.95)

K Means
>>> from sklearn.cluster import KMeans
>>> k_means = KMeans(n_clusters=3, random_state=0)

Model Fitting

Supervised learning (fit the model to the data)
>>> lr.fit(X, y)
>>> knn.fit(X_train, y_train)
>>> svc.fit(X_train, y_train)

Unsupervised learning (fit the model to the data; fit to data, then transform it)
>>> k_means.fit(X_train)
>>> pca_model = pca.fit_transform(X_train)

Prediction

Supervised estimators
>>> y_pred = svc.predict(np.random.random((2, 5)))   # Predict labels
>>> y_pred = lr.predict(X_test)                      # Predict labels
>>> y_pred = knn.predict_proba(X_test)               # Estimate probability of a label

Unsupervised estimators
>>> y_pred = k_means.predict(X_test)                 # Predict labels in clustering algorithms

Evaluate Your Model's Performance

Classification Metrics

Accuracy Score
>>> knn.score(X_test, y_test)                        # Estimator score method
>>> from sklearn.metrics import accuracy_score       # Metric scoring functions
>>> accuracy_score(y_test, y_pred)

Classification Report
>>> from sklearn.metrics import classification_report
>>> print(classification_report(y_test, y_pred))     # Precision, recall, f1-score and support

Confusion Matrix
>>> from sklearn.metrics import confusion_matrix
>>> print(confusion_matrix(y_test, y_pred))

Regression Metrics

Mean Absolute Error
>>> from sklearn.metrics import mean_absolute_error
>>> y_true = [3, -0.5, 2]
>>> mean_absolute_error(y_true, y_pred)

Mean Squared Error
>>> from sklearn.metrics import mean_squared_error
>>> mean_squared_error(y_test, y_pred)

R² Score
>>> from sklearn.metrics import r2_score
>>> r2_score(y_true, y_pred)

Clustering Metrics

Adjusted Rand Index
>>> from sklearn.metrics import adjusted_rand_score
>>> adjusted_rand_score(y_true, y_pred)

Homogeneity
>>> from sklearn.metrics import homogeneity_score
>>> homogeneity_score(y_true, y_pred)

V-measure
>>> from sklearn.metrics import v_measure_score
>>> v_measure_score(y_true, y_pred)

Cross-Validation
>>> from sklearn.model_selection import cross_val_score
>>> print(cross_val_score(knn, X_train, y_train, cv=4))
>>> print(cross_val_score(lr, X, y, cv=2))

Tune Your Model

Grid Search
>>> from sklearn.model_selection import GridSearchCV
>>> params = {"n_neighbors": np.arange(1, 3),
...           "metric": ["euclidean", "cityblock"]}
>>> grid = GridSearchCV(estimator=knn, param_grid=params)
>>> grid.fit(X_train, y_train)
>>> print(grid.best_score_)
>>> print(grid.best_estimator_.n_neighbors)

Randomized Parameter Optimization
>>> from sklearn.model_selection import RandomizedSearchCV
>>> params = {"n_neighbors": range(1, 5),
...           "weights": ["uniform", "distance"]}
>>> rsearch = RandomizedSearchCV(estimator=knn,
...                              param_distributions=params,
...                              cv=4, n_iter=8, random_state=5)
>>> rsearch.fit(X_train, y_train)
>>> print(rsearch.best_score_)

DataCamp: Learn Python for Data Science Interactively.
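The preprocessing, estimator and tuning steps above compose naturally into a single object. The following is a minimal sketch of that idea, not part of the original cheat sheet; it assumes scikit-learn >= 0.20 module paths, and names such as pipe and params are illustrative only.

>>> from sklearn.datasets import load_iris
>>> from sklearn.model_selection import train_test_split, GridSearchCV
>>> from sklearn.pipeline import Pipeline
>>> from sklearn.preprocessing import StandardScaler
>>> from sklearn.neighbors import KNeighborsClassifier
>>> X, y = load_iris(return_X_y=True)
>>> X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
>>> pipe = Pipeline([("scale", StandardScaler()),        # scaling happens inside each CV fold
...                  ("knn", KNeighborsClassifier())])
>>> params = {"knn__n_neighbors": [3, 5, 7]}             # <step name>__<parameter>
>>> grid = GridSearchCV(pipe, param_grid=params, cv=5)
>>> grid.fit(X_train, y_train)
>>> grid.score(X_test, y_test)                           # accuracy of the best found model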
Recommended publications
  • SageMath and SageMathCloud
    SageMath and SageMathCloud — Viviane Pons, Maître de conférences, Université Paris-Sud Orsay, [email protected], @PyViv (October 19, 2016).

    Introduction. SageMath is a free open source mathematics software:
    • Created in 2005 by William Stein.
    • http://www.sagemath.org/
    • Mission: creating a viable free open source alternative to Magma, Maple, Mathematica and Matlab.

    Source and language:
    • The main language of Sage is Python (but there are many other source languages: Cython, C, C++, Fortran).
    • The source is distributed under the GPL licence.

    Sage and libraries. One of the original purposes of Sage was to put together the many existing open source mathematics software programs: Atlas, GAP, GMP, Linbox, Maxima, MPFR, PARI/GP, NetworkX, NTL, NumPy/SciPy, Singular, Symmetrica, ... Sage is all-inclusive: it installs all those libraries and gives you a common Python-based interface to work with them. On top of it is the Python/Cython Sage library itself.

    You can use a library explicitly:
    sage: n = gap(20062006)
    sage: type(n)
    <class 'sage.interfaces.gap.GapElement'>
    sage: n.Factors()
    [ 2, 17, 59, 73, 137 ]
    But also, many of Sage's computations are done through those libraries without necessarily telling you:
    sage: G = PermutationGroup([[(1,2,3),(4,5)],[(3,4)]])
    sage: G.gap()
    Group( [ (3,4), (1,2,3)(4,5) ] )

    Development model. Sage is developed by researchers for researchers: the original philosophy is to develop what you need for your research and share it with the community.
  • Mastering Machine Learning with Scikit-Learn
    Mastering Machine Learning with scikit-learn: Apply effective learning algorithms to real-world problems using scikit-learn. Gavin Hackeling. Birmingham - Mumbai.

    Copyright © 2014 Packt Publishing. All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior written permission of the publisher, except in the case of brief quotations embedded in critical articles or reviews. Every effort has been made in the preparation of this book to ensure the accuracy of the information presented. However, the information contained in this book is sold without warranty, either express or implied. Neither the author, nor Packt Publishing and its dealers and distributors, will be held liable for any damages caused or alleged to be caused directly or indirectly by this book. Packt Publishing has endeavored to provide trademark information about all of the companies and products mentioned in this book by the appropriate use of capitals. However, Packt Publishing cannot guarantee the accuracy of this information.

    First published: October 2014. Production reference: 1221014. Published by Packt Publishing Ltd., Livery Place, 35 Livery Street, Birmingham B3 2PB, UK. ISBN 978-1-78398-836-5. www.packtpub.com. Cover image by Amy-Lee Winfield ([email protected]).

    Credits — Author: Gavin Hackeling. Project Coordinator: Danuta Jones. Reviewers: Fahad Arshad, ... Proofreaders: Simran Bhogal, Sarah
  • Python in Data Science Programming
    Python in Data Science Programming — Ha Khanh Nguyen

    Agenda:
    • What is Python?
    • The Python Ecosystem: Installing Python
    • Basics of Programming: Data Types, Functions, List
    • NumPy Library
    • Pandas Library

    What is Python?
    • "Python is an interpreted high-level general-purpose programming language." – Wikipedia
    • It supports multiple programming paradigms, including: structured/procedural, object-oriented, and functional.

    The Python Ecosystem (source: Fabien Maussion's "Getting started with Python" workshop)

    Installing Python
    • Install Python through Anaconda/Miniconda. This allows you to create and work in Python environments, and you can create multiple environments as needed. Highly recommended: install Python through Miniconda.
    • Are we ready to "play" with Python yet? Almost! Most data scientists use Python through Jupyter Notebook, a web application that allows you to create a virtual notebook containing both text and code.
    • Python installation tutorial: [Mac OS X] [Windows]

    For today's seminar
    • Due to the time limitation, we will be using Google Colab instead.
    • Google Colab is a free Jupyter notebook environment that runs entirely in the cloud, so you can run Python code and write reports in a Jupyter notebook even without installing the Python ecosystem.
    • It is NOT a complete alternative to installing Python on your local computer, but it is a quick and easy way to get started with or try out Python.
    • You will need to log in using your university account (possible for some schools) or your personal Google account.

    Basics of Programming: Indentation, Not Braces
    • Other languages (R, C++, Java, etc.) use braces to structure code:
      # R code (not Python)
      a = 10
      if (a < 5) {
        result = TRUE
        b = 0
      } else {
        result = FALSE
        b = 100
      }
      # or, on one line:
      a = 10
      if (a < 5) { result = TRUE; b = 0 } else { result = FALSE; b = 100 }
    • Python uses whitespace (tabs or spaces) to structure code instead; the equivalent Python version is sketched after this excerpt.
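    For comparison, here is the same branching logic written in Python, where indentation alone delimits the blocks (an illustrative sketch, not taken from the original slides):

      # Python: blocks are defined by indentation, no braces needed
      a = 10
      if a < 5:
          result = True
          b = 0
      else:
          result = False
          b = 100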
  • How to Access Python for Doing Scientific Computing
    How to access Python for doing scientific computing — Hans Petter Langtangen (Center for Biomedical Computing, Simula Research Laboratory; Department of Informatics, University of Oslo), Mar 23, 2015. The material in this document is taken from a chapter in the book A Primer on Scientific Programming with Python, 4th edition, by the same author, published by Springer, 2014.

    A comprehensive ecosystem for scientific computing with Python used to be quite a challenge to install on a computer, especially for newcomers. This problem is more or less solved today. There are several options for getting easy access to Python and the most important packages for scientific computations, so the biggest issue for a newcomer is to make a proper choice. An overview of the possibilities together with my own recommendations appears next.

    Contents: 1 Required software; 2 Installing software on your laptop: Mac OS X and Windows; 3 Anaconda and Spyder (Spyder on Mac; Installation of additional packages; Installing SciTools on Mac; Installing SciTools on Windows); 4 VMWare Fusion virtual machine (Installing Ubuntu; Installing software on Ubuntu; File sharing); 5 Dual boot on Windows; 6 Vagrant virtual machine; 7 How to write and run a Python program (The need for a text editor; Spyder; Text editors; Terminal windows; Using a plain text editor and a terminal window); 8 The SageMathCloud and Wakari web services (Basic intro to SageMathCloud; ...)
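    As a concrete instance of the last workflow in that table of contents (writing a program in a plain text editor and running it from a terminal window), a minimal sketch of my own, not taken from the document:

      # hello.py -- saved with any plain text editor
      print("Hello, scientific Python")

    and then, in a terminal window: python hello.py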
  • An Introduction to Numpy and Scipy
    An introduction to Numpy and Scipy

    Table of contents: Overview; Installation; Other resources; Importing the NumPy module; Arrays; Other ways to create arrays; Array mathematics; Array iteration; Basic array operations; Comparison operators and value testing; ...
  • Python For Data Science Cheat Sheet: Pandas Basics
    Python For Data Science Cheat Sheet: Pandas Basics
    Learn Python for Data Science interactively at www.DataCamp.com

    Pandas
    The Pandas library is built on NumPy and provides easy-to-use data structures and data analysis tools for the Python programming language.

    Use the following import convention:
    >>> import pandas as pd

    Pandas Data Structures
    Series: a one-dimensional labeled array capable of holding any data type (e.g. values 3, -5, 7 with index labels a, b, c).

    Asking For Help
    >>> help(pd.Series.loc)

    Selection (also see NumPy Arrays)

    Getting
    >>> s['b']                       # Get one element
    -5
    >>> df[1:]                       # Get a subset of a DataFrame
       Country  Capital    Population
    1  India    New Delhi  1303171035
    2  Brazil   Brasília    207847528

    Selecting, Boolean Indexing & Setting
    By position:
    >>> df.iloc[[0], [0]]            # Select single value by row & column
    'Belgium'
    >>> df.iat[0, 0]
    'Belgium'
    By label:
    >>> df.loc[[0], ['Country']]     # Select single value by row & column labels
    'Belgium'

    Dropping
    >>> s.drop(['a', 'c'])           # Drop values from rows (axis=0)
    >>> df.drop('Country', axis=1)   # Drop values from columns (axis=1)

    Sort & Rank
    >>> df.sort_index()              # Sort by labels along an axis
    >>> df.sort_values(by='Country') # Sort by the values along an axis
    >>> df.rank()                    # Assign ranks to entries

    Retrieving Series/DataFrame Information
    Basic information:
    >>> df.shape                     # (rows, columns)
    >>> df.index                     # Describe index
    >>> df.columns                   # Describe DataFrame columns
    >>> df.info()                    # Info on DataFrame
    >>> df.count()                   # Number of non-NA values
    Summary:
    >>> df.sum()                     # Sum of values
    >>> df.cumsum()                  # Cumulative sum of values
    >>> df.min()/df.max()            # Minimum/maximum values
    Minimum/Maximum index ...
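    The snippets above assume a Series s and a DataFrame df roughly like the following; their construction is not shown in the excerpt, so the exact values (notably Belgium's population) are illustrative:

    >>> import pandas as pd
    >>> s = pd.Series([3, -5, 7], index=['a', 'b', 'c'])          # labeled one-dimensional array
    >>> df = pd.DataFrame({'Country': ['Belgium', 'India', 'Brazil'],
    ...                    'Capital': ['Brussels', 'New Delhi', 'Brasília'],
    ...                    'Population': [11190846, 1303171035, 207847528]})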
  • Homework 2: An Introduction to Convolutional Neural Networks
    Homework 2: An Introduction to Convolutional Neural Networks — 11-785: Introduction to Deep Learning (Spring 2019). OUT: February 17, 2019. DUE: March 10, 2019, 11:59 PM.

    Start Here
    • Collaboration policy: You are expected to comply with the University Policy on Academic Integrity and Plagiarism. You are allowed to talk with / work with other students on homework assignments. You can share ideas but not code; you must submit your own code. All submitted code will be compared against all code submitted this semester and in previous semesters using MOSS.
    • Overview: Part 1: All of the problems in Part 1 will be graded on Autolab. You can download the starter code from Autolab as well. This assignment has 100 points total. Part 2: This section of the homework is an open-ended competition hosted on Kaggle.com, a popular service for hosting predictive modeling and data analytics competitions. All of the details for completing Part 2 can be found on the competition page.
    • Submission: Part 1: The compressed handout folder hosted on Autolab contains four Python files, layers.py, cnn.py, mlp.py and local_grader.py, and some folders data, autograde and weights. File layers.py contains common layers in convolutional neural networks. File cnn.py contains the outline classes of two convolutional neural networks. File mlp.py contains a basic MLP network. File local_grader.py is for you to evaluate your own code before submitting online. The folders contain some data to be used for sections 1.3 and 1.4 of this homework. Make sure that all class and function definitions originally specified (as well as class attributes and methods) in the two files layers.py and cnn.py are fully implemented to the specification provided in this write-up.
  • NumPy User Guide, Release 1.18.4
    NumPy User Guide, Release 1.18.4 — written by the NumPy community, May 24, 2020.

    Contents: 1 Setting up; 2 Quickstart tutorial; 3 NumPy basics; 4 Miscellaneous; 5 NumPy for Matlab users; 6 Building from source; 7 Using NumPy C-API; Python Module Index; Index.

    This guide is intended as an introductory overview of NumPy and explains how to install and make use of the most important features of NumPy. For detailed reference documentation of the functions and classes contained in the package, see the reference.

    Setting up: What is NumPy? NumPy is the fundamental package for scientific computing in Python. It is a Python library that provides a multidimensional array object, various derived objects (such as masked arrays and matrices), and an assortment of routines for fast operations on arrays, including mathematical, logical, shape manipulation, sorting, selecting, I/O, discrete Fourier transforms, basic linear algebra, basic statistical operations, random simulation and much more.

    At the core of the NumPy package is the ndarray object. This encapsulates n-dimensional arrays of homogeneous data types, with many operations being performed in compiled code for performance. There are several important differences between NumPy arrays and the standard Python sequences:
    • NumPy arrays have a fixed size at creation, unlike Python lists (which can grow dynamically). Changing the size of an ndarray will create a new array and delete the original (see the sketch below).
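    A short sketch of that difference, written here for illustration rather than taken from the guide:

    import numpy as np

    lst = [1, 2, 3]
    lst.append(4)                 # a Python list grows in place

    arr = np.array([1, 2, 3])     # an ndarray has a fixed size at creation
    bigger = np.append(arr, 4)    # "growing" it returns a brand-new array
    print(arr.size, bigger.size)  # 3 4 -- the original array is unchanged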
  • Introduction to Python in HPC
    Introduction to Python in HPC — Omar Padron, National Center for Supercomputing Applications, University of Illinois at Urbana-Champaign ([email protected])

    Outline
    • What is Python?
    • Why Python for HPC?
    • Python on Blue Waters: how to load it, available modules, known limitations
    • Where to get more information

    What is Python?
    • Modern programming language with a succinct and intuitive syntax.
    • Has grown over the years to offer an extensive software ecosystem.
    • Interprets JIT-compiled byte-code, but can take full advantage of AOT-compiled code: using Python libraries with AOT code (numpy, scipy), interfacing seamlessly with AOT code (CPython API), generating native modules with/from AOT code (cython).

    Why Python for HPC?
    • When you want to maximize productivity (not necessarily performance).
    • Mature language with a large user base.
    • Huge collection of freely available software libraries: high performance computing; engineering, optimization, differential equations; scientific datasets, analysis, visualization; general-purpose computing (web apps, GUIs, databases, and tons more).
    • Python combines the best of both JIT and AOT code: write performance-critical loops and kernels in C/FORTRAN, and write high-level logic and "boiler plate" in Python.

    Python on Blue Waters
    • Blue Waters Python Software Stack (bw-python): over 60 Python modules supporting HPC applications.
    • Implements the SciPy Stack Specification (http://www.scipy.org/stackspec.html).
    • Provides extensions for parallel programming (mpi4py, pycuda) and scientific data IO (h5py, pycuda).
    • Numerical routines linked from ACML.
    • Built specifically for running jobs on the compute nodes:

    module swap PrgEnv-cray PrgEnv-gnu
    module load bw-python
    python

    Python 2.7.5 (default, Jun 2 2014, 02:41:21)
    [GCC 4.8.2 20131016 (Cray Inc.)] on Blue Waters
    Type "help", "copyright", "credits" or "license" for more information.
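    The slides list mpi4py as the stack's parallel-programming extension; a minimal usage sketch of my own (not from the slides, and kept Python 2/3 compatible to match the interpreter shown above; the launcher command varies by system):

    # run with e.g.: mpiexec -n 4 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()   # this process's id within the communicator
    size = comm.Get_size()   # total number of MPI processes
    print("Hello from rank %d of %d" % (rank, size))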
  • Sage 9.4 Reference Manual: Coercion Release 9.4
    Sage 9.4 Reference Manual: Coercion, Release 9.4 — The Sage Development Team, Aug 24, 2021.

    Contents: 1 Preliminaries (What is coercion all about?; Parents and Elements; Maps between Parents); 2 Basic Arithmetic Rules; 3 How to Implement (Methods to implement; Example; Provided Methods); 4 Discovering new parents; 5 Modules (The coercion model; Coerce actions; Coerce maps; Coercion via construction functors; Group, ring, etc. actions on objects; Containers for storing coercion data; Exceptions raised by the coercion model); 6 Indices and Tables; Python Module Index; Index.

    Chapter One: Preliminaries

    1.1 What is coercion all about? The primary goal of coercion is to be able to transparently do arithmetic, comparisons, etc. between elements of distinct sets. As a concrete example, when one writes 1 + 1/2 one wants to perform arithmetic on the operands as rational numbers, despite the left operand being an integer. This makes sense given the obvious
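    A minimal Sage session illustrating that behaviour (my own sketch, not taken from the manual):

    sage: a = 1            # an Integer
    sage: b = 1/2          # a Rational
    sage: a + b            # the integer is coerced into the rationals before adding
    3/2
    sage: parent(a + b)
    Rational Field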
  • Pandas: Powerful Python Data Analysis Toolkit Release 0.25.3
    pandas: powerful Python data analysis toolkit, Release 0.25.3 — Wes McKinney & PyData Development Team, Nov 02, 2019.

    Date: Nov 02, 2019. Version: 0.25.3. Download documentation: PDF Version | Zipped HTML. Useful links: Binary Installers | Source Repository | Issues & Ideas | Q&A Support | Mailing List.

    pandas is an open source, BSD-licensed library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language. See the overview for more detail about what's in the library.

    Chapter One: What's new in 0.25.2 (October 15, 2019)

    These are the changes in pandas 0.25.2. See release for a full changelog including other versions of pandas. Note: pandas 0.25.2 adds compatibility for Python 3.8 (GH28147).

    1.1 Bug fixes

    1.1.1 Indexing
    • Fix regression in DataFrame.reindex() not following the limit argument (GH28631).
    • Fix regression in RangeIndex.get_indexer() for decreasing RangeIndex where target values may be improperly identified as missing/present (GH28678).

    1.1.2 I/O
    • Fix regression in notebook display where <th> tags were missing for DataFrame.index values (GH28204).
    • Regression in to_csv() where writing a Series or DataFrame indexed by an IntervalIndex would incorrectly raise a TypeError (GH28210).
    • Fix to_csv() with ExtensionArray with list-like values (GH28840).

    1.1.3 Groupby/resample/rolling
    • Bug incorrectly raising an IndexError when passing a list of quantiles to pandas.core.groupby.DataFrameGroupBy.quantile() (GH28113).
  • MATLAB Commands in Numerical Python (NumPy) — Vidar Bronken Gundersen, mathesaurus.sf.net
    MATLAB commands in numerical Python (NumPy) — Vidar Bronken Gundersen, mathesaurus.sf.net. Copyright (c) Vidar Bronken Gundersen. Permission is granted to copy, distribute and/or modify this document as long as the above attribution is kept and the resulting work is distributed under a license identical to this one.

    The idea of this document (and the corresponding XML instance) is to provide a quick reference for switching from MATLAB to an open-source environment, such as Python, Scilab, Octave and Gnuplot, or R, for numeric processing and data visualisation. Where Octave and Scilab commands are omitted, expect Matlab compatibility, and similarly where none are given use the generic command.

    1 Help
    Desc.                                 | matlab/Octave                  | Python                    | R
    Browse help interactively             | doc (Octave: help -i)          | help()                    | help.start()
    Help on using help                    | help help or doc doc           | help                      | help()
    Help for a function                   | help plot                      | help(plot) or ?plot       | help(plot) or ?plot
    Help for a toolbox/library package    | help splines or doc splines    | help(pylab)               | help(package='splines')
    Demonstration examples                | demo                           |                           | demo()
    Example using a function              |                                |                           | example(plot)

    1.1 Searching available documentation
    Desc.                                 | matlab/Octave | Python                     | R
    Search help files                     | lookfor plot  |                            | help.search('plot')
    Find objects by partial name          |               |                            | apropos('plot')
    List available packages               | help          | help(); modules [Numeric]  | library()
    Locate functions                      | which plot    | help(plot)                 | find(plot)
    List available methods for a function |               |                            | methods(plot)

    1.2 Using interactively
    Desc.                                 | matlab/Octave             | Python                               | R
    Start session                         | Octave: octave -q         | ipython -pylab                       | Rgui
    Auto completion                       | Octave: TAB or M-?        | TAB                                  |
    Run code from file                    | foo(.m)                   | execfile('foo.py') or run foo.py     | source('foo.R')
    Command history                       | Octave: history           | hist -n                              | history()
    Save command history                  | diary on [..] diary off   |                                      | savehistory(file=".Rhistory")
    End session                           | exit or quit              | CTRL-D, CTRL-Z # windows, sys.exit() | q(save='no')

    2 Operators
    Desc. ...