Ray Documentation
Release 0.8.1
The Ray Team
Feb 02, 2020

Contents

Installation
1 Quick Start
2 Tune Quick Start
3 RLlib Quick Start
4 More Information
    4.1 Blog and Press
    4.2 Talks (Videos)
    4.3 Slides
    4.4 Academic Papers
5 Getting Involved
    5.1 Installing Ray
    5.2 Using Ray
    5.3 Configuring Ray
    5.4 Deploying Ray
    5.5 Examples Overview
    5.6 Ray Package Reference
    5.7 Tune: A Scalable Hyperparameter Tuning Library
    5.8 Tune Walkthrough
    5.9 Tune Advanced Tutorials
    5.10 Tune User Guide
    5.11 Tune Distributed Experiments
    5.12 Tune Trial Schedulers
    5.13 Tune Search Algorithms
    5.14 Tune Package Reference
    5.15 Tune Design Guide
    5.16 Tune Examples
    5.17 Contributing to Tune
    5.18 RLlib: Scalable Reinforcement Learning
    5.19 RLlib Table of Contents
    5.20 RLlib Training APIs
    5.21 RLlib Environments
    5.22 RLlib Models, Preprocessors, and Action Distributions
    5.23 RLlib Algorithms
    5.24 RLlib Offline Datasets
    5.25 RLlib Concepts and Custom Algorithms
    5.26 RLlib Examples
    5.27 RLlib Development
    5.28 RLlib Package Reference
    5.29 RaySGD: Distributed Deep Learning
    5.30 Pandas on Ray
    5.31 Ray Projects (Experimental)
    5.32 Signal API (Experimental)
    5.33 Async API (Experimental)
    5.34 Ray Serve (Experimental)
    5.35 Parallel Iterator API (Experimental)
    5.36 multiprocessing.Pool API (Experimental)
    5.37 Development Tips
    5.38 Profiling for Ray Developers
    5.39 Fault Tolerance
    5.40 Getting Involved
Python Module Index
Index

Ray is a fast and simple framework for building and running distributed applications.

Tip: Join our community Slack to discuss Ray!

Ray is packaged with the following libraries for accelerating machine learning workloads:
• Tune: Scalable Hyperparameter Tuning
• RLlib: Scalable Reinforcement Learning
• RaySGD: Distributed Training

Star us on GitHub. You can also get started by visiting our Tutorials. For the latest wheels (nightlies), see the installation page.

CHAPTER 1: Quick Start

First, install Ray with:

pip install ray

# Execute Python functions in parallel.
import ray
ray.init()

@ray.remote
def f(x):
    return x * x

futures = [f.remote(i) for i in range(4)]
print(ray.get(futures))

To use Ray's actor model:

import ray
ray.init()

@ray.remote
class Counter(object):
    def __init__(self):
        self.n = 0

    def increment(self):
        self.n += 1

    def read(self):
        return self.n

counters = [Counter.remote() for i in range(4)]
[c.increment.remote() for c in counters]
futures = [c.read.remote() for c in counters]
print(ray.get(futures))

Visit the Walkthrough page for a more comprehensive overview of Ray features.

Ray programs can run on a single machine, and can also seamlessly scale to large clusters. To execute the above Ray script in the cloud, just download this configuration file, and run:

ray submit [CLUSTER.YAML] example.py --start

Read more about launching clusters.
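A useful property of the API shown above is that remote calls compose: the object IDs returned by .remote() can be passed straight into other remote functions, and Ray resolves them to their values before the dependent task runs. The short sketch below is an illustrative addition, not part of the original quick start; the square and add functions are hypothetical examples.

# Illustrative sketch (not from the original docs): composing remote tasks.
# Object IDs returned by .remote() can be passed to other remote functions;
# Ray waits for them and substitutes the concrete values automatically.
import ray

ray.init()

@ray.remote
def square(x):
    return x * x

@ray.remote
def add(a, b):
    # a and b arrive here as plain integers, already resolved by Ray.
    return a + b

x_id = square.remote(2)              # returns an object ID immediately
y_id = square.remote(3)
total_id = add.remote(x_id, y_id)    # no ray.get needed to chain tasks
print(ray.get(total_id))             # 4 + 9 = 13

Because Ray resolves the intermediate object IDs itself, the driver never fetches x_id or y_id, so intermediate results stay in the object store rather than passing through the driver process.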
CHAPTER 2: Tune Quick Start

Tune is a library for hyperparameter tuning at any scale. With Tune, you can launch a multi-node distributed hyperparameter sweep in less than 10 lines of code. Tune supports any deep learning framework, including PyTorch, TensorFlow, and Keras.

Note: To run this example, you will need to install the following:

$ pip install ray torch torchvision filelock

This example runs a small grid search to train a CNN using PyTorch and Tune.

import torch.optim as optim
from ray import tune
from ray.tune.examples.mnist_pytorch import get_data_loaders, ConvNet, train, test

def train_mnist(config):
    train_loader, test_loader = get_data_loaders()
    model = ConvNet()
    optimizer = optim.SGD(model.parameters(), lr=config["lr"])
    for i in range(10):
        train(model, optimizer, train_loader)
        acc = test(model, test_loader)
        tune.track.log(mean_accuracy=acc)

analysis = tune.run(
    train_mnist,
    config={"lr": tune.grid_search([0.001, 0.01, 0.1])})

print("Best config:", analysis.get_best_config(metric="mean_accuracy"))

# Get a dataframe for analyzing trial results.
df = analysis.dataframe()

If TensorBoard is installed, automatically visualize all trial results:

tensorboard --logdir ~/ray_results
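Grid search is only one way to drive trials. Tune also provides trial schedulers that stop unpromising trials early (see the Tune Trial Schedulers section). The sketch below is an illustrative addition rather than part of the original example: it assumes the ASHAScheduler exposed by ray.tune.schedulers in this release and reuses the train_mnist trainable defined above.

# Hedged sketch (not in the original quick start): early stopping with ASHA.
# Assumes ray.tune.schedulers.ASHAScheduler is available in this release.
from ray import tune
from ray.tune.schedulers import ASHAScheduler

analysis = tune.run(
    train_mnist,                        # trainable defined in the example above
    num_samples=10,                     # repeat the grid so ASHA has more trials to compare
    scheduler=ASHAScheduler(metric="mean_accuracy", mode="max"),
    config={"lr": tune.grid_search([0.001, 0.01, 0.1])})

print("Best config:", analysis.get_best_config(metric="mean_accuracy"))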
CHAPTER 3: RLlib Quick Start

RLlib is an open-source library for reinforcement learning built on top of Ray that offers both high scalability and a unified API for a variety of applications.

pip install tensorflow  # or tensorflow-gpu
pip install ray[rllib]  # also recommended: ray[debug]

import gym
from gym.spaces import Discrete, Box
from ray import tune

class SimpleCorridor(gym.Env):
    def __init__(self, config):
        self.end_pos = config["corridor_length"]
        self.cur_pos = 0
        self.action_space = Discrete(2)
        self.observation_space = Box(0.0, self.end_pos, shape=(1, ))

    def reset(self):
        self.cur_pos = 0
        return [self.cur_pos]

    def step(self, action):
        if action == 0 and self.cur_pos > 0:
            self.cur_pos -= 1
        elif action == 1:
            self.cur_pos += 1
        done = self.cur_pos >= self.end_pos
        return [self.cur_pos], 1 if done else 0, done, {}

tune.run(
    "PPO",
    config={
        "env": SimpleCorridor,
        "num_workers": 4,
        "env_config": {"corridor_length": 5}})

CHAPTER 4: More Information

Here are some talks, papers, and press coverage involving Ray and its libraries. Please raise an issue if any of the below links are broken!

4.1 Blog and Press

• Modern Parallel and Distributed Python: A Quick Tutorial on Ray
• Why Every Python Developer Will Love Ray
• Ray: A Distributed System for AI (BAIR)
• 10x Faster Parallel Python Without Python Multiprocessing
• Implementing A Parameter Server in 15 Lines of Python with Ray
• Ray Distributed AI Framework Curriculum
• RayOnSpark: Running Emerging AI Applications on Big Data Clusters with Ray and Analytics Zoo
• First user tips for Ray
• [Tune] Tune: a Python library for fast hyperparameter tuning at any scale
• [Tune] Cutting edge hyperparameter tuning with Ray Tune
• [RLlib] New Library Targets High Speed Reinforcement Learning
• [RLlib] Scaling Multi Agent Reinforcement Learning
• [RLlib] Functional RL with Keras and TensorFlow Eager
• [Modin] How to Speed up Pandas by 4x with one line of code
• [Modin] Quick Tip – Speed up Pandas using Modin
• Ray Blog

4.2 Talks (Videos)

• Programming at any Scale with Ray | SF Python Meetup Sept 2019
• Ray for Reinforcement Learning | Data Council 2019
• Scaling Interactive Pandas Workflows with Modin
• Ray: A Distributed Execution Framework for AI | SciPy 2018
• Ray: A Cluster Computing Engine for Reinforcement Learning Applications | Spark Summit
• RLlib: Ray Reinforcement Learning Library | RISECamp 2018
• Enabling Composition in Distributed Reinforcement Learning | Spark Summit 2018
• Tune: Distributed Hyperparameter Search | RISECamp 2018

4.3 Slides

• Talk given at UC Berkeley DS100
• Talk given in October 2019
• [Tune] Talk given at RISECamp 2019

4.4 Academic Papers

• Ray paper
• Ray HotOS paper
• RLlib paper
• Tune paper

CHAPTER 5: Getting Involved

• ray-dev@googlegroups.com: For discussions about development or any general questions.
• StackOverflow: For questions about how to use Ray.
• GitHub Issues: For reporting bugs and feature requests.
• Pull Requests: For submitting code contributions.

5.1 Installing Ray

Ray currently supports MacOS and Linux. Windows support is planned for the future.

5.1.1 Latest stable version

You can install the latest stable version of Ray as follows.

pip install -U ray  # also recommended: ray[debug]

5.1.2 Latest Snapshots (Nightlies)