
Ceph documentation
Release dev
Ceph developers
August 08, 2012

CONTENTS

1 Getting Started
  1.1 5-minute Quick Start
  1.2 RBD Quick Start
  1.3 Ceph FS Quick Start
  1.4 Get Involved in the Ceph Community!
  1.5 Installing Ceph Manually
2 Installation
  2.1 Hardware Recommendations
  2.2 Installing Debian/Ubuntu Packages
  2.3 Installing RPM Packages
  2.4 Installing Chef
  2.5 Installing OpenStack
3 Configuration
  3.1 Hard Disk and File System Recommendations
  3.2 Ceph Configuration Files
  3.3 Deploying with mkcephfs
  3.4 Deploying with Chef
  3.5 Storage Pools
  3.6 Authentication
4 Operating a Cluster
  4.1 Starting a Cluster
  4.2 Checking Cluster Health
  4.3 Stopping a Cluster
5 Ceph FS
  5.1 Mount Ceph FS with the Kernel Driver
  5.2 Mount Ceph FS as a FUSE
  5.3 Mount Ceph FS in your File Systems Table
6 Block Devices
  6.1 RADOS RBD Commands
  6.2 RBD Kernel Object Operations
  6.3 RBD Snapshotting
  6.4 QEMU and RBD
  6.5 Using libvirt with Ceph RBD
  6.6 RBD and OpenStack
7 RADOS Gateway
  7.1 Install Apache, FastCGI and RADOS GW
  7.2 Configuring RADOS Gateway
  7.3 RADOS Gateway Configuration Reference
  7.4 RADOS S3 API
  7.5 Swift-compatible API
8 Operations
  8.1 Managing a Ceph cluster
  8.2 Radosgw installation and administration
  8.3 RBD setup and administration
  8.4 Monitoring Ceph
9 Recommendations
  9.1 Hardware
  9.2 Filesystem
  9.3 Data placement
  9.4 Disabling cryptography
10 Control commands
  10.1 Monitor commands
  10.2 System commands
  10.3 AUTH subsystem
  10.4 PG subsystem
  10.5 OSD subsystem
  10.6 MDS subsystem
  10.7 Mon subsystem
11 API Documentation
  11.1 Librados (C)
  11.2 LibradosPP (C++)
  11.3 Librbd (Python)
12 Ceph Source Code
  12.1 Build Prerequisites
  12.2 Downloading a Ceph Release Tarball
  12.3 Set Up Git
  12.4 Cloning the Ceph Source Code Repository
  12.5 Building Ceph
  12.6 Build Ceph Packages
  12.7 Contributing Source Code
13 Internal developer documentation
  13.1 Configuration Management System
  13.2 CephContext
  13.3 CephFS delayed deletion
  13.4 Documenting Ceph
  13.5 File striping
  13.6 Filestore filesystem compatibility
  13.7 Building Ceph Documentation
  13.8 Kernel client troubleshooting (FS)
  13.9 Library architecture
  13.10 Debug logs
  13.11 Monitor bootstrap
  13.12 Object Store Architecture Overview
  13.13 OSD class path issues
  13.14 Peering
  13.15 Perf counters
  13.16 PG (Placement Group) notes
  13.17 RBD Layering
  13.18 OSD developer documentation
14 Manual pages
  14.1 Section 1, executable programs or shell commands
  14.2 Section 8, system administration commands
15 Architecture of Ceph
  15.1 Monitor cluster
  15.2 RADOS
  15.3 Ceph filesystem
  15.4 radosgw
  15.5 Rados Block Device (RBD)
  15.6 Client
  15.7 TODO
16 Frequently Asked Questions
  16.1 Is Ceph Production-Quality?
  16.2 How can I add a question to this list?
17 Academic papers
18 Release Notes
  18.1 v0.48 "argonaut"
19 Appendices
  19.1 Differences from POSIX
Python Module Index

Ceph uniquely delivers object, block, and file storage in one unified system. Ceph is highly reliable, easy to manage, and free. The power of Ceph can transform your company's IT infrastructure and your ability to manage vast amounts of data. Ceph delivers extraordinary scalability: thousands of clients accessing petabytes to exabytes of data. Ceph leverages commodity hardware and intelligent daemons to accommodate large numbers of storage hosts, which communicate with each other to replicate data and redistribute data dynamically. Ceph's cluster of monitors oversees the hosts in the Ceph storage cluster to ensure that the storage hosts are running smoothly.

CHAPTER ONE

GETTING STARTED

Welcome to Ceph!
The following sections provide information that will help you get started:

1.1 5-minute Quick Start

Thank you for trying Ceph! Petabyte-scale data clusters are quite an undertaking. Before delving deeper into Ceph, we recommend setting up a cluster on a single host to explore some of its functionality. The Ceph 5-Minute Quick Start is intended for use on one machine running a recent Debian/Ubuntu operating system. The intent is to help you exercise Ceph functionality without the deployment overhead associated with a production-ready storage cluster.

1.1.1 Install Debian/Ubuntu

Install a recent release of Debian or Ubuntu (e.g., 12.04 precise).

1.1.2 Add Ceph Packages

To get the latest Ceph packages, add a release key to APT, add a source location under /etc/apt/sources.list.d/, update your system, and install Ceph:

wget -q -O- https://raw.github.com/ceph/ceph/master/keys/release.asc | sudo apt-key add -
echo deb http://ceph.com/debian/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
sudo apt-get update && sudo apt-get install ceph

1.1.3 Add a Configuration File

Modify the contents of the following configuration file such that localhost is the actual host name, and the monitor IP address is the actual IP address of the host (i.e., not 127.0.0.1). Then, copy the contents of the modified configuration file and save it to /etc/ceph/ceph.conf. This file will configure Ceph to operate a monitor, two OSD daemons, and one metadata server on your local machine.

[osd]
    osd journal size = 1000
    filestore xattr use omap = true

[mon.a]
    host = localhost
    mon addr = 127.0.0.1:6789

[osd.0]
    host = localhost

[osd.1]
    host = localhost

[mds.a]
    host = localhost

1.1.4 Deploy the Configuration

To deploy the configuration, create a directory for each daemon as follows:

sudo mkdir /var/lib/ceph/osd/ceph-0
sudo mkdir /var/lib/ceph/osd/ceph-1
sudo mkdir /var/lib/ceph/mon/ceph-a
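Since the sample configuration file must be edited before it is saved to /etc/ceph/ceph.conf, the host-name and IP substitutions can be scripted. The following is an illustrative sketch, not part of the official quick start: the host name ceph-node1 and the address 192.168.0.10 are assumed placeholders, and only the [mon.a] fragment of the sample file is shown.

```shell
# Illustrative sketch: substitute a real host name and IP address into a
# fragment of the sample configuration. HOST and IP are assumed
# placeholders; on a real system use `hostname -s` and the host's address
# (which must not be 127.0.0.1).
HOST=ceph-node1
IP=192.168.0.10
printf '[mon.a]\nhost = localhost\nmon addr = 127.0.0.1:6789\n' |
  sed -e "s/localhost/${HOST}/" -e "s/127\.0\.0\.1/${IP}/"
```

The same two sed expressions, applied to a copy of the full sample file, would produce a configuration ready to save as /etc/ceph/ceph.conf.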