Script for Community Meeting Topics: LVM and Storage in RHEL 7

To partition or not to partition

Depends on the use case and who is dealing with the storage. My recommendation: always create a partition. If booting in BIOS mode, use MBR for the OS disk; if using UEFI boot, use GPT. Create GPT tables with parted:
# parted /dev/vdb mklabel gpt
# parted /dev/vdb mkpart primary 2048s 100%
or in one command:
# parted -s /dev/vdb mklabel gpt mkpart primary 2048s 100%
Partition alignment is done automatically by anaconda. For manual alignment see: http://rainbow.chard.org/2013/01/30/how-to-align-partitions-for-best-performance-using-parted/
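A minimal sketch of the alignment arithmetic behind the 2048s start used above, assuming 512-byte sectors and the common 1 MiB (2048-sector) boundary that anaconda and modern parted default to:

```shell
#!/bin/sh
# Sketch: is a partition start sector aligned to a 1 MiB boundary?
# Assumes 512-byte sectors: 2048 sectors * 512 bytes = 1 MiB.
is_aligned() {
    start_sector=$1
    if [ $((start_sector % 2048)) -eq 0 ]; then
        echo "aligned"
    else
        echo "misaligned"
    fi
}

is_aligned 2048   # the 2048s start used above
is_aligned 63     # the old msdos default, bad for 4K-sector disks
```

In practice "parted /dev/vdb align-check opt 1" does this check properly, using the optimal I/O size the kernel reports for the device.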

LVM

Advanced command list: http://www.datadisk.co.uk/html_docs/redhat/rh_lvm.htm
Inspect a stacked system:
# pvs -v
# pvs -a
# vgs -a -o +devices
# lvs -a -o +devices
# lvs -a -o +devices,stripes,stripesize,seg_pe_ranges --segments

See # lvs -o help for a list of all fields. There is also the plain lvm command shell. Traditional LVM snapshots get a reserved space; if that space runs full, the snapshot becomes invalid and is no longer usable. To move snapshot changes back to the original, you have to merge:
How to merge a snapshot with its origin in RHEL 6 / 7
# lvconvert --merge
# see https://access.redhat.com/kb/docs/DOC-47130
Resize the filesystem along with the logical volume:
# lvresize -r ...

Activate / deactivate LVs / VGs
# vgchange -a [n|y] ...
# lvchange -a [n|y] ...

Split a volume group
# pvmove /dev/sdc1 /dev/sdb1
# lvchange -a n /dev/myvg/mylv
# vgsplit myvg newvg /dev/sdc1

Display usage on physical volumes:
# pvs -o+pv_used
  PV         VG   Fmt  Attr PSize  PFree Used
  /dev/vda2  rhel lvm2 a--  24.51g    0  24.51g
Naming: "incorporate the hostname into the vg"

Stripes and RAID modes
When creating the LV, use "-i N | --stripes N" where N is the number of PVs to stripe across. Use "-m N | --mirrors N" to specify how many mirrors you want. I recommend hardware RAID over software RAID. See the EXAMPLES section of man lvcreate for how to create LVM software RAID.

Thin provisioning example
# pvcreate /dev/vdb1 /dev/vdc1 /dev/vdd1
# vgcreate demovg /dev/vdb1 /dev/vdc1
create a thin pool in that volume group
# lvcreate -T -L 500M demovg/thinpool
create a thin logical volume in that pool
# lvcreate -V1G -T demovg/thinpool -n thinlv01
both steps in one command:
# lvcreate -L 500M -T demovg/thinpool -V1G -n thinlv01
You can convert existing logical volumes to thin pools (destroys data):
# lvconvert --thinpool demovg/thinpool
# mkfs.xfs /dev/demovg/thinlv01
# mount /dev/demovg/thinlv01 /mnt
# dd if=/dev/zero of=/mnt/testfile bs=1M count=250
# df -h
# lvs
You can only extend thin pools, not shrink them. You have to monitor your thin pools for available space: your thin pool and thin volumes break if the pool runs out of space, and you cannot recover from this. What happens if the thin pool is full? --> the system breaks: umount / mount hangs indefinitely. You will have to monitor the thin pool! Use /usr/lib/systemd/system/lvm2-monitor.service

# systemctl status lvm2-monitor.service
works via dmeventd, # man dmeventd
--> writes messages to the system log starting at 80% full, last warning at 95% full

Thin provisioning snapshots
# lvcreate -s -n thinsnap1lv /dev/demovg/thinlv01
If you add -l or -L, a regular (non-thin) snapshot is created instead.
lvm.conf: auto_set_activation_skip = 1 # default for thin snapshots
auto activation off: # lvcreate -s -kn ...
auto activation on:  # lvcreate -s -ky ...
Thin snapshots are created with the activation skip flag implicitly set to 'true'. That means they won't be activated by an ordinary lvchange command. You need to override the flag with the -K parameter during activation.
Activate thin snapshots:
# lvchange -a y -K /dev/demovg/thinsnap1lv
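The monitoring idea above can be sketched as a small poll script. The 80%/95% thresholds match the dmeventd warnings mentioned above, but this logic is an illustration of the idea, not what dmeventd literally runs; the pool name is this document's example:

```shell
#!/bin/sh
# Sketch: classify thin pool usage against the dmeventd-style
# thresholds (80% first warning, 95% last warning).
check_pool() {
    pct=$1  # integer data percentage, e.g. obtained via:
            #   lvs --noheadings -o data_percent demovg/thinpool
    if [ "$pct" -ge 95 ]; then
        echo "CRITICAL: thin pool ${pct}% full"
    elif [ "$pct" -ge 80 ]; then
        echo "WARNING: thin pool ${pct}% full"
    else
        echo "OK: thin pool ${pct}% full"
    fi
}

check_pool 50
check_pool 85
check_pool 97
```

Hooking something like this into cron gives you an extra safety net on top of lvm2-monitor.service.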

Temp Directories

RHEL 5 had tmpwatch, which would delete old files in /tmp and /var/tmp; RHEL 6 has it as well. There was the idea of removing /tmp/* on each reboot, but it was not implemented. The default settings are to remove everything older than 10 days from /tmp and everything older than 30 days from /var/tmp. There are some exceptions however; look into tmpwatch for details. RHEL 7 has no tmpwatch. This task falls to systemd now.
# man systemd-tmpfiles
Show config:
# ls /usr/lib/tmpfiles.d/ /etc/tmpfiles.d/
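A sketch of the same age rules in systemd-tmpfiles syntax. The shipped defaults live in /usr/lib/tmpfiles.d/tmp.conf; a file of the same name in /etc/tmpfiles.d/ overrides them. The entries below are illustrative, mirroring the tmpwatch defaults quoted above:

```
# /etc/tmpfiles.d/tmp.conf -- override sketch
# Type Path       Mode UID  GID  Age
d      /tmp       1777 root root 10d
d      /var/tmp   1777 root root 30d
```

The Age field tells systemd-tmpfiles-clean to remove entries older than that; see man tmpfiles.d for the full field semantics.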

BTRFS

Possibly still Tech Preview in RHEL 7 --> "As smooth as Butter" pdf
B-tree filesystem, btrfs, "butter-fs". Btrfs is a copy-on-write (COW) filesystem:
- data is never overwritten in place
- allows for advanced features, e.g. persistent snapshots
Open questions? -> https://btrfs.wiki.kernel.org/index.php/FAQ
Install btrfs-progs:
# yum install btrfs-progs -y

A Volume or a Pool
- is a set of devices that are bundled together and supply storage
- the volume is mountable and usable as a file system

A Subvolume
- you can create subvolumes in the pool
- subvolumes can operate as independent filesystems
- you can mark a subvolume as default
- subvolumes are mountable and usable as filesystems

A Snapshot
- is a special kind of subvolume
- is a moment-in-time recording
- in btrfs you can snapshot files or directories

A Device
- is a block special file that points to some storage to be used
- you can use any device from a pool to mount the pool volume or subvolumes
Prep: three devices
# parted -s /dev/vdd mklabel gpt mkpart primary 2048s 100%
check for optimal alignment
# parted /dev/vdd align-check opt 1

Scan devices for btrfs, like you would for LVM PVs:
# btrfs device scan

Display details like UUID # btrfs filesystem show

Create a btrfs # mkfs.btrfs /dev/sdb1

Data and metadata are treated differently by btrfs. An administrator can control how each is protected, and you can even change this on the fly.

Or even stripe across multiple devices:
# mkfs.btrfs /dev/sdb1 /dev/sdc1 /dev/sdd1
--> mirrors metadata, stripes data

Stripe metadata & data:
# mkfs.btrfs -m raid0 /dev/sdb1 /dev/sdc1 /dev/sdd1

Mirror metadata & data:
# mkfs.btrfs -d raid1 /dev/sdb1 /dev/sdc1 /dev/sdd1

RAID capabilities as of March 2014: raid0 raid1 raid10 raid5 raid6
Btrfs filesystems can be converted to other RAID modes online:
# btrfs balance start -dconvert=raid5 -mconvert=raid5 /mountpoint

Simple example:
# mkfs.btrfs /dev/terravg/btrfslv
# btrfs device scan
# btrfs filesystem show

You can now mount the volume by using one of its devices.
# mkdir /mnt/btrfs
# mount /dev/terravg/btrfslv /mnt/btrfs

Btrfs is very dynamic. If you have multiple devices in a volume, you can remove or add devices. You will need to balance the file system after adding new devices. E.g. if I had an sdc1 that needs replacing:
# btrfs device delete /dev/sdc1 /mnt/btrfs
# btrfs device add /dev/sdc1 /mnt/btrfs
# btrfs filesystem balance /mnt/btrfs

Subvolumes are the logical volume equivalent in btrfs.
create a subvolume
# btrfs subvolume create /mnt/btrfs/mysub1
- subvolumes show up as directories in the pool volume mount
list subvolumes
# btrfs subvolume list /mnt/btrfs
--> shows an ID, e.g. 256
mount a specific subvolume
# mount -t btrfs -o subvol=mysub1 /dev/terravg/btrfslv /mnt/mysub
mark a specific subvolume as default
# btrfs subvolume list /mnt/btrfs
ID 256 gen 7 top level 5 mysub1
# btrfs subvolume set-default 256 /mnt/btrfs
--> mount will now choose this subvolume by default
--> mount the root with subvolid=0
put some data into /mnt/btrfs/mysub1
# cp -r /etc/sysconfig /mnt/btrfs/mysub1
create a snapshot of mysub1
# btrfs subvolume snapshot /mnt/btrfs/mysub1 /mnt/btrfs/snap_root
list subvolumes again
# btrfs subvolume list /mnt/btrfs
--> you can mount the snapshot like all other subvolumes
- you can use these snapshots for backups
- you can create snapshots via cron for safe keeping
- they are available for files and directories as well
- data is only freed if no snapshot is still using it
delete snapshots or subvolumes that you do not need
# btrfs subvolume delete /mnt/btrfs/snap_root
display filesystem sizes
# btrfs filesystem df /mnt/btrfs
# btrfs filesystem show /mnt/btrfs
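The subvol= mount shown above can also be made persistent in /etc/fstab. A sketch, where the device path and subvolume name are just this document's example names:

```
# /etc/fstab -- mount the example subvolume at boot
/dev/terravg/btrfslv  /mnt/mysub  btrfs  subvol=mysub1  0 0
```

Mounting with subvolid=0 instead gives you the top-level volume, with all subvolumes visible as directories.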

Online Defragmentation # btrfs filesystem defragment [file]

Troubleshooting
-- try to get data off first
# btrfs restore
-- try btrfsck next (might make things worse)
# btrfsck

- How to enter btrfs and the mounting of subvolumes into fstab? Should be as easy as putting subvol= in the options field
- Subvolumes of subvolumes work
- There is a yum plugin that uses btrfs or thin LVM snapshots: fs-snapshot

Systemd in RHEL 7

Systemd replaces the Upstart of RHEL 6.
Initial design document: http://0pointer.de/blog/projects/systemd.html
systemd homepage: http://www.freedesktop.org/wiki/Software/systemd/
It brings better management for features like:
- SELinux
- private tmp
- delayed service activation
- on-demand service activation
- massive parallelization of service start
- focus on services instead of processes
Lennart Pöttering talking about systemd at Red Hat Summit 2013 (subscription content): https://access.redhat.com/site/videos/403833
12 different unit types:
# man systemd.unit
# systemctl list-units
Configuration:
General config: /etc/systemd/
System unit files, do NOT edit: /usr/lib/systemd/system/
Override location: /etc/systemd/system/

Service management
# systemctl list-units -t service --all
verbs: start|stop|reload|restart|try-restart|enable|disable|kill
# systemctl status sshd
sshd only started on demand:
# systemctl disable sshd.service
# systemctl enable sshd.socket

Properly ending a service
# systemctl stop crond.service
# systemctl kill crond.service
# systemctl kill -s SIGKILL crond.service

Enable / disable a service according to the distribution presets:
# systemctl preset sshd.service
presets are defined in /usr/lib/systemd/system-preset/90-default.preset

Runlevels are now systemd targets:
# locate '*runlevel*.target' | xargs ls -l | cut -c 40-150
/usr/lib/systemd/system/runlevel0.target -> poweroff.target
/usr/lib/systemd/system/runlevel1.target -> rescue.target
/usr/lib/systemd/system/runlevel2.target -> multi-user.target
/usr/lib/systemd/system/runlevel3.target -> multi-user.target
/usr/lib/systemd/system/runlevel4.target -> multi-user.target
/usr/lib/systemd/system/runlevel5.target -> graphical.target
/usr/lib/systemd/system/runlevel6.target -> reboot.target

# systemctl isolate multi-user.target
# systemctl get-default
# systemctl set-default multi-user.target
# systemctl default
# systemctl rescue
# systemctl suspend

Do I need to convert SysV start scripts? No, but it is recommended.
# less /etc/init.d/README

Quickstart: how to write unit files for services
For general syntax see # man systemd.unit

# vim /etc/systemd/system/foo.service
[Unit]
Description=A Foo demo to show service files
After=syslog.target

[Service]
ExecStart=/usr/bin/sleep 30

[Install]
WantedBy=multi-user.target

This service dies after 30 seconds. You can make systemd restart it. See # man systemd.service
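For example, a variant of the foo.service above that systemd restarts automatically. Restart= and RestartSec= are documented in man systemd.service; the 5-second delay is just an illustrative value:

```
[Unit]
Description=A Foo demo that systemd restarts when it exits

[Service]
ExecStart=/usr/bin/sleep 30
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

With Restart=always, systemd restarts the service after every exit, clean or not; Restart=on-failure would skip clean exits.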

Example for a DBus service:
[Unit]
Description=ABRT Automated Bug Reporting Tool
After=syslog.target

[Service]
Type=dbus
BusName=com.redhat.abrt
ExecStart=/usr/sbin/abrtd -d -s

[Install]
WantedBy=multi-user.target

How to modify an existing unit file
Option A: copy and modify
# cp /usr/lib/systemd/system/sshd.service /etc/systemd/system
Tell systemd to reload its config:
# kill -1 1
or
# systemctl daemon-reload
--> you will be decoupled from changes to the original
Option B: create a drop-in to add your changes to the default config
# mkdir /etc/systemd/system/abrtd.service.d
# cat << END > /etc/systemd/system/abrtd.service.d/restart.conf
> [Service]
> Restart=always
> END

# systemctl daemon-reload

If you want to show all settings:
# systemctl show abrtd.service
# systemctl show abrtd.service | grep Restart