AsiaBSDCon 2014 Proceedings
March 13-16, 2014, Tokyo, Japan
Copyright © 2014 BSD Research. All rights reserved. Unauthorized republication is prohibited.
Published in Japan, March 2014

INDEX

P1A: Bold, fast optimizing linker for BSD — Luba Tang
P1B: Visualizing Unix: Graphing bhyve, ZFS and PF with Graphite 007 Michael Dexter
P2A: LLVM in the FreeBSD Toolchain 013 David Chisnall
P2B: NPF - progress and perspective 021 Mindaugas Rasiukevicius
K1: OpenZFS: a Community of Open Source ZFS Developers 027 Matthew Ahrens
K2: Bambi Meets Godzilla: They Elope 033 Eric Allman
P3A: Snapshots, Replication, and Boot-Environments — How new ZFS utilities are changing FreeBSD & PC-BSD 045 Kris Moore
P3B: Netmap as a core networking technology 055 Luigi Rizzo, Giuseppe Lettieri, and Michio Honda
P4A: ZFS for the Masses: Management Tools Provided by the PC-BSD and FreeNAS Projects 065 Dru Lavigne
P4B: OpenBGPD turns 10 years - Design, Implementation, Lessons learned 077 Henning Brauer
P5A: Introduction to FreeNAS development 083 John Hixson
P5B: VXLAN and Cloud-based networking with OpenBSD 091 Reyk Floeter
P6A: Nested Paging in bhyve 097 Neel Natu and Peter Grehan
P6B: Developing CPE Routers based on NetBSD: Fifteen Years of SEIL 107 Masanobu SAITOH and Hiroki SUENAGA
P7A: Deploying FreeBSD systems with Foreman and mfsBSD 115 Martin Matuška
P7B: Implementation and Modification for CPE Routers: Filter Rule Optimization, IPsec Interface and Ethernet Switch 119 Masanobu SAITOH and Hiroki SUENAGA
K3: Modifying the FreeBSD kernel: Netflix streaming servers — Scott Long
K4: An Overview of Security in the FreeBSD Kernel 131 Dr. Marshall Kirk McKusick
P8A: Transparent Superpages for FreeBSD on ARM 151 Zbigniew Bodek
P8B: Carve your NetBSD 165 Pierre Pronchery and Guillaume Lasmayous
P9A: How FreeBSD Boots: a soft-core MIPS perspective 179 Brooks Davis, Robert Norton, Jonathan Woodruff, and Robert N. M. Watson
P9B: Adapting OSX to the enterprise 187 Jos Jansen
P10A: Analysis of BSD Associate Exam Results 199 Jim Brown

Visualizing Unix: Graphing bhyve, ZFS and PF with Graphite

Michael Dexter <[email protected]>
AsiaBSDCon 2014, Tokyo, Japan

"Silence is golden," or so goes the classic Unix tenet, and the result is that a traditional Unix system provides the operator little more than a command prompt, even while performing its most demanding tasks. While this is perfectly correct behavior, it offers few insights into the strain a given system may be experiencing or whether the system is behaving unexpectedly. In this study we will explore a strategy for systematically visualizing Unix system activity using collectd, its site-specific alternatives, Graphite, DTrace and FreeBSD. We have chosen FreeBSD because it includes a trifecta of Unix innovations: the bhyve hypervisor, the PF packet filter and the ZFS file system. While each of these tools provides its own facilities for displaying performance metrics, quantifying their interaction remains a challenge.

Existing Facilities

Complementing the familiar yet verbose top(1), tcpdump(1) and gstat(8) commands, bhyve, PF and ZFS each have their own dedicated tools for interactively and noninteractively displaying their activity metrics. The first of these, the bhyve hypervisor, includes the most limited quantification facility of the set: the /usr/sbin/bhyvectl --get-stats --vm=<vm name> command provides a summary of virtual machine (VM) operation with an emphasis on its kernel resource utilization, but few insights into its performance relative to the host. The PF packet filter includes the pflog(4) logging interface for use with tcpdump(1), but the output is consistent with standard tcpdump(1) behavior, providing a literal view of active network connections. tcpdump(1) is complemented by other in-base tools such as netstat(1), but none makes any effort to visualize the activity of a given network interface.
Finally, the ZFS file system benefits from in-base tools such as gstat(8), which introduces visual aids such as color coding of file system input/output operations and can be further complemented with tools like sysutils/zfs-stats, but each nonetheless provides only a simplistic summary of file system activity and performance.

Environment

While the subject of this study is the bhyve hypervisor itself, bhyve will also serve as the primary environment for the study. This choice was made because bhyve's host and VM performance are virtually indistinguishable for CPU-intensive tasks, obviating the need for a more familiar virtualization technology such as FreeBSD jail(8). Virtual machines will serve not only as participant hosts but also as the statistics collection and graphing host. The choice of the bhyve hypervisor allows this study to be conducted on FreeBSD 10.0 or later, PC-BSD 10.0 or later, and its sister distribution TrueOS.

Testing Methodology

While the bhyve hypervisor is fully functional with the included vmrun.sh script found in /usr/share/examples/bhyve/, this study elects to use the vmrc system found at bhyve.org/tools/vmrc/. vmrc is a virtual machine run-command script that facilitates the full provisioning (formatting and OS installation) and management (loading, booting, stopping) of bhyve virtual machines.

VM configuration files are located in /usr/local/vm/ and can be installed with the included install.sh script. vmrc usage is displayed by running '/usr/local/etc/rc.d/vm', and a VM is provisioned with '/usr/local/etc/rc.d/vm provision vm0', which corresponds to the vm0 configuration file found at /usr/local/vm/vm0/vm0.conf.
# /usr/local/etc/rc.d/vm
Usage: /usr/local/etc/rc.d/vm [fast|force|one|quiet](start|stop|restart|rcvar|enabled|attach|boot|debug|delete|destroy|fetch|format|grub|install|iso|jail|load|mount|provision|status|umount)

vmrc's behavior is documented in the included instructions.txt file. For our purposes, each VM will simply be provisioned with the 'provision' directive and loaded and booted with the 'start' directive. This study requires at minimum one participant host that will be analyzed and, optionally, a VM for telemetry collection and graphing.

Graphite is a numeric time-series data collection and graphing system written in the Python scripting language and is best documented at https://graphite.readthedocs.org. Legacy and independent documentation sources exist, but they vary in reliability and are rarely BSD Unix-oriented. The countless opportunities to customize a Graphite deployment have resulted in a myriad of often site-specific documentation. For our purposes we will demonstrate a minimalistic “stock” Graphite installation that uses default settings whenever possible.

Graphite Components

Graphite comprises the graphite-web Django web framework “project,” which performs all telemetry presentation tasks; the carbon telemetry collection daemon; and the whisper round-robin storage system (scheduled to be replaced by the ceres storage backend). Host activity telemetry from one or more hosts is sent to the carbon daemon, which in turn stores it in whisper. whisper retains or discards telemetry data according to predetermined criteria, resulting in a fixed-size database once all criteria have been applied. The whisper storage system is modular and can be replaced with alternative stores, and the carbon telemetry collection daemon is data source-agnostic.
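The retention criteria whisper applies are declared in carbon's storage-schemas.conf. As an illustrative sketch only, a minimal catch-all policy might read as follows; the section name and retention values here are assumptions for demonstration, not recommendations:

```ini
; Match every metric and keep: 3-second samples for one day,
; 1-minute rollups for 30 days, 15-minute rollups for one year.
[default]
pattern = .*
retentions = 3s:1d,1m:30d,15m:1y
```

Because each archive's size is fixed by its frequency and duration, the resulting whisper database never grows once created.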
carbon is often supplied host telemetry by the collectd or statsd system statistics collection daemons, but such data can also be provided by a myriad of other tools, given that the telemetry data format is very simple:

<host.data.name> <statistic> <Unix epoch timestamp>

This is fed into port 2003 of the carbon host using TCP or UDP. graphite-web in turn presents the host telemetry stored in whisper in a user-friendly manner, by default on port 8080 of the graphite-web host, and is in turn often served via an established HTTP daemon such as lighttpd, Apache or Nginx via the Python-based Web Server Gateway Interface (WSGI).

Complexity is the enemy of reliability…

…or installability for that matter. The operator will quickly find that the myriad of possible Graphite configurations greatly complicates the installation and operation of Graphite, with most documentation being of a site-specific and advanced-user nature. This study will mercifully provide a minimalistic “default” configuration, but in doing so will provide one that is highly user-customizable in exchange for a negligible performance impact. The result is a “Unix”-style configuration, if you will, that leverages familiar Unix conventions and established programmability.
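The plaintext protocol above can be exercised from sh(1) before committing to any collection daemon. A minimal sketch, in which the metric name "demo.value" is illustrative and the socat send is left commented out so the fragment runs without a carbon host:

```shell
#!/bin/sh
# Construct a single carbon plaintext datapoint of the form
#   <metric path> <value> <Unix epoch timestamp>
host=$( uname -n )
timestamp=$( date +%s )
datapoint="${host}.demo.value 1.0 ${timestamp}"
echo "$datapoint"
# Uncomment to deliver to a carbon daemon listening on UDP port 2003:
# echo "$datapoint" | socat - udp-sendto:localhost:2003
```

Any tool that can emit such space-separated lines over TCP or UDP can feed carbon directly.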
Example host or virtual machine telemetry collection and transmission script

#!/bin/sh
# Requires the net/socat port for sending UDP datagrams

host=$( uname -n )	# Used to distinguish the source within carbon
interval=3		# Sampling interval in seconds
destination=localhost	# The carbon host
port=2003		# The default carbon port

while :; do	# Operate continuously until terminated externally
	sleep $interval
	timestamp=$( date +%s )

	# Average load within last minute from uptime(1)
	load=$( uptime | grep -ohe 'load average[s:][: ].*' | \
		awk '{ print $3 }' | sed 's/,$//' )
	echo "${host}.cpu.load $load $timestamp" | \
		socat - udp-sendto:$destination:$port
	echo "Sent: ${host}.cpu.load $load $timestamp"	# Debug

	# IOPS from gstat(8)
	iops=$( gstat -b | grep ada0 | head -1 | \
		awk '{ print $2 }' )
	echo "${host}.ada0.iops $iops $timestamp" | \
		socat - udp-sendto:$destination:$port
	echo "Sent: ${host}.ada0.iops $iops $timestamp"	# Debug
done

Example bhyve virtual machine telemetry syntax for the
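Per-VM statistics from bhyvectl(8) can be converted to carbon datapoints in the same fashion. A hedged sketch follows; the stats line matched here ("total number of vm exits <N>") is an assumption, since the exact --get-stats output varies across FreeBSD versions, and a captured sample stands in for the live command so the fragment runs without a booted VM:

```shell
#!/bin/sh
# Hypothetical: extract one counter from bhyvectl --get-stats output
# and emit a carbon datapoint. The matched stats line is an assumption.
vm=vm0
timestamp=$( date +%s )
# stats=$( bhyvectl --get-stats --vm=$vm )	# live invocation, needs a running VM
stats="total number of vm exits 12345"		# captured sample for illustration
exits=$( echo "$stats" | awk '/vm exits/ { print $NF }' )
echo "$( uname -n ).${vm}.vmexits $exits $timestamp"
```

Piping the final echo through socat, as in the script above, would deliver the datapoint to carbon.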