Setup of a Ganglia Monitoring System for a Grid Computing Cluster
Aufbau eines Ganglia-Überwachungssystems für ein Grid-Computing-Cluster

by Oleg Ivasenko

Bachelor thesis in physics, submitted to the Faculty of Mathematics, Computer Science and Natural Sciences of RWTH Aachen University in September 2015, carried out at the Physikalisches Institut IIIb under Prof. Dr. Achim Stahl.

Zusammenfassung

The RWTH Grid Cluster is a Tier 2/3 computing centre in the Worldwide LHC Computing Grid (WLCG) with over two hundred machines. These have to be monitored during operation so that emerging problems can be analyzed. Since the currently running system shows certain incompatibilities with newer versions of the operating system in use, it is to be replaced by Ganglia. This thesis describes the setup of the Ganglia Monitoring System and its adaptation to the particularities of the RWTH Grid Cluster. The Ganglia Monitoring System monitors the entire cluster and remains scalable for future extensions.

Abstract

The RWTH Grid Cluster is a Tier 2/3 grid site in the Worldwide LHC Computing Grid (WLCG) with over two hundred computers. These have to be monitored during operation so that occurring problems can be analyzed. The currently running monitoring system shows incompatibilities with newer versions of the operating system in use and should therefore be replaced by Ganglia. This thesis details the setup of the Ganglia Monitoring System and its customization to the specific needs of the RWTH Grid Cluster. The Ganglia Monitoring System monitors the whole cluster and proves to be scalable for future extensions.

Contents

  List of Figures
  List of Tables
  List of Listings
  1 Introduction
    1.1 Worldwide LHC Computing Grid
    1.2 RWTH Grid Cluster
  2 Monitoring
    2.1 Ganglia Monitoring System
      2.1.1 gmond
      2.1.2 RRDtool
      2.1.3 gmetad
      2.1.4 gweb
  3 Deployment
    3.1 Installation
    3.2 Configuration
      3.2.1 Database
      3.2.2 Network Topology
      3.2.3 IO Optimisations
    3.3 Modules
      3.3.1 Torque
      3.3.2 Maui
      3.3.3 dCache
    3.4 Ganglia Web
    3.5 Extensions
      3.5.1 Job Monarch
  4 Conclusion

List of Figures

  1.1 CERN's site plan [6]
  1.2 Hierarchical structure of the WLCG
  1.3 RWTH Grid Cluster overview
  2.1 Ganglia Monitoring System overview
  2.2 Structural overview of gmond
  2.3 File structure of a Round Robin Database
  2.4 Normalization and consolidation of reported data
  2.5 Structural overview of gmetad. LT: listening threads, IP: interactive ports
  2.6 Structure of the directory containing the RRDs
  2.7 Main navigation tabs of gweb
  2.8 Views of the main tab of Ganglia Web
  3.1 Impact of the monitoring system on the hosting node
  3.2 Optimized metric_callback function
  3.3 Update function for node statistics
  3.4 Update function for general job statistics
  3.5 Update function for jobs by queue statistics
  3.6 Update function for jobs by owner statistics
  3.7 Update function for job efficiency by owner statistics
  3.8 Update function for reservations statistics
  3.9 Update function for fair share statistics
  3.10 dCache pool usage collection query
  3.11 dCache queues collection query
  3.12 Graph generated from the nodes statistics
  3.13 dCache view from the 'Views' tab
  3.14 Job Monarch user interface
  4.1 Fully implemented Ganglia Monitoring System

List of Tables

  1.1 List of the 13 Tier 1 grid sites[29]
  2.1 List of default metrics: reformatted output of gmond -m, part 1
  2.2 List of default metrics: reformatted output of gmond -m, part 2
  3.1 RWTH Grid Cluster network topology
List of Listings

  2.1 gmond global settings
  2.2 gmond cluster settings (Worker Nodes)
  2.3 gmond host settings
  2.4 gmond udp send settings
  2.5 gmond udp receive settings
  2.6 gmond tcp accept settings
  2.7 gmond modules settings
  2.8 gmond collection group settings
  2.9 Skeleton python module[18]
  2.10 gmetad settings
  3.1 Output of showq -r
  3.2 Output of showres -n
  3.3 Output of mdiag -f (truncated)
  3.4 Node Report definition
  3.5 dCache view definition
  3.6 Job Monarch configuration

1 Introduction

Built by the European Organization for Nuclear Research (CERN), the Large Hadron Collider (LHC) is the largest experimental facility in existence. It is located in a tunnel beneath the Swiss-French border near Geneva, on the former site of the Large Electron-Positron Collider (LEP). In contrast to the LEP, the LHC is able to accelerate much heavier particles and achieve significantly higher collision energies. The LHC was built with collision energies of 14 TeV in mind, which corresponds to protons traveling at 99.9999991% of the speed of light. Initially it was operated at much lower energies, and even this year, as phase two of the experiment starts, it still operates at 13 TeV. Higher collision energies enable physicists to observe physics at a much smaller scale, with the possibility of generating even heavier and possibly previously unobserved particles. Just this month (August 2015), a paper[2] on the first observation of a pentaquark was submitted for peer review.
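The quoted speed follows directly from the beam energy: a 14 TeV collision energy corresponds to two 7 TeV proton beams, and the Lorentz factor is the beam energy divided by the proton rest energy. The short Python sketch below is my own cross-check, not code from the thesis; it reproduces the 99.9999991% figure.

    import math

    # Cross-check of the quoted proton speed (illustrative sketch; the constants
    # below are standard values, not taken from the thesis).
    PROTON_REST_ENERGY_GEV = 0.938272   # proton rest energy m_p * c^2 in GeV
    BEAM_ENERGY_GEV = 7000.0            # 14 TeV collisions = two 7 TeV beams

    gamma = BEAM_ENERGY_GEV / PROTON_REST_ENERGY_GEV   # Lorentz factor, ~7460
    beta = math.sqrt(1.0 - 1.0 / gamma**2)             # v/c

    print(f"gamma = {gamma:.0f}")
    print(f"v/c   = {beta:.9f} ({100 * beta:.7f} % of c)")
    # prints v/c = 0.999999991 (99.9999991 % of c), matching the figure above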
Figure 1.1: CERN's site plan [6]

Inside the LHC, two beams of protons or lead ions in two separate tubes are accelerated to nearly the speed of light in opposite directions. A beam consists of 2808 bunches, and each bunch contains about 10¹¹ protons. 9593 superconducting magnets[5] keep the two beams on a circular trajectory of 27 km in circumference and shape the beams. To maximize the collision rate, the beams are strongly focused at the four experiment sites (fig. 1.1). These are the only sections of the LHC where the two tubes intersect and collisions are possible.

1. Compact Muon Solenoid (CMS)[7] is a general purpose detector with a solenoid magnet at its core that forces charged particles onto a circular trajectory. From the curvature of the trajectory the momentum of a particle can be determined. The goals of the CMS experiment are the study of the Standard Model of particle physics and the search for extra dimensions and new particles.

2. ATLAS[4] has the same purpose as CMS but is built differently. The magnetic field in ATLAS is weaker and encloses fewer of the sub-detectors.

3. Large Hadron Collider beauty (LHCb)[17] is used to examine the differences between particles and antiparticles, particularly those involving the b quark, with the goal of explaining the absence of antimatter in the universe.

4. A Large Ion Collider Experiment (ALICE)[3] examines quark-gluon plasma, a phase that matter is believed to have been in just after the Big Bang.

There are two further, smaller experiments: TOTal Elastic and diffractive cross section Measurement (TOTEM), which is placed in the same cavern as the CMS detector, and Large Hadron Collider forward (LHCf), which is placed in the same cavern as the ATLAS detector. Both measure the same events as CMS and ATLAS, respectively, but at a much more acute angle of diffraction. As of 2015, the data from TOTEM and CMS will be combined to increase accuracy[32].

As soon as protons collide at one of the interaction points, new particles are formed that decay into other particles. The resulting cascade of particles interacts with the detectors. To be able to detect different particles, large detectors like CMS consist of a multitude of smaller sub-detectors, each of which can detect certain types of particles. Each collision is considered one event. About 600 million events occur every second[16], which is why the data has to be filtered with hardware and software filters to retain only the events deemed interesting (e.g. events in which four muons were detected, a typical signature of a certain Higgs boson decay). Even with proper selection, the experiments collect as much as 15 petabytes of data per year, or 1 gigabyte per second. In the phase two runs, which start this year, data rates of 6 GB/s are expected on average, and peak rates of up to 10 GB/s[36] are possible. Storing, distributing, and analyzing this amount of data goes beyond what a single institution could provide in resources, which is why an international network of computing facilities that share their resources was initiated.

Figure 1.2: Hierarchical structure of the WLCG (Tier 0, Tier 1, Tier 2, Tier 3)

Table 1.1: List of the 13 Tier 1 grid sites[29]

  Country               Tier 1 Grid site
  Canada                TRIUMF
  Germany               KIT
  Spain                 PIC
  France                IN2P3
  Italy                 INFN
  Nordic countries      Nordic Datagrid Facility
  Netherlands           NIKHEF / SARA
  Republic of Korea     GSDC at KISTI
  Russian Federation    RRC-KI and JINR
  Taipei                ASGC
  United Kingdom        GridPP
  US                    Fermilab-CMS
  US                    BNL ATLAS

1.1 Worldwide LHC Computing Grid

The Worldwide LHC Computing Grid (WLCG)[35] is the world's largest computing grid. Combining resources from multiple international and regional grids, it provides the capacity needed to process and store the data generated by the LHC experiments.
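To put the data volumes quoted above into perspective, the following back-of-the-envelope Python sketch converts between yearly volume and sustained rate. It is my own illustrative arithmetic on the numbers from the introduction, not a calculation from the thesis.

    # Back-of-the-envelope conversion of the quoted data volumes.
    SECONDS_PER_YEAR = 365.25 * 24 * 3600      # ~3.16e7 s

    yearly_volume_pb = 15.0                    # ~15 PB collected per year
    average_rate_gb_s = yearly_volume_pb * 1e6 / SECONDS_PER_YEAR   # PB -> GB

    print(f"15 PB/year averaged over a full year ~ {average_rate_gb_s:.2f} GB/s")
    # ~0.5 GB/s: consistent with the quoted ~1 GB/s while data is actually
    # being taken, since the LHC does not deliver collisions year-round.

    run2_average_gb_s, run2_peak_gb_s = 6.0, 10.0   # expected phase two rates
    print(f"Phase two: {run2_average_gb_s:.0f} GB/s average, "
          f"{run2_peak_gb_s:.0f} GB/s peak "
          f"(~{run2_average_gb_s * SECONDS_PER_YEAR / 1e6:.0f} PB/year if sustained)")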