Measuring PROOF Lite performance in (non)virtualized environment

Ioannis Charalampidis, Aristotle University of Thessaloniki, Summer Student 2010

Overview

• Introduction
• Benchmarks: Overall execution time
• Benchmarks: In-depth analysis
• Conclusion

What am I looking for?

• There is a known overhead caused by virtualization
▫ How big is it?
▫ Where is it located?
▫ How can we minimize it?
▫ Which hypervisor has the best performance?
• I am using CernVM as the guest

What is CernVM?

• It’s a baseline Virtual Software Appliance for use by the LHC experiments
• It’s available for many hypervisors

• Hyper-V
• KVM / QEMU
• VMware
• XEN
• VirtualBox

How am I going to find the answers?

• Using a standard data analysis application (ROOT + PROOF Lite) as the benchmark

• Test it on different hypervisors
• And on a varying number of workers/CPUs

• Compare the performance (physical vs. virtualized)

Problem

• The benchmark application requires too much time to complete (2 min to 15 min)
▫ At least 3 runs are required for reliable results
▫ The in-depth analysis adds an overhead of about 40%
▫ It is not efficient to perform a detailed analysis for every CPU / hypervisor configuration

⇒ Create the overall execution time benchmarks
⇒ Use them to find the best configuration to run the traces on

Benchmarks performed

• Overall time
▫ Using the time utility and automated batch scripts
• In-depth analysis
▫ Tracing system calls using
▪ Strace
▪ SystemTAP
▫ Analyzing the trace files using applications I wrote
▪ BASST (Batch analyzer based on STrace)
▪ KARBON (General-purpose application profiler based on trace files)

Process description and results

Benchmark Configuration

• Base machine
▫ Scientific Linux CERN 5
• Guests
▫ CernVM 2.1
• Software packages from SLC repositories
▫ Linux kernel 2.6.18-194.8.1.el5
▫ XEN 3.1.2 + 2.6.18-194.8.1.el5
▫ KVM 83-194.8.1.el5
▫ Python 2.5.4p2 (from AFS)
▫ ROOT 5.26.00b (from AFS)
• Base machine hardware
▫ 24 x Intel Xeon X7460 2.66 GHz with VT-x support (64-bit)
▫ No VT-d nor Extended Page Tables (EPT) hardware support
▫ 32 GB RAM

Benchmark Configuration

• Virtual machine configuration
▫ 1 CPU, then 2 to 16 CPUs in steps of 2
▫ RAM: number of CPUs + 1 GB for the Physical Disk and Network tests
▫ RAM: number of CPUs + 17 GB for the RAM Disk tests
▫ Disk image for the OS
▫ Physical disk for the data + software

• Important background services running
▫ NSCD (Name Service Cache Daemon)

Benchmark Configuration

• Caches were cleared before every test
▫ Page cache, dentries and inodes
▫ Using the /proc/sys/vm/drop_caches flag (see the batch-script sketch below)

• No swap memory was used
▫ Verified by periodically monitoring the free memory

Automated batch scripts

• The VM batch script runs on the host machine (server side)
• It repeats the following procedure (a sketch, including the cache dropping, follows below):
▫ Create a new virtual machine on the hypervisor
▫ Wait for the machine to finish booting
▫ Connect to the controlling script (client side) inside the VM
▫ Drop caches both on the host and the guest
▫ Start the benchmark job
▫ Receive and archive the results
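Below is a rough sketch of these two pieces: dropping the kernel caches and one iteration of the batch loop. Only the use of sync and /proc/sys/vm/drop_caches comes from the slides; the vm_manager object, its methods and the client-side calls are hypothetical placeholders for the actual scripts, which are not shown here.

import subprocess

def drop_caches():
    """Flush dirty pages, then drop the page cache, dentries and inodes
    by writing "3" to /proc/sys/vm/drop_caches (requires root)."""
    subprocess.check_call(["sync"])
    with open("/proc/sys/vm/drop_caches", "w") as caches:
        caches.write("3\n")

def run_configuration(vm_manager, cpus, ram_gb):
    """One iteration of the batch procedure above; vm_manager is a hypothetical
    wrapper around the hypervisor-specific tools (virsh, xm, ...)."""
    vm = vm_manager.create(cpus=cpus, ram_gb=ram_gb)  # create a new virtual machine
    vm.wait_for_boot()                                # wait until it finishes booting
    client = vm.connect()                             # controlling script inside the VM
    drop_caches()                                     # drop caches on the host ...
    client.drop_caches()                              # ... and inside the guest
    result = client.run_benchmark()                   # start the job and collect numbers
    vm.destroy()
    return result                                     # archived by the caller

CPU_STEPS = [1] + list(range(2, 17, 2))               # 1, then 2 to 16 in steps of 2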

Problem

• There was a bug in PROOF Lite that looked up a non-existing hostname during the startup of each worker
▫ Example: 0.2-plitehp24.cern.ch-1281241251-1271
• Discovered by detailed tracing
▫ The hostname couldn’t be cached
▫ The application had to wait for the timeout
▫ The startup time was delayed randomly
▫ Call-tracing applications made this delay even bigger, virtually hanging the application

Problem

• The problem was resolved with:
▫ A minimal DNS proxy that fakes the existence of the buggy hostname (see the sketch below)
▫ It was later fixed in the PROOF source
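The following is a minimal sketch of what such a fake DNS proxy could look like, not the proxy that was actually written: real hostnames are relayed to the upstream DNS server, while anything matching the bogus PROOF Lite worker names is answered locally with 127.0.0.1. The upstream server address, the worker-name heuristic and the plain UDP handling are assumptions; the guest's resolver also has to be pointed at the proxy for this to take effect.

import socket
import struct

UPSTREAM_DNS = ("137.138.16.5", 53)    # hypothetical address of the real DNS server

def parse_qname(packet):
    """Return the queried hostname and the offset just past the question section."""
    labels, pos = [], 12               # the question starts after the 12-byte header
    while ord(packet[pos:pos + 1]) != 0:
        length = ord(packet[pos:pos + 1])
        labels.append(packet[pos + 1:pos + 1 + length])
        pos += length + 1
    return b".".join(labels), pos + 5  # skip the terminating 0, QTYPE and QCLASS

def looks_like_proof_worker(name):
    """Hypothetical heuristic for the bogus names, e.g. 0.2-plitehp24.cern.ch-1281241251-1271."""
    return name.count(b"-") >= 3

def fake_reply(query, qend, ip="127.0.0.1"):
    """Build an A-record response that resolves the queried name to `ip`."""
    header = query[:2] + struct.pack(">HHHHH", 0x8180, 1, 1, 0, 0)
    answer = struct.pack(">HHHIH", 0xC00C, 1, 1, 60, 4) + socket.inet_aton(ip)
    return header + query[12:qend] + answer

def serve(listen=("127.0.0.1", 53)):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(listen)                  # binding to port 53 requires root
    while True:
        query, client = sock.recvfrom(512)
        name, qend = parse_qname(query)
        if looks_like_proof_worker(name):
            sock.sendto(fake_reply(query, qend), client)
        else:                          # relay everything else to the real server
            fwd = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            fwd.sendto(query, UPSTREAM_DNS)
            sock.sendto(fwd.recvfrom(512)[0], client)
            fwd.close()

if __name__ == "__main__":
    serve()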

[Diagram: the application sends all DNS queries to the fake DNS proxy; real hostnames (e.g. cernvm.cern.ch -> 137.138.234.20) are forwarded to the actual DNS server, while the bogus worker hostnames (x.x-xxxxxx-xxx-xxx) are answered locally with 127.0.0.1]

Problem

• Example: Events/sec for different CPU settings, as reported by the buggy benchmark

[Chart: two panels, "Before" and "After", showing Events/sec (0-18000) vs. number of workers (0-30) for RAM Disk and Physical Disk, on XEN and on the host, with and without the fixed DNS]

Results – Physical Disk

[Chart: Events/sec (0-14000) vs. Workers = CPUs (1-16) for Bare metal, XEN and KVM, using the physical disk]

Results – Network (XROOTD)

[Chart: Events/sec (0-14000) vs. Workers = CPUs (1-16) for Bare metal, XEN and KVM, reading over the network with XROOTD]

Results – RAM Disk

[Chart: Events/sec (0-14000) vs. Workers = CPUs (1-16) for Bare metal, XEN and KVM, using a RAM disk]

Results – Relative values

[Chart: VM / Bare metal ratio (0-1.2) vs. Workers = CPUs (0-20), one panel each for RAM Disk, Network (XROOTD) and Physical Disk, for Bare metal, XEN and KVM]

Results – Absolute values

[Chart: Events/sec (0-14000) vs. Workers = CPUs (0-15), one panel each for RAM Disk, Network (XROOTD) and Physical Disk, for Bare metal, XEN and KVM]

Results – Comparison chart

[Chart: Events/sec (0-14000) vs. Workers = CPUs (0-18) for all nine combinations of storage backend (Physical Disk, Xrootd, RAM Disk) and platform (Bare metal, XEN, KVM)]

Procedure, problems and results

In-depth analysis

• In order to get more details, the program execution was monitored and all of its system calls were traced and logged (see the tracing sketch below)
• Afterwards, the analyzer extracted useful information from the trace files, such as:
▫ The time spent in each system call
▫ The filesystem / network activity
• The process of tracing adds some overhead, but it is cancelled out from the overall performance measurement
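The trace lines shown on the next slides carry the PID, an absolute timestamp and the time spent inside each call, which is what strace produces with the -f, -ttt and -T options. A small wrapper along these lines could start the job under the tracer; the job command is a hypothetical placeholder, since the real invocation is not given in the slides.

import subprocess

def run_traced(cmd, trace_file="trace.out"):
    """Run cmd under strace: -f follows child processes, -ttt prints absolute
    timestamps with microseconds, -T appends the time spent inside each call."""
    subprocess.check_call(["strace", "-f", "-ttt", "-T", "-o", trace_file] + cmd)

run_traced(["python", "runProofBench.py"])   # hypothetical benchmark job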

System call tracing utilities

• STrace
▫ Traces all the system calls of an application, as seen by the kernel
▫ Attaches to the traced process using the ptrace() system call and monitors its activity
• Advantages
▫ Traces the application’s system calls in real time
▫ Has very verbose output
• Disadvantages
▫ Creates a big overhead

System call tracing utilities

• SystemTAP
▫ Traces system-wide kernel activity, asynchronously
▫ Runs as a kernel module
• Advantages
▫ Can trace virtually everything on a running kernel
▫ Supports scriptable kernel probes
• Disadvantages
▫ It is not simple to extract detailed information
▫ System calls can be lost under high CPU activity

System call tracing utilities

• Sample STrace output:

5266 1282662179.860933 arch_prctl(ARCH_SET_FS, 0x2b5f2bcc27d0) = 0 <0.000005>
5266 1282662179.860960 mprotect(0x34ca54d000, 16384, PROT_READ) = 0 <0.000007>
5266 1282662179.860985 mprotect(0x34ca01b000, 4096, PROT_READ) = 0 <0.000006>
5266 1282662179.861009 munmap(0x2b5f2bc92000, 189020) = 0 <0.000011>
5266 1282662179.861082 open("/usr/lib/locale/locale-archive", O_RDONLY) = 4 <0.000008>
5266 1282662179.861113 fstat(4, {st_mode=S_IFREG|0644, st_size=56442560, ...}) = 0 <0.000005>
5266 1282662179.861166 mmap(NULL, 56442560, PROT_READ, MAP_PRIVATE, 4, 0) = 0x2b5f2bcc3000 <0.000007>
5266 1282662179.861192 close(4) = 0 <0.000005>
5266 1282662179.861269 brk(0) = 0x1ad1f000 <0.000005>
5266 1282662179.861290 brk(0x1ad40000) = 0x1ad40000 <0.000006>
5266 1282662179.861444 open("/usr/share/locale/locale.alias", O_RDONLY) = 4 <0.000009>
5266 1282662179.861483 fstat(4, {st_mode=S_IFREG|0644, st_size=2528, ...}) = 0 <0.000005>
5266 1282662179.861944 read(4, "", 4096) = 0 <0.000006>
5266 1282662179.861968 close(4) = 0 <0.000005>
5266 1282662179.861989 munmap(0x2b5f2f297000, 4096) = 0 <0.000009>
5264 1282662179.863063 wait4(-1, 0x7fff8d813064, WNOHANG, NULL) = -1 ECHILD (No child processes)
...
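To give an idea of what BASST / KARBON extract from such files, here is a small sketch (not the actual analyzer code) that sums the time spent in each system call; unfinished or resumed calls and signal lines simply do not match the pattern and are skipped.

import re
from collections import defaultdict

# "<pid> <epoch.usec> <name>(<args>) = <result> <time-in-call>"
LINE = re.compile(r"^(\d+)\s+(\d+\.\d+)\s+(\w+)\((.*)\)\s*=\s*(\S+).*<(\d+\.\d+)>")

def time_per_syscall(path):
    """Sum the time spent inside each system call over the whole trace file."""
    totals = defaultdict(float)
    with open(path) as trace:
        for line in trace:
            match = LINE.match(line)
            if match:
                totals[match.group(3)] += float(match.group(6))
    return totals

if __name__ == "__main__":
    for name, seconds in sorted(time_per_syscall("trace.out").items(),
                                key=lambda item: -item[1]):
        print("%-20s %12.3f ms" % (name, seconds * 1000.0))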

KARBON – A trace file analyzer

• A general-purpose application profiler based on system call trace files
• It traces file descriptors and reports detailed I/O statistics for files, network sockets and FIFO pipes (see the sketch below)
• It analyzes the child processes and creates process graphs and process trees
• It can detect the “hot spots” of an application
• Custom analysis tools can be created on demand using the development API
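A simplified illustration of the file-descriptor tracking idea, not the KARBON implementation: each descriptor returned by open() is mapped to its file name, and the time spent in read()/write() is attributed to that file. It assumes a single traced process and ignores dup(), sockets and pipes.

import re
from collections import defaultdict

CALL = re.compile(r"^\d+\s+\d+\.\d+\s+(\w+)\((.*)\)\s*=\s*(-?\w+).*<(\d+\.\d+)>")

def io_per_file(path):
    """Attribute read()/write() time and call counts to the file name that
    each descriptor was open()ed with."""
    open_fds = {}                                  # fd -> file name
    stats = defaultdict(lambda: [0.0, 0])          # file name -> [seconds, calls]
    with open(path) as trace:
        for line in trace:
            match = CALL.match(line)
            if not match:
                continue
            name, args, result = match.group(1), match.group(2), match.group(3)
            seconds = float(match.group(4))
            if name == "open" and not result.startswith("-"):
                open_fds[result] = args.split(",")[0].strip('"')
            elif name in ("read", "write"):
                fd = args.split(",")[0].strip()
                entry = stats[open_fds.get(fd, "<unknown fd %s>" % fd)]
                entry[0] += seconds
                entry[1] += 1
            elif name == "close":
                open_fds.pop(args.strip(), None)
    return stats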

KARBON – Application block diagram

[Block diagram: a Source (file or TCP stream) feeds a Tokenizer, which feeds a Router; the Router dispatches events to Filters, Analyzers and Preprocessing Tools, whose output is rendered by Presenters]

Results

• Time utilization of the traced application

[Chart: horizontal bars of time spent (0-300,000 ms) split into File I/O, network I/O (UNIX / TCP sockets) and miscellaneous calls, for Physical Disk, Network (Xrootd) and RAM Disk on KVM, XEN and Bare metal]

Results

• Time utilization of the traced application

[Chart: the same breakdown, normalised per configuration (0-100%): File I/O, UNIX sockets, TCP sockets and miscellaneous calls for Physical Disk, Network (Xrootd) and RAM Disk on KVM, XEN and Bare metal]

Results


• Overall system call time for filesystem I/O

[ms]          Reading        Writing       Seeking        Total
Bare metal    490,861.354      2,054.354    21,594.583    524,872.823
KVM            38,391.715     36,422.440   122,769.518    244,406.512
XEN            38,111.980     20,930.382   102,769.901    210,247.468

• Reminder: kernel buffers were dropped before every test
▫ Possible caching effect inside the hypervisor

Results

• Overall system call time for UNIX Sockets

[ms]          Receiving      Sending        Bind, Listen   Connecting    Total
Bare metal       993.884      10,313.304    4.251           5.259         11,301.588
KVM           59,637.942     164,655.077    7.412          13.656        223,872.164
XEN           97,823.986     550,050.484    5.014           8.493        652,784.010

Results

• Most time-consuming miscellaneous system calls

System call      Bare metal     KVM           XEN
wait4()          178,200.34     316,829.30    388,885.57
gettimeofday()   (No trace)     219,780.33    218,018.63
nanosleep()      (No trace)      12,250.12     12,029.30
time()           (No trace)     (No trace)      9,081.94
rt_sigreturn()   150,943        1,685,285     9,271,061
setitimer()      23,245         698,785       223,669

Conclusion

• Physical Disk
▫ KVM can achieve better performance than XEN, reaching 70 - 98% of the native speed
▫ Best performance achieved on 6 CPUs / 6 workers (7 GB RAM), with 81% of the native speed
• Network (Xrootd)
▫ XEN can achieve better performance than KVM, reaching 73 - 90% of the native speed
▫ Best performance achieved again on 6 CPUs / 6 workers (7 GB RAM), with 92% of the native speed

Conclusion

• Some disk I/O operations (read) appear to be faster inside the virtual machine
• Some of them appear to be slower (seek, write)
▫ Possible caching effect, even on direct disk access
• Network I/O
▫ TCP under XEN looks fine, whereas with KVM there are some issues
▫ UNIX sockets seem to carry a significant penalty inside the VMs
• Some miscellaneous system calls take longer inside the VM
▫ Time-related functions (gettimeofday, nanosleep)
▪ Used for the paravirtualized implementation of other system calls?

Other uses of the tools

• SystemTAP could be used by the nightly builds in order to detect hung applications
• KARBON can be used as a general log file analysis program

Future work

• Benchmark VMs with a disk image file residing on a RAID array
• Benchmark many concurrent KVM virtual machines with total memory exceeding the overall system memory – Exploit NPT
• Test PCI pass-through for network cards (KVM) – Test VT-d
• Convert the benchmark application from Python to pure
• Repeat the benchmarks with the optimized ROOT input files
• Test again the KVM network performance with
• Recompile the kernel with CONFIG_KVM_CLOCK