DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING

Ilkka Ollakka

CERN LIBRARY THIN CLIENT SYSTEM MODERNIZATION

Master's Thesis
Degree Programme in Computer Science and Engineering
April 2013

Ollakka I. (2013) CERN Library Thin Client System Modernization. University of Oulu, Department of Computer Science and Engineering, Master's Thesis, 59 pages, 2 appendices.

ABSTRACT

The goal of this Master's thesis is to modernize the CERN library public terminal system. The aim of the modernization was to minimize the administrative burden the new system places on library staff. The modernization focuses on replacing the currently used Linux-based thin client system, which suffers from hardware ageing, with a system that resides in the private cloud operated by CERN-IT. Another aspect of the modernization was to make greater use of the existing network infrastructure in order to reduce system complexity and the number of remote components that require maintenance and administration by library staff.

The new system differs from a traditional LTSP system in that every thin client is given its own virtual machine. By giving every thin client its own virtual host, resource allocations such as CPU time and physical memory can be distributed between different physical hosts. The system also contains monitoring features that notify administrators of detected problems, which decreases the time required for system maintenance.

Linux-based thin client remote connection efficiency is compared using UI latency and bandwidth efficiency as metrics. A plain X11 connection is measured against an NX connection and an SSH-tunneled X11 connection. The results show that the bandwidth efficiency of NX comes from its extra caching layer. Measurements for overall latency and bandwidth usage are presented.

Keywords: virtualization, Linux, remote desktop, X11

Ollakka I. (2013) Modernization of the CERN library public terminals. University of Oulu, Department of Computer Science and Engineering, Master's Thesis, 59 pages, 2 appendices.

TIIVISTELMÄ

This Master's thesis modernizes the public terminals used by the CERN library. The purpose of the modernization is to reduce the maintenance load the terminals place on library staff. The main direction of the modernization is to build a new public terminal system in place of the old LTSP system, which suffers from ageing server hardware, by making use of the internal cloud service operated by CERN-IT as well as the existing network infrastructure.

The built system differs from a traditional LTSP system by defining a separate virtual machine for each terminal to connect to. With separate virtual machines, the resources reserved by users, such as CPU time and physical memory, can be distributed across several physical servers. The system also includes proactive monitoring that notifies the persons responsible for maintenance of detected problems, and thus does not require them to actively watch the system.

The efficiency of remote connection protocols for Linux-based thin clients is compared using user interface latency and bandwidth consumption as metrics. A plain X11 connection is compared with the commonly used SSH-tunneled X11 connection and with an NX connection. The results show that the extra cache of the NX connection makes bandwidth usage more efficient. The measurement arrangements and results are presented.

Keywords: thin clients, virtualization, Linux, X11
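As a rough illustration of the bandwidth metric used in the comparison above, the following Python sketch estimates the traffic of a remote-display session by sampling a Linux network interface's byte counters from /proc/net/dev before and after a test window. This is a minimal sketch, not the measurement harness used in this thesis; the interface name eth0 and the fixed 10-second window are assumptions for illustration.

import time

def iface_bytes(iface):
    """Return (rx_bytes, tx_bytes) for a network interface from /proc/net/dev."""
    with open("/proc/net/dev") as f:
        for line in f:
            name, sep, counters = line.partition(":")
            if sep and name.strip() == iface:
                fields = counters.split()
                return int(fields[0]), int(fields[8])  # rx bytes, tx bytes
    raise ValueError("interface %r not found" % iface)

def measure(iface, seconds):
    """Print average rx/tx bandwidth in kbit/s over the given window."""
    rx0, tx0 = iface_bytes(iface)
    start = time.monotonic()
    time.sleep(seconds)  # the remote-display workload runs during this window
    elapsed = time.monotonic() - start
    rx1, tx1 = iface_bytes(iface)
    print("rx %.1f kbit/s, tx %.1f kbit/s" % (
        (rx1 - rx0) * 8 / elapsed / 1000,
        (tx1 - tx0) * 8 / elapsed / 1000))

if __name__ == "__main__":
    measure("eth0", 10.0)  # assumed interface name and measurement window

Sampling kernel counters this way captures all traffic on the interface, so in practice the test network should carry only the remote-display session being measured.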
TABLE OF CONTENTS

ABSTRACT
TIIVISTELMÄ
TABLE OF CONTENTS
PREFACE
LIST OF SYMBOLS AND ABBREVIATIONS
1. INTRODUCTION 9
2. VIRTUALIZATION 11
  2.1. Hypervisor 11
  2.2. Hypervisor types 13
  2.3. X86-architecture virtualization 14
    2.3.1. Binary translation 14
    2.3.2. Paravirtualized CPU 15
    2.3.3. Memory management 15
    2.3.4. X86 64-bit virtualization 17
    2.3.5. Hardware-assisted virtualization 18
  2.4. Paravirtualization 18
  2.5. KVM 19
  2.6. Hyper-V 20
  2.7. Xen 22
  2.8. Cloud services 22
3. REMOTE CONNECTION PROTOCOLS 24
  3.1. User interface latency limits 24
  3.2. X11 25
  3.3. X11 tunneled inside SSH 26
  3.4. NoMachine NX 27
  3.5. Spice 28
4. CERN NETWORK INFRASTRUCTURE 31
  4.1. Network boot 31
  4.2. Automated Installations Management Systems 33
5. CURRENT SYSTEM 34
  5.1. History of public terminals in CERN library 34
  5.2. Requirements for public terminals in library 35
  5.3. Thin client concept 35
  5.4. Current implementation overview 36
  5.5. Problems 37
6. PROPOSED SOLUTION 38
  6.1. Thin client 39
  6.2. Virtualization platform 41
  6.3. Desktop OS, Scientific Linux CERN 6 41
  6.4. Security 42
  6.5. Availability 42
  6.6. Public Cloud 43
7. LATENCY AND BANDWIDTH TESTING 45
8. REMOTE CONNECTION PROTOCOL TEST RESULTS 47
  8.1. Latency test 47
  8.2. Bandwidth usage 49
9. DISCUSSION 52
  9.1. Regressions in proposed system compared to current system 53
  9.2. Migration from current system to proposed solution 53
  9.3. Test results 53
10. CONCLUSIONS 54
BIBLIOGRAPHY 55
APPENDIX 59

LIST OF FIGURES

1 High level architecture of thin client setup, where the virtual instance resides in the cloud service 10
2 Virtual machine layer 0 is run in the host's layer 2 12
3 Type 1 hypervisor runs directly on top of hardware 13
4 Type 2 hypervisor runs on top of OS 13
5 Binary translator intercepts guest program counter 15
6 Shadow page tables in virtualization 16
7 VMM code location in memory segments 16
8 Ballooning driver inside guest OS 17
9 KVM high-level architecture 19
10 KVM guest execution loop 20
11 Hyper-V architecture 21
12 Distinction between cloud service types 23
13 X11 client/server separation 25
14 X11 traffic goes inside SSH connection from server to client 27
15 NX architecture overview 28
16 Spice architecture paired with QEMU 29
17 Network booting sequence in CERN network 32
18 Current system overview in CERN library 36
19 Overview of proposed public terminal setup 38
20 Thin client boot-up sequence 40
21 Measuring setup 45
22 Screen update latency on local display during test run 48
23 Screen update latency on plain X11 display during test run 48
24 Screen update latency on SSH tunneled display during test run 49
25 Screen update latency on NX connection during test run 50
26 Bandwidth in kbit/s on plain X11 display during test run 50
27 Bandwidth in kbit/s on SSH tunneled display during test run 51
28 Bandwidth in kbit/s on NX connection during test run 51

PREFACE

This Master's thesis is the result of my four-month technical student period at CERN during the summer of 2011. CERN offered the academic freedom to work, with all the upsides and downsides that come with that. My work was performed as part of a joint project between HIP-TEK and the CERN library. I would like to thank Jukka Kommeri and all the HIP-TEK researchers who made me feel welcome and provided me with ideas during the project. Special thanks are also due to Jukka for devoting so much time to commenting on my writing and for his overall guidance on the project. All the 'hippies' and 'mummo' also deserve a special mention for being part of that special summer. Thanks also to Mark for reading this thesis and providing me with his useful observations. Thanks to the CERN Library staff for accommodating me in one of their offices during the summer, and for the interesting conversations about a variety of things over lunch. In addition, the always helpful CERN-IT team sped up the project and gave excellent help during the implementation phase. Special thanks to Professor Röning for supervising the thesis, posing excellent questions and making comments that helped me clarify the content and really think about what to write. Last but not least, thanks to my family, particularly Jaana, who supported and helped me a great deal during the writing process.

Ilkka Ollakka

LIST OF SYMBOLS AND ABBREVIATIONS

CERN        The European Organization for Nuclear Research
DaaS        Desktop as a Service. The desktop runs inside a cloud service and is used with a remote connection method
Guest mode  Running mode for a virtual host, in which the hardware keeps track of CPU state and allows privileged code to run without interfering with the hypervisor
Hyper-V     Microsoft's virtualization platform
Hypervisor  Synonym for VMM in this thesis
IaaS        Infrastructure as a Service. The cloud