Universitat de València — Distributed Computing and Farm Management


Universitat de València
Departament de Física Teòrica

Distributed computing and farm management with application to the search for heavy gauge bosons using the ATLAS experiment at the LHC (CERN)

CERN-THESIS-2008-056, 21/01/2008
Doctoral Thesis (Tesis de Doctorado)
Juan Antonio Lopez-Perez, 2007

CONTENTS

Preface
Prefacio

I  Introduction

1  CERN, LHC and ATLAS
   1.1  CERN
        1.1.1  Discoveries and research fields
   1.2  The LHC
   1.3  The ATLAS experiment
        1.3.1  ATLAS sub-detectors
        1.3.2  ATLAS performance
        1.3.3  ATLAS software

2  The Standard Model and Extra Dimensions
   2.1  The Standard Model
   2.2  Beyond the Standard Model
   2.3  Extra dimensions

3  Distributed Grid computing
   3.1  The Grid
        3.1.1  Definition
        3.1.2  Some key concepts
        3.1.3  Middleware components
        3.1.4  The Globus toolkit
        3.1.5  Overview of a Grid job
        3.1.6  Challenges and benefits of the Grid
        3.1.7  Different kinds of Grids
   3.2  LHC Computing Grid
        3.2.1  The LCG project
        3.2.2  The EGEE project
        3.2.3  The EDG project
        3.2.4  The Quattor software management tool
   3.3  The BOINC distributed computing environment
        3.3.1  Overview of a BOINC job
        3.3.2  BOINC credit system
        3.3.3  The future of BOINC

II  Data analysis

4  Search for Z* and W* decay modes
   4.1  Introduction
   4.2  b-tagging
   4.3  Search for Z* → bb̄
        4.3.1  Simulation
        4.3.2  Selection cuts
        4.3.3  Results
   4.4  Search for Z* → tt̄
        4.4.1  Simulation
        4.4.2  Selection cuts
        4.4.3  Results
   4.5  Search for W* → tb
        4.5.1  Simulation
        4.5.2  Selection cuts
        4.5.3  Results
   4.6  Mass dependence

III  Distributed Computing

5  The LCG project
   5.1  LCG technology and infrastructure
        5.1.1  Security
        5.1.2  Information service
        5.1.3  Workload Management System (WMS)
        5.1.4  Storage Element (SE)
        5.1.5  Compute Resource Services
        5.1.6  Data Management
        5.1.7  File Transfer Service (FTS)
   5.2  LCG Activity Areas
        5.2.1  Fabric Area
        5.2.2  Grid Deployment Area
        5.2.3  Distributed Analysis (ARDA)
        5.2.4  Applications Area
   5.3  Quattor and farm management

6  BOINC
   6.1  Middleware components
   6.2  BOINC technology and infrastructure
        6.2.1  Generating work
        6.2.2  Validation
        6.2.3  Result assimilation
        6.2.4  Server-side file deletion
        6.2.5  Database purging
   6.3  LHC@home
   6.4  Applications ported to BOINC
        6.4.1  PYTHIA
        6.4.2  Atlfast
        6.4.3  Geant4
        6.4.4  Garfield
   6.5  Installations performed
        6.5.1  lxboinc cluster
        6.5.2  Extremadura testbed

7  Comparing LCG and BOINC
   7.1  Technology and infrastructure
        7.1.1  Installation, configuration and management
        7.1.2  Security
        7.1.3  Information service
        7.1.4  Workload Management System (WMS)
        7.1.5  Storage Elements and Data Management
        7.1.6  Compute Resource Services
        7.1.7  File Transfer Service (FTS)
        7.1.8  User Interfaces (UIs)
   7.2  Summary of the comparison

IV  Conclusions

Conclusions
Conclusiones

V  Appendices

A  Quattor tasks performed at CERN
   A.1  Use case: implementation of Quattor in Solaris
        A.1.1  Porting
        A.1.2  Corrections
        A.1.3  Creation
   A.2  Quattor software management system

B  Porting of LCG software
   B.1  LCG External Software
   B.2  SEAL

C  Atlfast porting to BOINC
   C.1  Atlfast compilation and configuration

D  Geant4 porting to BOINC
   D.1  The Simulation
   D.2  Boincification
   D.3  Porting to Windows
   D.4  BOINC templates

E  Glossary

Acknowledgements / Agradecimientos
References

LIST OF FIGURES

1.1  Scheme of the CERN accelerator complex
1.2  The four LHC experiments
1.3  The ALICE and LHCb detectors
1.4  Sketch of the ATLAS detector
1.5  The CMS detector
1.6  Photograph of the ATLAS detector under construction
1.7  A view of the ATLAS Inner Detector
1.8  Scheme including the calorimeters of the ATLAS detector
3.1  Overview of the life cycle of a Grid job
3.2  The different countries involved in the ATLAS collaboration
3.3  Overview of the life cycle of a BOINC job
3.4  Sketch describing how BOINC credits are granted
4.1  Simulated signal obtained in the study of Z* → bb̄
4.2  Signal and background obtained in the study of Z* → bb̄
4.3  Simulated signal obtained in the study of Z* → tt̄ with M(Z*) = 2 TeV
4.4  Signal and background obtained in the study of Z* → tt̄ with M(Z*) = 2 TeV
4.5  Simulated signal obtained in the study of W* → tb with M(W*) = 2 TeV
4.6  Signal and background obtained in the study of W* → tb with M(W*) = 2 TeV
4.7  Significance as a function of mass for the decay channel Z* → bb̄, assuming L = 3 × 10⁵ pb⁻¹
4.8  Significance as a function of mass for the decay channel Z* → tt̄, assuming L = 3 × 10⁵ pb⁻¹
4.9  Significance as a function of mass for the decay channel W* → tb, assuming L = 3 × 10⁵ pb⁻¹
6.1  BOINC client-server infrastructure showing the different parts of each kind of component
6.2  BOINC daemons
6.3  Statistics of participants in the LHC@home project and total credits
A.1  Schema of the Quattor automated software management system

Preface

The Standard Model of particle physics (SM), presented in the introduction of this work, describes the strong, weak, and electromagnetic forces between the fundamental particles of ordinary matter. The SM describes with high precision the interactions at energies reached by present accelerators.
However, it presents several problems, also discussed in this work, and some questions remain unanswered, so the SM cannot be considered a complete theory of the fundamental interactions. Since the completion of the SM, many extensions have been proposed in order to address these problems. Some important recent extensions are the Extra Dimensions (ED) theories. These theories unify gravity with the other fundamental forces and solve the hierarchy problem. An introduction to the motivation, history and main ideas of ED theories is given in this work.

In the context of some models with ED of size about 1 TeV⁻¹, in particular in the ADD model with only fermions confined to a D-brane, heavy Kaluza-Klein (KK) excitations are expected, with the same properties as the SM gauge bosons but more massive. In this work, three hadronic decay modes of such massive gauge bosons, Z* and W*, are investigated using the ATLAS experiment at the Large Hadron Collider (LHC), presently under construction at CERN.

The LHC will start to operate in 2007 and collect data in 2008. It will produce roughly 15 petabytes (15 million gigabytes, i.e. more than 20 million CDs) of data per year. Access to these experimental data has to be provided for some 5,000 scientists working in 500 research institutes and universities. In addition, all data need to be available over the estimated 15-year lifetime of the LHC. The analysis of the data, including comparison with theoretical simulations, requires a huge amount of computing power.

The computing challenges that scientists have to face are the huge amount of data, the large number of calculations to perform, and the large number of collaborators. The Grid has been proposed as a solution to these challenges. In particular, regarding the high CPU requirements of LHC studies, a particular kind of Grid that is useful and has been considered in this work is BOINC.
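The data volumes quoted above can be checked with back-of-the-envelope arithmetic. The sketch below is illustrative only; the ~700 MB CD capacity is an assumption, not a figure taken from the text:

```python
# Sanity-check of the LHC data-volume figures quoted in the preface.
# Assumption: one CD holds ~700 MB (0.7 GB); this value is not from the text.

GB_PER_PB = 10**6                  # gigabytes per petabyte
data_per_year_gb = 15 * GB_PER_PB  # 15 PB/year = 15 million GB
cd_capacity_gb = 0.7               # assumed CD capacity

cds_per_year = data_per_year_gb / cd_capacity_gb
print(f"CDs per year: {cds_per_year / 10**6:.1f} million")  # about 21.4 million

lifetime_pb = 15 * 15              # 15 PB/year over the estimated 15-year lifetime
print(f"Total over LHC lifetime: {lifetime_pb} PB")
```

At the assumed CD capacity this indeed comes to just over 21 million CDs per year, consistent with the "more than 20 million CDs" quoted in the preface.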
The feasibility of porting some simulation and reconstruction physics tools to BOINC has been studied as well.

This work is presented in five parts, comprising seven chapters and four appendices. In short, there are two main topics: one refers to data analysis and the other to distributed computing. In addition, there are an introduction, conclusions and appendices.

In the first part of this work, an introduction to the physics and computing tools is presented. In the first chapter of this part, an introduction to CERN and to its main collider, the LHC, is given. The ATLAS detector and the main computing tools are also introduced.

In the second chapter, the Standard Model of particle physics is introduced and some extensions to it are presented. In particular, an introduction to Extra Dimensions models is given.

In the third chapter, a general introduction to Grid computing is presented.