CHEP 2016 Book of Abstracts: Contents

CHEP 2016 Conference, San Francisco, October 8-14, 2016
Monday, 10 October 2016 - Friday, 14 October 2016
San Francisco Marriott Marquis

Book of Abstracts

Contents

Experiment Management System for the SND Detector 0 . . . . . . 1
Reconstruction software of the silicon tracker of DAMPE mission 2 . . . . . . 1
HEPData - a repository for high energy physics data exploration 3 . . . . . . 2
Reconstruction of Micropattern Detector Signals using Convolutional Neural Networks 4 . . . . . . 3
Federated data storage system prototype for LHC experiments and data intensive science 6 . . . . . . 3
BelleII@home: Integrate volunteer computing resources into DIRAC in a secure way 7 . . . . . . 4
Reconstruction and calibration of MRPC endcap TOF of BESIII 8 . . . . . . 5
RootJS: Node.js Bindings for ROOT 6 9 . . . . . . 6
C++ Software Quality in the ATLAS experiment: Tools and Experience 10 . . . . . . 6
An automated meta-monitoring mobile application and frontend interface for the WLCG computing model 11 . . . . . . 7
Experience of Google's latest Deep Learning library, TensorFlow, with Docker in a WLCG cluster 12 . . . . . . 8
Flexible online monitoring for high-energy physics with Pyrame 13 . . . . . . 8
Simulation of orientational coherent effects via Geant4 14 . . . . . . 9
Detector control system for the AFP detector in ATLAS experiment at CERN 15 . . . . . . 10
The InfiniBand based Event Builder implementation for the LHCb upgrade 16 . . . . . . 11
JavaScript ROOT v4 17 . . . . . . 12
The evolution of monitoring system: the INFN-CNAF case study 18 . . . . . . 13
Statistical and Data Analysis Package in SWIFT 19 . . . . . . 13
Analysis Tools in Geant4 10.2 20 . . . . . . 14
Online & Offline Storage and Processing for the upcoming European XFEL Experiments 21 . . . . . . 15
Future approach to tier-0 extension 22 . . . . . . 15
Internal security consulting, reviews and penetration testing at CERN 23 . . . . . . 16
Web technology detection - for asset inventory and vulnerability management 24 . . . . . . 16
Integrating Containers in the CERN Private Cloud 25 . . . . . . 17
CERN Computing in Commercial Clouds 26 . . . . . . 18
Real-time complex event processing for cloud resources 27 . . . . . . 18
Benchmarking cloud resources 28 . . . . . . 19
RapidIO as a multi-purpose interconnect 29 . . . . . . 20
Evolution of CERN Print Services 30 . . . . . . 20
Windows Terminal Server Orchestration 31 . . . . . . 21
Update on CERN Search based on SharePoint 2013 33 . . . . . . 21
First experience with the new .CERN top level domain 34 . . . . . . 22
Flash is Dead. Finally. 35 . . . . . . 22
Access to WLCG resources: The X509-free pilot 36 . . . . . . 23
Deploying FTS with Docker Swarm and Openstack Magnum 37 . . . . . . 24
DNS Load Balancing in the CERN Cloud 38 . . . . . . 24
Customization of the general fitting tool genfit2 in PandaRoot 39 . . . . . . 25
A world-wide databridge supported by a commercial cloud provider 40 . . . . . . 26
DPM Evolution: a Disk Operations Management Engine for DPM 41 . . . . . . 27
Making the most of cloud storage - a toolkit for exploitation by WLCG experiments. 42 . . . . . . 27
EOS Cross Tier Federation 43 . . . . . . 28
EOS Developments 44 . . . . . . 28
Using HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies 45 . . . . . . 29
Support Vector Machines in HEP 46 . . . . . . 30
Production Management System for AMS Remote Computing Centers 48 . . . . . . 31
Evolution of Monitoring System for AMS Science Operation Center 49 . . . . . . 31
Storage Strategy of AMS Science Data at Science Operation Center at CERN 50 . . . . . . 32
AMS Data Production Facilities at Science Operation Center at CERN 51 . . . . . . 33
Context-aware distributed cloud computing using CloudScheduler 52 . . . . . . 33
XrootdFS, a posix file system for XrootD 53 . . . . . . 34
Research and application of OpenStack in Chinese Spallation Neutron Source Computing environment 54 . . . . . . 34
Next Generation high performance, multi-dimensional scalable data transfer 55 . . . . . . 35
Globally Distributed Software Defined Storage 56 . . . . . . 36
An efficient, modular and simple tape archiving solution for LHC Run-3 57 . . . . . . 36
Data Center Environmental Sensor for safeguarding the CERN Data Archive 58 . . . . . . 37
PaaS for web applications with OpenShift Origin 59 . . . . . . 38
40-Gbps Data-Acquisition System for NectarCAM, a camera for the Medium Size Telescopes of the Cherenkov Telescope Array 60 . . . . . . 38
Model-independent partial wave analysis using a massively-parallel fitting framework 61 . . . . . . 39
CERN openlab Researched Technologies That Might Become Game Changers in Software Development 62 . . . . . . 40
CERN openlab Knowledge Transfer and Innovation Projects 64 . . . . . . 40
CERN data services for LHC computing 66 . . . . . . 41
CERN AFS Replacement project 67 . . . . . . 41
From Physics to industry: EOS outside HEP 68 . . . . . . 42
Big Data Analytics for the Future Circular Collider Reliability and Availability Studies 69 . . . . . . 42
CERNBox: the data hub for data analysis 70 . . . . . . 43
Tape SCSI monitoring and encryption at CERN 71 . . . . . . 44
Wi-Fi service enhancement at CERN 72 . . . . . . 44
A graphical performance analysis and exploration tool for Linux-perf 73 . . . . . . 45
The role of dedicated computing centers in the age of cloud computing 74 . . . . . . 45
Next Generation Monitoring 75 . . . . . . 46
The merit of data processing application elasticity 76 . . . . . . 47
Identifying memory allocation patterns in HEP software 77 . . . . . . 48
Finding unused memory allocations with FOM-tools 78 . . . . . . 48
SDN-NGenIA A Software Defined Next Generation integrated Architecture for HEP and Data Intensive Science 79 . . . . . . 49
A Comparison of Deep Learning Architectures with GPU Acceleration and Their Applications 80 . . . . . . 50
CERN's Ceph infrastructure: OpenStack, NFS, CVMFS, CASTOR, and more! 81 . . . . . . 51
LHCb Kalman Filter cross architectures studies 82 . . . . . . 51
Unified Monitoring Architecture for IT and Grid Services 83 . . . . . . 52
Development of DAQ Software for CULTASK Experiment 84 . . . . . . 53
The new ATLAS Fast Calorimeter Simulation 85 . . . . . . 54
ATLAS computing on Swiss Cloud SWITCHengines 86 . . . . . . 55
Global EOS: exploring the 300-ms-latency region 87 . . . . . . 55
ATLAS and LHC computing on CRAY 88 . . . . . . 56
Analysis of empty ATLAS pilot jobs and subsequent resource usage on grid sites 89 . . . . . . 56
The ATLAS computing challenge for HL-LHC 90 . . . . . . 57
Networks in ATLAS 91 . . . . . . 58
Production Experience with the ATLAS Event Service 92 . . . . . . 58
Computing shifts to monitor ATLAS distributed computing infrastructure and operations 93 . . . . . . 59
Consolidation of Cloud Computing in ATLAS 95 . . . . . . 60
Giving pandas ROOT to chew on: experiences with the XENON1T Dark Matter experiment 96 . . . . . . 61
Advancing data management and analysis in different scientific disciplines 97 . . . . . . 62
Interfacing HTCondor-CE with OpenStack 98 . . . . . . 62
PODIO - Applying plain-old-data for defining physics data models 100 . . . . . . 63
Distributed Metadata Management of Mass Storage System in High Energy of Physics 102 . . . . . . 64
SCEAPI: A Unified Restful Web APIs for High-Performance Computing 103 . . . . . . 65
Message Queues for Online Reconstruction on the Example of the PANDA Experiment 104 . . . . . . 65
XROOT development update - support for metalinks and extreme copy 106 . . . . . . 66
The DAQ system for the AEgIS experiment 107 . . . . . . 67
The Muon Ionization Cooling Experiment Analysis User Software 108 . . . . . . 67
The future of academic computing security 109 . . . . . . 68
Optimizing ROOT's Performance Using C++ Modules 110 . . . . . . 68
Enabling Federated Access for HEP 111 . . . . . . 69
Full and Fast Simulation Framework for the Future Circular Collider Studies 112 . . . . . . 70
FPGA based data processing in the ALICE High-Level Trigger in LHC Run 2 113 . . . . . . 71
Events visualisation in ALICE - current status and strategy for Run 3 114 . . . . . . 72
Kalman filter tracking on parallel architectures 115 . . . . . . 72
Performance of the CMS Event Builder 116 . . . . . . 73
An interactive and comprehensive working environment for high-energy physics software with Python and jupyter notebooks 117 . . . . . . 74
SND DAQ system evolution 118 . . . . . . 75
Using ALFA for high throughput, distributed data transmission in ALICE O2 system 119 . . . . . . 76
Design of the data quality control system for the ALICE O2 120 . . . . . . 76
A performance study of WebDav access to storages within the Belle II collaboration. 121 . . . . . . 77
A lightweight federation of the Belle II storages through dynafed 122 . . . . . . 78
Integration of Oracle and Hadoop: hybrid databases affordable at scale 123 . . . . . . 78
The ATLAS Production System Evolution. New Data Processing and Analysis Paradigm for the LHC Run2 and High-Luminosity 124 . . . . . . 79
Large scale software building with CMake in ATLAS 125 . . . . . . 80
Monitoring of Computing Resource Use of Active Software Releases at ATLAS 127 . . . . . . 81
Development and performance of track reconstruction algorithms at the energy frontier with the ATLAS detector 128 . . . . . . 82
Primary Vertex Reconstruction with the ATLAS experiment 130 . . . . . . 82
Using machine learning algorithms to forecast network and system load metrics for ATLAS Distributed Computing 131 . . . . . . 83
Automatic rebalancing of data in ATLAS distributed data management 132 . . . . . . 84
ATLAS Simulation using Real Data: Embedding and Overlay 133 . . . . . . 84
Metadata for fine-grained processing at ATLAS 135 . . . . . . 85
The ATLAS EventIndex General Dataflow and Monitoring Infrastructure 136 . . . . . . 86
ACTS: from ATLAS software towards a common track reconstruction software 137 . . . . . . 86
ATLAS software stack on ARM64 138 . . . . . . 87
Modernizing the ATLAS Simulation Infrastructure 139 . . . . . . 88
PanDA for ATLAS distributed computing in the next decade 140 . . . . . . 88
Memory handling in the ATLAS submission system from job definition to sites limits 141 . . . . . . 89
Evolution and experience with the ATLAS Simulation at Point1 Project 143 . . . . . . 90
C3PO - A Dynamic Data Placement Agent for ATLAS Distributed Data Management 144 . . . . . . 91
Rucio WebUI - The Web Interface for the ATLAS Distributed Data Management 145 . . . . . . 91
Object-based storage integration within the ATLAS DDM system 146 . . . . . . 92
Experiences with the new ATLAS Distributed Data Management System 147 . . . . . . 93
Rucio Auditor - Consistency in the ATLAS Distributed Data Management System 148 . . . . . . 93
How To Review 4 Million Lines of ATLAS Code 149 . . . . . . 94
Frozen-shower simulation of electromagnetic showers in the ATLAS forward calorimeters 150 . . . . . . 95
ATLAS World-cloud and networking in PanDA 151 . . . . . . 95
Data intensive ATLAS workflows in the Cloud 152 . . . . . . 96
Volunteer Computing Experience with ATLAS@Home 153 . . . . . . 97
Assessment of Geant4 Maintainability with respect to Software Engineering References 154 . . . . . . 97
Migrating the Belle II Collaborative Services and Tools 155 . . . . . . 98
ROOT and new programming paradigms 156 . . . . . . 99
Status and Evolution of ROOT 157 . . . . . . 100
Computing Performance of GeantV Physics Models 158 . . . . . . 101
Software Aspects of the Geant4 Validation Repository 159 . . . . . . 102
MPEXS: A CUDA MonteCarlo of the simulation of electromagnetic interactions 160 . . . . . . 103
Multi-threaded Geant4 on Intel Many Integrated Core architectures 161 . . . . . . 103
CEPHFS: a new generation storage platform for Australian high energy physics 162 . . . . . . 104
Evaluation of lightweight site setups within WLCG infrastructure 165 . . . . . . 105
The Cherenkov Telescope Array production system for Monte Carlo simulations and analysis 166 . . . . . . 106
Processing and Quality Monitoring for the ATLAS Tile Hadronic Calorimeter data 167 . . . . . . 107
Data acquisition and processing in the ATLAS Tile Calorimeter Phase-II Upgrade Demonstrator 168 . . . . . . 108
Validation of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV project 169 . . . . . . 109
Stochastic optimisation of GeantV code by use of genetic algorithms 170 . . . . . . 109
GeantV phase 2: developing the particle transport library 171 . . . . . . 110
New Directions in the CernVM File System 172 . . . . . . 111
Toward a CERN Virtual Visit Service 173 . . . . . . 111
Performance and evolution of the DAQ system of the CMS experiment for Run-2 174 . . . . . . 112
Upgrading and Expanding Lustre Storage for use with the WLCG 175 . . . . . . 113
XRootD Popularity on Hadoop Clusters 176 . . . . . . 114
The Vacuum Platform 177 . . . . . . 114
The Machine/Job Features mechanism 178 . . . . . . 115
Integration of grid and local batch system resources at DESY 179 . . . . . . 116
Web Proxy Auto Discovery for WLCG 180 . . . . . . 116
Visualization of historical data for the ATLAS detector controls 181 . . . . . . 117
The detector read-out in ALICE during Run 3 and 4 182 . . . . . . 118
Grid Access with Federated Identities 183 . . . . . . 119
Recent progress of Geant4 electromagnetic physics for LHC and other applications 184 . . . . . . 119
Scaling Up a CMS Tier-3 Site with Campus Resources and a 100 Gb/s Network Connection: What Could Go Wrong? 185 . . . . . . 120
Facilitating the deployment and exploitation of HEP Phenomenology codes using INDIGO-Datacloud tools 186 . . . . . . 121
C2MON: a modern open-source platform for data acquisition, monitoring and control 187 . . . . . . 122
ARC CE cache as a solution for lightweight Grid sites in ATLAS 188 . . . . . . 123
Exploiting Opportunistic Resources for ATLAS with ARC CE and the Event Service 189 . . . . . . 123
Evolution of user analysis on the Grid in ATLAS 190 . . . . . . 124
AthenaMT: Upgrading the ATLAS Software Framework for the Many-Core World with Multi-Threading 191 . . . . . . 125
An Oracle-based Event Index for ATLAS 192 . . . . . . 125
Collecting conditions usage metadata to optimize current and future ATLAS software and processing 193 . . . . . . 126
Integration of the Titan supercomputer at OLCF with the ATLAS Production System 194 . . . . . . 127
First use of LHC Run 3 Conditions Database infrastructure for auxiliary data files in ATLAS 195 . . . . . . 128
Multi-threaded ATLAS Simulation on Intel Knights Landing Processors 196 . . . . . . 128
The Fast Simulation Chain for ATLAS 197 . . . . . . 129
Integration of the Chinese HPC Grid in ATLAS Distributed Computing 198 . . . . . . 130
How to keep the Grid full and working with ATLAS production and physics jobs 199 . . . . . . 130
A study of data representations in Hadoop to optimize data storage and search performance of the ATLAS EventIndex 200 . . . . . . 131
AGIS: Integration of new technologies used in ATLAS Distributed Computing 201 . . . . . . 131
A Roadmap to Continuous Integration for ATLAS software development 202 . . . . . . 132
ATLAS Metadata Interface (AMI), a generic metadata framework 204 . . . . . . 133
Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins 205 . . . . . . 133
ATLAS Fast Physics Monitoring: TADA 206 . . . . . . 134
A new mechanism to persistify the detector geometry of ATLAS and serving it through an experiment-agnostic REST API 207 . . . . . . 135
ATLAS Distributed Computing experience and performance during the LHC Run-2 208 . . . . . . 135
Benefits and performance of ATLAS approaches to utilizing opportunistic resources 209 . . . . . . 136
DIRAC universal pilots 211 . . . . . . 137
The ATLAS Computing Agora: a resource web site for citizen science projects 212 . . . . . . 138
HiggsHunters - a citizen science project for ATLAS 213 . . . . . . 139
Big Data Analytics Tools as Applied to ATLAS Event Data 215 . . . . . . 139
Event visualisation in ATLAS: current software technologies / future prospects and trends 216 . . . . . . 140
DIRAC in Large Particle Physics Experiments 217 . . . . . . 141
ATLAS Data Preparation in Run 2 218 . . . . . . 142
Caching Servers for ATLAS 219 . . . . . . 143
An Analysis of Reproducibility and Non-Determinism in HEP Software and ROOT Data 220 . . . . . . 143

Details

  • File Type
    pdf
  • Upload Time
    -
  • Content Languages
    English
  • Upload User
    Anonymous/Not logged-in
  • File Pages
    436 pages
  • File Size
    -

Copyright

We respect the copyrights and intellectual property rights of all users. All uploaded documents are either original works of the uploader or authorized works of the rightful owners.

  • Documents may not be reproduced or distributed without explicit permission.
  • Documents may not be used for commercial purposes outside of approved use cases.
  • Documents may not be used to infringe on the rights of the original creators.
  • If you believe any content infringes your copyright, please contact us immediately.

Support

For help with questions, suggestions, or problems, please contact us.