CHEP 2016 Conference, San Francisco, October 8-14, 2016
Monday, 10 October 2016 - Friday, 14 October 2016
San Francisco Marriott Marquis
Book of Abstracts
Contents
Experiment Management System for the SND Detector 0 . . . . . . . . . . 1
Reconstruction software of the silicon tracker of DAMPE mission 2 . . . . . . . . . . 1
HEPData - a repository for high energy physics data exploration 3 . . . . . . . . . . 2
Reconstruction of Micropattern Detector Signals using Convolutional Neural Networks 4 . . . . . . . . . . 3
Federated data storage system prototype for LHC experiments and data intensive science 6 . . . . . . . . . . 3
BelleII@home: Integrate volunteer computing resources into DIRAC in a secure way 7 . . . . . . . . . . 4
Reconstruction and calibration of MRPC endcap TOF of BESIII 8 . . . . . . . . . . 5
RootJS: Node.js Bindings for ROOT 6 9 . . . . . . . . . . 6
C++ Software Quality in the ATLAS experiment: Tools and Experience 10 . . . . . . . . . . 6
An automated meta-monitoring mobile application and frontend interface for the WLCG computing model 11 . . . . . . . . . . 7
Experience of Google’s latest Deep Learning library, TensorFlow, with Docker in a WLCG cluster 12 . . . . . . . . . . 8
Flexible online monitoring for high-energy physics with Pyrame 13 . . . . . . . . . . 8
Simulation of orientational coherent effects via Geant4 14 . . . . . . . . . . 9
Detector control system for the AFP detector in ATLAS experiment at CERN 15 . . . . . . . . . . 10
The InfiniBand based Event Builder implementation for the LHCb upgrade 16 . . . . . . . . . . 11
JavaScript ROOT v4 17 . . . . . . . . . . 12
The evolution of monitoring system: the INFN-CNAF case study 18 . . . . . . . . . . 13
Statistical and Data Analysis Package in SWIFT 19 . . . . . . . . . . 13
Analysis Tools in Geant4 10.2 20 . . . . . . . . . . 14
Online & Offline Storage and Processing for the upcoming European XFEL Experiments 21 . . . . . . . . . . 15
Future approach to tier-0 extension 22 . . . . . . . . . . 15
Internal security consulting, reviews and penetration testing at CERN 23 . . . . . . . . . . 16
Web technology detection - for asset inventory and vulnerability management 24 . . . . . . . . . . 16
Integrating Containers in the CERN Private Cloud 25 . . . . . . . . . . 17
CERN Computing in Commercial Clouds 26 . . . . . . . . . . 18
Real-time complex event processing for cloud resources 27 . . . . . . . . . . 18
Benchmarking cloud resources 28 . . . . . . . . . . 19
RapidIO as a multi-purpose interconnect 29 . . . . . . . . . . 20
Evolution of CERN Print Services 30 . . . . . . . . . . 20
Windows Terminal Server Orchestration 31 . . . . . . . . . . 21
Update on CERN Search based on SharePoint 2013 33 . . . . . . . . . . 21
First experience with the new .CERN top level domain 34 . . . . . . . . . . 22
Flash is Dead. Finally. 35 . . . . . . . . . . 22
Access to WLCG resources: The X509-free pilot 36 . . . . . . . . . . 23
Deploying FTS with Docker Swarm and Openstack Magnum 37 . . . . . . . . . . 24
DNS Load Balancing in the CERN Cloud 38 . . . . . . . . . . 24
Customization of the general fitting tool genfit2 in PandaRoot 39 . . . . . . . . . . 25
A world-wide databridge supported by a commercial cloud provider 40 . . . . . . . . . . 26
DPM Evolution: a Disk Operations Management Engine for DPM 41 . . . . . . . . . . 27
Making the most of cloud storage - a toolkit for exploitation by WLCG experiments. 42 . . . . . . . . . . 27
EOS Cross Tier Federation 43 . . . . . . . . . . 28
EOS Developments 44 . . . . . . . . . . 28
Using HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies 45 . . . . . . . . . . 29
Support Vector Machines in HEP 46 . . . . . . . . . . 30
Production Management System for AMS Remote Computing Centers 48 . . . . . . . . . . 31
Evolution of Monitoring System for AMS Science Operation Center 49 . . . . . . . . . . 31
Storage Strategy of AMS Science Data at Science Operation Center at CERN 50 . . . . . . . . . . 32
AMS Data Production Facilities at Science Operation Center at CERN 51 . . . . . . . . . . 33
Context-aware distributed cloud computing using CloudScheduler 52 . . . . . . . . . . 33
XrootdFS, a posix file system for XrootD 53 . . . . . . . . . . 34
Research and application of OpenStack in Chinese Spallation Neutron Source Computing environment 54 . . . . . . . . . . 34
Next Generation high performance, multi-dimensional scalable data transfer 55 . . . . . . . . . . 35
Globally Distributed Software Defined Storage 56 . . . . . . . . . . 36
An efficient, modular and simple tape archiving solution for LHC Run-3 57 . . . . . . . . . . 36
Data Center Environmental Sensor for safeguarding the CERN Data Archive 58 . . . . . . . . . . 37
PaaS for web applications with OpenShift Origin 59 . . . . . . . . . . 38
40-Gbps Data-Acquisition System for NectarCAM, a camera for the Medium Size Telescopes of the Cherenkov Telescope Array 60 . . . . . . . . . . 38
Model-independent partial wave analysis using a massively-parallel fitting framework 61 . . . . . . . . . . 39
CERN openlab Researched Technologies That Might Become Game Changers in Software Development 62 . . . . . . . . . . 40
CERN openlab Knowledge Transfer and Innovation Projects 64 . . . . . . . . . . 40
CERN data services for LHC computing 66 . . . . . . . . . . 41
CERN AFS Replacement project 67 . . . . . . . . . . 41
From Physics to industry: EOS outside HEP 68 . . . . . . . . . . 42
Big Data Analytics for the Future Circular Collider Reliability and Availability Studies 69 . . . . . . . . . . 42
CERNBox: the data hub for data analysis 70 . . . . . . . . . . 43
Tape SCSI monitoring and encryption at CERN 71 . . . . . . . . . . 44
Wi-Fi service enhancement at CERN 72 . . . . . . . . . . 44
A graphical performance analysis and exploration tool for Linux-perf 73 . . . . . . . . . . 45
The role of dedicated computing centers in the age of cloud computing 74 . . . . . . . . . . 45
Next Generation Monitoring 75 . . . . . . . . . . 46
The merit of data processing application elasticity 76 . . . . . . . . . . 47
Identifying memory allocation patterns in HEP software 77 . . . . . . . . . . 48
Finding unused memory allocations with FOM-tools 78 . . . . . . . . . . 48
SDN-NGenIA A Software Defined Next Generation integrated Architecture for HEP and Data Intensive Science 79 . . . . . . . . . . 49
A Comparison of Deep Learning Architectures with GPU Acceleration and Their Applications 80 . . . . . . . . . . 50
CERN’s Ceph infrastructure: OpenStack, NFS, CVMFS, CASTOR, and more! 81 . . . . . . . . . . 51
LHCb Kalman Filter cross architectures studies 82 . . . . . . . . . . 51
Unified Monitoring Architecture for IT and Grid Services 83 . . . . . . . . . . 52
Development of DAQ Software for CULTASK Experiment 84 . . . . . . . . . . 53
The new ATLAS Fast Calorimeter Simulation 85 . . . . . . . . . . 54
ATLAS computing on Swiss Cloud SWITCHengines 86 . . . . . . . . . . 55
Global EOS: exploring the 300-ms-latency region 87 . . . . . . . . . . 55
ATLAS and LHC computing on CRAY 88 . . . . . . . . . . 56
Analysis of empty ATLAS pilot jobs and subsequent resource usage on grid sites 89 . . . . . . . . . . 56
The ATLAS computing challenge for HL-LHC 90 . . . . . . . . . . 57
Networks in ATLAS 91 . . . . . . . . . . 58
Production Experience with the ATLAS Event Service 92 . . . . . . . . . . 58
Computing shifts to monitor ATLAS distributed computing infrastructure and operations 93 . . . . . . . . . . 59
Consolidation of Cloud Computing in ATLAS 95 . . . . . . . . . . 60
Giving pandas ROOT to chew on: experiences with the XENON1T Dark Matter experiment 96 . . . . . . . . . . 61
Advancing data management and analysis in different scientific disciplines 97 . . . . . . . . . . 62
Interfacing HTCondor-CE with OpenStack 98 . . . . . . . . . . 62
PODIO - Applying plain-old-data for defining physics data models 100 . . . . . . . . . . 63
Distributed Metadata Management of Mass Storage System in High Energy of Physics 102 . . . . . . . . . . 64
SCEAPI: A Unified Restful Web APIs for High-Performance Computing 103 . . . . . . . . . . 65
Message Queues for Online Reconstruction on the Example of the PANDA Experiment 104 . . . . . . . . . . 65
XROOT development update - support for metalinks and extreme copy 106 . . . . . . . . . . 66
The DAQ system for the AEgIS experiment 107 . . . . . . . . . . 67
The Muon Ionization Cooling Experiment Analysis User Software 108 . . . . . . . . . . 67
The future of academic computing security 109 . . . . . . . . . . 68
Optimizing ROOT’s Performance Using C++ Modules 110 . . . . . . . . . . 68
Enabling Federated Access for HEP 111 . . . . . . . . . . 69
Full and Fast Simulation Framework for the Future Circular Collider Studies 112 . . . . . . . . . . 70
FPGA based data processing in the ALICE High-Level Trigger in LHC Run 2 113 . . . . . . . . . . 71
Events visualisation in ALICE - current status and strategy for Run 3 114 . . . . . . . . . . 72
Kalman filter tracking on parallel architectures 115 . . . . . . . . . . 72
Performance of the CMS Event Builder 116 . . . . . . . . . . 73
An interactive and comprehensive working environment for high-energy physics software with Python and jupyter notebooks 117 . . . . . . . . . . 74
SND DAQ system evolution 118 . . . . . . . . . . 75
Using ALFA for high throughput, distributed data transmission in ALICE O2 system 119 . . . . . . . . . . 76
Design of the data quality control system for the ALICE O2 120 . . . . . . . . . . 76
A performance study of WebDav access to storages within the Belle II collaboration. 121 . . . . . . . . . . 77
A lightweight federation of the Belle II storages through dynafed 122 . . . . . . . . . . 78
Integration of Oracle and Hadoop: hybrid databases affordable at scale 123 . . . . . . . . . . 78
The ATLAS Production System Evolution. New Data Processing and Analysis Paradigm for the LHC Run2 and High-Luminosity 124 . . . . . . . . . . 79
Large scale software building with CMake in ATLAS 125 . . . . . . . . . . 80
Monitoring of Computing Resource Use of Active Software Releases at ATLAS 127 . . . . . . . . . . 81
Development and performance of track reconstruction algorithms at the energy frontier with the ATLAS detector 128 . . . . . . . . . . 82
Primary Vertex Reconstruction with the ATLAS experiment 130 . . . . . . . . . . 82
Using machine learning algorithms to forecast network and system load metrics for ATLAS Distributed Computing 131 . . . . . . . . . . 83
Automatic rebalancing of data in ATLAS distributed data management 132 . . . . . . . . . . 84
ATLAS Simulation using Real Data: Embedding and Overlay 133 . . . . . . . . . . 84
Metadata for fine-grained processing at ATLAS 135 . . . . . . . . . . 85
The ATLAS EventIndex General Dataflow and Monitoring Infrastructure 136 . . . . . . . . . . 86
ACTS: from ATLAS software towards a common track reconstruction software 137 . . . . . . . . . . 86
ATLAS software stack on ARM64 138 . . . . . . . . . . 87
Modernizing the ATLAS Simulation Infrastructure 139 . . . . . . . . . . 88
PanDA for ATLAS distributed computing in the next decade 140 . . . . . . . . . . 88
Memory handling in the ATLAS submission system from job definition to sites limits 141 . . . . . . . . . . 89
Evolution and experience with the ATLAS Simulation at Point1 Project 143 . . . . . . . . . . 90
C3PO - A Dynamic Data Placement Agent for ATLAS Distributed Data Management 144 . . . . . . . . . . 91
Rucio WebUI - The Web Interface for the ATLAS Distributed Data Management 145 . . . . . . . . . . 91
Object-based storage integration within the ATLAS DDM system 146 . . . . . . . . . . 92
Experiences with the new ATLAS Distributed Data Management System 147 . . . . . . . . . . 93
Rucio Auditor - Consistency in the ATLAS Distributed Data Management System 148 . . . . . . . . . . 93
How To Review 4 Million Lines of ATLAS Code 149 . . . . . . . . . . 94
Frozen-shower simulation of electromagnetic showers in the ATLAS forward calorimeters 150 . . . . . . . . . . 95
ATLAS World-cloud and networking in PanDA 151 . . . . . . . . . . 95
Data intensive ATLAS workflows in the Cloud 152 . . . . . . . . . . 96
Volunteer Computing Experience with ATLAS@Home 153 . . . . . . . . . . 97
Assessment of Geant4 Maintainability with respect to Software Engineering References 154 . . . . . . . . . . 97
Migrating the Belle II Collaborative Services and Tools 155 . . . . . . . . . . 98
ROOT and new programming paradigms 156 . . . . . . . . . . 99
Status and Evolution of ROOT 157 . . . . . . . . . . 100
Computing Performance of GeantV Physics Models 158 . . . . . . . . . . 101
Software Aspects of the Geant4 Validation Repository 159 . . . . . . . . . . 102
MPEXS: A CUDA MonteCarlo of the simulation of electromagnetic interactions 160 . . . . . . . . . . 103
Multi-threaded Geant4 on Intel Many Integrated Core architectures 161 . . . . . . . . . . 103
CEPHFS: a new generation storage platform for Australian high energy physics 162 . . . . . . . . . . 104
Evaluation of lightweight site setups within WLCG infrastructure 165 . . . . . . . . . . 105
The Cherenkov Telescope Array production system for Monte Carlo simulations and analysis 166 . . . . . . . . . . 106
Processing and Quality Monitoring for the ATLAS Tile Hadronic Calorimeter data 167 . . . . . . . . . . 107
Data acquisition and processing in the ATLAS Tile Calorimeter Phase-II Upgrade Demonstrator 168 . . . . . . . . . . 108
Validation of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV project 169 . . . . . . . . . . 109
Stochastic optimisation of GeantV code by use of genetic algorithms 170 . . . . . . . . . . 109
GeantV phase 2: developing the particle transport library 171 . . . . . . . . . . 110
New Directions in the CernVM File System 172 . . . . . . . . . . 111
Toward a CERN Virtual Visit Service 173 . . . . . . . . . . 111
Performance and evolution of the DAQ system of the CMS experiment for Run-2 174 . . . . . . . . . . 112
Upgrading and Expanding Lustre Storage for use with the WLCG 175 . . . . . . . . . . 113
XRootD Popularity on Hadoop Clusters 176 . . . . . . . . . . 114
The Vacuum Platform 177 . . . . . . . . . . 114
The Machine/Job Features mechanism 178 . . . . . . . . . . 115
Integration of grid and local batch system resources at DESY 179 . . . . . . . . . . 116
Web Proxy Auto Discovery for WLCG 180 . . . . . . . . . . 116
Visualization of historical data for the ATLAS detector controls 181 . . . . . . . . . . 117
The detector read-out in ALICE during Run 3 and 4 182 . . . . . . . . . . 118
Grid Access with Federated Identities 183 . . . . . . . . . . 119
Recent progress of Geant4 electromagnetic physics for LHC and other applications 184 . . . . . . . . . . 119
Scaling Up a CMS Tier-3 Site with Campus Resources and a 100 Gb/s Network Connection: What Could Go Wrong? 185 . . . . . . . . . . 120
Facilitating the deployment and exploitation of HEP Phenomenology codes using INDIGO-Datacloud tools 186 . . . . . . . . . . 121
C2MON: a modern open-source platform for data acquisition, monitoring and control 187 . . . . . . . . . . 122
ARC CE cache as a solution for lightweight Grid sites in ATLAS 188 . . . . . . . . . . 123
Exploiting Opportunistic Resources for ATLAS with ARC CE and the Event Service 189 . . . . . . . . . . 123
Evolution of user analysis on the Grid in ATLAS 190 . . . . . . . . . . 124
AthenaMT: Upgrading the ATLAS Software Framework for the Many-Core World with Multi-Threading 191 . . . . . . . . . . 125
An Oracle-based Event Index for ATLAS 192 . . . . . . . . . . 125
Collecting conditions usage metadata to optimize current and future ATLAS software and processing 193 . . . . . . . . . . 126
Integration of the Titan supercomputer at OLCF with the ATLAS Production System 194 . . . . . . . . . . 127
First use of LHC Run 3 Conditions Database infrastructure for auxiliary data files in ATLAS 195 . . . . . . . . . . 128
Multi-threaded ATLAS Simulation on Intel Knights Landing Processors 196 . . . . . . . . . . 128
The Fast Simulation Chain for ATLAS 197 . . . . . . . . . . 129
Integration of the Chinese HPC Grid in ATLAS Distributed Computing 198 . . . . . . . . . . 130
How to keep the Grid full and working with ATLAS production and physics jobs 199 . . . . . . . . . . 130
A study of data representations in Hadoop to optimize data storage and search performance of the ATLAS EventIndex 200 . . . . . . . . . . 131
AGIS: Integration of new technologies used in ATLAS Distributed Computing 201 . . . . . . . . . . 131
A Roadmap to Continuous Integration for ATLAS software development 202 . . . . . . . . . . 132
ATLAS Metadata Interface (AMI), a generic metadata framework 204 . . . . . . . . . . 133
Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins 205 . . . . . . . . . . 133
ATLAS Fast Physics Monitoring: TADA 206 . . . . . . . . . . 134
A new mechanism to persistify the detector geometry of ATLAS and serving it through an experiment-agnostic REST API 207 . . . . . . . . . . 135
ATLAS Distributed Computing experience and performance during the LHC Run-2 208 . . . . . . . . . . 135
Benefits and performance of ATLAS approaches to utilizing opportunistic resources 209 . . . . . . . . . . 136
DIRAC universal pilots 211 . . . . . . . . . . 137
The ATLAS Computing Agora: a resource web site for citizen science projects 212 . . . . . . . . . . 138
HiggsHunters - a citizen science project for ATLAS 213 . . . . . . . . . . 139
Big Data Analytics Tools as Applied to ATLAS Event Data 215 . . . . . . . . . . 139
Event visualisation in ATLAS: current software technologies / future prospects and trends 216 . . . . . . . . . . 140
DIRAC in Large Particle Physics Experiments 217 . . . . . . . . . . 141
ATLAS Data Preparation in Run 2 218 . . . . . . . . . . 142
Caching Servers for ATLAS 219 . . . . . . . . . . 143
An Analysis of Reproducibility and Non-Determinism in HEP Software and ROOT Data 220 . . . . . . . . . . 143