
CHEP 2016 Conference, San Francisco, October 8-14, 2016

Monday, 10 October 2016 - Friday, 14 October 2016
San Francisco Marriott Marquis

Book of Abstracts


Contents

Experiment Management System for the SND Detector 0 . . . . 1
Reconstruction software of the silicon tracker of DAMPE mission 2 . . . . 1
HEPData - a repository for high energy physics data exploration 3 . . . . 2
Reconstruction of Micropattern Detector Signals using Convolutional Neural Networks 4 . . . . 3
Federated data storage system prototype for LHC experiments and data intensive science 6 . . . . 3
BelleII@home: Integrate volunteer computing resources into DIRAC in a secure way 7 . . . . 4
Reconstruction and calibration of MRPC endcap TOF of BESIII 8 . . . . 5
RootJS: Node.js Bindings for ROOT 6 9 . . . . 6
C++ Software Quality in the ATLAS experiment: Tools and Experience 10 . . . . 6
An automated meta-monitoring mobile application and frontend interface for the WLCG computing model 11 . . . . 7

Experience of Google’s latest Deep Learning library, TensorFlow, with Docker in a WLCG cluster 12 . . . . 8
Flexible online monitoring for high-energy physics with Pyrame 13 . . . . 8
Simulation of orientational coherent effects via Geant4 14 . . . . 9
Detector control system for the AFP detector in ATLAS experiment at CERN 15 . . . . 10
The InfiniBand based Event Builder implementation for the LHCb upgrade 16 . . . . 11
JavaScript ROOT v4 17 . . . . 12
The evolution of monitoring system: the INFN-CNAF case study 18 . . . . 13
Statistical and Data Analysis Package in SWIFT 19 . . . . 13
Analysis Tools in Geant4 10.2 20 . . . . 14
Online & Offline Storage and Processing for the upcoming European XFEL Experiments 21 . . . . 15
Future approach to tier-0 extension 22 . . . . 15
Internal security consulting, reviews and penetration testing at CERN 23 . . . . 16
Web technology detection - for asset inventory and vulnerability management 24 . . . . 16
Integrating Containers in the CERN Private Cloud 25 . . . . 17
CERN Computing in Commercial Clouds 26 . . . . 18
Real-time complex event processing for cloud resources 27 . . . . 18
Benchmarking cloud resources 28 . . . . 19
RapidIO as a multi-purpose interconnect 29 . . . . 20
Evolution of CERN Print Services 30 . . . . 20
Windows Terminal Server Orchestration 31 . . . . 21
Update on CERN Search based on SharePoint 2013 33 . . . . 21
First experience with the new .CERN top level domain 34 . . . . 22
Flash is Dead. Finally. 35 . . . . 22
Access to WLCG resources: The X509-free pilot 36 . . . . 23
Deploying FTS with Docker Swarm and Openstack Magnum 37 . . . . 24
DNS Load Balancing in the CERN Cloud 38 . . . . 24
Customization of the general fitting tool genfit2 in PandaRoot 39 . . . . 25
A world-wide databridge supported by a commercial cloud provider 40 . . . . 26
DPM Evolution: a Disk Operations Management Engine for DPM 41 . . . . 27
Making the most of cloud storage - a toolkit for exploitation by WLCG experiments. 42 . . . . 27
EOS Cross Tier Federation 43 . . . . 28
EOS Developments 44 . . . . 28
Using HEP Computing Tools, Grid and Supercomputers for Genome Sequencing Studies 45 . . . . 29

Support Vector Machines in HEP 46 . . . . 30
Production Management System for AMS Remote Computing Centers 48 . . . . 31
Evolution of Monitoring System for AMS Science Operation Center 49 . . . . 31
Storage Strategy of AMS Science Data at Science Operation Center at CERN 50 . . . . 32
AMS Data Production Facilities at Science Operation Center at CERN 51 . . . . 33
Context-aware distributed cloud computing using CloudScheduler 52 . . . . 33
XrootdFS, a posix file system for XrootD 53 . . . . 34
Research and application of OpenStack in Chinese Spallation Neutron Source Computing environment 54 . . . . 34

Next Generation high performance, multi-dimensional scalable data transfer 55 . . . . 35
Globally Distributed Software Defined Storage 56 . . . . 36
An efficient, modular and simple tape archiving solution for LHC Run-3 57 . . . . 36
Data Center Environmental Sensor for safeguarding the CERN Data Archive 58 . . . . 37
PaaS for web applications with OpenShift Origin 59 . . . . 38
40-Gbps Data-Acquisition System for NectarCAM, a camera for the Medium Size Telescopes of the Cherenkov Telescope Array 60 . . . . 38

Model-independent partial wave analysis using a massively-parallel fitting framework 61 . . . . 39
CERN openlab Researched Technologies that Might Become Game Changers in Software Development 62 . . . . 40

CERN openlab Knowledge Transfer and Innovation Projects 64 . . . . 40
CERN data services for LHC computing 66 . . . . 41
CERN AFS Replacement project 67 . . . . 41
From Physics to industry: EOS outside HEP 68 . . . . 42
Big Data Analytics for the Future Circular Collider Reliability and Availability Studies 69 . . . . 42
CERNBox: the data hub for data analysis 70 . . . . 43
Tape SCSI monitoring and encryption at CERN 71 . . . . 44
Wi-Fi service enhancement at CERN 72 . . . . 44
A graphical performance analysis and exploration tool for Linux-perf 73 . . . . 45
The role of dedicated computing centers in the age of cloud computing 74 . . . . 45
Next Generation Monitoring 75 . . . . 46
The merit of data processing application elasticity 76 . . . . 47
Identifying memory allocation patterns in HEP software 77 . . . . 48
Finding unused memory allocations with FOM-tools 78 . . . . 48
SDN-NGenIA A Software Defined Next Generation integrated Architecture for HEP and Data Intensive Science 79 . . . . 49

A Comparison of Deep Learning Architectures with GPU Acceleration and Their Applications 80 . . . . 50

CERN’s Ceph infrastructure: OpenStack, NFS, CVMFS, CASTOR, and more! 81 . . . . 51
LHCb Kalman Filter cross architectures studies 82 . . . . 51
Unified Monitoring Architecture for IT and Grid Services 83 . . . . 52
Development of DAQ Software for CULTASK Experiment 84 . . . . 53
The new ATLAS Fast Calorimeter Simulation 85 . . . . 54
ATLAS computing on Swiss Cloud SWITCHengines 86 . . . . 55
Global EOS: exploring the 300-ms-latency region 87 . . . . 55
ATLAS and LHC computing on CRAY 88 . . . . 56
Analysis of empty ATLAS pilot jobs and subsequent resource usage on grid sites 89 . . . . 56
The ATLAS computing challenge for HL-LHC 90 . . . . 57
Networks in ATLAS 91 . . . . 58
Production Experience with the ATLAS Event Service 92 . . . . 58
Computing shifts to monitor ATLAS distributed computing infrastructure and operations 93 . . . . 59

Consolidation of Cloud Computing in ATLAS 95 . . . . 60
Giving pandas ROOT to chew on: experiences with the XENON1T Dark Matter experiment 96 . . . . 61

Advancing data management and analysis in different scientific disciplines 97 . . . . 62
Interfacing HTCondor-CE with OpenStack 98 . . . . 62
PODIO - Applying plain-old-data for defining physics data models 100 . . . . 63
Distributed Metadata Management of Mass Storage System in High Energy of Physics 102 . . . . 64
SCEAPI: A Unified Restful Web APIs for High-Performance Computing 103 . . . . 65
Message Queues for Online Reconstruction on the Example of the PANDA Experiment 104 . . . . 65
XROOT development update - support for metalinks and extreme copy 106 . . . . 66
The DAQ system for the AEgIS experiment 107 . . . . 67
The Muon Ionization Cooling Experiment Analysis User Software 108 . . . . 67
The future of academic computing security 109 . . . . 68
Optimizing ROOT’s Performance Using C++ Modules 110 . . . . 68
Enabling Federated Access for HEP 111 . . . . 69
Full and Fast Simulation Framework for the Future Circular Collider Studies 112 . . . . 70
FPGA based data processing in the ALICE High-Level Trigger in LHC Run 2 113 . . . . 71
Events visualisation in ALICE - current status and strategy for Run 3 114 . . . . 72
Kalman filter tracking on parallel architectures 115 . . . . 72
Performance of the CMS Event Builder 116 . . . . 73
An interactive and comprehensive working environment for high-energy physics software with Python and jupyter notebooks 117 . . . . 74

SND DAQ system evolution 118 . . . . 75
Using ALFA for high throughput, distributed data transmission in ALICE O2 system 119 . . . . 76
Design of the data quality control system for the ALICE O2 120 . . . . 76
A performance study of WebDav access to storages within the Belle II collaboration. 121 . . . . 77
A lightweight federation of the Belle II storages through dynafed 122 . . . . 78
Integration of Oracle and Hadoop: hybrid databases affordable at scale 123 . . . . 78
The ATLAS Production System Evolution. New Data Processing and Analysis Paradigm for the LHC Run2 and High-Luminosity 124 . . . . 79

Large scale software building with CMake in ATLAS 125 . . . . 80
Monitoring of Computing Resource Use of Active Software Releases at ATLAS 127 . . . . 81
Development and performance of track reconstruction algorithms at the energy frontier with the ATLAS detector 128 . . . . 82

Primary Vertex Reconstruction with the ATLAS experiment 130 . . . . 82
Using machine learning algorithms to forecast network and system load metrics for ATLAS Distributed Computing 131 . . . . 83

Automatic rebalancing of data in ATLAS distributed data management 132 . . . . 84
ATLAS Simulation using Real Data: Embedding and Overlay 133 . . . . 84
Metadata for fine-grained processing at ATLAS 135 . . . . 85
The ATLAS EventIndex General Dataflow and Monitoring Infrastructure 136 . . . . 86
ACTS: from ATLAS software towards a common track reconstruction software 137 . . . . 86
ATLAS software stack on ARM64 138 . . . . 87
Modernizing the ATLAS Simulation Infrastructure 139 . . . . 88
PanDA for ATLAS distributed computing in the next decade 140 . . . . 88
Memory handling in the ATLAS submission system from job definition to sites limits 141 . . . . 89
Evolution and experience with the ATLAS Simulation at Point1 Project 143 . . . . 90
C3PO - A Dynamic Data Placement Agent for ATLAS Distributed Data Management 144 . . . . 91
Rucio WebUI - The Web Interface for the ATLAS Distributed Data Management 145 . . . . 91
Object-based storage integration within the ATLAS DDM system 146 . . . . 92
Experiences with the new ATLAS Distributed Data Management System 147 . . . . 93
Rucio Auditor - Consistency in the ATLAS Distributed Data Management System 148 . . . . 93
How To Review 4 Million Lines of ATLAS Code 149 . . . . 94
Frozen-shower simulation of electromagnetic showers in the ATLAS forward calorimeters 150 . . . . 95

ATLAS World-cloud and networking in PanDA 151 . . . . 95
Data intensive ATLAS workflows in the Cloud 152 . . . . 96
Volunteer Computing Experience with ATLAS@Home 153 . . . . 97
Assessment of Geant4 Maintainability with respect to Software Engineering References 154 . . . . 97
Migrating the Belle II Collaborative Services and Tools 155 . . . . 98
ROOT and new programming paradigms 156 . . . . 99
Status and Evolution of ROOT 157 . . . . 100
Computing Performance of GeantV Physics Models 158 . . . . 101
Software Aspects of the Geant4 Validation Repository 159 . . . . 102
MPEXS: A CUDA MonteCarlo of the simulation of electromagnetic interactions 160 . . . . 103
Multi-threaded Geant4 on Intel Many Integrated Core architectures 161 . . . . 103
CEPHFS: a new generation storage platform for Australian high energy physics 162 . . . . 104
Evaluation of lightweight site setups within WLCG infrastructure 165 . . . . 105
The Cherenkov Telescope Array production system for Monte Carlo simulations and analysis 166 . . . . 106

Processing and Quality Monitoring for the ATLAS Tile Hadronic Calorimeter data 167 . . . . 107
Data acquisition and processing in the ATLAS Tile Calorimeter Phase-II Upgrade Demonstrator 168 . . . . 108

Validation of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV project 169 . . . . 109

Stochastic optimisation of GeantV code by use of genetic algorithms 170 . . . . 109
GeantV phase 2: developing the particle transport library 171 . . . . 110
New Directions in the CernVM File System 172 . . . . 111
Toward a CERN Virtual Visit Service 173 . . . . 111
Performance and evolution of the DAQ system of the CMS experiment for Run-2 174 . . . . 112
Upgrading and Expanding Lustre Storage for use with the WLCG 175 . . . . 113
XRootD Popularity on Hadoop Clusters 176 . . . . 114
The Vacuum Platform 177 . . . . 114
The Machine/Job Features mechanism 178 . . . . 115
Integration of grid and local batch system resources at DESY 179 . . . . 116
Web Proxy Auto Discovery for WLCG 180 . . . . 116
Visualization of historical data for the ATLAS detector controls 181 . . . . 117
The detector read-out in ALICE during Run 3 and 4 182 . . . . 118
Grid Access with Federated Identities 183 . . . . 119
Recent progress of Geant4 electromagnetic physics for LHC and other applications 184 . . . . 119
Scaling Up a CMS Tier-3 Site with Campus Resources and a 100 Gb/s Network Connection: What Could Go Wrong? 185 . . . . 120

Facilitating the deployment and exploitation of HEP Phenomenology codes using INDIGO-Datacloud tools 186 . . . . 121

C2MON: a modern open-source platform for data acquisition, monitoring and control 187 . . . . 122
ARC CE cache as a solution for lightweight Grid sites in ATLAS 188 . . . . 123
Exploiting Opportunistic Resources for ATLAS with ARC CE and the Event Service 189 . . . . 123
Evolution of user analysis on the Grid in ATLAS 190 . . . . 124
AthenaMT: Upgrading the ATLAS Software Framework for the Many-Core World with Multi-Threading 191 . . . . 125

An Oracle-based Event Index for ATLAS 192 . . . . 125
Collecting conditions usage metadata to optimize current and future ATLAS software and processing 193 . . . . 126

Integration of the Titan supercomputer at OLCF with the ATLAS Production System 194 . . . . 127
First use of LHC Run 3 Conditions Database infrastructure for auxiliary data files in ATLAS 195 . . . . 128

Multi-threaded ATLAS Simulation on Intel Knights Landing Processors 196 . . . . 128
The Fast Simulation Chain for ATLAS 197 . . . . 129
Integration of the Chinese HPC Grid in ATLAS Distributed Computing 198 . . . . 130
How to keep the Grid full and working with ATLAS production and physics jobs 199 . . . . 130
A study of data representations in Hadoop to optimize data storage and search performance of the ATLAS EventIndex 200 . . . . 131

AGIS: Integration of new technologies used in ATLAS Distributed Computing 201 . . . . 131
A Roadmap to Continuous Integration for ATLAS software development 202 . . . . 132
ATLAS Metadata Interface (AMI), a generic metadata framework 204 . . . . 133
Deploying the ATLAS Metadata Interface (AMI) on the cloud with Jenkins 205 . . . . 133
ATLAS Fast Physics Monitoring: TADA 206 . . . . 134
A new mechanism to persistify the detector geometry of ATLAS and serving it through an experiment-agnostic REST API 207 . . . . 135

ATLAS Distributed Computing experience and performance during the LHC Run-2 208 . . . . 135
Benefits and performance of ATLAS approaches to utilizing opportunistic resources 209 . . . . 136
DIRAC universal pilots 211 . . . . 137
The ATLAS Computing Agora: a resource web site for citizen science projects 212 . . . . 138
HiggsHunters - a citizen science project for ATLAS 213 . . . . 139
Big Data Analytics Tools as Applied to ATLAS Event Data 215 . . . . 139
Event visualisation in ATLAS: current software technologies / future prospects and trends 216 . . . . 140

DIRAC in Large Particle Physics Experiments 217 . . . . 141
ATLAS Data Preparation in Run 2 218 . . . . 142
Caching Servers for ATLAS 219 . . . . 143
An Analysis of Reproducibility and Non-Determinism in HEP Software and ROOT Data 220 . . . . 143
