Machine Learning Pipelines with Modern Big Data Tools for High Energy Physics

M. Migliorini (1,2), R. Castellotti (1), L. Canali (1,*), and M. Zanetti (2)

(1) European Organization for Nuclear Research (CERN), Geneva, Switzerland
(2) University of Padova, Padua, Italy
(*) [email protected]

arXiv:1909.10389v5 [cs.DC] 16 Jun 2020

Abstract

The effective utilization at scale of complex machine learning (ML) techniques for HEP use cases poses several technological challenges, most importantly on the actual implementation of dedicated end-to-end data pipelines. A solution to these challenges is presented, which allows training neural network classifiers using solutions from the Big Data and data science ecosystems, integrated with tools, software, and platforms common in the HEP environment. In particular, Apache Spark is exploited for data preparation and feature engineering, running the corresponding (Python) code interactively on Jupyter notebooks. Key integrations and libraries that make Spark capable of ingesting data stored in the ROOT format and accessed via the XRootD protocol are described and discussed. Training of the neural network models, defined using the Keras API, is performed in a distributed fashion on Spark clusters using BigDL with Analytics Zoo, and also using TensorFlow, notably for distributed training on CPU and GPU resources. The implementation and the results of the distributed training are described in detail in this work.

Introduction

High energy physics (HEP) experiments like those at the Large Hadron Collider (LHC) are paramount examples of "big data" endeavors: chasing extremely rare physics processes requires producing, managing and analyzing large amounts of complex data. The data processing throughput of those experiments is expected to exceed 200 TB/s in 2025 with the upgrade of the LHC (the HL-LHC project), which, after tight online filtering, implies saving 100 PB of data per year on permanent storage. Thorough and complex data processing enables major scientific achievements, like the discovery of the Higgs boson in 2012. Data ingestion, feature engineering, data reduction and classification are complex tasks, each requiring advanced techniques to be accomplished. While custom solutions have been employed so far, recent developments of open source tools are making the latter compelling options for HEP-specific use cases. Furthermore, physics data analysis is profiting to a large extent from modern Machine Learning (ML) techniques, which are revolutionizing each processing step, from physics object reconstruction (feature engineering), to parameter estimation (regression) and signal selection (classification). In this scope, Apache Spark [1] represents a very promising tool to extend the traditional HEP approach, by combining in a unique system powerful means for both sophisticated data engineering and Machine Learning. Among the most popular analytics engines for big data processing, Spark allows performing interactive analysis and data exploration through its mature data processing engine and API for distributed data processing, through its integration with cluster systems, and by featuring ML libraries that make it possible to train all common classifiers and regressors on large datasets in a distributed fashion.

It has been shown (e.g. in [2]) that Deep Learning can considerably boost the performance of physics data analysis, yielding remarkable results from larger sets of low-level (i.e. less "engineered") quantities. There is thus a significant interest in integrating Spark with tools, like BigDL [3], that allow distributed training of deep learning models.

The development of an end-to-end machine learning pipeline to analyze HEP data using Apache Spark is described in this paper. After briefly recalling the traditional data processing and analysis workflow in HEP, the specific physics use case addressed in this work is presented; the various steps of the pipeline are then described in detail, from data ingestion to model training, and the overall results are reported in the final section. The primary goal of this work is to reproduce the classification performance results of [4] using tools from the Big Data and data science ecosystems, showing that the proposed pipeline makes more efficient use of computing resources and provides a more productive interface for physicists, along all the steps of the processing pipeline.

Traditional Analysis Workflow and Tools

The backbone of the traditional HEP analysis workflow is ROOT [5], a multipurpose C++ toolkit developed at CERN implementing functionalities for I/O operations, persistent storage, statistical analysis, and data visualization. Data gathered by LHC experiments or produced by their simulation software are provided in ROOT format, with a file-based data representation and an event-based class structure with branches. The latter is a feature of paramount importance, as it provides the flexibility required to preserve the complexity of the recorded data, keeping track of the intrinsic dependencies among the physics objects of each collision event.

Centralized production systems orchestrate the data processing workflow, converting the raw information into higher-level quantities (data or feature engineering). Computing resources, organized worldwide in hierarchical tiers, are exploited using GRID protocols [6]. Although the centrally produced datasets may require some additional feature preparation, from this stage on the processing is analysis-dependent and is carried out by the users on batch systems. Machine Learning algorithms are executed either from within the ROOT framework (using TMVA [7], a toolkit for multivariate data analysis) or using more common open source frameworks (Keras/TensorFlow, PyTorch, etc.).
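As a minimal illustration of the event-based, branch-oriented access pattern described above, the following sketch loops over the entries of a tree using PyROOT, the Python bindings shipped with ROOT. The file, tree, and branch names are hypothetical and stand in for an experiment's actual event data model.

    # Minimal PyROOT sketch: iterate over the entries of an event tree.
    # "events.root", "Events" and "Muon_pt" are hypothetical names.
    import ROOT

    f = ROOT.TFile.Open("events.root")   # open a ROOT file
    tree = f.Get("Events")               # retrieve the event tree (TTree)
    for event in tree:                   # one entry per collision event
        # Branches are exposed as attributes of the entry; here, e.g.,
        # the transverse momenta of the muons reconstructed in the event.
        print(event.Muon_pt)
    f.Close()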
The Physics Use Case

Physicists primarily aim at distinguishing interesting collision events from uninteresting ones, the former being those associated with specific physics signals whose existence is sought or whose properties are worth studying. Typically, those signals are extremely rare and correspond to a tiny fraction of the whole dataset. Data analysis then results in a classification problem, where, in addition to the signal category, the background is often also split into several classes.

Out of the 40 million collisions produced by the LHC every second, only a small fraction (about 1000 events) can currently be stored by the online pipelines of the two omni-purpose detectors, CMS and ATLAS. A haphazard selection of those events would dilute the already rare signal processes, so efficient data classification needs to take place already online. Experiments implement complex trigger systems, designed to maximize the true-positive rate and minimize the false-positive rate, thus allowing effective utilization of computing resources both online and offline (e.g. processing units, storage, etc.).

The Machine Learning pipeline described in this paper addresses the same physics use case considered in the work by Nguyen et al. [4], where event topology classification, based on deep learning, is used to improve the purity of data samples selected at trigger level. The dataset is the result of a Monte Carlo event generation, where three different processes (categories) have been simulated: the inclusive production of a leptonically decaying W± boson, the pair production of a top-antitop pair (tt̄), and the hadronic production of multijet events. Variables of both low and high level are included in the dataset.
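The neural network models used in this pipeline are defined with the Keras API. Purely as an illustration of a classifier for this three-category problem, the sketch below defines a small feed-forward network over high-level features with a three-way softmax output; the input width and layer sizes are assumptions made for the sake of the example, not necessarily the architecture used in this work.

    # Illustrative Keras definition of a three-class event topology
    # classifier; the input width (14) and layer sizes are assumptions.
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense

    n_features = 14  # assumed number of high-level input features

    model = Sequential([
        Dense(50, activation="relu", input_shape=(n_features,)),
        Dense(20, activation="relu"),
        Dense(10, activation="relu"),
        Dense(3, activation="softmax"),  # W boson / ttbar / multijet
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])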
Data Pipeline For Machine Learning

Data pipelines are of paramount importance to make machine learning projects successful, by integrating the multiple components and APIs used across the entire data processing chain. A good data pipeline implementation can help to achieve analysis results faster by improving productivity and by reducing the amount of work and toil around core machine learning tasks. In particular, data pipelines are expected to provide solid tools for data processing, a task that ends up being one of the most time-consuming for physicists, and data scientists in general, when approaching data analysis problems. Traditionally, HEP has developed custom tools for data processing, which have been successfully used for decades. Recently, a large range of solutions for data processing and machine learning have become available from open source communities. The maturity and adoption of such solutions continue to grow both in industry and academia. Using software from open source communities comes with several advantages, including lowering the cost of development and support, and the possibility of sharing solutions and expertise with a large [...] and container-based environments.

Figure 1: Scheme of the data processing pipeline used for training the event topology classifier.

The four steps of the pipeline we built are (see also Figure 1):

• Data Ingestion, where we read data in ROOT format from the CERN EOS storage system into a Spark DataFrame and save the results as a large table stored in a set of Apache Parquet files (a code sketch for this step follows the list).

• Feature Engineering and Event Selection, where the Parquet files containing all the event details produced by Data Ingestion are filtered, and datasets with new features are produced.

• Parameter Tuning, where the hyperparameters of each model architecture are optimized by performing a grid search.

• Training, where the neural network models are trained on the full dataset.

In the next sections, we will describe each step of this pipeline in detail.
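As a concrete sketch of the first two steps, the snippet below reads ROOT files over the XRootD protocol into a Spark DataFrame using the spark-root library (one of the key integrations mentioned in the abstract), persists the events as Parquet, and applies a simple selection with a derived column. The file paths and column names are hypothetical, and spark-root is assumed to be available on the Spark classpath.

    # Sketch of Data Ingestion and Feature Engineering with PySpark.
    # Paths and column names are hypothetical examples.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import col

    spark = SparkSession.builder.appName("hep-pipeline").getOrCreate()

    # Data Ingestion: read ROOT files from EOS via the XRootD protocol
    # into a DataFrame (requires the spark-root data source).
    events = (spark.read
              .format("org.dianahep.sparkroot")
              .load("root://eospublic.cern.ch//eos/path/to/dataset/*.root"))

    # Save the events as a large table in a set of Apache Parquet files.
    events.write.mode("overwrite").parquet("hdfs:///path/to/events.parquet")

    # Feature Engineering and Event Selection: filter the ingested
    # events and derive new columns with DataFrame operations.
    ingested = spark.read.parquet("hdfs:///path/to/events.parquet")
    selected = (ingested
                .where(col("nMuon") > 0)
                .withColumn("lepton_pt_sum",
                            col("Muon_pt_sum") + col("Electron_pt_sum")))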

Data Source
