
Experimental Operations Plan for the ICARUS Experiment
(SBN-FD, E-1052)
Fermi National Accelerator Laboratory
V1.0: November 30, 2020

Contents

1 Introduction
2 Science Goals
3 Experimental Design
3.1 Booster Neutrino Beam
3.2 NuMI Beam
3.3 ICARUS Detector
3.3.1 LAr TPCs
3.3.2 LAr Scintillation Light Detection System
3.3.3 Light Detector System Trigger
3.3.4 Cryogenics and Purification
3.3.5 Cosmic Ray Tagger
3.3.6 Overburden
3.4 Data Acquisition
4 Organization and Governance
4.1 Technical Working Groups
4.2 Technical Board
4.3 Detector Operations Group
4.3.1 Shifts
4.4 Data Reconstruction and Analysis Group
5 Risk Analysis
5.1 Drift HV System Risk Analysis
5.2 TPC Readout Electronics and Wire Bias Risk Analysis
5.3 Trigger System Risk Analysis
5.4 Light Detection System Risk Analysis
5.5 CRT Risk Analysis
5.6 Detector Electrical Infrastructure Risk Analysis
5.7 DAQ Risk Analysis
5.8 Detector Control System Risk Analysis
5.9 Online Computing and Networking Risk Analysis
5.10 Cryogenic Plant Risk Analysis
6 Fermilab Roles and Resources
6.1 Accelerator Division
6.2 Neutrino Division
6.2.1 Technical Support Department
6.3 Fermilab Scientific and Core Computing Divisions
6.4 Environment, Safety, Health & Quality Section
6.5 Particle Physics Division
7 INFN Computing Center - CNAF
8 Personnel Resource Summaries
9 Spares
10 Budget
10.1 Detector Operations Budget
10.2 ICARUS Computing Budget
11 Run Plan
Appendix A: Collaboration Institutional Responsibilities
Appendix B: ICARUS Detector Spares


1 INTRODUCTION

Fermilab experiment E-1052, ICARUS, is the Far Detector for the Short-Baseline Neutrino (SBN) program in the Booster Neutrino Beam (BNB) line. It comprises a 600-ton active mass liquid argon time-projection chamber with ionization charge and scintillation light detection for the reconstruction of neutrino interactions, surrounded by a plastic scintillator cosmic ray tagger. The primary science goal of the experiment, in association with the SBND detector, is a search for evidence of eV-scale sterile neutrinos through the simultaneous measurement of the muon-neutrino and electron-neutrino rates in the BNB. It will also look for evidence of higher-mass-scale sterile neutrinos exhibiting flavor oscillations on much shorter distance scales, search for dark matter candidates produced in the BNB and NuMI beams, and make cross-section measurements with multi-GeV neutrinos produced by the NuMI target.

2 SCIENCE GOALS

The experimental observation of three-neutrino flavor oscillations with non-vanishing mass eigenstates today represents the main evidence of a new phenomenology beyond the Standard Model (SM). Nevertheless, several experimental “anomalies,” consistent with a new oscillation frequency well above the well-established atmospheric and solar scales, have been observed at accelerators, in low-energy anti-neutrino experiments at nuclear reactors, and in gallium-based experiments exploiting mega-Curie radioactive νe sources that were originally designed to detect solar neutrinos.

The original experimental motivation for considering an additional contribution to flavor oscillations came from the Liquid Scintillator Neutrino Detector (LSND) collaboration, which in 2001 reported a 3.8σ excess of anti-νe events from anti-νμ produced by a muon decay-at-rest source. This result is consistent with oscillations at a characteristic mass splitting of Δm² > 0.1 eV², and inconsistent with the well-established atmospheric and solar mass splittings. More than a decade later, the MiniBooNE experiment explored a similar high-Δm² parameter space with both neutrinos and anti-neutrinos from pion decay-in-flight, using scintillation and Cherenkov light detected by photomultipliers immersed in mineral oil. In 2018 the collaboration reported a 4.7σ excess of events in the combination of its νμ → νe and anti-νμ → anti-νe searches. The results, in both neutrino and anti-neutrino beam exposures, are dominated by the “low-energy excess” of electron-flavor-like events at energies of 200–475 MeV.

The LSND and MiniBooNE results hint at the existence of at least one new neutrino mass eigenstate which, based on LEP measurements of the Z boson width, would have to be “sterile” with regard to its SM interactions. Sensitive searches have also been performed by other experiments with null results. Most notably, KARMEN, ICARUS, and OPERA did not observe electron-flavor appearance from a muon-flavor source at short and long baselines, restricting the allowed region in the mass and mixing parameter space. In contrast to the anomalous appearance results, there are no observed anomalies in disappearance experiments, even though such disappearance is generally expected in models involving one or more light sterile neutrinos.

Additional excitement has been generated recently by the Neutrino-4 experiment operating at the SM-3 nuclear reactor. The collaboration presented a new and remarkable result¹ claiming the first direct experimental observation of neutrino oscillations due to an additional, sterile, neutrino. This was observed as a periodic disappearance in the positron-plus-neutron channel as a function of L/E, where L is the distance travelled and E is the energy of the incoming anti-νe.

In light of these anomalous and conflicting results, a definitive experimental clarification of the observed anomalies is essential. If there is evidence for new physics, a comprehensive set of measurements will be needed to compare the data to the underlying physics models. While reactor and source experiments will likely be able to provide a sensitive test of electron-flavor disappearance in the near future, definitive probes of electron-flavor appearance from a muon-flavor source and of muon-flavor disappearance are likely only possible using neutrinos from a particle accelerator.

The primary goal of the FNAL Short-Baseline Neutrino physics program is to definitively resolve the experimental anomalies and to perform the most sensitive search to date for sterile neutrinos at the eV mass scale, manifested by oscillations in the L/E distributions in both the appearance and disappearance channels. The oscillation signal will be searched for by directly comparing the neutrino event spectra measured at different distances from the source. In the absence of “anomalies,” the detectors’ signals, adjusted for distance and geometrical effects, should be very similar for all experimental signatures, with an almost complete cancellation of the common beam and detector uncertainties. To reach 5σ sensitivity over the LSND allowed 99% C.L. region for νμ → νe appearance, the SBN program requires an exposure of 6.6×10²⁰ protons on target (POT) in the Booster Neutrino Beam for the SBND and ICARUS detectors.

An important early physics program, which can be performed standalone by ICARUS, is the investigation of the Neutrino-4 signal, characterized by a large Δm²₁₄ = 7.25 ± 1.09 eV² and amplitude sin²(2θ₁₄) = 0.26 ± 0.09. This study will be performed by searching for a disappearance anomaly in both the BNB νμ and the NuMI off-axis νe spectra. In this search ICARUS will look for an anomalous oscillatory pattern in L/E, in analogy to the Neutrino-4 analysis. Sufficient candidate event statistics to verify the Neutrino-4 claim could be collected before the start of joint operation with the near detector.

In addition, ICARUS will study neutrino-argon cross sections in the energy range of interest for the LBNF/DUNE project and will perform a dark matter search with the off-axis NuMI beam. Moreover, the off-axis NuMI beam will offer the opportunity to study an enriched sample of νe charged-current events in liquid argon and to assess the event reconstruction efficiency and residual background rejection for DUNE.

1 JETP Lett. 112 (2020) 4, 199-212.
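For orientation, in a "3+1" framework both the appearance and disappearance searches use the standard two-flavor short-baseline oscillation probabilities (a textbook result, quoted here for reference rather than taken from this document):

\[
P_{\nu_\mu \to \nu_e} = \sin^2(2\theta_{\mu e})\,\sin^2\!\left(1.27\,\frac{\Delta m^2_{14}\,[\mathrm{eV^2}]\;L\,[\mathrm{m}]}{E\,[\mathrm{MeV}]}\right),
\qquad
P_{\nu_e \to \nu_e} = 1 - \sin^2(2\theta_{14})\,\sin^2\!\left(1.27\,\frac{\Delta m^2_{14}\,L}{E}\right).
\]

For the Neutrino-4 best-fit Δm²₁₄ ≈ 7.25 eV², the first disappearance maximum for a 600 MeV neutrino falls at L ≈ 100 m, so at ICARUS baselines the pattern is rapid in L/E; this is the oscillatory behavior the search looks for.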

3 EXPERIMENTAL DESIGN

3.1 Booster Neutrino Beam

The Booster Neutrino Beam (BNB) is created by extracting protons from the Booster accelerator at 8 GeV kinetic energy (8.89 GeV/c momentum) and impacting them on a 1.7λ beryllium target to produce a secondary beam of hadrons, mainly pions. Charged secondaries are focused by a single toroidal aluminum-alloy focusing horn that surrounds the target. The horn is supplied with a current of 174 kA in 143 μs pulses coincident with proton delivery. The horn can be pulsed with either polarity, thus focusing either positive or negative particles and defocusing the opposite charge sign. Focused mesons are allowed to propagate down a 50 m long, 0.91 m radius air-filled tunnel where the majority of the particles decay to produce muon and electron neutrinos. The remainder are absorbed in a concrete and steel absorber at the end of the decay region. Suspended above the decay region at 25 m are concrete and steel plates that can be deployed to reduce the available decay length, thus systematically altering the neutrino fluxes.

The timing structure of the delivered proton beam is an important aspect for the physics program. The Booster spill length is 1.6 μs, during which, nominally, 5×10¹² protons are delivered onto the beryllium target. The main Booster RF is operated at 52.8 MHz, with some 81 buckets filled out of 84. The beam is extracted into the BNB using a fast-rising kicker that extracts all of the particles in a single turn. The resulting structure is a series of 81 bunches of protons, each ∼2 ns wide and 19 ns apart. While the operating rate of the Booster is 15 Hz, the maximum allowable average spill delivery rate to the BNB is 5 Hz, set by the design of the horn and its power supply.

As presented in the SBN proposal², at the nominal BNB intensity there will be one neutrino interaction with a vertex in the ICARUS LAr-TPCs for every 180 spills. Hence, the total approved exposure of 6.6×10²⁰ POT from the Booster beam should provide a sample of ~700,000 neutrino events.

2 arXiv:1503.01520.
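These quoted values are mutually consistent, as a quick check shows (the only input not stated above is the proton mass, m_p ≈ 0.938 GeV/c²):

\[
p = \sqrt{T^2 + 2\,T\,m_p} = \sqrt{8^2 + 2\times 8\times 0.938}\ \mathrm{GeV}/c \approx 8.89\ \mathrm{GeV}/c,
\]
\[
\Delta t_{\mathrm{bunch}} = \frac{1}{52.8\ \mathrm{MHz}} \approx 18.9\ \mathrm{ns},
\qquad
t_{\mathrm{spill}} = \frac{84}{52.8\ \mathrm{MHz}} \approx 1.6\ \mu\mathrm{s},
\]
\[
N_\nu \approx \frac{6.6\times 10^{20}\ \mathrm{POT}}{(5\times 10^{12}\ \mathrm{POT/spill})\times(180\ \mathrm{spills/interaction})} \approx 7.3\times 10^{5},
\]

matching the ~19 ns bunch spacing, the 1.6 μs spill, and the ~700,000-event sample quoted above.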

3.2 NuMI Beam

The beam of neutrinos from the NuMI (Neutrinos at the Main Injector) facility is generated by focusing 120 GeV protons from the Main Injector onto a graphite target. This interaction produces mesons (pions and kaons) as well as other particles. Two magnetic focusing horns located next to the target are pulsed with a 200 kA current, yielding a maximum 30 kG toroidal field; the horns focus positively charged particles and defocus negatively charged particles. A change of current polarity through the horns produces an antineutrino beam. The focused mesons decay in a pipe filled with helium at 1 atm pressure, 675 m in length and 2 m in diameter, producing the neutrino beam. At the end of the decay pipe is a hadron monitor, followed by an absorber to monitor and stop the remnant hadrons. The absorber is followed by about 240 m of rock to stop the muons, leaving only neutrinos³.

The ICARUS detector is located on the surface at Fermilab, off-axis from the NuMI beam at an angle of 103 mrad. Figure 1 shows the flux predictions for electron and muon neutrinos. In the active volume region, we expect one neutrino event every 13 spills. For 6×10²⁰ POT on the NuMI target, we expect about 600,000 muon-neutrino events and 28,000 electron-neutrino events.

3 NIM A 806, 279 (2016).

Figure 1: Muon- and electron-neutrino fluxes (ν/GeV/m²/POT versus neutrino energy in GeV) from NuMI at the ICARUS detector.

3.3 ICARUS Detector

3.3.1 LAr TPCs

The ICARUS-T600 detector consists of a large cryostat containing two identical adjacent modules with internal dimensions 3.6 × 3.9 × 19.6 m³, with the long dimension oriented along the beam direction. The modules are filled with about 760 tons of ultra-pure liquid argon, continuously purified to minimize absorption of ionization electrons by electronegative impurities. Each module houses two TPCs separated by a central, vertical, common cathode running the length of the module. A uniform electric field (E_D ≈ 500 V/cm), perpendicular to the beam direction, preserves the relative positions of ionization electrons produced by high-energy charged particles as they drift toward the corresponding TPC readout plane.

Each TPC has three parallel readout wire planes, 3 mm apart, facing the 1.5 m deep drift volume. The wire pitch is 3 mm for all planes. The first plane (Induction-1) has horizontal wires, while the intermediate plane (Induction-2) and the third plane (Collection) have wires at ±60° with respect to the horizontal direction. The maximum wire length is 18 m for Induction-1, split into two separate 9 m wires, each read out by its own electronics channels. Each TPC has a total of 13,312 wires; altogether there are 53,248 wires in the four identical TPCs that comprise the T600. By appropriate voltage biasing of the three wire planes, full transparency of the first two planes (Induction-1 and Induction-2) is achieved.

New compact⁴ electronics in mini-crates mounted directly onto the feed-through flanges have been designed and installed. Each custom mini-crate serves 576 wires. In total, 96 mini-crates were installed and connected to the data acquisition system (DAQ) through optical fibers. The TPC system, including high voltage, bias voltages, and readout electronics, is fully installed and all channels are being read out.

4 The mini-crates occupy about sixty times less volume than the rack-mounted electronics used during operation at Gran Sasso.
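A useful derived number, assuming a typical electron drift velocity in LAr of about 1.6 mm/μs at 500 V/cm (a standard literature value, not quoted in this document):

\[
t_{\mathrm{drift}}^{\max} \approx \frac{1.5\ \mathrm{m}}{1.6\ \mathrm{mm}/\mu\mathrm{s}} \approx 0.95\ \mathrm{ms},
\]

the maximum electron drift time across the 1.5 m drift volume, comfortably contained in the 1.4 ms readout window captured around each trigger (see Section 3.4).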


3.3.2 LAr Scintillation Light Detection System

The T600 light detection system consists of 360 Hamamatsu R5912-MOD 8” diameter PMTs deployed behind each of the four TPC wire chambers (90 PMTs observing each TPC). The sensitive window of each PMT is coated with about 200 μg/cm² of Tetra-Phenyl Butadiene (TPB) to convert the VUV LAr scintillation photons (λ = 128 nm) into visible light, which is detected by the PMT. Mechanical supports were designed to hold each PMT a few millimeters behind the TPC wire planes. Each device is set inside a grounded stainless-steel cage to prevent the induction of PMT pulses on the nearby wire plane. A 50-μm diameter optical fiber directed towards the sensitive surface allows timing calibration with nanosecond precision by means of fast laser pulses (Hamamatsu PLP10 laser diode, 60 ps FWHM, 120 mW peak power, emission at 405 nm).

The readout electronics is designed to allow continuous readout, digitization, and independent waveform recording of the PMT signals. This is performed by 24 V1730B digitizers, each consisting of a 16-channel, 14-bit, 500-MSa/s flash ADC. During acquisition, the data stream of each channel is continuously written every 2 ns into a circular memory buffer of 5 kSa, corresponding to a 10 μs waveform segment (5,000 samples × 2 ns), allowing the recording of signals from both the fast and slow components of the LAr scintillation light. The PMT system, including the readout, is fully operational. The laser calibration system is also installed and operational but needs some minor software work for remote control of the system.

3.3.3 Light Detector System Trigger

The amplitude of the prompt signals from the fast component of the scintillation light is exploited for trigger purposes (described in SBN-docDB 14145⁵): the V1730B boards generate a pattern of digital pulses (200 ns, LVDS logic standard) mapping the PMT signals that exceed digitally programmed thresholds, typically set to a few photoelectrons. The trigger system exploits the coincidence of the PMT signals with a beam gate window generated in correspondence with the expected arrival time of neutrinos in the T600 from both the BNB and NuMI beams. A trigger signal to start the data acquisition of the TPC, PMT, and CRT systems is generated when the number of coincident PMT signals exceeds a majority level. Logic processing is performed by programmable FPGA units (NI PXIe-7820), while a SPEXI board by INCAA Computers is used to generate the beam coincidence windows from “early warning” beam information on the proton spill extraction, distributed by the White Rabbit protocol. The trigger electronics is completed by a Real-Time (RT) controller (NI PXIe-8135) for control and communication.

The trigger system hardware has been fully deployed. The required firmware and LabVIEW code are being implemented in steps. The functionality of the individual boards has been tested: the phase locking of the clocks generated by the SPEXI board to a common time reference and the correct decoding of the beam extraction signal have been successfully assessed. Final code for cosmic-ray event recording, as well as for the BNB and NuMI neutrino beams, is in preparation.

5 https://sbn-docdb.fnal.gov/cgi-bin/sso/ShowDocument?docid=14145
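For illustration only, the sketch below is a minimal software model of the majority coincidence described above. The threshold and majority values are example numbers (the real thresholds are programmed into the V1730B boards), and the actual logic runs in FPGA firmware, not in Python:

```python
# Minimal model of the PMT majority trigger: a global trigger fires when
# enough PMTs exceed a few-photoelectron threshold inside the beam gate.

def majority_trigger(pmt_waveforms_pe, beam_gate_open,
                     threshold_pe=3.0, majority_level=5):
    """pmt_waveforms_pe: one amplitude sequence (in photoelectrons) per PMT."""
    n_fired = sum(1 for wf in pmt_waveforms_pe if max(wf) >= threshold_pe)
    return beam_gate_open and n_fired >= majority_level

# Example: 8 of 90 PMTs see a >3 p.e. pulse during the BNB beam gate.
waveforms = [[0.2, 4.5, 1.1]] * 8 + [[0.3, 0.8, 0.2]] * 82
print(majority_trigger(waveforms, beam_gate_open=True))   # -> True
print(majority_trigger(waveforms, beam_gate_open=False))  # -> False
```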

3.3.4 Cryogenics and Purification

To support operation of the ICARUS LAr-TPC, a cryogenic plant was designed, built, and installed at Fermilab by a scientific collaboration of three international institutions: CERN, INFN, and Fermilab. The overall cryogenic plant is divided conceptually into three cryogenic systems, External, Proximity, and Internal, as shown schematically in Fig. 2.

Figure 2: Schematic of the ICARUS cryogenics system.

The External system, delivered by Fermilab, is responsible for the storage and supply of cryogens, venting of gases, gas analysis, regeneration of filtration media, electrical power, and the process and safety control systems. The Proximity system, delivered by CERN, is responsible for all transport and distribution of liquid argon and nitrogen, removal of impurities from the argon, and recirculation and filtration of boil-off argon gas. The Internal cryogenic system, delivered by INFN and Fermilab, is responsible for interfacing the internal volume of the cryostats with the cryogenic plant, the LN2 shields, and the measurement of argon temperatures and levels. Table 1 lists the requirements for the cryogenic plant.


Installation of the cryogenic system, including monitoring software, is complete, and the system is operating stably.

At present (Nov. 2020), the free electron lifetime in both modules is around 1.2 ms; the corresponding contamination of electronegative impurities is 0.25 ppb of O2-equivalent molecules (contaminants with the same electron-capture cross-section as O2 molecules). This value does not yet reach the target specified for the argon purification and recirculation system (free electron lifetime > 3 ms; electronegative impurity contamination < 0.1 ppb O2 equivalent) and may affect the efficiency of neutrino detection and reconstruction. Tests indicate that the limited free electron lifetime in the bulk LAr is likely due to saturation of the argon gas recirculation filters earlier than anticipated. This is a consequence of their size, which is limited by the available space, and of the fact that they operate at LAr temperature, where the adsorption capacity of copper is reduced by a factor of 10 with respect to room temperature. In the near term, the mitigation is to regenerate each of the four gas filters every 2-3 months. A plan has been developed for the implementation of additional warm filters that would increase the filtering capacity of the gas recirculation units by about a factor of 20. It is hoped to install this system early in 2021.
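The effect of the electron lifetime on the collected charge follows the usual exponential attenuation, Q = Q₀ e^(−t_d/τ). Taking t_d ≈ 1 ms for a full drift (our estimate from the geometry in Section 3.3.1, not a number quoted here):

\[
\left.\frac{Q}{Q_0}\right|_{\tau = 1.2\ \mathrm{ms}} = e^{-1.0/1.2} \approx 0.43,
\qquad
\left.\frac{Q}{Q_0}\right|_{\tau = 3\ \mathrm{ms}} = e^{-1.0/3.0} \approx 0.72,
\]

i.e., reaching the target lifetime would reduce the charge loss for the longest drifts from roughly 57% to below 30%.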

Table 1: Requirements for the cryogenic plant.

Store and supply cryogens: Provide storage of LN2 (up to 75,000 L) and LAr (30,000 L) and support their uninterrupted supply to the cryogenic plant.

LN2 system: Provide circulation of LN2 with a Barber-Nichols pump for the cryostat shields, transfer lines, and valve boxes, as well as for re-condensing GAr from the cryostats.

LAr system: Provide circulation of LAr with Barber-Nichols pumps via filters at 2.5–8.0 m³/hr.

GN2 system: Provide purge of the cryostat insulation space.

GAr system: Provide collection and re-liquefaction of the boil-off GAr.

Regeneration system: Provide means of regenerating the filtration media (mol sieve and activated copper) at elevated temperatures and with an H2/Ar mix.

Control system: Provide process and safety controls with Siemens and Beckhoff PLCs and an iFix HMI.

Support operations: (1) vacuum cleanup (< 10⁻⁵ mbar); (2) pressurization for qualification tests; (3) initial cooldown to below 150 K with LN2 flowing through the shields while maintaining the cold vessels at an argon gas pressure of about 1.05 bara; (4) final cooldown by slow transfer of liquid argon into each of the cold vessels; (5) fill with LAr from the LAr storage; (6) stabilization and normal operations.

Control of the argon purity: Maintain purity at < 100 ppt O2 equivalent by filling the cryostat with purified argon via the fill filter, circulating LAr via the LAr filters at 2.5–8.0 m³/hr, and removing impurities from condensed ullage gas.

Control of the cryostat pressure: Normal operating pressure is 1.070 bara ± 5%, maintained with a combination of heat removal by the nitrogen shields and condensation of argon gas from the ullage with Ar/N2 condensers (2 per cryostat). Abnormal operating pressure is managed with vent valves (2 per cryostat), an emergency vent valve (1 per cryostat, at 250 mbarg), and magnetic disk reliefs (3 per cryostat).

3.3.5 Cosmic Ray Tagger

The Far Detector Cosmic Ray Tagger (CRT) consists of approximately 1000 m² of detector area providing almost 4π solid-angle coverage around the T600, with a tagging efficiency of more than 95%. It is divided into three sub-systems: the Side CRT for lateral coverage (walls), and the Top and Bottom CRT systems above and below the TPCs. Each system is composed of individual planar scintillator modules with embedded wavelength-shifting fibers, read out by silicon photomultipliers in the Top and Side CRT subsystems and by multi-anode photomultiplier tubes in the Bottom CRT. The readout electronics for the Top and Side CRTs is the same as that used for the SBND CRT system.

The Top CRT consists of 123 scintillator modules, each with two orthogonal layers of scintillators: 84 modules are placed horizontally and 39 vertically along the perimeter of the cryostat top surface. The Side CRT consists of 162 scintillator modules (refurbished MINOS experiment veto modules) arranged in two parallel layers for the West, East, and North planes, and in orthogonal directions for the South plane. The Bottom CRT is formed from 14 modules (donated by the Double Chooz experiment), each with two layers of parallel scintillator strips.

The Bottom CRT was installed before the warm vessel structure was in place. Installation of the side layers is partially completed, with three scintillator planes (North, West, and East) in the commissioning phase; the South plane installation is scheduled for January 2021. The installation of the Top CRT is to be staged in two periods, separated by a few months, to allow for the installation of services, such as air conditioning, lights, and fire protection, that are located below the horizontal support structures. The vertical support structures are planned to be installed in February 2021 by Fermilab personnel. If the appropriate experts from Europe are able to travel, the installation of the vertical scintillator planes would start in March 2021, with expected completion in 4 to 6 weeks. The horizontal support structure will be in place around April-May 2021. The work for the horizontal plane installation will begin once the services are in place, currently expected in July 2021.


3.3.6 Overburden

The final infrastructure component planned for the ICARUS detector is the concrete overburden that will fully cover the detector pit to reduce cosmic-ray backgrounds. The installation involves two separate components: structural concrete bridging beams that span the width of the pit, and concrete blocks installed on top of the bridging beams. The bridging beams, 42” tall, rest on the concrete ledge approximately 0.5 m above the top cosmic tagger. An additional 72” of concrete blocks will be stacked on top. The bridging beams were designed as part of the building design contract but remain to be procured. The remaining blocks have been salvaged from decommissioned beamline enclosures and are stored on the hardstand west of the MINOS building.
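For scale, the stated thicknesses sum to 42″ + 72″ = 114″ ≈ 2.9 m of concrete. Assuming a typical concrete density of about 2.3 g/cm³ (our assumption, not given in this document), the overburden corresponds to roughly

\[
290\ \mathrm{cm} \times 2.3\ \mathrm{g/cm^3} \approx 670\ \mathrm{g/cm^2} \approx 6.7\ \mathrm{m\ water\ equivalent}.
\]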

3.4 Data Acquisition

The Data Acquisition system (DAQ) for ICARUS is responsible for the online collection of the data from each of the ICARUS detector subsystems, combining all data corresponding to interactions in the detector during a triggered readout period.

The general flow of the DAQ is as follows. The trigger system, using inputs from the beam and timing systems and the PMT readout, forms a global trigger decision based on the coincidence of light with a neutrino beam spill. That trigger is propagated to the TPC readout crates across the detector, which capture 1.4 ms of data around the trigger time and transmit their data to TPC readout servers via optical links. Triggers are also propagated to the PMT readout crates, which likewise transmit their data to PMT readout servers via optical links. The CRT taggers do not receive a direct trigger, and instead stream recorded hits above threshold to dedicated CRT readout servers over CAT5 cables. Data on the trigger is also sent from the trigger crate through a dedicated Windows-based trigger computer, which sends data to the DAQ stream via UDP and TCP packets.

The sbndaq DAQ software applications run on the readout servers, with customized software for configuration of the readout components and collection of the data. The sbndaq software utilizes the common Fermilab-supported artdaq DAQ software framework, which further provides utilities for event building, online process management, and data quality monitoring. Events are built based on a combination of common trigger number (for trigger and TPC data) and event timestamps (for PMT and CRT data). Synchronization of the readout components is ensured through common distribution of a GPS pulse-per-second signal from the White Rabbit timing system.

As events are collected by EventBuilder applications running on the SBN-FD DAQ cluster, they are written to local disk and staged for transfer to Fermilab’s tape-backed central storage system via the common FTS software. The SBN-FD DAQ cluster has a total of ~180 TB of RAID 10 storage, which during normal operations will be sufficient for about a week’s worth of data, should any interruptions in data transfer occur.

The DAQ additionally includes tools for run control and DAQ application configuration, and an online data-quality monitoring system that runs offline algorithms and stores their results in a fast key-value store (Redis) database. Additional operational monitoring of the DAQ, including checks on total event rates, buffer occupancies, etc., is available from the artdaq software; it is reported to a time-series database (Graphite⁶) and monitored via a Grafana web interface.

The DAQ system is designed to accommodate a 15 Hz instantaneous event rate, with a 5 Hz average rate to match the maximum beam rate. In operations with the final trigger configuration, it is expected that all triggers (BNB, NuMI, and cosmic-based calibration triggers) will amount to an event rate of ~1 Hz. After compression, each event is currently ~140 MB in size; additional compression tools for implementation in firmware and software are being explored.

6 https://graphiteapp.org/
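The quoted week of local buffering is consistent with these numbers, as the quick estimate below shows; it assumes the ~180 TB of RAID 10 refers to raw disk, so that roughly half is usable (our assumption):

```python
# Back-of-the-envelope check of SBN-FD DAQ local buffering capacity,
# using the event size and trigger rate quoted in Section 3.4.
event_size_gb = 0.140         # ~140 MB per compressed event
trigger_rate_hz = 1.0         # expected total rate (BNB + NuMI + calibration)
usable_storage_tb = 180 / 2   # RAID 10 mirrors data: ~90 TB usable (assumption)

daily_volume_tb = event_size_gb * trigger_rate_hz * 86400 / 1000
print(f"{daily_volume_tb:.1f} TB/day")                    # ~12.1 TB/day
print(f"{usable_storage_tb / daily_volume_tb:.1f} days")  # ~7.4 days of buffer
```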

4 ORGANIZATION AND GOVERNANCE

The ICARUS collaboration consists (as of Nov. 2020) of 118 Ph.D. physicists, 21 engineers, and 25 graduate students from 23 institutions in 4 countries. The collaboration is governed under a set of by-laws (SBN-doc-18434-v1) that were adopted by the ICARUS Institutional Board (IB) on April 16, 2020. The IB consists of one representative from each collaborating institution. The IB approves and modifies the collaboration by-laws, admits new institutions and senior members to the collaboration, and sets shift, authorship, and publication policies. The organization chart is shown in Figure 3.

The scientific leadership of the ICARUS collaboration consists of a spokesperson and two deputy spokespersons. The founding spokesperson is Prof. Carlo Rubbia. The two deputy spokespersons are elected by the IB from candidates proposed by the spokesperson, for two-year terms. The deputies are responsible for the day-to-day management of the collaboration and represent the collaboration in the event the spokesperson is not available. The spokesperson and deputy spokespersons are advised by an Executive Committee, which is chaired by the spokesperson and includes the deputy spokespersons, the IB chair, and additional members appointed by the spokesperson.

The Editorial and Speakers Board (ESB) operates under rules approved by the IB (SBN-doc-17328). It is responsible for oversight and organization of most aspects of the collaboration related to its representation at conferences and the publication of results in professional journals.

During the operations phase of the experiment, the Technical Coordinator (TC) and the Run Coordinator (RC) serve as the lead technical and run managers for the experiment. They are appointed by the Spokesperson after consultation with the Institutional Board. A detailed description of the coordinator roles and responsibilities appears in the ICARUS Coordination Roles document, SBN-doc-17331. A Commissioner (who is also the Deputy Technical Coordinator) oversees the commissioning activities prior to the detector transitioning to stable operations. The Technical Coordinator chairs the Technical Board described in Section 4.2.

The Run Coordinator (RunCo) is charged with optimizing the operation of the ICARUS detector to meet the physics goals of the experiment. In consultation with the Deputy Spokespersons, the Run Coordinator directs and decides the priority and scheduling of detector systems development and maintenance. The RunCo is responsible for maintaining shift procedures and the systems-expert on-call list. The RunCo is the primary contact between the experiment and the Fermilab Main Control Room and is responsible for reports at the weekly All Experimenters’ Meeting. The RunCo will generally serve in this position for three months. A Deputy Run Coordinator will work with the RunCo for three months and then take over as RunCo.

The Physics Analysis Coordinator (PACo) serves as the coordinator and overseer of all analysis groups for the experiment. The PACo may establish analysis groups (AGs) and appoint AG leaders following consultation with the Spokesperson and Deputy Spokesperson(s). The PACo acts as a representative of the ICARUS collaboration in the SBN Joint Analysis Working Groups. The PACo is appointed by the Spokesperson after consultation with the Institutional Board.

Figure 3: ICARUS Collaboration organization chart

4.1 Technical Working Groups

The detector technical working groups (WGs) are responsible for carrying out the various activities necessary for the construction, commissioning, and operation of the detectors, for the collection and storage of the data, and for data reconstruction and analysis. Each WG has defined a set of deliverables and a corresponding time schedule. Each WG has a chairperson and, if needed, a co-chair, who are responsible for organizing the WG meetings. Chairs and co-chairs are members either of the Technical Board or of the Data Reconstruction and Analysis Group.

• TPC: responsible for the installation, commissioning, operation, and maintenance of the TPC readout electronics.
• Calibration: responsible for the time and energy calibration of the detectors.
• Drift HV: responsible for installation, commissioning, operation, and maintenance of the HV system for the drift field of the TPCs.
• PMT: responsible for installation, commissioning, operation, and maintenance of the LAr scintillation light detection system, including the laser calibration system.
• CRT: responsible for installation, commissioning, operation, and maintenance of the Cosmic Ray Tagger.
• DAQ: responsible for installation, commissioning, operation, and maintenance of the DAQ for all sub-detectors.
• Trigger: responsible for installation, commissioning, operation, and maintenance of the trigger system.
• Electrical System: responsible for overseeing all electrical equipment, including the power distribution systems and the grounding and ground-isolation systems, and for compliance with Fermilab regulations on electrical equipment. Relies on the Fermilab ND Technical Support Department.
• Cryogenics and Vacuum: responsible for installation, commissioning, operation, and maintenance of the cryogenic and argon purification systems. Relies on the Fermilab ND Technical Support Department.
• Detector Control System: responsible for installation, commissioning, operation, and maintenance of the detector slow control systems and of networking. Relies on the Fermilab ND Technical Support Department and the Computing Divisions.
• Documentation: responsible for designing and maintaining web-based interfaces to the archives of technical documents of the Collaboration.
• Data Management: responsible for the setup, organization, and maintenance of the infrastructure needed to collect, store, organize, back up, and transfer the data produced by the detectors.

4.2 Technical Board

The Technical Board (TB) is formed by the chairs and co-chairs of the technical working groups, the Operations Support Manager (who is also the collaboration safety point-of-contact), the Experiment Liaison Officer, and the Division Safety Officer. The TB is chaired by the Technical Coordinator. The Spokesperson and Deputies are invited to the meetings of the Technical Board.

During detector operation periods, the Technical Board typically meets monthly and is responsible for the high-level organization of technical activities on the detector, such as regular or extraordinary maintenance operations or upgrades. It is responsible for verifying the status of the spare components and for formulating the necessary requests for their procurement. It also defines the needs for technical resources to be provided by Fermilab.


Figure 4: ICARUS Technical Board organization chart.

The Beams (BNB and NuMI) Interface group is responsible for interfacing with the Accelerator Division on matters related to the neutrino beams. It is also responsible for ensuring and validating the beam early-warning trigger signal to the experiment through the White Rabbit system.

The Experiment Liaison Officer (ELO) serves as a line of communication between the ICARUS Collaboration and the Fermilab operations support groups inside the Neutrino Division, the Accelerator Division, and the Particle Physics Division. The ELO is charged with identifying the resource needs for operation, maintenance, and repair of equipment required for ICARUS operations. The Technical and Run Coordinators are charged with identifying the resource needs from within the Collaboration groups.

The ES&H Division Safety Officer (DSO) oversees all ICARUS activities where there are safety concerns. The DSO is the interface with the ES&H Division.


The Operations Support Manager (OSM) provides a broad range of operations support for the collaboration and provides continuity across the relatively short terms of the Run Coordinators. The OSM is the safety point-of-contact for the collaboration and is charged with working with the DSO and ELO to develop safety procedures for ICARUS operations consistent with Fermilab safety rules, with assisting collaborators in following the safety procedures, and with updating the Individual Training Needs Assessments (ITNAs).

4.3 Detector Operations Group

The Detector Operations Group (DOG) is responsible for ensuring the stable operation of the detector and efficient, high-quality data collection during beam periods. It consists of the Run Coordinator, the Deputy Run Coordinator, the Operations Support Manager, the Technical Coordinator, and the Experiment Liaison Officer. The DOG is chaired by the Run Coordinator.

4.3.1 Shifts

Shifts for ICARUS began running 24/7 in February 2020, around the time of cool-down and filling. For approximately a month, the shifts took place from ROC-West at Fermilab, until COVID-related restrictions made this no longer feasible. Since then, shifts have continued to take place remotely, at institutions or, more typically, from collaborators’ residences.

Shifts are an institutional responsibility, with the fraction of total shifts assigned to an institution proportional to the number of authors at that institution. The shifts are 8 hours long (0:00-8:00, 8:00-16:00, 16:00-24:00), with a small overlap before and after each shift to allow for a smooth transition. The shifts are grouped into blocks of 4 days (“weekday”) and 3 days (“weekend”). The collaboration Shift Manager (Diana Mendez) assigns shifts in blocks for a 3-4 month period based on collaborators’ preferences, using software developed and used by the NOvA collaboration for several years. The shift calendar is loaded into the ICARUS (SBN-FD) Electronic Collaboration Logbook (ECL).

Since the control room in ROC-West was set up to connect to remote computing sessions, web tools, etc., remote shifters can also access the DAQ machines, such as the primary data-taking server, and the Slow Controls monitoring tools. All shifters get a FERMI domain account, which allows them to access the iFix cryogenics monitor using a VPN, and are given access to a server with iFix (controlled by Mark Knapp). Shifters have access to the Zoom room initially set up for use in the Control Room, which is also used for expert communications. In practice, shifters typically communicate with each other at shift hand-over via the Slack team communication platform⁷. Slack is also used for some communications with experts, in addition to phone calls.

As more tools have become available and more actions have fallen into the scope of the shifter, the shifts have continued to evolve. A Redmine wiki⁸ helps to guide shifters. The basic function of the shifts is to monitor various aspects of the detector operation, e.g., that runs are taking place and that the subsystems the shifters are asked to monitor look as expected. These checks generally utilize tools and applications provided by other groups (DAQ, Slow Controls, Data Quality, etc.). Checklists in the ECL at regular intervals guide the shifters through the items to monitor. Checklists will need to be added for further subsystems and/or checks as monitoring expands or becomes available and integrated for more parts of the detector. Additional “general” ECL posts or specific forms (e.g., Expert Contact, Run Start) are expected to be filled out by the shifters as needed.

7 See slack.com. SBN maintains a professional license so that SBN Slack channel communications are archived.
8 https://cdcvs.fnal.gov/redmine/projects/icarus-operations/wiki/Wiki

4.4 Data Reconstruction and Analysis Group

The ICARUS Analysis and Software group works in conjunction with the SBN Joint Analysis Group to ensure that the ICARUS-specific elements of simulation and reconstruction, as well as the ICARUS-specific elements of event analysis, are integrated into the overall SBN analysis plan. The group also oversees analyses that are specific to the far detector, such as the Neutrino-4 and NuMI analyses discussed in Section 2. The structure of the group is shown in Figure 5. The conveners of the group are the Physics Analysis Coordinator (Daniele Gibin) and the Offline Software Coordinator (Tracy Usher). Within the Analysis and Software group the effort is divided into four main categories:

• Infrastructure: the tasks required to support both the simulation/reconstruction and analysis efforts.
• Simulation/Reconstruction: the groups responsible for providing elements of the detector simulation (i.e., post-physics simulation) and for the reconstruction of both simulated and real data for each subsystem of the ICARUS detector.
• Tools: utilities that combine reconstruction elements from the different subsystems to provide the inputs required by the analysis stage.
• Analysis: tools and coordination for physicists engaged in specific analyses.

The Physics Analysis Coordinator is responsible for monitoring and guiding analyses towards publication-quality results. The Editorial and Speakers Board is responsible for oversight of papers and their timely submission to professional journals. Details of the activities within these broad categories are provided in the collaboration document SBN DocDB-20133⁹.

9 https://sbn-docdb.fnal.gov/cgi-bin/private/ShowDocument?docid=20133


Figure 5: Data Reconstruction and Analysis Group

5 RISK ANALYSIS

A comprehensive risk analysis was conducted by the Working Groups with the aim of capturing all the events that would result in an interruption of detector or DAQ operation. The analysis is structured as follows; a schematic representation of the resulting risk record is sketched after this list.

1) The possible event (risk) is identified and described (e.g., hardware failure).
2) The possible cause(s) of the event is described (e.g., failure of a subcomponent, overheating, fatigue).
3) The possible consequences are analyzed (e.g., stop of a run that is in progress, damage to other components).
4) The estimated probability of the event is given:
   a. High ⇒ expected to occur multiple times during the lifetime of the experiment (an average interval is provided).
   b. Moderate ⇒ may occur one or two times during the lifetime of the experiment.
   c. Low ⇒ no occurrences expected during the lifetime of the experiment.
   d. Very low ⇒ very unlikely to occur during the lifetime of the experiment.
5) The mechanism that allows the detection of the event is identified (e.g., an alarm from the slow control or DAQ systems).
6) The required intervention in case the event occurs is described (e.g., substitution with a spare unit and repair of the faulty one), with the identification of the necessary resources and the required time.
7) The mitigation procedures implemented to reduce the risk, i.e., the probability of occurrence and/or the possible consequences, are listed (e.g., implementing alarms, keeping a stock of spare units, implementing redundancies).
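As an illustration, the seven items above map naturally onto a simple record structure; the class and field names below are our own invention, not taken from the risk analysis document, and the example values are hypothetical:

```python
from dataclasses import dataclass, field
from enum import Enum

class Probability(Enum):
    HIGH = "multiple occurrences expected"   # item 4a
    MODERATE = "one or two occurrences"      # item 4b
    LOW = "no occurrences expected"          # item 4c
    VERY_LOW = "very unlikely"               # item 4d

@dataclass
class RiskEntry:
    """One subsystem risk record, following items 1-7 above."""
    event: str                  # 1) the risk, e.g. "hardware failure"
    causes: list[str]           # 2) possible causes
    consequences: list[str]     # 3) e.g. "stop of a run in progress"
    probability: Probability    # 4) estimated probability
    detection: str              # 5) how the event is detected
    intervention: str           # 6) required intervention and resources
    mitigations: list[str] = field(default_factory=list)  # 7) risk reduction

# Illustrative entry (values are examples, not from SBN DocDB-19561):
fuse_failure = RiskEntry(
    event="TPC low-voltage power supply fuse failure",
    causes=["fuse fatigue", "overheating"],
    consequences=["stop of the run in progress"],
    probability=Probability.HIGH,
    detection="alarm from the slow control system",
    intervention="swap in a spare unit and repair the faulty one",
    mitigations=["keep a stock of spare units and fuses onsite"],
)
```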


The detailed risk analysis is contained in a separate document maintained in the experiment document database (SBN DocDB-19561)¹⁰. In the following sections we summarize the results for each of the detector subsystems.

A major risk for the experiment, which is not captured in the analysis described above, is the delayed installation and commissioning of the top CRT and of the overburden. Both are considered essential to the data taking for sterile neutrino searches. Because of the long shutdown planned for 2025, completion of the top CRT and overburden after the restart of the beams next year, in fall 2021, would potentially reduce the POT exposure to below that required to achieve the science goals of the experiment. A detailed installation plan for both the top CRT and the overburden is being prepared, accounting for the current restrictions due to COVID-19; it aims for completion of the top CRT installation before the 2021 summer shutdown and for the overburden to be installed during the summer. However, with the present level of uncertainty, such a plan may suffer significant delays, as it relies on the participation of external companies and the onsite presence of colleagues from Europe. In that case, the only possible mitigation strategy would be to reconsider the beam allocation plan.

10 https://sbn-docdb.fnal.gov/cgi-bin/sso/ShowDocument?docid=19561

5.1 Drift HV System Risk Analysis

The drift HV is the single most critical system of the LAr TPC, as its failure would stop operation of the entire detector, with possible damage to the internal components and to the readout electronics. The intrinsic reliability of the HV system is supported by continuous operation for several years at Gran Sasso without problems, and by successful testing of the system there at double the nominal voltage. The HV system was activated without incident in August and has been operating without significant issues since then. The stability of the system (voltage and current draw) is continuously monitored by the shifters. Given the criticality of the system, ready-to-install spares of most of the external components (power supply, cables, feedthrough, etc.) are available onsite; some additional parts (HV cables) are funded and will be procured. An expert is always on call to go onsite for interventions if needed.

5.2 TPC Readout Electronics and Wire Bias Risk Analysis

The TPC readout electronics was commissioned progressively, starting several months before the start of the cryogenic commissioning with the so-called slice test. Data have been collected continuously through the entire cryogenic commissioning period, investigating noise and other possible issues. Some defective boards (about 20) were identified at the beginning of this process; they were replaced with spares and sent back to CAEN for repair, and they are now being returned to Fermilab to serve as spare units. No failures have been detected since the start of the cryogenic commissioning. Nevertheless, an additional 20 spare boards, together with spare components for about 5000 channels, have been funded by INFN and are being procured during FY21. These should suffice to cover the need for spares for the entire run up to FY25. The low-voltage power supplies have required periodic replacement of their fuses; 8 spare units are available onsite, ready to be installed, together with a stock of fuses that is periodically replenished.

The wire bias system is a replica of the one used at Gran Sasso, which demonstrated full reliability. A spare unit is available, ready to be installed. Two experts are always available for interventions onsite.

5.3 Trigger System Risk Analysis

The reliability of the trigger system is fundamental to data taking with the beams. All hardware components have been installed and individually tested extensively. However, the trigger logic has not been fully implemented, as the experts involved in this activity are in Italy and, due to COVID-19 travel restrictions, have to work remotely. This is challenging since the control software is the graphical-interface-based LabVIEW, which works poorly over the network. Nonetheless, implementation and partial testing of the trigger logic is quite advanced, and the system is almost ready for first testing with the beam signals. Spares exist onsite for all hardware components, ready to be installed by experts who are always available on call. Testing of the trigger logic and of the distribution of the time stamps is part of the detector commissioning; it requires the beam signals, so this process is just beginning along with the beam operations ramp-up.

5.4 Light Detection System Risk Analysis

The scintillation light detection system provides the internal trigger signal that, in coincidence with the beam signal, generates the trigger for the experiment. The light detection system has been successfully activated and calibrated, and it has been operating without issues for several months. The stability of the system is continuously monitored by the shifters. Spares are available, ready to be installed, for most of the system components, and experts are on call for interventions. The commercial HV system is no longer in production, so some components can no longer be acquired; spares of these components are available and, in addition, a new system has been funded by INFN and will be procured in 2021.

5.5 CRT Risk Analysis

At present, the Side and Bottom CRTs are almost completely installed and are undergoing commissioning. Several parts have been operational since before the start of the cryogenic commissioning and have shown excellent reliability. Spare parts are available, ready to be installed, and experts are available on call for remote and onsite interventions. A special case is the Bottom CRT, where several components cannot be reached and, therefore, cannot be replaced in case of failure. In the area where these CRT modules are located, there is a risk of water dripping onto the electronics boards from condensation on the bottom surface of the warm vessel. Also, in case of failure of the sump pumps, there is a risk of flooding, with possible submersion of the bottom modules. A special procedure is in place, with periodic visits by specialized personnel from the FESS department to monitor the functionality of the sump pumps and the ambient humidity.


5.6 Detector Electrical Infrastructure Risk Analysis

The main risk associated with the electrical infrastructure is the failure of the power transformers that supply AC power to the detector equipment (all the HV supplies and the readout electronics). Though such an event is considered extremely unlikely, its occurrence would stop detector operation for the time needed for replacement. A spare transformer is present at the Lab, shared with other experiments. However, this unit is foreseen for use by another experiment, so procurement of another unit, to restore the spare, is highly desirable. Dedicated spare components are available for other critical units, ready to be installed. An expert is present to oversee installations, which are carried out by specialized technicians.

5.7 DAQ Risk Analysis

The DAQ has been developed as part of the activity of the SBN joint working groups. Testing of the DAQ system has been in progress for more than one year and is a fundamental part of the detector commissioning. Long-duration runs are routinely taken with both T600 modules (2 TPCs per module), including the PMTs and parts of the CRT, with random triggers at rates equal to or exceeding the expected trigger rate with the beams. At present, the rate of crashes is low, in spite of the many runs taken daily for detector commissioning. Bugs are quickly investigated and corrected, and monitoring tools are evolving rapidly. The next phase is operation of the full detector (4 TPCs) with the cosmic-ray trigger, followed by the beam trigger. Most of the risks associated with the DAQ stem from the failure of some communication interface. Experts are available on call for both remote and onsite interventions.

5.8 Detector Control System Risk Analysis

The Detector Control System (DCS) has been developed as part of the activity of the common SBN working groups. The system is still under development, with components progressively being implemented as part of the detector commissioning process. The hardware infrastructure is fully implemented, and spare parts are available for all components. The software is continuously developed and is a fundamental part of the tools available for remote operations. Alarms to detect hardware malfunctions are progressively being implemented and tested. The installed software components are very reliable and are in continuous use for monitoring and for remote operation. Experts are available on call for remote and onsite interventions.

5.9 Online Computing and Networking Risk Analysis

Online computing and networking are maintained as a service of the laboratory under the responsibility of the Computing Divisions. Hot- and warm-swappable spare parts (computers, network switches) are available for all components. Experts from the Computing Divisions are available on call 24/7.


5.10 Cryogenic Plant Risk Analysis

The Risk Assessment and the What-If Assessment for the ICARUS cryogenic plant are given in SBN-doc-20004¹¹. The Risk Assessment lists risks for personnel and equipment, describes access to the SBN-FD detector building, sets rules for lifting loads above equipment and other rules for working on cryogenic equipment, and defines safety perimeters. The What-If Assessment (provided as a separate document at SBN-doc-20004) lists various high-level failure scenarios and their consequences and safety mitigations, e.g., power outages, fires, operator errors, loss of controls or network, loss of cooling nitrogen, accidental loss of vacuum, or inoperability of valves or instrumentation. Both documents were reviewed by the Fermilab safety panel and approved by Neutrino Division management before the start of the detectors’ cryogenic commissioning.

11 https://sbn-docdb.fnal.gov/cgi-bin/sso/ShowDocument?docid=20004

6 FERMILAB ROLES AND RESOURCES

The ICARUS experiment receives support mainly from the Accelerator Division (AD), Fermilab Computing (the Core Computing and Scientific Computing Divisions), and the Neutrino Division’s (ND) SBN and Technical Support Departments (TSD).

6.1 Accelerator Division

The Accelerator Division (AD) is responsible for the commissioning, operation, and maintenance of the primary proton beam line, the neutrino production target, the horn, and the decay pipe. AD is responsible for maintenance of all existing standard beamline elements, instrumentation, controls, and power supplies. AD is also responsible for monitoring the intensity and quality of the primary proton beam. Delivering beam to ICARUS affects the number of protons available to the NuMI and Muon experiments; the number of protons routed to each neutrino production target is set by the Fermilab Office of Program Planning.

The External Beams Department provides the necessary beam timing signals for the ICARUS detector, including interfaces at MI12 and the ICARUS building. The External Beams Department will provide support in delivering the beam signals, via the AD network, from the sending locations (MI12 and MI60) to the experiment hall. Setting up the AD timing signals at the experiment hall is the experiment’s responsibility, but the External Beams experts are available for consultation and help. The replacement of broken modules at the experiment’s site will be done by the AD Controls Department, contacted by the External Beams Department upon notification by the experiment.

6.2 Neutrino Division The Neutrino Division is the primary source of technical assistance to the ICARUS collaboration and provides oversight of experiment operations. Figure 6 shows the ND organization chart.




Figure 6: Neutrino Division Organization Chart

ND provides an administrative organization for the Fermilab staff working on ICARUS, as well as a center for experimental operations, data analysis, and future planning. It also provides funds for the operation and maintenance needs of the ICARUS detector and the SBN Far Detector facility, including the cryogenics systems. ND provides office space for both resident and visiting ICARUS collaborators, commensurate with the amount of time spent at Fermilab. The ND Technical Support Department provides an Experiment Liaison Officer (ELO) who works with the ICARUS Run Coordinator to identify the resources needed to support the experiment and to ensure that all work in the SBN FD facility is coordinated with the required work planning and scheduling.

6.2.1 Technical Support Department
The Neutrino Division Technical Support Department (TSD) provides the primary Fermilab operations support for the ICARUS detector and the SBN FD facility. In addition to the ELO, the Operations Support Group (OSG) provides technical support for various online and DAQ systems, both as primary experts and through general online-system expertise. Areas of primary expertise include slow controls, run control, and the White Rabbit timing system. The Electrical Group provides support for electronics infrastructure such as detector AC power distribution, rack protection systems, and the ground impedance monitor (GIZMO). The TSD Cryogenics Group has primary responsibility for operational support of the ICARUS cryogenics systems. The group provides 24/7 emergency response through on-call expert engineers, performs daily monitoring checks of cryogenic system performance, and arranges scheduled maintenance such as LAr and LN2 pump rebuilds. Primary support of the cryogenics controls system is provided by the TSD Electrical Group. The ICARUS cryogenics support is provided as part of an overall SBN cryogenics support plan described in SBN docDB 13981 [12]. The TSD Electrical Group and Technician Group provide electrical and mechanical technician support on an as-needed basis; the leaders of these groups work with the ELO to identify additional technician resources from other divisions if needed.

6.3 Fermilab Scientific and Core Computing Divisions
The Fermilab Scientific Computing Division and Core Computing Division support the computing needs of the SBN experiments, including ICARUS, through the provision, maintenance, and support of common, and in some cases experiment-specific, core and scientific services and software. Fermilab Scientific Computing assigns a Liaison to the ICARUS experiment (currently Wes Ketchum), whose responsibilities include maintaining communication between the experiment and the computing divisions and ensuring that computing needs, agreements, issues, and other relevant items between the experiment and Fermilab Computing are addressed in a timely and mutually agreed-upon manner. For example, Fermilab Scientific Computing provides DAQ support to ICARUS through software support for the common artdaq DAQ framework and through system administration of the SBN-FD and control-room computers. Tables 2 and 3 outline many of the services provided by Core and Scientific Computing that are important for operation of the experiment and for the collection and timely analysis of ICARUS data.

Table 2: The Core Computing services that the ICARUS experiment uses.
  Authentication and Directory Services: Standard KCA and DNS services.
  Central Web Hosting: Support for the SBN central web server, including the SBN online portal, SBN DocDB, and FNAL Indico.
  Database Hosting: Database hosting and database infrastructure used by ICARUS.
  Desktop Services: Windows and Mac desktop support for the computers covered by the Managed Services contract.
  Fermilab (Data Center) Facilities: Support for laboratory space for DAQ test stands and collaboration common computing nodes.
  Network Services: Standard support for detector facilities; essential SBN-FD network devices are supported for 24x7 service.
  Networked Storage Hosting: Support for home areas and NAS-attached data disks.
  Service Desk: Issue and notification reporting, handling, and tracking.

[12] https://sbn-docdb.fnal.gov/cgi-bin/private/ShowDocument?docid=13981


Table 3: Various scientific services that ICARUS uses.
  Grid and Cloud Computing: Batch processing on Grid-accessible systems at Fermilab as well as offsite through the Open Science Grid and HEPCloud. Jobsub, GlideinWMS, CVMFS, "POMS" production software, and other software for enabling processing and analysis.
  Scientific Collaboration Tools: SBN code repositories hosted through cdcvs.fnal.gov, redmine, and the electronic log-book application.
  Scientific Computing Systems: Support for control room and SBN-FD computing systems and workstation administration. Support for interactive, batch processing, simulation, and analysis computing systems at Fermilab.
  Scientific Data Management: SAM, IFDH, FTS, RUCIO, and other data handling software and systems that are essential to online data transfer.
  Scientific Data Storage and Access: Enstore-based tape storage services, including tape handling and curation; dCache-based data disk services and systems.
  Scientific Databases: Applications and database infrastructure for identified ICARUS online and offline databases, including the hardware mapping database, calibration database, run history database, and the IFBEAM database.
  Scientific Software: Support for LArSoft, artdaq, art, ROOT, and other software tools.
  Simulation Software: Support for GEANT4 and GENIE.

ICARUS and SBND present a yearly assessment of scientific computing needs to the Fermilab Computing Resource Scrutiny Group (FCRSG). Estimated needs for FY 2021, as presented at the most recent FCRSG meeting, are shown in Table 4. These assessments include the immediate processing of data, which allows prompt full reconstruction that can then be used for additional data-quality checks and calibrations.

Table 4: FY21 estimates of Scientific Computing resource needs for ICARUS.
  Scientific Data Storage and Access
    Dedicated Write                      1 PB
    Dedicated Persistent                 250 TB
    Network Attached Storage             30 TB
    Tape Storage                         11.8 PB
  Batch Worker Nodes
    FermiGrid and OSG yearly integral    16 MCPU-hours

6.4 Environment, Safety, Health & Quality Section
Safe operations are a top priority of ICARUS. The Safety Coordinator is responsible, in consultation with the DSO, Angela Aparicio, for making sure that all operations on the detector are conducted according to the Fermilab safety rules. The collaboration requires a written work plan for work that will occur in the SBN-Far building, in accordance with FESHM 2060, Work Planning and Hazard Analysis. The work plan is presented at a weekly work planning meeting coordinated by the Experiment Liaison Officer, Carrie McGivern. Work that is considered higher risk requires a written hazard analysis, which is reviewed by the Safety Coordinator and the relevant Subject Matter Experts and approved by the DSO.


Collaborators are required to be up to date on safety training at Fermilab, and all shifters are encouraged to have ODH training. ITNAs are created or updated for new collaborators when they join the experiment, under the responsibility of the Safety Coordinator. Shifters have access to an emergency call list with at least two experts for each system.

To enter the lower levels of the SBN-Far building, collaborators must have ODH training. A two-person rule is enforced for ODH areas at SBN-Far, requiring either more than one ODH-trained person to be present for any work in the basement or on the mezzanine/detector, or one person in continuous visual and auditory contact (from the top level) with an ODH-trained person working on the detector. Collaborators accessing the ODH areas of SBN-Far are expected to have completed ODH training and SBN-Far Hazard Awareness training, and are required to carry an oxygen monitor and wear a hard hat. If the stairwell is itself an ODH area at the time (due to possible pressurization issues in the stairwell), the collaborator making access must also take an oxygen rescue pack down to the lower levels (basement/mezzanine/detector) in case of an ODH emergency.

Additional safety hazards present at ICARUS (UV laser, high-voltage power supplies) and the necessary safety training are outlined in the SBN Safety Assessment Document (SAD), in development. The Neutrino Division Safety Officer (DSO) oversees any SBN activities where there are safety concerns. The ND DSO also serves as chair of the SBN Operational Readiness Clearance (ORC) committee and ensures that new equipment for use on ICARUS has an ORC review prior to operation.

6.5 Particle Physics Division
The Particle Physics Division (PPD) Mechanical Engineering Department Process Controls Group supports the computing infrastructure of the cryogenics controls. This includes routine software maintenance for the iFIX system. The group also provides occasional consultation on controls operation and modifications, on an as-needed basis, at the request of the ND/TSD Cryogenics Group.

7 INFN COMPUTING CENTER - CNAF
The CNAF center (https://www.cnaf.infn.it/en/) hosts the Italian Tier-1 computing facility for the high-energy physics experiments at the Large Hadron Collider in Geneva. It provides the resources, support, and services needed for data storage and distribution, data processing, Monte Carlo production, and data analysis. CNAF is also a key computing facility for many astroparticle and neutrino-physics experiments and one of the most important centers for distributed computing in Italy. The facility comprises a computing farm of ~30,000 cores (400 kHS06). Currently ~35 PB of data reside on disk storage, and the total storage capacity available on tape is 96 PB. In addition to the computing farm, CNAF provides a small HPC cluster (about 100 TFLOPS) based on specialized hardware (such as Intel manycore processors and NVIDIA GPUs) for special needs. The INFN scientific commissions examine and collect the computing and storage needs of each experiment and forward the requests to CNAF, which, on the basis of the actual availability of resources, assigns a suitable amount to each experiment.


ICARUS activities are supported by CNAF, and resources are made available for the storage and processing of data. For the year 2021, the resources assigned to ICARUS are: 1) CPU: 4,000 HS06 (about 400 cores); 2) disk storage: 1,000 TB; 3) tape storage: 2,000 TB. The long-term archiving resources are meant to store a copy of the experiment's raw data, while the disk space and CPUs are intended for event reconstruction and analysis and for the storage of reconstructed data. The requests were submitted and approved on the basis of an expected ~1 PB/year of physics-trigger events. As the experiment starts to accumulate physics data, the CNAF resources are expected to increase accordingly.
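As a quick arithmetic check of the figures above, a minimal sketch (Python); the ~10 HS06-per-core conversion is implied by the quoted "4,000 HS06 (about 400 cores)", and the tape-headroom estimate simply compares the 2021 allocation against the expected raw-data rate:

    # Consistency check of the 2021 CNAF allocation quoted above.
    hs06_total = 4000.0          # assigned CPU capacity
    cores = 400                  # quoted equivalent core count
    print(hs06_total / cores)    # -> 10.0 HS06 per core (implied conversion)

    raw_data_pb_per_year = 1.0   # expected physics-trigger data volume
    tape_allocation_pb = 2.0     # 2,000 TB of tape assigned for 2021
    # Headroom: roughly two years of raw-data copies fit in the allocation.
    print(tape_allocation_pb / raw_data_pb_per_year)   # -> 2.0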

8 PERSONNEL RESOURCE SUMMARIES
The collaboration spends approximately 15% of its personnel resources on detector operations. That effort is summarized in Table 5. This does not include the personnel resources provided by the Fermilab Core Computing and Scientific Computing Divisions described in Section 10.2.

Table 5: Collaboration resources in units of FTE devoted to detector and computing operations.
  Detector Operations Total                    6.4
    Run Coordination (RunCo + Deputy)          1.0
    Detector Experts (6 x 0.2 FTE)             1.2
    Shifts (8 hours a day, 7 days a week)      4.2
  Computing Total                              1.4
    Production and Keep-Up Processing          1.0
    Data Quality and Validation                0.2
    Software Releases                          0.2
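For context on the shift entry, the 4.2 FTE figure corresponds to 168 person-hours of shift coverage per week. A minimal sketch (Python) of one reading consistent with the stated "8 hours a day, 7 days a week": three shifters staff each daily 8-hour block, with one FTE taken as a 40-hour week. The staffing level of three is our illustrative assumption, not a number stated in this plan.

    # Back-of-the-envelope check of the 4.2 FTE shift entry in Table 5.
    # Assumed (not stated in this plan): three shifters staff each 8-hour
    # daily block, and one FTE corresponds to a 40-hour work week.
    HOURS_PER_DAY = 8
    DAYS_PER_WEEK = 7
    SHIFTERS_ON_DUTY = 3             # illustrative assumption
    FTE_HOURS_PER_WEEK = 40.0

    person_hours_per_week = HOURS_PER_DAY * DAYS_PER_WEEK * SHIFTERS_ON_DUTY
    print(person_hours_per_week / FTE_HOURS_PER_WEEK)   # -> 4.2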

Neutrino Division personnel resources required for support of ICARUS operations are at the level of about 1.3 FTE, not including administrative support provided by the Neutrino Division; details are provided in Table 6 in Section 10.

9 SPARES Spares for the ICARUS detector are listed in Appendix B.

10 BUDGET

10.1 Detector Operations Budget
The annual operations budget, with M&S cost estimates for FY21, is given in Table 6 along with a breakdown of the personnel support from the Neutrino Division.


Table 6: Neutrino Division cost estimates, not including ICARUS collaborator scientific efforts.
  M&S
    Cryogenic liquids                    $222,000
    Cryogenic system maintenance         $12,000
    Software licenses                    $12,200
    Misc. maintenance expenses           $50,000
  Labor
    Cryogenic engineering support        0.25 FTE
    Electrical support                   0.15 FTE
    Technicians                          0.20 FTE
    Computing support                    0.50 FTE
    Experiment Liaison Officer (ELO)     0.20 FTE

10.2 ICARUS Computing Budget
The Fermilab Core Computing and Scientific Computing Divisions (CCD, SCD) support the computing needs of the ICARUS experiment through the provision, maintenance, and support of common, and in some cases experiment-specific, core and scientific services and software. The Computing Liaison's responsibilities include maintaining excellent communication between the experiment and CCD/SCD and ensuring that the computing needs, agreements, issues, and other relevant items between the experiment and Fermilab Computing are addressed in a timely and mutually agreed-upon manner. For computing support of essential experiment operations, especially off-hours support, ICARUS communicates a limited list of points of contact to CCD and SCD. Other, non-emergency support requests are made through Service Desk tickets and followed up with the appropriate computing and experiment experts. Essential support for operations includes system administration, networking, and database services.

The costs associated with computing for the experiment are outlined in Table 7. These estimates are derived from FNAL SCD calculations for costing the LSST (Rubin) data facility from 20 Dec 2019. The cost estimates amount to the following: $188 per TB for NAS storage, $96 per TB for dCache storage, $9,722 per PB for tape storage, and $14,626 per MCPU-hour for data processing. We assume no reduction in costs over time, though some modest change is likely (a ~5% annual decrease for most hardware, with an increase in effort costs year to year). These estimates cover FY2021-23 and are based on the computing estimates reported at the May 2020 Fermilab Computing Resource Scrutiny Group (FCRSG) annual review. "NAS storage" refers to network-attached storage, used for application software development space and small data files. "dCache storage" refers to dedicated temporary and persistent disk volumes made available through the dCache system, used for larger data files and for staging data to and from tape storage. Tape storage is used for all permanent data storage. A backup copy of all necessary data, primarily all raw beam data, will be kept at the INFN CNAF facility in Italy and is not included in the cost estimates here (see Section 7). Note that tape costs are significantly higher in FY 2021 due to the need to collect and store large amounts of commissioning data; it is anticipated that much of this data will be retired in the coming years, offsetting later tape media costs.
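To make the unit-cost arithmetic concrete, a minimal sketch (Python) of the cost model described above. The unit costs are those quoted in the text; the resource quantities in the example call are illustrative values back-solved to roughly reproduce the FY21 column of Table 7, not official planning inputs.

    # Hedged sketch of the FY21 computing cost model. Unit costs are from
    # the text; the resource quantities below are illustrative assumptions
    # back-solved to roughly match Table 7, not official planning inputs.
    NAS_PER_TB = 188.0         # $/TB, network-attached storage
    DCACHE_PER_TB = 96.0       # $/TB, dCache disk
    TAPE_PER_PB = 9722.0       # $/PB, tape storage
    CPU_PER_MCPU_HR = 14626.0  # $/MCPU-hour, data processing
    EFFORT_RATIO = 0.5         # effort ~ 0.5 x hardware cost (FNAL CS-DocDB 6934)
    FTE_COST = 293000.0        # $/FTE
    SUPPORT_FTE = 6.0          # experiment-support effort

    def hardware_cost(nas_tb, dcache_tb, tape_pb, cpu_mcpu_hr):
        """Media plus processing cost for one fiscal year, in dollars."""
        return (nas_tb * NAS_PER_TB + dcache_tb * DCACHE_PER_TB
                + tape_pb * TAPE_PER_PB + cpu_mcpu_hr * CPU_PER_MCPU_HR)

    # FY21-like example: ~11.8 PB on tape (cf. Table 4) dominates media costs.
    hw = hardware_cost(nas_tb=3.25, dcache_tb=166.7, tape_pb=11.78, cpu_mcpu_hr=7.62)
    services = EFFORT_RATIO * hw        # ~$121k, cf. the Table 7 effort line
    support = SUPPORT_FTE * FTE_COST    # 6.0 x $293,000 = $1,758,000
    print(f"hardware ${hw:,.0f}; services effort ${services:,.0f}; "
          f"support effort ${support:,.0f}")

Note that the "Effort: computing and data services" rows in Table 7 are, to rounding, 0.5 times the corresponding "Total computing and storage media costs" rows, and the "additional experiment support" row is exactly 6.0 FTE at $293,000 per FTE.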


We do not add costs for shared computing resources, such as networking and tape-library infrastructure. Tape drive costs are also not included, though at peak rates we expect to need one tape drive for simulation and two tape drives for data, at a cost of $12,000 per drive with $2,000 annual maintenance per drive. Finally, calculation of effort levels is difficult due to the degree of sharing of resources across the Intensity Frontier experiments' computing infrastructure at Fermilab. From [FNAL CS-DocDB 6934], additional effort costs for computing and media storage are roughly 0.5 times the hardware costs. Additionally, we estimate 6.0 FTE necessary for experiment support for data management, experiment networking, system administration, and additional experiment services, at an assumed cost of $293,000 per FTE.

Table 7: ICARUS Computing Budget FY21-23
                                            FY 2021      FY 2022      FY 2023
  CPU Processing Costs, Data                $60,926      $60,926      $99,163
  CPU Processing Costs, Simulation          $50,545      $49,059      $49,059
  CPU Processing Costs, Total               $111,472     $109,985     $148,222
  NAS Storage                               $611         $611         $1,128
  dCache Storage                            $16,000      $18,400      $18,400
  Tape Storage, Raw Data                    $36,622      $18,311      $18,311
  Tape Storage, Reconstructed Data          $66,870      $11,352      $22,748
  Tape Storage, Simulation                  $11,044      $16,411      $16,333
  Tape Storage, Total                       $114,537     $27,764      $57,392
  Total computing and storage media costs   $242,619     $156,759     $225,142
  Effort: computing and data services       $121,310     $78,380      $112,571
  Effort: additional experiment support     $1,758,000   $1,758,000   $1,758,000


11 RUN PLAN

To reach 5σ sensitivity over the LSND allowed (99% C.L.) region for νμ → νe appearance, the SBN program requires an exposure of 6.6 × 10^20 protons on target (POT) in the Booster Neutrino Beam for the SBND and ICARUS detectors. Projections for BNB performance [13] indicate delivery of ~3.0 × 10^20 POT in FY21 and 4.0 × 10^20 POT/yr in each of FY22-FY25 (a cumulative-exposure sketch follows the list below). The experiment will also perform neutrino cross-section measurements and a dark matter search off-axis from the NuMI beam target, with a nominal ~6.0 × 10^21 POT/yr exposure. ICARUS anticipates completion of the detector (full CRT and overburden) in summer 2021. The SBND S-4b ready-for-physics-data milestone forecast date is January 2023 [14]. With these assumptions (subject to modification due to COVID restrictions), the ICARUS run plan is as follows:
• January 2021 - May 2021 [Side CRT and vertical Top CRT installation complete. Install new cryogenic filter system. Commission TPC electronics and trigger.]: Take neutrino data from BNB and NuMI for trigger, calibration, reconstruction software, and analysis development.
• June 2021 - July 2021: Maintain stable operation of the TPC and PMT systems to accumulate neutrino data for ~4.0 × 10^19 POT from BNB (~5.5 × 10^20 POT from NuMI).
• July 2021 - September 2021 [Top CRT installation.]: Extended cosmic-ray data taking with TPC and PMT.
• October 2021 - January 2023 (13 beam-months): ICARUS-only physics run with ~5.5 × 10^20 POT from BNB and ~8.5 × 10^21 POT from NuMI.
• February 2023 - July 2025 (15 beam-months): ICARUS+SBND physics run with ~6.7 × 10^20 POT from BNB (~10.0 × 10^20 POT from NuMI).
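As an illustration of the exposure arithmetic, a minimal sketch (Python) accumulating the projected BNB delivery by fiscal year. It counts delivered POT only, whereas the joint physics goal also requires SBND to be operating, so it is an optimistic bound rather than a schedule.

    # Hedged sketch: cumulative BNB delivery under the projections quoted
    # above (~3.0e20 POT in FY21, 4.0e20 POT/yr in FY22-FY25). Illustrative
    # only; actual delivery depends on accelerator performance and downtime.
    GOAL_POT = 6.6e20   # SBN exposure required for 5-sigma LSND coverage

    projections = [("FY21", 3.0e20), ("FY22", 4.0e20), ("FY23", 4.0e20),
                   ("FY24", 4.0e20), ("FY25", 4.0e20)]

    total = 0.0
    for fy, pot in projections:
        total += pot
        mark = "  <-- exposure goal reached" if total >= GOAL_POT else ""
        print(f"{fy}: +{pot:.1e} POT, cumulative {total:.1e} POT{mark}")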

[13] T. Kobilarcik, presentation at the December 2019 SBN Oversight Board Meeting, https://indico.fnal.gov/event/22010/
[14] Near Detector Key and Intermediate Milestones, October 2020, https://sbn-docdb.fnal.gov/cgi-bin/private/RetrieveFile?docid=9263&filename=SBND_Milestones_20201006.pdf&version=11


APPENDIX A: COLLABORATION INSTITUTIONAL RESPONSIBILITIES

The following list represents a snapshot of the FY21 responsibilities of the ICARUS institutions as of November 2020. In each case, a total FTE count follows the institution's name, and the specific responsibilities of the institution are then itemized. An FTE unit is defined as the fraction of research time; thus, a faculty member or laboratory scientist spending 100% of her/his research time on ICARUS is counted as 1 FTE in this exercise. Engineers and undergraduates are not counted. We estimate a total of approximately 57 FTEs contributing to ICARUS by this accounting. Almost all of the US effort is supported by DOE HEP. The list below does not include the important and substantive contributions from Fermilab technical and scientific staff who are collaboration members but are not authors, nor does it include the enormous effort by some institutes to design, fabricate, and install various hardware components prior to FY21.

US National Laboratories
• Brookhaven National Laboratory / 4.1 FTE
  0.3 FTE (TPC WG manager, expert shifts); 1.2 FTE (TPC expert, signal processing, reconstruction); 0.5 FTE (SBN analysis); 0.2 FTE (software/DAQ management); 0.8 FTE (installation/commissioning/operations); 0.9 FTE (PMT expert, PMT calibration); 0.2 FTE (operations organization)
• Fermi National Accelerator Laboratory / 6.85 FTE
  ICARUS/SBN Management (SBN Oversight Board Chair, SBN Project Management) 0.5 FTE; Installation/Commissioning/Operations/Documentation (including Technical Coordinator, Deputy Technical Coordinator, Commissioning Coordinator, Deputy Commissioning Coordinator, convener of Documentation working group) 1.75 FTE; Electrical 0.1 FTE; TPC (including expert shifts) 0.25 FTE; PMT 0.1 FTE; Trigger (including expert shifts) 0.45 FTE; DAQ (incl. convener of working group and expert shifts) 0.5 FTE; Data management (incl. convener of working group) 0.3 FTE; Cosmic Ray Tagger system (incl. convener of working group and expert shifts) 0.3 FTE; Lab interfaces (beam) 0.1 FTE; reconstruction software and simulations 1.45 FTE; BNB analyses (Neutrino-4, SBN osc.) 0.6 FTE; NuMI analyses (dark matter search) 0.7 FTE

• SLAC National Accelerator Laboratory / 4.4 FTE
  0.3 FTE Collaboration Management (IB Chair); 0.4 FTE DAQ (incl. expert shifts); 3.7 FTE Sim/Reco (incl. ICARUS Software Coordinator, SBN TPC Reconstruction Coordinator, ICARUS Release Coordinator)

US University Groups
• Colorado State University / 6.05 FTE
  0.60 FTE Collaboration Management (Dep. Spokesperson); 1.70 FTE Cosmic Ray Tagger system (incl. expert shifts); 0.25 FTE Calibration (incl. WG co-convener); 0.50 FTE TPC (incl. expert shifts); 0.20 FTE Reconstruction; 1.25 FTE BNB analyses (Neutrino-4, SBN osc.); 1.50 FTE NuMI analyses (dark matter search)
• University of Houston / 2.34 FTE
  0.63 FTE Cosmic Ray Tagger system (DAQ + calibration); 0.25 FTE Calibration/CRT; 1.46 FTE NuMI analyses (nu-Ar xsec)
• University of Pittsburgh / 3.7 FTE
  0.6 FTE Cosmic Ray Tagger; 1.1 FTE NuMI analyses (dark matter search); 0.55 FTE NuMI analysis (cross section); 0.9 FTE PMT; 0.25 FTE DAQ; 0.3 FTE Commissioning
• University of Rochester / 2.25 FTE
  0.35 FTE Cosmic Ray Tagger system; 0.5 FTE Trigger and trigger simulation studies (NuMI beam); 0.5 FTE Neutrino reconstruction algorithms (currently vertexing for BNB and NuMI charged-current events); 0.5 FTE TPC Commissioning; 0.3 FTE Operations/Shifts; 0.1 FTE SBN Organization (SBN IB chair)
• Southern Methodist University / 0.55 FTE
  0.3 FTE Cosmic Ray Tagger system; 0.25 FTE NuMI analysis
• University of Texas at Arlington / 2.3 FTE
  0.7 FTE HV system (incl. expert shifts); 0.6 FTE Commissioning/Operations; 0.4 FTE Reconstruction; 0.6 FTE NuMI analysis (dark matter search)
• Tufts University / 1.0 FTE
  0.25 FTE Simulation/Reconstruction; 0.75 FTE Commissioning (PMT/trigger)

INFN Groups
• INFN Sezione di Bologna and Università / 3.2 FTE
  2.5 FTE CRT system (including expert shifters and European shift support); 0.5 FTE CRT/ICARUS data analysis; 0.2 FTE CRT slow control
• INFN Sezione di Catania and Università / 3.1 FTE
  0.9 FTE Light Detection System; 1.0 FTE Slow Control System; 0.6 FTE Shift Management; 0.40 FTE data analysis; 0.20 FTE Side CRT
• INFN Sezione di Genova and Università / 1.7 FTE
  0.7 FTE BNB analyses (Neutrino-4, SBN osc.); 0.5 FTE TPC (incl. expert shifts); 0.5 FTE Cosmic Ray Tagger system
• INFN GSSI / 1.0 FTE
  1.00 FTE Collaboration Management (Spokesperson)
• INFN LNGS / 0.2 FTE
  0.1 FTE Light Detection System; 0.1 FTE Cryogenics
• University and INFN Milano / 0.8 FTE
  0.2 FTE Cosmic Ray Tagger system; 0.4 FTE PMTs; 0.2 FTE Simulation/Reconstruction


• INFN Sezione di Milano Bicocca / 2.6 FTE
  1.0 FTE Light detection system / laser calibration (incl. expert shifts); 0.6 FTE CRT system (incl. expert shifts); 1.0 FTE data analysis
• INFN Sezione di Napoli / 0.3 FTE
  0.3 FTE Data handling and analysis
• INFN Sezione di Padova and Università / 7.2 FTE
  1.00 FTE Collaboration Management (Dep. Spokesperson); 1.5 FTE Trigger System (incl. Trigger WG convener and expert shifts); 2.0 FTE TPC System (incl. expert shifts); 2.7 FTE data analysis
• INFN Sezione di Pavia and Università / 3.4 FTE
  0.80 FTE Trigger System (incl. expert shifts); 1.0 FTE Light Detection System (incl. PMT WG convener and expert shifts); 0.60 FTE data handling and analysis

Other International Groups
• CERN / 3.1 FTE
  0.5 FTE Collaboration responsibilities (CRT coordination, Editorial & Speakers Board, SBN management); 0.8 FTE Detector completion and commissioning; 0.5 FTE Detector shifts and experts on call; 0.4 FTE Performance and calibration studies; 0.7 FTE Physics studies; 0.2 FTE Cryostat monitoring and cryogenics support
• Centro de Investigacion y de Estudios Avanzados del IPN, Mexico / 1.95 FTE
  1.45 FTE Calibration (PMT timing calibration); 0.5 FTE NuMI analyses (cross section)


APPENDIX B: ICARUS DETECTOR SPARES

Subsystem      Component                                  Installed       Spares
TPC            CAEN A2795 Readout Board                   864             0
               CAEN A3818 Optical Card                    54              4
               Mini Crate                                 96              2
               DC Low Current Power Supplies              96              8
               Power cables                               96              3
               Flanges                                    96              3
               Optical fibers (jumpers)                   672             181
               Long fibers (mini crate to trunk)          24              0
               Short fibers (trunk to server)             24              0
               Trunk fibers                               24              0
               Trunk connectors                           24              7

Wire Bias      Bertan Power Supplies                      6               1
               RG58 cables of various lengths             168             8
               SMA tees & 50 Ohm terminators              24 + few        6 + many
               Jumper cables                              24              few
               GPIB modules                               2               1
               GPIB cables                                6               0

PMTs & Laser   CAEN V1730B Digitizer Boards               24              1 + 1
               CAEN SY1527 HV Distributor                 2               1 [FNAL]
               CAEN SY5527                                0               3
               CAEN A1932BN HV Card                       8               0
               CAEN R648 48 Ch. Radial-to-SHV Adapter     6               1
               Bertan S515 Power Supplies                 2               1
               VME Crates                                 8               2
               Laser head + power electronics             1               0
               Optical switch board                       1               0
               Optical fibers                             396 (360+36)    few

Slow Control   Beckhoff AI (KL3132)                       6               1
               Beckhoff AO (KL4132)                       6               1
               Beckhoff Bus Coupler                       6               1
               Beckhoff End Terminal                      6               1
               Resistors                                  12              6
               SOLA 24V Power Supply                      6               1

Trigger        SPEXI Card                                 1               2
               NI crate                                   1               1
               NI FPGA                                    3               2
               LVDS fanouts                               2               0
               CAEN DT4700 Clock Distributor              1               1
               TTL fanouts                                2               0
               LVDS flat cables                           24              some
               High density cables                        8               7
               Windows laptop                             1               1

DAQ            Event Builder servers                      4               2
               Gateways                                   2               1
               TPC servers                                24              4
               PMT servers                                3               1
               CRT servers                                7               2
               Timing                                     1               1
               NFS servers                                1               1
               DB servers                                 1               1

Infrastructure AC switch box                              24              2
               RPS + smoke detector                       24              2
               AC power cable                             24              -
               PDUs APC AP8932 (tall)                     17              1
               PDUs TrippLite (horizontal)                52              2
               PDU Cyclades PM10                          5               2
               Transformers                               2               1

HV Drift       Heinzinger Power Supply                    1               1
               EDAS                                       1               1
               Splitter-PS HVC Cable                      1               1
               Smoke Detector                             1               1
               RPS Unit / Switch Box Pair                 1               1
               Voltage Divider Cable                      2               2
               Ribbon Cable Monitor-OS Chassis            1               1

Network        4-port switches                            32              10
               8-port switches                            4               3
               Power cords                                many            -
               Ethernet cables                            many            -
               48-port switches                           4               provided by FCC Network group

White Rabbit   WR switch                                  1               1
               GPS                                        1               1
               SPEC/DIO cards                             3               2


The EOP submitted for the ICARUS Collaboration by:

_______________________
Carlo Rubbia
GSSI-INFN and CERN
ICARUS Spokesperson

_______________________
Alberto Guglielmi
INFN Sezione di Padova and Università
ICARUS Deputy Spokesperson

_______________________
Robert J. Wilson
Colorado State University
ICARUS Deputy Spokesperson

The EOP reviewed and resource requests acknowledged by:

______Steve Brice, Head - Neutrino Division, Fermilab Date

______Joshua Frieman, Head – Particle Physics Division, Fermilab Date

______Michael Lindgren, Head - Accelerator Division & CAO, Fermilab Date

______Robert Roser, Chief Information Officer, Fermilab Date

______Luciano Ristori, Chief Research Officer Date
