
The ATLAS High Level Trigger

A. Introduction

The Large Hadron Collider (LHC) at CERN will soon become the premier project in high-energy particle physics. The data to be recorded by the ATLAS experiment starting in 2008 is expected to revolutionize our understanding of the smallest constituents of nature and their interactions. Four decades of research have led to the development of the Standard Model of particle physics, a theory that has proved extremely successful at describing current experimental measurements over a wide range of energies. We have, however, not yet experimentally observed the predicted Higgs boson, which is responsible for breaking the symmetry of the electroweak interaction into the more familiar electromagnetic and weak nuclear forces by generating mass for the elementary particles. Even with the discovery of the Higgs boson, the Standard Model seems to provide an incomplete description of nature at energy scales near 1 TeV. The ATLAS experiment is designed to discover the Higgs boson for any mass allowed within current experimental and theoretical constraints, as well as to explore physics at energy scales where theories beyond the Standard Model strongly favour observable new physics.

Major construction of ATLAS detector components is finished and the installation of these components in the ATLAS experimental area is nearly complete. The current emphasis of the ongoing work is in the areas of integration and commissioning of the detector and data acquisition systems. In addition to offline computing and physics studies, Canadians are playing leading roles in two distinct areas of the ATLAS experiment: the liquid argon calorimeter system and the High Level Trigger system.

The ATLAS trigger is a system designed to identify, in real time, potentially interesting interactions out of the billions produced per second. A three-tiered system provides the necessary rejection to select approximately 200 Hz of collision data from the LHC beam crossing rate of 40 MHz. The first level of decision making is achieved using custom-built electronics and reduces the input data rate to approximately 100 kHz. The second and third levels of the trigger system, together referred to as the “High Level Trigger” (HLT), provide the additional three orders of magnitude of rejection needed to reduce the data rate to permanent storage to a manageable level. The HLT consists of robust, time-optimized reconstruction algorithms running on commercial off-the-shelf computer components with demanding requirements on network data flow and management. To achieve the performance needed to fulfill the ATLAS physics program, it is estimated that the HLT will require the equivalent of nearly 20,000 computing units (CPU cores with a minimum clock speed of 2 GHz). Events that are not selected by the ATLAS trigger system are discarded, hence the performance of the trigger directly impacts the entire ATLAS physics program.

Canadian groups, led by investigators who have joined ATLAS in the past few years, have contributed to the development of significant parts of the HLT. This includes the design and implementation of algorithms that can run in the specialized HLT framework, the design and implementation of benchmarks to assess HLT algorithm performance, and the deployment of an HLT testbed farm at McGill which is used for the evaluation of different hardware architecture configurations and the detailed testing of algorithms. A summary of Canadian HLT activities is shown in Table 1.
These contributions are strongly analogous to the prototyping and testing phase of detector construction, but do not yet include the capital funding necessary to complete the full system. The ATLAS HLT is not yet a fully funded project, and completing it is essential to the success of the ATLAS physics program. Although the complete HLT system does not need to be running from the first day of ATLAS data-taking, since it will take a number of years for the LHC to ramp up to its full design luminosity, a significant increase in HLT capability between now and 2010 is required to avoid compromising the ATLAS physics program. The scope of this grant application is to request capital equipment funding for the purchase of HLT hardware components corresponding to an important fraction of the total system, on par with the significant, and still growing, Canadian contributions to the ATLAS HLT.

The Canadian involvement in the ATLAS experiment has been supported via different sources of funding. The Canadian ATLAS group, with critical roles in the ATLAS liquid argon calorimeter system, received about $15M in NSERC funding for capital equipment for current ATLAS detector subsystems. This funding, together with the NSERC-funded ATLAS-Canada operating grant and computing support from CFI and provincial governments, places Canadians in a good position to be major players in ATLAS physics. The level of Canadian ATLAS funding per author contributed to the initial detector construction was reasonable, although somewhat lower than that of other participating countries. Thus, additional Canadian capital contributions to ATLAS would not be out of scale. Since 2002, 15 grant-eligible researchers have joined the Canadian ATLAS group, most of whom are new hires. The majority of these researchers were not part of the initial detector construction, but are now involved in complementary activities in the HLT. This more recent Canadian involvement further diversifies and increases the impact of Canadian contributions to ATLAS, thereby providing yet more opportunities for Canadian groups to take on leadership roles in the analysis of ATLAS data.

While the HLT contributions are strongly analogous to detector construction contributions, there is an important difference in the way they are treated within the ATLAS collaboration. Contributions to the HLT, including the capital contributions requested here, do not imply additional contributions to the ATLAS construction common fund. In fact, ATLAS management has agreed that capital contributions to HLT computing hardware at CERN will count towards Canada’s construction common fund obligations. We also note that the ATLAS experiment is a top priority in the recent NSERC Subatomic Physics Long Range Plan, and funds were foreseen for ATLAS HLT capital contributions in the initial five years (up to 2010) considered in that plan.

B. Canadian HLT Involvement

Canada’s participation in the High Level Trigger (HLT) has significantly increased over the past 3 years with the addition of new faculty at Alberta, Carleton, McGill and York, as well as existing faculty moving to ATLAS at Montréal and Victoria. This has increased the number of faculty working on the HLT to 10, representing about 5 FTE. These faculty are supported by 3 postdocs and 10 graduate students. In addition, 5 undergraduate students, 3 with NSERC summer student scholarships, have worked on the trigger effort to date. We expect the number of graduate students to grow by another 3 next year. ATLAS-Canada’s role in the HLT is focused on four overlapping areas.

Investigator          Institution      Activities                        ATLAS Fraction   HLT Fraction
David Asner           Carleton         Algorithm development/studies     75%              25%
Georges Azuelos       Montréal         Algorithm development/studies     100%             10%
Kamal Benslama        Regina           Data Quality / e-gamma slice      100%             30%
Bryan Caron           Alberta/TRIUMF   Remote farms                      100%             30%
Robert Kowalewski     Victoria         Algorithm development/studies     75%              60%
Robert McPherson      Victoria/IPP     Monitoring/Data Quality           100%             20%
Roger Moore           Alberta          Remote farms                      100%             60%
Jim Pinfold           Alberta          Remote farms                      100%             50%
Steve Robertson       McGill/IPP       Alg. devel./DQ/testbed studies    80%              50%
Wendy Taylor          York             Algorithm development/studies     75%              20%
Brigitte Vachon       McGill           Alg. devel./testbed/management    100%             75%

Table 1: ATLAS-Canada investigators currently active in areas relevant to the HLT, along with their fraction of research time spent on ATLAS and the fraction of their ATLAS time dedicated to the HLT. Also listed are their principal areas of activity in the HLT.

(i) Trigger Algorithms, Performance and Menu (TAPM) Group

This group is tasked with the development, tuning and validation of the various trigger reconstruction algorithms. The Canadian role within this group builds on the existing Canadian expertise in ATLAS calorimetry.

ATLAS-Canada is currently the dominant leader of the HLT jet slice, with Canadians responsible for the software development, maintenance and validation, including the cluster algorithms, as well as the jet trigger menu configuration for the Event Filter. In addition, Canadians are involved in performance, calibration and pile-up studies. The entire ATLAS jet slice effort is coordinated by B. Vachon, who is the ATLAS HLT Jet convener and a member of the TAPM steering committee.

Canadians are also working on the data access for the L2 missing ET algorithms. This is one of the crucial aspects of this algorithm because the HLT trigger design at both the L2 and EF levels is based on regions of interest, whereas missing ET algorithms require access to the entire detector’s calorimetry readout.

The expertise gained from these tasks is already enabling Canada to have a direct impact on the ATLAS physics program, with Canadians developing trigger criteria for the charged Higgs and top pair production triggers. This synergy between trigger algorithms and physics topics is one of the primary reasons for a strong Canadian participation in the trigger, and as our involvement in the trigger matures we expect this effort to expand to include other physics topics aligned with the interests of Canadian physicists.

As well as these specific contributions to the trigger algorithms, Canada is playing a leading role in the technical testing of the trigger executables. One contribution in this regard was the use of the WestGrid Linux farm for large-scale testing of the monitoring and control framework. At the height of the tests all 1680 CPUs in the farm were used. These tests discovered several important bugs in the low-level network routines that only became apparent at this large scale and which might otherwise have caused delays to ATLAS. Canadian physicists are now using the McGill testbed facility to perform memory leak tests and timing measurements on the trigger code. These are both crucial to ensure that the trigger executable can run for extended periods of time without crashing and also runs within the time allotted. The McGill testbed is a critical resource for ongoing tests of the ATLAS HLT technologies and software development.
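To give a flavour of what these testbed measurements involve, the sketch below shows a minimal per-event timing and memory-growth check of the kind run on the McGill testbed. It is an illustration only: the profiled function and event objects are placeholders rather than the actual ATLAS TDAQ tools, and the 10 ms default budget simply reflects the nominal L2 target discussed later in this proposal.

import resource
import time

def profile_algorithm(process_event, events, time_budget_s=0.010):
    """Run a trigger algorithm over a set of events, recording per-event
    wall-clock time and the growth of the process's peak resident memory.
    Both process_event and the event objects are placeholders for whatever
    HLT algorithm and data format are under test."""
    timings = []
    rss_start_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    for event in events:
        start = time.perf_counter()
        process_event(event)
        timings.append(time.perf_counter() - start)
    rss_end_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return {
        "mean_time_s": sum(timings) / len(timings),
        "max_time_s": max(timings),
        "events_over_budget": sum(1 for t in timings if t > time_budget_s),
        # Steady growth of peak resident memory over long runs hints at a leak.
        "peak_rss_growth_kb": rss_end_kb - rss_start_kb,
    }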

(ii) Testbeam Studies

Our involvement in HLT-related test beam studies has been significant. Test beams in 2003 and 2004 used a full slice through the ATLAS trigger system, with the Canadian MAGNI cluster used to run the HLT. This system consisted of 29 computers controlled via the DAQ system and running the latest TDAQ and EF software releases. As part of these test beams, raw online data was also shipped in real time to Canada via a dedicated 1 Gbit/s lightpath as an initial test for running remote monitoring farms (see section (iv) below). There is also ongoing Canadian involvement in HLT technical runs, which includes taking shifts as well as diagnosing and fixing bugs.

(iii) Data Quality Monitoring and Assessment

The monitoring and assessment of data quality (DQM and DQA) will be a cornerstone of successful, robust physics results from ATLAS. Early detection of problems will lead to a minimum amount of problematic data in offline analysis, and the High Level Trigger is the first part of the ATLAS event flow where full events can be reconstructed and detailed checks for problems can be made. In addition, trigger rates and volumes need to be carefully monitored at all stages of the event selection. Monitoring and assessment algorithms running on the High Level Trigger, or on sampled events using High Level Trigger infrastructure, will be critical to the successful ATLAS DQM and DQA efforts.

Tools for ATLAS data quality monitoring are being developed and tested in the ongoing detector commissioning runs, TDAQ technical runs, and offline studies. Using the commissioning and technical runs has proved to be the most useful way of debugging the DQM/DQA tools in the real, online environment, up to and including histogram browsing by the shift crew and automatic histogram checking with problem flagging.

Canadians have taken leading roles in the ATLAS data quality group. R. McPherson is the overall Data Quality Coordinator, having started this effort within ATLAS and coordinating the development and deployment of the DQM/DQA system. The data quality tool implementations for the detector, trigger and offline combined performance groups are led by the system experts themselves, and Canadians also play a strong role in this effort. Canadians lead DQM/DQA efforts in three of the seven HLT slices: e/gamma, tau and jets. These efforts are proving to be a very useful training ground for students and postdocs, since assessing data quality requires a deep understanding of the data themselves. The implementation of reconstruction, histogramming and data quality assessment is an ongoing and expanding effort, including contributions from a number of students across Canada.
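As an illustration of what automatic histogram checking with problem flagging amounts to, the sketch below compares a monitored histogram to a reference and returns a traffic-light style flag for the shift crew. The chi-square test, thresholds and flag names are illustrative choices, not those of the actual ATLAS DQM framework.

# Possible outcomes of an automatic check; names and thresholds are illustrative.
GOOD, WARNING, BAD = "good", "warning", "bad"

def chi2_per_bin(observed, reference):
    """Simple chi-square per bin between a monitored histogram and a reference
    histogram, both given as lists of bin contents (Poisson errors assumed)."""
    chi2, nbins = 0.0, 0
    for obs, ref in zip(observed, reference):
        err2 = obs + ref
        if err2 > 0:
            chi2 += (obs - ref) ** 2 / err2
            nbins += 1
    return chi2 / nbins if nbins else 0.0

def check_histogram(observed, reference, warn_at=2.0, fail_at=5.0):
    """Flag a histogram based on its agreement with the reference; in practice
    the thresholds would be tuned per histogram by the system experts."""
    score = chi2_per_bin(observed, reference)
    if score > fail_at:
        return BAD, score
    if score > warn_at:
        return WARNING, score
    return GOOD, score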

(iv) Remote Monitoring Farms

Canada has been a leader in the development of remote monitoring farms for ATLAS. These computing farms will use real-time data from the Point 1 network to monitor both the detector performance and the trigger decisions. By using remote farms, CERN-based computing resources can be dedicated fully to trigger algorithm processing. In addition to monitoring, these farms will also allow new, untried algorithms to be tested on large numbers of events without the risk of impairing online data taking. Canada has led the testing of dedicated lightpath connections to CERN, with a 1 Gbit/s lightpath used to ship data from the 2004 test beam directly to Alberta. Currently, Canada hosts the only dedicated remote farm facility, consisting of 200 CPU cores and 30 TB of disk. There is considerable overlap between this effort and the data quality monitoring, with the eventual aim of running calorimeter data quality monitoring on live ATLAS data from the Point 1 network.


C. Details of Request

(i) Architecture of the ATLAS High Level Trigger

The ATLAS trigger system is responsible for the initial selection of potentially interesting physics events from the approximately 40 MHz bunch crossing rate at design luminosity, down to a manageable rate of O(100) Hz going to mass storage. Because events which do not satisfy the trigger are discarded, the trigger system must be both reliable and versatile in order to maintain the highest possible efficiency for interesting physics while providing the necessary ∼5 orders of magnitude of rejection power in the face of continuously changing luminosity and detector conditions.

The ATLAS trigger system is organized as a three-tiered system, with the lowest level (L1) implemented in hardware and the remaining two levels, which together comprise the High Level Trigger (HLT) system, implemented in software. Figure 1 shows a block diagram of the trigger system. At L1, special-purpose processors act on reduced-granularity data from a subset of detectors and identify the “regions of interest” (ROI) containing potentially interesting information. An L1 decision is made at the end of a fixed ∼2.5 µs latency period, during which the full raw event information is stored in a digital pipeline. The output rate of the L1 trigger system is limited to about 100 kHz by the capabilities of the subdetector readout systems as well as the capacity of the HLT itself.

The HLT consists of the Level 2 trigger system (L2) and the Event Filter (EF), which together are responsible for refining the L1 selection to pare down the event rate from its nominal ∼100 kHz input rate. When an event has been accepted by the L1 trigger, the full subdetector data is read out and held in buffers for the duration of the L2 processing. The L2 trigger system has access to detailed detector information with full granularity, but only within the ROI identified by the L1 system as containing interesting information. This is necessary in order to avoid the time cost of unpacking the full detector information, which would substantially exceed the L2 time budget of ∼10 ms/event. It is also not possible to apply full offline detector calibrations and corrections at L2. The Event Filter is therefore the first trigger level at which the full detector information is (in principle) available and full offline calibrations can be applied. The EF is anticipated to have an input rate of O(1 kHz) and to provide an order of magnitude or more of additional rejection power within a time budget averaging approximately 1 s per event.

While time constraints and the ROI-based nature of the trigger dictate that dedicated, time-optimized reconstruction algorithms be used at both the L2 and EF levels, it is also desirable that these algorithms, e.g. jet reconstruction or muon identification algorithms, resemble those used in the offline reconstruction as closely as possible. The specific algorithms which are run in the HLT, and the detailed menu of trigger signatures which is available at any given luminosity of data taking, therefore represent a trade-off between physics performance and processing time. Since the size of the HLT computing farm ultimately determines the average processing time per event which can be used to make a trigger decision, the computing capacity of this system represents the primary limitation on the overall performance of the HLT. It is anticipated that the HLT will require (the equivalent of) approximately 500 current dual-CPU nodes for L2, plus an additional 1500 nodes for the EF. At the time of writing, only approximately 10% of the complete HLT hardware has been purchased and installed.
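The arithmetic behind these farm-size estimates can be sketched directly from the rates and time budgets quoted above, together with the ∼3 kHz L2 output rate assumed in the next section. The short Python fragment below reproduces the approximate numbers; the inputs are design targets taken from the text, not measurements.

# Design rates and per-event time budgets quoted in the text (approximate).
BUNCH_CROSSING_RATE_HZ = 40e6    # LHC bunch crossing rate
L1_OUTPUT_RATE_HZ      = 100e3   # limited by readout and HLT capacity
L2_OUTPUT_RATE_HZ      = 3e3     # assumes the ~30x L2 rejection of Table 2
EF_OUTPUT_RATE_HZ      = 200.0   # rate of events going to mass storage
L2_TIME_PER_EVENT_S    = 10e-3   # ~10 ms/event at L2
EF_TIME_PER_EVENT_S    = 1.0     # ~1 s/event in the Event Filter

# Total rejection of the three-level trigger: ~2e5, i.e. ~5 orders of magnitude.
total_rejection = BUNCH_CROSSING_RATE_HZ / EF_OUTPUT_RATE_HZ

# A processing unit (CPU core) handling events serially sustains 1/t events per
# second, so the farm size is simply the input rate times the mean processing time.
l2_cores = L1_OUTPUT_RATE_HZ * L2_TIME_PER_EVENT_S   # ~1000 cores
ef_cores = L2_OUTPUT_RATE_HZ * EF_TIME_PER_EVENT_S   # ~3000 cores

print(f"total rejection ~ {total_rejection:.0e}")
print(f"L2 farm ~ {l2_cores:.0f} cores (~{l2_cores/2:.0f} dual-CPU nodes)")
print(f"EF farm ~ {ef_cores:.0f} cores (~{ef_cores/2:.0f} dual-CPU nodes)")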

(ii) HLT processing performance and capacity

Table 2 lists the main assumptions from which the size of the final HLT system has been determined [1]. The L1 output event rate is limited to 100 kHz by the design capability of the subdetector readout systems. Assuming that the L2 system achieves an event rejection factor of approximately 30 implies an L2 output rate of approximately 3 kHz. On the other hand, the target EF output event rate is constrained by the requirement of writing to permanent storage a maximum of about 3 PB of raw data per year [2]. Assuming a duty cycle of about 50% for ATLAS and the LHC during the two thirds of each year that the machine is running implies that the EF output data rate should be capable of achieving 300 MB/s (for 10^7 seconds of data taking per year).

Figure 1: Block diagram of the trigger and data acquisition system.

Parameter                    Assumed value
L1 output event rate         100 kHz
Event size                   1.5 MB
L2 rejection factor          30
L2 event processing rate     100 events/s
EF event processing rate     1 event/s

Table 2: Main assumptions used to design the size of the HLT system.

An average event size of 1.5 MB then further implies, under these conditions, an EF output event rate of approximately 200 Hz. The concrete implementation of the HLT system designed to achieve the required performance target is also based on the assumption that the L2 and EF systems can process 100 events/s and 1 event/s per processing unit, respectively, or equivalently, that the event processing time for the L2 and EF systems is on average 10 ms and 1 s. Recent timing measurements of realistic algorithms run on a fraction of newly procured HLT hardware indicate that these timing targets, although challenging to achieve with current CPU chip design and software code performance, should be realistically attainable on the time scale of LHC turn-on [?]. For the L2 system to sustain a 100 kHz input event rate with an event processing rate of 100 events/s per unit implies a system composed of approximately 1000 processing units, i.e. CPU cores with sufficient clock speed to meet the target processing rate. The EF system, by a similar argument, will require approximately 3000 CPU cores to meet its target design performance.
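Written out explicitly, the storage-limited output rate quoted above follows from the figures in Table 2 together with a nominal 10^7 s of data taking per year (the symbols below are introduced here only for compactness):

\[
R^{\mathrm{data}}_{\mathrm{EF}} \approx \frac{3~\mathrm{PB/year}}{10^{7}~\mathrm{s/year}} = 300~\mathrm{MB/s},
\qquad
R^{\mathrm{event}}_{\mathrm{EF}} \approx \frac{300~\mathrm{MB/s}}{1.5~\mathrm{MB/event}} = 200~\mathrm{Hz}.
\]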

(a) HLT system bottleneck

It is challenging to determine in advance the bottleneck of the HLT system design and its current baseline implementation. It is unlikely that either the Sub-Farm Output buffers (SFO) or the link from the ATLAS pit to the CERN computing center will be a limiting factor. The capacity of these two components is defined by the amount of data that can be handled offline, largely dictated by the cost of data storage (disks and tapes). Unless data storage prices drop dramatically, the achieved performance of the SFOs and of the data link to the CERN computing center already exceeds the nominal requirements.

If the bottleneck of the HLT turns out to be too low a rejection rate at L2, additional Sub-Farm Input nodes (SFI), dedicated to the process of assembling complete events (event building), as well as additional EF processing units, would be required in order to achieve the system design performance without compromising the ATLAS physics program. The SFIs represent only about 5% of the cost of the HLT. The major challenge in addressing such a situation would therefore be to provide sufficient additional processing power in the EF within the finite allocated space and infrastructure. Given the very tight space constraints on housing the HLT system, solutions using lower-power (in Watts) and higher-performance (and hence more expensive) HLT machines would have to be explored.

If, on the contrary, the bottleneck of the system turns out to be too low a rejection rate at the EF, there is no need to purchase additional SFIs; the solution would be to implement more processing power in the EF under the constraints discussed in the previous paragraph. The current design of the data collection network should support at least a factor of 1.5 increase in event building rate, and possibly up to a factor of 2. Achieving even higher performance would require an additional investment in the network and the readout system (ROS). However, in terms of relative costs, the EF processing capability is still likely to be the cost driver.

(b) Physics performance

The rejection rate achieved by the HLT system depends directly on the performance of the reconstruction algorithms and the trigger menu configured.

A large effort continues to be dedicated to the development of optimal reconstruction algorithms for the L2 and EF systems that meet the performance targets of the HLT baseline design. These algorithms must perform under stringent external constraints beyond those of typical offline reconstruction algorithms. These constraints include, particularly at L2, a strict time budget, access to detector information only within a “Region of Interest”, and access to limited detector granularity. Within this context, only small incremental improvements to the performance, and thus to the event rejection, are foreseen in the future.

The design of a trigger menu consists of achieving an optimal compromise between the different competing goals of recording statistically meaningful samples of events with widely varying kinematic characteristics, all within the limited available bandwidth of the system. The trigger menu strategy in ATLAS is to use, as much as possible, simple inclusive event selection criteria that provide an adequate signal efficiency for events of interest for a wide range of physics analyses. A number of dedicated triggers are also foreseen, for example for recording sufficient samples of events for monitoring and detector calibration, as well as exclusive triggers complementary to the general set of inclusive physics triggers.

Even a modest reduction in HLT processing capability, for example due to financial constraints, would have a considerable negative impact on the ATLAS physics program. Coping with a reduction in HLT processing capability would require that higher energy thresholds be used for objects identified by the L1 system, thereby significantly reducing the signal efficiency for many different physics analyses. For example, a 20% reduction in HLT power from the design baseline would imply a maximum L1 output rate of approximately 80 kHz. A reduction in L1 event rate would typically be achieved by a combination of an increase in the momentum thresholds of unprescaled triggers and an increase in the prescale factors of already prescaled triggers. An increase of the L1 electron threshold from 20 GeV to 25 GeV would result in an efficiency loss of order 10% for identifying top-antitop quark events, even before applying any additional HLT selection. It is therefore imperative to build an HLT system capable of delivering the processing power foreseen in the baseline design.
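For readers less familiar with the prescale mechanism mentioned above, the sketch below shows the idea in a few lines of Python: a prescaled trigger item keeps only every N-th event that passes its selection, reducing its rate by the prescale factor N. The trigger item names, thresholds and prescale values are purely illustrative, not actual ATLAS menu entries.

import itertools

class PrescaledTrigger:
    """Minimal illustration of a prescaled trigger item: only every N-th event
    passing the selection is accepted, reducing its output rate by a factor N."""

    def __init__(self, name, passes_selection, prescale):
        self.name = name
        self.passes_selection = passes_selection   # callable: event -> bool
        self.prescale = max(1, prescale)
        self._counter = itertools.count()

    def accept(self, event):
        if not self.passes_selection(event):
            return False
        # Keep 1 out of every `prescale` selected events.
        return next(self._counter) % self.prescale == 0

# Hypothetical menu items: an unprescaled 25 GeV electron trigger and a
# low-threshold jet trigger prescaled by 100 to fit within the bandwidth.
em25 = PrescaledTrigger("EM25", lambda ev: ev["em_et_gev"] > 25.0, prescale=1)
j10 = PrescaledTrigger("J10", lambda ev: ev["jet_et_gev"] > 10.0, prescale=100)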

(iii) Details of hardware request

The relative fraction of the HLT computing capacity that will be utilized for either L2 or EF processing will evolve with time and depend in detail on the composition of the trigger menu and the trigger algorithm performance, the LHC luminosity profile, and beam and detector conditions. It is therefore desirable to be able to reallocate resources from the EF computing farms to the L2 farms (and vice versa) according to need. The HLT farms are therefore organized as modular units, with the basic unit being a standard 47U computing “rack”. Because the L2 and EF trigger systems have different networking needs, three categories of standard HLT racks have been defined: dedicated L2 racks, dedicated EF racks, and “EF2” racks with the capability of operating as either L2 or EF farm resources depending on need.

The ATLAS HLT makes use of Commercial Off-The-Shelf (COTS) components based on widely supported industrial standards wherever possible, permitting future improvements from industry to be incorporated. HLT performance specifications and estimates of hardware needs are described in terms of current COTS costs and performance, based on manufacturer specifications and dedicated tests within ATLAS HLT testbeds in a realistic HLT environment. Consequently, the specified hardware represents the state-of-the-art at the time of writing. Due to rapid changes in computing technology, it is anticipated that the hardware which is actually purchased will be that which provides the best price/performance at the time that the purchase is made. Details of the proposed hardware purchase profile are presented in Section (iv) below. No specific assumptions have been made as to which technologies will be available by the end of the period covered by this grant request.

Figure 2: Example of an HLT rack arrangement. Note that slots are left unoccupied to limit the power output of a rack.

A standard EF2 “rack” consists of 30 1U processor nodes, a 48-port switch for the control network and a second 48-port switch with a 10 Gb uplink for the data network. Figure 2 shows the suggested arrangement of one of these HLT racks. A single rack will therefore host the equivalent of approximately 240 cores (assuming dual-socket quad-core CPUs). HLT hardware will be physically located in the surface building (SDX1) above the ATLAS pit and adjacent to the ATLAS control room. All elements of a rack are required to conform to appropriate electrical, mechanical and environmental standards, including power supply and cooling, consistent with sustained operation in the CERN SDX1 environment. Detailed specifications for each component of a rack are presented in the budget justification section of this proposal.

Standard 47U 19” computing “racks” and associated cooling equipment have already been purchased and installed by ATLAS in order to provide the backbone infrastructure of the HLT system. Cooling is provided by forced air circulation, with air flow from the front to the back of the racks via redundant fans on each of the processor and file server nodes. All empty racks are already in place, and Figure 3 shows the footprint of the existing rack arrangement on the two dedicated floors that will house the HLT system. The funding requested is therefore for the purchase of equipment to populate the existing empty infrastructure.
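For reference, the 240-core figure follows directly from the rack composition described above; the short sketch below records that composition as data and recomputes it. Only the counts are taken from this proposal; the specific node and switch models are deliberately left unspecified, as in the text.

# Composition of a standard EF2 rack as described above.
EF2_RACK = {
    "processor_nodes": 30,      # 1U dual-socket nodes
    "sockets_per_node": 2,
    "cores_per_socket": 4,      # assuming quad-core CPUs
    "control_switch_ports": 48,
    "data_switch_ports": 48,    # data switch carries a 10 Gb uplink
}

cores_per_rack = (EF2_RACK["processor_nodes"]
                  * EF2_RACK["sockets_per_node"]
                  * EF2_RACK["cores_per_socket"])
print(cores_per_rack)           # 240 cores, as quoted above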

Figure 3: HLT and data acquisition rack arrangements in SDX1 building at CERN.

Because the HLT system is anticipated to be operational with a 100% duty cycle for extended periods, the computing equipment is required to be of high quality, with built-in redundancy in, for example, the cooling systems, and with the ability to replace failing components with minimal disruption to the system. The latest benchmark tests suggest that achieving the target performance summarized in Section (b) with the current software infrastructure and algorithms will require that a large fraction of the HLT compute nodes use dual-socket motherboards with quad-core (Intel or AMD) processors of at least 2.33 GHz. The projected power consumption of some of these chips may, however, require the purchase of more expensive low-voltage CPUs in 2009/2010 in order to satisfy the tight electrical constraints in the SDX1 building. Given the current market price of quad-core CPUs, it is likely that the initial equipment purchase in 2008 will be based on dual-core processors on motherboards with sockets compatible with possible future CPU upgrades. HLT computing resources will run a CERN Linux operating system (current versions are SLC3.06 and SLC4.3). Nodes will be benchmarked for CPU, memory and I/O performance with this operating system prior to the purchase of the final hardware. HLT performance tests of nodes with hardware similar to that described above have yielded performance compatible with expectations, and in particular have demonstrated that performance scales reasonably well with the number of cores per CPU.

(iv) Costing and purchase profile

Purchasing of the bulk of the HLT computing and network hardware will take place as late as possible, compatible with ATLAS installation and commissioning requirements, in order to derive the maximum price/performance benefit from improvements in computing technology. It is anticipated that the L2 and EF processing hardware will comprise approximately half of the total cost of the final ATLAS HLT/DAQ system, with a substantial majority of the hardware being acquired in the period between 2008 and 2010 covered by this grant request. In this proposal we request funding for the purchase of HLT EF2 racks as a hardware contribution commensurate with the ongoing and increasing Canadian involvement in the ATLAS HLT. The details of the requested hardware are described in the following.

We request funding for the purchase of a total of ten HLT EF2 racks, accounting for approximately 12% of the total ATLAS HLT system, with two racks purchased in 2008 and increasing contributions in the two subsequent years. This purchasing profile is consistent both with the anticipated ATLAS hardware needs as LHC data begin to accumulate and with the overall cash-flow situation within the HLT system: although the hardware need will be greatest in 2010, it is foreseen that ATLAS will be short of funding during this period. The proposed funding profile thus ensures that the Canadian contribution will be timely and, by incorporating the latest computing hardware, will provide maximal performance.

Year                                   2007     2008      2009      2010
L2                                     0 (0)    13 (13)   4 (17)    0 (17)
EF                                     0 (0)    28 (28)   12 (40)   11 (51)
EF2                                    5 (5)    6 (11)    0 (11)    0 (11)
Total ATLAS                            5 (5)    47 (52)   16 (68)   11 (79)
Proposed Canadian contribution (EF2)   0 (0)    2 (2)     4 (6)     4 (10)

Table 3: Anticipated hardware purchase profile for the ATLAS HLT. Table entries indicate the number of HLT racks to be purchased during each calendar year. Numbers in parentheses are the integrated total for each row.

D. Equipment Utilization

Although the specific hardware comprising the HLT racks is COTS, the implementation of this hardware for ATLAS HLT purposes is unique in the world. The performance and capacity of the HLT system will determine both the quality and the scope of the physics program of the ATLAS experiment, and all members of ATLAS worldwide will explicitly rely on and benefit from this system. The HLT hardware requested in this proposal will be utilized with high efficiency for the duration of its operational lifetime.

The LHC is designed to maintain colliding beams at the highest luminosity for a large fraction of its duty cycle. Although bunches are injected into the LHC over tens of minutes prior to acceleration to the nominal collision energy, individual fills are expected to be held for ten or more hours at a time, during which the data acquisition system will be essentially saturated with data and the HLT system will operate at close to full capacity. As the LHC luminosity decreases during a fill (from the loss of beam particles due to collisions), the ATLAS trigger menu will be constantly adjusted to maintain a constant rate of events sent to mass storage. This implies that the load on the HLT system will also be approximately constant throughout an LHC fill.

As the LHC begins to provide colliding-beam data, the focus of Canadian HLT activities will naturally move from development, testing and integration towards trigger commissioning and operation, and hence will rely more heavily on the installed HLT hardware at CERN. As noted previously, several members of the Canadian groups have participated in trigger commissioning technical runs during 2007, in which simulated data is injected into the ATLAS data acquisition chain and the HLT system is operated, using HLT “preseries” hardware, to select events from the data flow stream. Several Canadian faculty, postdocs and students have participated in these commissioning runs, which provide an excellent training ground for computing and data acquisition experts as well as a high degree of synergy with ongoing Canadian HLT activities relating to the McGill HLT testbed, algorithm development and data quality monitoring. For example, the HLT testbed can be used to emulate the ATLAS data acquisition system, and thus can be used for initial training of Canadian TDAQ experts. Additionally, initial data quality monitoring will be performed by ATLAS shifters, and this system is currently being tested and validated during the TDAQ technical runs and detector commissioning runs.

It is anticipated that Canadian participation in the technical runs and related activities will continue in 2008 and eventually evolve into formal operational responsibilities for Canadian trigger and data acquisition experts within the HLT group. For example, students who are resident at CERN for a portion of their PhD program would be expected to receive training as ATLAS TDAQ experts and act as the on-call experts responsible for aspects of HLT operations during ATLAS data taking. As such, members of the Canadian group are also expected to directly utilize, and have some responsibility for the operation of, the hardware requested in this proposal. However, HLT computing hardware which is installed at CERN will be centrally maintained by the ATLAS TDAQ group and the central CERN computing services.
Thus, unlike other contributions to ATLAS detector hardware, a contribution to the HLT hardware does not incur any additional common-fund cost to the contributing group. Hardware installation and services will also be provided through CERN, and therefore are not included in the proposed budget.

References

[1] ATLAS TDAQ Collaboration, “ATLAS High-Level Trigger, Data Acquisition and Controls: Technical Design Report”, CERN/LHCC/2003-022, 2003 (Available: http://cern.ch/atlas-proj-hltdaqdcs-tdr/).

[2] ATLAS Collaboration, “Computing Technical Design Report”, CERN-LHCC-2005-022, 2005 (Available: http://atlas-proj-computing-tdr.web.cern.ch/atlas-proj-computing-tdr/Html/Compu