CERN 87-07 Vol. I 4 June 1987

ORGANISATION EUROPÉENNE POUR LA RECHERCHE NUCLÉAIRE CERN EUROPEAN ORGANIZATION FOR NUCLEAR RESEARCH

PROCEEDINGS OF THE WORKSHOP ON PHYSICS AT FUTURE ACCELERATORS

La Thuile (Italy) and Geneva (Switzerland) 7-13 January 1987

Vol. I

GENEVA 1987 © Copyright CERN, Genève, 1987

Literary and scientific copyrights reserved in all countries of the world. This report, or any part of it, may not be reprinted or translated without written permission of the copyright holder, the Director-General of CERN. However, permission will be freely granted for appropriate non-commercial use. If any patentable invention or registrable design is described in the report, CERN makes no claim to property rights in it but offers it for the free use of research institutions, manufacturers and others. CERN, however, may oppose any attempt by a user to claim any proprietary or patent rights in such inventions or designs as may be described in the present document.

CERN - Service d'Information scientifique - RD/737 - 3000 - juin 1987

ABSTRACT

A workshop took place at La Thuile and at CERN in January 1987 to study the physics potential of three types of particle collider with energies in the TeV region, together with the feasibility of experiments with them. The machines were: a Large Hadron Collider (LHC) placed in the LEP tunnel at CERN, with a total proton-proton centre-of-mass energy of about 16 TeV; an electron-proton collider, using the LHC and LEP, with a centre-of-mass energy in the range 1.3 TeV to 1.8 TeV; and an electron-positron linear collider with centre-of-mass energy about 2 TeV. This volume of the Proceedings contains summary talks given at CERN by the conveners of the study groups. They cover the possibilities for discovery of new phenomena anticipated in the energy region up to the order of a TeV in the centre of mass of colliding partons, or of the electron and positron. Also discussed are the limits of current technology in the construction of particle-detector systems suitable for use at these energies, and especially in the high event rates provided by a proton-proton collider of luminosity 10³³ cm⁻² s⁻¹ or more.

Advisory Panel on the Physics Potential and the Feasibility of Experiments at Multi-TeV Energies

G. Altarelli, Rome
J. Ellis, CERN
G. Flügge, Aachen
M. Holder, Siegen
P. Jenni, CERN
J. Mulvey (Chairman), Oxford
F. Richard, Orsay
J. Sacton, Brussels

Organizing Committee for the La Thuile Workshop

J. Ellis, CERN
M. Greco, Rome
P. Jenni, CERN
J. Mulvey, Oxford
C. Petit-Jean-Genaz (Secretary), CERN

PREFACE

Under the auspices of CERN and ECFA, a 'Workshop on Physics at Future Accelerators' was held at La Thuile, in the Val d'Aosta, Italy, from 7 to 10 January 1987. On 12 and 13 January, the conclusions reached by the study groups were summarized by their conveners before a large audience at CERN. The papers contained in this volume are based on those summary talks. A second volume will publish contributions from individual participants, giving details of the studies carried out. I should like to take this opportunity, on behalf of the Workshop Organizing Committee, to thank all those who made the Workshop possible, and especially the conveners and members of the study groups whose hard work, both in the preceding months and during the days and nights at La Thuile, guaranteed its success. The Organizing Committee is greatly indebted to the Public Education Authority of the Autonomous Region of the Val d'Aosta and to René Faval, Minister of Education for the Region*), for the most generous financial assistance granted to the Workshop. We are also indebted to Professor Bruno Baschiera who was our very effective link with the Education Authority, and to the manager, Claudio Robba, and staff of the Hotel Planibel for their warm hospitality and readiness to make arrangements which would assist the progress of the Workshop. CERN and CERN staff helped in many ways. Christine Petit-Jean-Genaz (CERN-LEP), the secretary to the Organizing Committee, worked tirelessly on the preparations for the Workshop and during it, making sure everything was done which should properly be done. At La Thuile, Michelle Mazerand (CERN-EP) cheerfully helped with the continuous flow of physicists coming to the secretariat with questions to be answered and jobs to be done. We are very grateful to them both, and to their respective groups for releasing them from their normal duties.

J.H. Mulvey, Oxford Editor.

*) Assesseur à l'Instruction publique de la Région autonome de la Vallée d'Aoste.

CONTENTS

Page

Preface v

Introduction, J.H. Mulvey 1

The Large Hadron Collider in the LEP tunnel, G. Brianti 6

Linear e⁺e⁻ colliders, K. Johnsen 16

Status of the Superconducting Super Collider, M. Gilchriese 30

The Standard Theory Group: General overview, G. Altarelli 36

Experimental studies, D. Froidevaux 61

Beyond the Standard Model, J. Ellis and F. Pauss 80

Large-cross-section processes, Z. Kunszt 123

Detection of jets with calorimeters at future accelerators, T. Åkesson 174

Vertex detection and tracking, D.H. Saxon 205

Particle identification at the TeV scale in pp, ep and ee colliders, F. Palmonari 233

Report from the Working Group on Triggering and Data Acquisition, J.R. Hansen 21A

Design and layout of pp experimental areas at the LHC, W. Kienzle 295

ep interaction regions, W. Bartel 303

The CLIC interaction region, J. Augustin 310

The nature of beamstrahlung, P. Chen 314

Physics and detectors at the Large Hadron Collider and at the CERN Linear Collider, U. Amaldi 323

List of Participants 353

INTRODUCTION

J.H. Mulvey, Nuclear Physics Laboratory, University of Oxford, UK.

In June 1985 the CERN Council asked Professor Carlo Rubbia to chair a committee charged with exploring options for the long-range scientific future of CERN. The Long-Range Planning Committee (LRPC) in turn set up three Advisory Panels to assist it in its tasks. One, chaired by G. Brianti, continued the design studies which had already started for a Large Hadron Collider (LHC) in the LEP tunnel. Used in conjunction with LEP, this would also make it possible to study collisions between electrons and protons. Another Panel, under K. Johnsen, was given the challenging task of investigating the technical feasibility of an electron-positron linear collider in the TeV energy range (the CERN Linear Collider, or CLIC). The third Panel was commissioned to study 'the physics potential, experimental instrumentation, and other parameters' relating to the use of these particle colliders. Following the pattern set by similar studies in the past, the Panel was to engage members of the community as participants in the studies, and to form a link between the LRPC and ECFA. Study groups, listed in Table 1 with the names of their conveners, were formed in the first half of 1986. They each met several times later in the year to prepare for the four-day Workshop to be held at La Thuile from 7 to 10 January 1987. About 150 particle physicists took part in the study groups and 93 attended the Workshop. Summary reports from the study groups were presented at open meetings held at CERN on 12 and 13 January.

Table 1

Study groups and conveners

Topics Conveners

1. Physics of the Standard Model: G. Altarelli (Rome), D. Froidevaux (Orsay)
2. Beyond the Standard Model: J. Ellis (CERN), F. Pauss (CERN)
3. Large cross-section processes: Z. Kunszt (ETH, Zurich), W. Scott (Liverpool)
4. Jet detection and calorimetry: T. Åkesson (CERN)
5. Vertex detection and tracking: D. Saxon (RAL)
6. Particle identification: F. Palmonari (Bologna)
7. Triggering and data-acquisition: J. Renner Hansen (Copenhagen)
8. Intersection regions, machine backgrounds, etc.:
   pp: W. Kienzle (CERN)
   ep: W. Bartel (DESY)
   e⁺e⁻: J. Augustin (Orsay)

Instead of structuring the studies according to the machines, it was decided that comparisons would be clearer if all three types of particle collider were considered in parallel by each study group. As Table 1 indicates, three

'physics' groups analysed the machines in terms of their potential for 'discovery' in a selection of physics areas considered to be of the greatest importance for the development of our understanding. The third of these groups had the task of assessing the many processes with relatively large cross-sections which, as well as being of intrinsic interest, might be expected to be sources of backgrounds hampering identification of certain 'discovery' signals. The task given to the detector groups was to concentrate on the technical limitations in their area of responsibility rather than to accomplish designs of complete detector systems, with all the compromises such a step entails, and to identify directions for R&D. Over the past three years, several Workshops have been held in the USA to examine the physics possibilities of, and detector designs for, experiments at a 40 TeV centre-of-mass energy proton-proton collider, the Superconducting Super Collider (SSC). European physicists have participated in some of these studies and in the most recent 'Snowmass '86' Workshop. Our study groups have benefited from, and built upon, the work of the SSC studies and the Lausanne LHC Workshop held in 1984. H. Williams (Pennsylvania) reported on the Snowmass '86 detector studies at a meeting of study groups held at CERN in October 1986. M. Gilchriese (Cornell), Chairman of the SSC Central Design Group Task Force on Detector Development, took part in the Workshop. In these studies, for the first time, a comparison has been made of the physics interest and the feasibility of experiments at three types of particle collider: proton-proton (pp), electron-proton (ep), and electron-positron (e⁺e⁻). The basic machine parameters assumed are given in Table 2. The case of a proton-antiproton (p̄p) collider, which was studied at the Lausanne LHC Workshop, has not been considered again as it appears to offer poor value for money compared with the pp version of the LHC: the maximum luminosity would be about two orders of magnitude less, giving a much reduced physics potential but with no substantial saving in cost. Nevertheless, p̄p operation would be possible at the LHC with a luminosity of ≈ 10³⁰ cm⁻² s⁻¹ by using one beam channel, if an injection system is built.

Table 2

Collider parameters

Machine           √s (TeV)        L (cm⁻² s⁻¹)

LHC    pp         16              10³³-10³⁴
       ep         1.3             10³²
                  1.8             10³¹
CLIC   e⁺e⁻       2               10³³-10³⁴

The LHC is also capable of being used as a collider for heavy ions; for example, collisions of oxygen nuclei could be obtained at a centre-of-mass energy of 128 TeV and a luminosity of 2.5 × 10²⁶ cm⁻² s⁻¹ with the present injection system, and improvements could give heavier ions and higher luminosities. The physics potential of this possibility has not been considered, neither has fixed-target operation; and no further thought has been given to the possibility of experiments using the neutrinos and muons emanating from the intersection points, as pointed out by De Rújula and Merkle at the Lausanne Workshop. Although they were not part of the Workshop programme, updated versions of the short reports given at the CERN open meetings by G. Brianti and K. Johnsen on the LHC and CLIC studies have been included in order to provide a machine background to the physics and detector discussions. Full details can be found in the complete reports of the two Panels. On the same occasion, M. Gilchriese made a brief statement regarding the status of the SSC, and his transparencies are also included.

The great success of the standard SU(3)c ⊗ SU(2) ⊗ U(1) gauge theory of strong and electroweak interactions, which lies in its ability to describe essentially all available data on fundamental particle interactions without contradiction, provides a firm base from which to view and define the next major questions to be tackled. Foremost is the question of electroweak symmetry breaking and the introduction of mass: is the Higgs mechanism responsible, and is there only one Higgs boson or are there more? If the Higgs mass is ≈ 1 TeV or higher, the weak interaction becomes strong, spawning a new spectroscopy of massive particle states. In the natural scale set by the Planck mass (m_P), the hierarchy problem (the 17 orders of magnitude gap from the W and Z masses up to m_P) cannot be understood without new physics entering in the energy range up to the order of a TeV. Possible solutions include: compositeness of the W and Z, and perhaps the Higgs, implying a new 'strong' interaction such as technicolour; another is supersymmetry, with new partners for all existing fermions and bosons. Supersymmetry may also be a key to the further unification of the forces and is contained within superstring theories which offer, for the first time, the prospect of a theory incorporating quantum gravity. Among other outstanding questions, there is no explanation for the replication of quark-lepton families, which might again be a reflection of compositeness; and the left-right asymmetry of the weak interaction may be a 'low-energy' phenomenon, with symmetry restored by a right-handed W boson appearing at a mass greater than a few hundred GeV. Arguments based on fundamental ideas which have so far been trustworthy guides lead to the expectation that a rich phenomenology is waiting to be found in the energy region up to the order of a TeV. One may look forward to major discoveries from experiments now in progress or in preparation at the CERN p̄p Collider, the Tevatron, the SLC, LEP, and HERA, but it is unlikely that these will be able to settle completely the choices to be made among present theoretical speculations. The guidance necessary to achieve the next major synthesis, perhaps along a path not yet visualized since 'surprises should be expected', seems to require experiments at energies up to an order of magnitude higher, that is ≈ 1 TeV in the parton-parton centre of mass. The first three study groups have investigated in some detail the possible experimental manifestations of the new physics, including the crucial search for signs of a Higgs boson. They have also carefully considered the problems of distinguishing small signals in the presence of other processes which produce similar observable effects. This problem of 'physics background' is especially difficult in the case of pp colliders. The quality of the information obtained from the detectors is another critical factor in distinguishing signal from background; here, realistic estimates of detector performance have been used, backed up in the pp case by the unique experience of those already familiar with the analysis of data from the CERN p̄p Collider. As this is the first study of its kind to consider experiments at an e⁺e⁻ collider of TeV energy, rather more attention was naturally given to this option than to the pp and ep cases; for these one may still rely, for several aspects of the physics, on the work of earlier Workshops, especially the one on the LHC at Lausanne in 1984.
The situation was somewhat the reverse for the detector studies, where the most difficult technical problems confront the experimentalist at a pp collider, largely because of the very high event rates, ≈ 10⁸ Hz, and the short interval between bunch crossings, 25 ns, at L ≈ 10³³ cm⁻² s⁻¹. The Jet Detection and Calorimetry Group chose to consider a compact, high-granularity uranium plus silicon system; in the pp case, radiation levels would require a different solution within ~ 10° of the beams. A large (2.5 m outer radius) tracking chamber, made up of eight super-layers of drift chambers, is proposed by the Tracking and Vertex Group; this would operate satisfactorily in a pp collider up to L ≈ 10³³ cm⁻² s⁻¹. At present, radiation damage is the main limitation on placing a vertex detector close to the beam in a pp collider at high luminosity, and a way of solving this problem has still to be found. This is an area in which R&D is particularly necessary. The Particle Identification Group have reviewed the study of muon detection presented at the Lausanne Workshop, and considered the questions of electron, τ-lepton, and neutrino detection. One of the hardest tasks at a future high-luminosity pp collider will fall to those responsible for the triggering and data-acquisition systems. This group has outlined a scheme which involves pipelining the data stream while decisions are made at the first trigger level in order that the dead-time can be kept close to zero. But there would have to be substantial technological developments in order to achieve the desired performance; for example, the 12- to 16-bit 80 MHz FADCs required are not yet available. Although techniques exist today which would be adequate for experiments at e⁺e⁻ and ep colliders, the pp case at L = 10³³ cm⁻² s⁻¹ goes up to and beyond what is currently feasible. Research and development in the detector field is vital. In the pp case there are strong motives for attempting to go towards even higher luminosity, since this may bring parton collisions at higher centre-of-mass energy within effective reach. The prospects for pp collider experiments using luminosities in excess of 10³³ cm⁻² s⁻¹, even with only a limited 'non-general-purpose' detector system, are being further assessed by a special working group. Detectors and machines come together at the beam intersection regions, where the requirements of one place constraints on the other. In addition, with LEP and the LHC sharing the same tunnel, provision has to be made for access to one set of experiments while the other machine is in operation. This point is addressed in the report on the LHC studies, and some solutions are outlined in the one dealing with the LHC pp intersection region. However, further studies are needed; these should include different approaches to the assembly and dismantling of large detector systems. The ep intersection region must include the means to bring the electron beam of LEP into collision with the LHC protons in a way which minimizes synchrotron radiation. It would also be very desirable to introduce spin rotators to obtain longitudinally polarized electrons. Given these requirements, it seems unlikely that one region can be optimally used for both ep and pp physics.

Consideration of the intersection region for the e⁺e⁻ case faces unique difficulties: first, no design yet exists for a final-focus system capable of achieving the required luminosity, but it is certain that focusing elements, perhaps of a novel kind involving plasmas, will be placed close to the intersection point, say within 30 cm of it; secondly, the intense electromagnetic interaction between the two dense bunches causes the emission of synchrotron radiation, termed 'beamstrahlung'. These questions have been investigated in a provisional way at this Workshop. The results suggest that the effects of beamstrahlung are not serious for the CLIC parameters but further studies will be necessary, and the solution of the final-focus problem remains one of the major challenges for linear collider designers. The mechanisms which generate beamstrahlung are reviewed briefly by P. Chen (SLAC) in a paper based on his talk given at CERN on 12 January. The final contribution to this volume is the report which U. Amaldi presented at CERN on 13 January, and which summarizes the main points emerging from the Workshop. As he says, at the start of the programme of studies initiated by the LRPC, the general view was that whereas building an LHC would be straightforward, great technical difficulties stood in the way of those wishing to build general-purpose detectors for use at the LHC, capable of operating effectively at the high luminosities required for 'discovery' physics, whilst the opposite was true for CLIC. Now, although major technical problems remain, some solutions for the LHC detectors are in sight, and there has been remarkably encouraging progress towards a design for CLIC. The physics studies show that both the LHC and CLIC would be very powerful, complementary tools for the exploration of the TeV energy region and for the investigation of the new phenomena anticipated there. The La Thuile reports published in this volume have formed the basis for the Advisory Panel's report to the LRPC. The main recommendations of the Panel can be briefly summarized as follows:
i) The construction, in the LEP tunnel, of a Large Hadron Collider having the maximum energy technically feasible and a luminosity reaching 10³³ cm⁻² s⁻¹ to 10³⁴ cm⁻² s⁻¹, would be a major step forward in the exploration of fundamental processes at energies approaching a TeV, covering much of the energy range within which new phenomena are expected to occur. The opportunity would also be created to collide electrons and protons, thus greatly enhancing the overall potential for discovery, and this option must be maintained.
ii) A 2 TeV centre-of-mass energy e⁺e⁻ linear collider with luminosity in the range 10³³ cm⁻² s⁻¹ to 10³⁴ cm⁻² s⁻¹ would be an outstandingly effective tool for the investigation of the energy region up to ≈ 2 TeV. Such a collider will be required for a full elucidation of new phenomena in this energy region. The CLIC studies should be continued as a strongly supported R&D programme.
iii) A vigorous programme of detector R&D is required, especially for the pp case, in order to obtain the necessary performance levels at reasonable cost, particularly in the following areas:
a) vertex detectors, preferably of the pixel type, with fast readout and high radiation tolerance;
b) radiation-hard, low-power, fast VLSI electronics (e.g. GaAs);
c) 12- to 16-bit 80 MHz FADCs;
d) Digital Signal Processors (DSPs) and Transputers;
e) fast, high-density data storage (e.g. optical disc).
Many of these technologies are also of great commercial importance, and rapid progress, including lowering of costs, can already be seen. In addition, the R&D programme should develop existing technologies where significant physics benefits are to be obtained through improved performance or a substantial cost decrease. These include:
f) development and tests of jet calorimetry, including U/Si or Pb/Si calorimetry, and techniques for producing very large areas (≈ 2000 m²) of silicon detector;
g) development of calorimetric techniques suitable for jet detection within 10° of the pp collider beams;
h) compact tracking detectors;
i) alignment systems at the 10 µm level of accuracy.
The Advisory Panel concluded its report with the following remarks: 'We are firmly convinced that experiments in the energy region up to the order of a TeV are necessary to reveal the directions to be chosen in achieving a further synthesis in our understanding of the fundamental processes which determine the nature of the physical world. Particle physicists using European facilities have played a major part in the remarkable discoveries of recent years, and are well placed to make further leading contributions in the future. The CERN facilities provide an attractive basis for making a very substantial exploration into the TeV energy region by the construction of a hadron collider in the LEP tunnel. It is also clear that an e⁺e⁻ collider in the TeV region of energy, like CLIC, offers outstanding opportunities for discovery, and would be the natural choice of machine to build as a complement to such a hadron collider.'

THE LARGE HADRON COLLIDER IN THE LEP TUNNEL

G. Brianti, CERN, Geneva, Switzerland

The complete description of this collider, which can operate in two modes, proton-proton and proton-electron, together with LEP, can be found in the CERN report "The Large Hadron Collider in the LEP Tunnel", edited by G. Brianti and K. Hübner. Therefore only its main features relevant for the experiments are summarized in this report, which is a revised version of the summary talk given at CERN on 12 January 1987.

1. PROTON-PROTON PERFORMANCE

For proton-proton collisions, two proton beams of 8 TeV nominal energy circulate in opposite directions in two separate magnetic channels which are side by side in the horizontal plane 0.9 m above the median plane of LEP (Fig. 1); the horizontal separation of the two channels is 180 mm.

Since the circumference of the LHC orbit is fixed by the LEP tunnel, the field in the guiding dipoles must be as high as possible because it determines the top energy. Therefore the nominal field is chosen to be 10 T, which seems technically attainable and economically feasible, provided a vigorous research and development programme is undertaken. The beam orbits, separated in the arcs and over most of the long straight sections, are combined in a single channel just in the region of the experiments, so that the counter-rotating bunches collide in at most 8 interaction points. One of the long straight sections is reserved for the beam dumping system, where the beams do not interact. Apart from the beam energy, the most important parameter from the user's point of view is the luminosity L, given by:

L = N_p² f k_b / (4πσ²) .

Here N_p is the number of particles per bunch, f is the revolution frequency, k_b is the number of bunches in each beam, and σ is the r.m.s. beam radius at the crossing points. The normalized emittance ε has a well-defined lower limit of the order of a few π µm, given by the injector chain. If it is defined by:

ε = 4πγσ²/β ,

the important beam-beam tune-shift parameter ξ becomes simply:

ξ = N_p r_p / ε .

Here γ is the usual relativistic factor and r_p is the classical proton radius. Combining these equations, the luminosity becomes:

L = N_p f k_b γ ξ / (r_p β) .

The number of events ⟨n⟩ in a single beam-beam collision and the bunch spacing in time units T_x are of considerable interest for the experiments at the LHC. They are related to the total p-p cross-section, Σ ≈ 100 mb, by the following equation, which shows that it is impossible to impose conditions simultaneously on L, ⟨n⟩, and T_x:

⟨n⟩ = L Σ T_x .

The preceding equations imply the following emittance ε:

ε = N_p² γ Σ / (⟨n⟩ β) .
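As a quick numerical illustration of these relations, the following short Python sketch evaluates them for the nominal parameter values quoted in the text and in Table 1 below (N_p = 2.6 × 10¹⁰, k_b = 3564, β = 1 m, ε = 5π µm, 8 TeV beams); the ≈ 27 km LEP circumference and the physical constants are assumed values, not taken from this report:

    import math

    # Nominal LHC p-p parameters quoted in the text and in Table 1
    N_p  = 2.6e10               # protons per bunch
    k_b  = 3564                 # bunches per beam
    beta = 1.0                  # beta value at the crossing point [m]
    eps  = 5 * math.pi * 1e-6   # normalized emittance 4*pi*gamma*sigma^2/beta [m]
    E    = 8e12                 # beam energy [eV]

    # Assumed constants
    C   = 26.66e3               # LEP circumference [m], approximate
    c   = 2.998e8               # speed of light [m/s]
    r_p = 1.535e-18             # classical proton radius [m]
    m_p = 0.938e9               # proton rest energy [eV]

    f     = c / C                                           # revolution frequency [Hz]
    gamma = E / m_p                                         # relativistic factor
    sigma = math.sqrt(eps * beta / (4 * math.pi * gamma))   # r.m.s. beam radius [m]

    xi    = N_p * r_p / eps                                 # beam-beam tune shift
    L     = N_p**2 * f * k_b / (4 * math.pi * sigma**2)     # luminosity [m^-2 s^-1]
    T_x   = 1.0 / (f * k_b)                                 # bunch spacing [s]
    Sigma = 100e-3 * 1e-28                                  # total p-p cross-section, 100 mb [m^2]
    n_avg = L * Sigma * T_x                                 # mean events per bunch crossing

    print(sigma * 1e6)   # ~12 micrometres
    print(xi)            # ~0.0025
    print(L * 1e-4)      # ~1.4e33 cm^-2 s^-1
    print(T_x * 1e9)     # ~25 ns
    print(n_avg)         # ~3.5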

An upper limit of the stored energy in the beam, U, might arise from the increasing difficulties of dumping the beams, whilst an upper limit on

the synchrotron radiation power P_s might arise from the heat load on the cryogenic system. These quantities are given by the following expressions:

U = N_p E k_b ,

P_s = (8π/3) r_p e c B E³ N_p k_b f / (m_p c²)³ .

Here, E is the particle energy, c is the velocity of light and B the magnetic field in the dipoles. The beam-beam collisions occur at a small angle φ so that the two beams are well separated where the first unwanted bunch-bunch collisions

would occur. At T_x = 25 ns this would happen ±3.75 m away from the interaction point, i.e. well inside the drift space l* = ±10 m or ±20 m.

The nominal set of LHC parameters is based on a bunch spacing T_x = 25 ns, which was adopted after discussion with the experimenters, and on a number of protons per bunch N_p = 2.6 × 10¹⁰ such that the beam-beam limit ξ = 0.0025 is just reached. The number of bunches, k_b = 3564, is one of the magic numbers permitted by the LHC, SPS and PS circumference ratios, (297/7) : 11 : 1. The risetime of the injection kicker and of the dump kicker requires some spacing between the four bunch trains which are injected one after the other. This will reduce the number of bunches by about 7%. Since this reduction is small and not very well determined, it is preferred to quote the luminosity for the full number of bunches without gaps. The normalized beam emittance is assumed to be the smallest one that can be obtained from the injector chain. The total beam current results in acceptable values for the stored energy in one beam, U = 117 MJ, and for the synchrotron radiation power from both beams, P_s = 3.93 kW. A summary of our assumptions and the resulting nominal LHC performance are shown in Table 1.
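These two figures can be checked in the same way; in the following short Python sketch the beam and machine parameters are those just quoted, while the physical constants, the ≈ 27 km circumference and the 10 T dipole field used for the synchrotron-radiation estimate are assumed values:

    import math

    N_p = 2.6e10                  # protons per bunch
    k_b = 3564                    # bunches per beam
    E_J = 8e12 * 1.602e-19        # beam energy, 8 TeV in joules
    B   = 10.0                    # nominal dipole field [T]

    c    = 2.998e8                # speed of light [m/s]
    e    = 1.602e-19              # elementary charge [C]
    r_p  = 1.535e-18              # classical proton radius [m]
    mpc2 = 0.938e9 * 1.602e-19    # proton rest energy [J]
    f    = c / 26.66e3            # revolution frequency [Hz]

    # Stored energy in one beam: U = N_p E k_b
    U = N_p * E_J * k_b
    print(U / 1e6)                # ~119 MJ (117 MJ is quoted above)

    # Synchrotron radiation power from both beams
    P_s = (8 * math.pi / 3) * r_p * e * c * B * E_J**3 * N_p * k_b * f / mpc2**3
    print(P_s / 1e3)              # ~4 kW (3.93 kW is quoted above)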

With the nominal luminosity of 1.42 × 10³³ cm⁻² s⁻¹, the average number of events per crossing is ⟨n⟩ = 3.55. For those experiments which cannot operate with such a high value of ⟨n⟩, the luminosity can be reduced by increasing the β at the relevant crossing points. In this way ⟨n⟩ can be adjusted from 3.55 down to 0.89. To reach lower luminosities the machine must be operated at reduced beam intensity. On the other hand, a substantial increase in luminosity is conceivable for special experiments that can cope with a higher number of events per bunch crossing and/or a smaller bunch spacing:
i) The β-values at the crossing points can be made smaller than 1 m, in particular if the quadrupoles near the collision point are moved closer to it. The potential gain factor is 2-3, implying that the experiment would have to cope with ⟨n⟩ of about 10.
ii) The bunch spacing can be reduced from 25 ns in steps of 5 ns down to the minimum of 5 ns. The gain in luminosity is inversely proportional to the bunch spacing and, hence, can reach a factor of 5. The number of events per bunch crossing is not increased, but the detector must have a 5 ns resolution.

Table 1

Nominal LHC p-p performance

Number of bunches                        3564
Bunch spacing                            25 ns
Number of interaction points             4
β values at crossing points              1 m
Normalized emittance 4πγσ²/β             5π µm
r.m.s. beam radius                       12.11 µm
Full bunch length (4σ_s)                 0.31 m
Full crossing angle φ
⟨n⟩ at Σ = 100 mb                        3.55

iii) The number of collision points k_x simultaneously exploited could be reduced, for instance in special runs, and the number of protons per bunch N_p could be correspondingly increased. In the extreme, with only one collision point, the number of particles per bunch could be increased by a factor of 4. However, the emittance of these intense bunches would certainly be blown up, and beam stability problems in the injectors and the LHC will reduce the potential gain factor of 16 to something of the order of 5 or 10.

Since these individual measures are, in first approximation, independent of each other, one could be tempted to multiply all these potential gain factors, which would lead to a luminosity of 10³⁵ cm⁻² s⁻¹. However, increasing the number of bunches (ii) and the bunch intensity (iii) implies a higher stored beam energy and enhanced synchrotron radiation, which has repercussions on the cryogenic system and the power consumption. Beam stability becomes of concern at high beam intensities and, in case (i), the luminosity lifetime is decreased. Hence, a more moderate luminosity increase should be expected and, therefore, it appears reasonable to assume that a luminosity of 10³⁴ cm⁻² s⁻¹ can be achieved in physics runs for special experiments by applying a suitable combination of the measures enumerated above.

p-p operational procedures

a) Luminosity lifetime

The main cause of luminosity decay during a physics period is the beam-beam collisions themselves. The initial characteristic decay time of the beam intensity due to this effect is:

τ = N_0 / (L Σ k_x) ,

where N_0 is the initial total number of particles in the beam, L the initial luminosity, Σ the total cross-section for p-p collisions (which we suppose equal to 10⁻²⁵ cm² at 8 TeV), and k_x is the number of interaction regions which are active at the same time. Other causes of beam loss are the scattering of particles on the residual gas and the beam-beam effect. With a nitrogen-equivalent pressure of 10⁻⁹ mbar or less, the effect of scattering on the residual gas can be ignored. The beam-beam effect is known, from experience with the SPS p̄p Collider, to produce a diffusion of particles into the far tails of the transverse distributions. The diffusion rate cannot be predicted accurately since it is extremely sensitive to machine imperfections. In the SPS it is responsible for a beam lifetime (inverse decay rate) of the order of 50 hours.

The luminosity is also influenced by the evolution of the transverse emittances. These tend to increase because of intrabeam scattering, but this effect is more than compensated in the LHC by synchrotron radiation damping, so that a net decrease of the emittances should occur during a physics period. For simplicity we suppose in the following that synchrotron radiation damping just compensates the influence of the beam-beam effect and of intrabeam scattering, leaving only the beam-beam collisions as a source of luminosity decay. Under these conditions the initial characteristic decay time of the intensity in each beam is τ = 44.6 hours with k_x = 4 active interaction regions. The luminosity half-life is t_1/2 = τ(√2 − 1) = 18.5 hours.
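Numerically, the decay-time expression gives values close to those just quoted; a short Python sketch using the nominal luminosity and bunch population given earlier:

    N_0   = 2.6e10 * 3564         # initial number of protons per beam
    L0    = 1.42e33               # initial luminosity [cm^-2 s^-1]
    Sigma = 1e-25                 # total p-p cross-section at 8 TeV [cm^2]
    k_x   = 4                     # simultaneously active interaction regions

    tau    = N_0 / (L0 * Sigma * k_x)    # characteristic decay time [s]
    t_half = tau * (2**0.5 - 1)          # luminosity half-life [s]

    print(tau / 3600)             # ~45 h (44.6 h is quoted above)
    print(t_half / 3600)          # ~19 h (18.5 h is quoted above)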

b) Filling time

Although the time it takes to transfer 8 batches of protons from 8 successive SPS cycles into the two LHC channels is only about 3 minutes, the total filling time will be considerably longer. After the old beams have been dumped, the currents in the magnetic systems have to be lowered to the injection values. This may take about 10 minutes and will be followed by a programmed check of the main systems (about 10 minutes) and then by fine tuning of the transfer and capture processes using pilot beams of low intensity (another 20 to 30 minutes). After the final transfers have been done the beams will be accelerated from 450 GeV to 8 TeV in approximately 20 minutes. Another 20 to 30 minutes will probably be needed for fine tuning of the beams in collision and the setting up of the physics detectors. In total these operations may take approximately one and a half hours.

The optimum duration of a physics period depends both on the luminosity lifetime and on the time it takes to refill the machine. With a filling time of 2 hours the ratio ⟨L⟩/L_0 of the average luminosity to the initial luminosity is maximum if the physics period lasts 9.4 hours. In this case ⟨L⟩/L_0 reaches a value of 0.68, and refilling the LHC about two times per 24 hours would be required.
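The optimum quoted above can be reproduced with a simple burn-off model in which both beam intensities fall with the characteristic time τ, so that L(t) = L₀/(1 + t/τ)²; this particular decay law is an assumption, chosen to be consistent with the half-life formula used above. A short Python sketch:

    tau    = 44.6   # luminosity decay time [h]
    T_fill = 2.0    # time needed to refill the machine [h]

    def avg_over_initial(T):
        # <L>/L0 for a physics period of length T, with L(t) = L0/(1 + t/tau)^2:
        # the integral of L over the run equals L0 * T / (1 + T/tau)
        return (T / (1.0 + T / tau)) / (T + T_fill)

    # scan run lengths between 0.1 h and 48 h in steps of 0.01 h
    best_T = max((0.01 * i for i in range(10, 4801)), key=avg_over_initial)
    print(best_T)                    # ~9.4 h
    print(avg_over_initial(best_T))  # ~0.68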

2. ELECTRON-PROTON PERFORMANCE

The parameters of the electron and proton beams are adjusted so that

the beam-beam tune shifts ξ_p for the protons do not exceed 0.0033 with 3 interaction regions, and that the beam-beam tune shifts for the electrons do not exceed the LEP design value scaled to 3 interaction regions, which yields ξ_e = 0.04. Adequate RF power is available from the LEP RF system to compensate the synchrotron radiation losses for an average circulating current of 5 mA at 100 GeV. It is assumed that the current scales like E⁻⁴ at lower electron energies and that it is distributed over a number of bunches such that the proton beam-beam limit is not exceeded. This is possible up to a maximum of 540 bunches, where the bunch spacing becomes 52.3 m. This is the smallest bunch spacing which is simultaneously a multiple of the LEP and the SPS RF wavelengths, and also larger than the length of the common trajectory of electrons and protons next to the interaction point. The gaps due to the kicker risetime are also ignored here, as under point 1. The beam-beam limit for the electrons ξ_e and the limits on the total proton current discussed in point 1 both allow more intense proton bunches, since their number is at most 540. We therefore designed the e-p insertion for a proton intensity of N_p = 3 × 10¹¹ per bunch and a normalized proton beam emittance ε = 20π µm. Both figures are close to those obtained in the SPS p̄p Collider. The proton emittance ε = 20π µm assumed for e-p collisions is four times the value assumed for p-p collisions, and doubles the beam size which has to be stored in the LHC.

Fig. 2: Luminosity (cm⁻² s⁻¹), electron beam intensity (mA) and centre-of-mass energy (TeV) as functions of the electron beam energy (50-80 GeV), indicating the RF-power-limited region (5 mA at 100 GeV) and the dynamic-aperture-limited region, for 60°/cell and 90°/cell lattices.

Since this larger beam for e-p is more demanding in terms of magnetic field quality and aperture, a very welcome margin is created for p-p, which will facilitate the boosting of the p-p performance as discussed under point 1. With the assumptions and optimization procedures described above, the luminosity obtained in e-p collisions depends strongly on the energy of the electron beam. This variation is shown in Fig. 2, for the insertion with a free space of ±3.5 m. In order to provide more space for the detector, an insertion with ±10 m has also been designed (see point 3). The parameters at 50 GeV electron energy for both insertions are given in Table 2.

Table 2

LHC/LEP e-p performance for two typical e-p interaction regions

                                        Shifted quads      Dipole in IP       Units
                                        ±3.5 m free        ±10 m free

Protons
  N_p                                   3.0 × 10¹¹         3.0 × 10¹¹
  Normalized emittance ε                20π                20π                µm
  β*_x                                  2.8                2.8                m
  β*_y                                  45.3               45.3               m
  k_b                                   540                540
  Phase advance per cell µ_cell         π/2                π/2
  E_p                                   8.0                8.0                TeV
  ξ_p                                   0.0033             0.0033

Electrons
  N_e                                   8.2 × 10¹⁰         8.2 × 10¹⁰
  ε_x = σ_x²/β_x                        26.5               28.5               nm
  ε_y = σ_y²/β_y                        3.4                3.4                nm
  β*_x                                  0.20               0.24               m
  β*_y                                  0.64               0.97               m
  k_b                                   540                540
  Phase advance per cell µ_cell         π/3                π/3
  E_e                                   50                 50                 GeV
  ξ_e                                   0.04               0.04

  Luminosity                            2.69 × 10³²        2.06 × 10³²        cm⁻² s⁻¹

e-p operational procedures

a) Beam lifetime

For e-p collisions, both luminosity and total cross-section are one order of magnitude smaller than for p-p collisions; therefore particle losses due to luminosity are no longer a limiting effect. The lifetime limitation of the electron beam due to longitudinal effects is determined by the RF voltage in LEP. At a LEP energy of 50 GeV, the circumferential voltage needed for compensation of the synchrotron losses is 1/16 of the maximum possible at 100 GeV; therefore the longitudinal beam lifetime can be made as large as needed with the RF voltage. Equally, the lifetime limitation due to transverse effects is made large by imposing the condition that particles with amplitudes up to 10σ of the six-dimensional phase-space distribution are stable. The lifetime of the proton beam is limited by the diffusion of particles to large amplitudes due to the beam-beam effect. Taking the same beam-beam parameter as for the SPS p̄p Collider (i.e. 0.003), the lifetime is 50 hours. The smallest intrabeam scattering time constant is the horizontal one: it is 300 hours under physics conditions. As the synchrotron radiation damping time is 8 hours, all possible sources of beam blow-up are more than compensated by it. Hence the beam lifetimes are quite large for e-p collisions, and the run duration will be determined more by practical considerations than by beam dynamics.

b) Filling time

The time needed for injection of one proton beam is 20 minutes. This is considerably longer than in the p-p option because in the e-p case the PS can accelerate only one bunch per cycle, so that the filling of the SPS takes longer. Since the electron injection into LEP and the ramping of LEP are done during the ramping of the LHC, the electron injection does not influence the total filling time. Adding to the proton injection time the other manipulations discussed under point 1 yields a total e-p filling time of approximately 2 hours.

3. INSERTIONS

3.1 Proton-proton insertions

Starting at the interaction point, the "standard" insertion contains a drift space of ±10 to ±20 m free from machine components, depending on the exact requirements of the experiments, and two quadrupole triplets for focusing the beams down to a transverse dimension of a few tens of microns. Of course, if, for a specific experiment, a luminosity substantially higher than 10³³ cm⁻² s⁻¹ were required, it may be necessary to install the quadrupoles closer to the interaction point and consequently to reduce the free space for the experiment to less than ±10 m. The layout of a typical low-β p-p insertion and the optical functions are schematically shown in Fig. 3.

Fig. 3: LHC low-beta insertion with β* = 1 m and 10 m free space; optical functions (√β and √|D|) along the insertion.

As mentioned above, if a free space of ±20 m is wanted, the β_max in the triplet increases from 1150 m to 1927 m and might require quadrupoles of increased aperture.

3.2 Electron-proton insertions

In the e-p insertion, the LHC beam stays at its normal vertical level, while the LEP beam is bent vertically to reach the level of the proton beam. Weak bending magnets are included in the vertical bends to minimize synchrotron radiation effects. Between the electron quadrupole doublets on either side of the interaction point, the e and p beams have a common trajectory, thus permitting head-on collisions. The electron beam is bent into and out of the LHC plane either by vertically displacing the electron quadrupole doublets or by inserting around the interaction point a weak dipole field, which would extend over more than 10 m. The advantage of the latter solution would be to allow a larger "free" space of ±10 m for the experiment, which however must incorporate the weak dipole field, possibly as part of a calorimeter. The problem of the synchrotron radiation released by the electron beam has been examined at least in first approximation and found manageable, but more detailed studies will be needed in relation to actual experimental proposals.

4. COMPATIBILITY BETWEEN LEP AND THE LHC

Two phases can be distinguished: a first phase of progressive installation of the LHC, in the years when LEP is the only operational collider in the tunnel, and a second phase when both colliders (LEP and LHC) are operational. During the first phase LEP will operate approximately 4000 hours per year and, therefore, the installation of the LHC, which takes place primarily in the arcs at considerable distance from the LEP experiments, could be carried out during the rest of the time (also approximately 4000 hours). In this context, it is worth noting that the magnet cryostats also contain the pipes for the cryogenic fluids, making the installation in the tunnel more rapid than in the case of a separate He distribution system. During the first phase, the construction of additional p-p or e-p experimental areas could also proceed, even during LEP operation, except for the part involving the tunnel and its immediate surroundings. In fact this was done around the SPS for the UA1 and UA2 experimental areas. Once the LHC is fully installed and commissioned, one could divide the year into two operational periods of approximately 5 months each, one devoted to LEP operation and the other to LHC operation, with a period of two to three weeks in between for the change-over of the experiments. When LEP is operating, the corresponding experiments would be in data-taking position and the LHC experiments in their garages, and vice versa. As far as p-p collisions are concerned, it is in principle possible to produce them in 7 out of the 8 interaction points symmetrically arranged around the LEP circumference; the straight section around insertion point 3 is reserved for the dump system of both LHC beams. A possible initial attribution of collision points to e⁺e⁻, p-p, and e-p is outlined in the contribution of W. Kienzle in these proceedings.

LINEAR e⁺e⁻ COLLIDERS

Presented by K. JOHNSEN on behalf of the CLIC Advisory Panel*

1. INTRODUCTION

By around the mid-70's, when the HEP community in Europe was deeply involved in an evaluation of possible future accelerator facilities to be constructed, it had essentially two main options to choose from: a) a Large Storage Ring (LSR) for protons of c.m. energy of 1-2 TeV, or b) a Large Electron-Positron ring (LEP) of c.m. energy of 100-200 GeV. There was in the end general agreement to go for the latter, in spite of the fact that at first the former option might have looked the most natural for CERN.

By the early 90's, when choices for the next generation of accelerator facilities in Europe will have to be made, the option of a Large Hadron Collider (LHC) of c.m. energy of 12-18 TeV and luminosity of 10³³-10³⁴ cm⁻² s⁻¹ will certainly be available. There is also the hope that another approach might be among the possible options, namely a linear collider (CLIC) for electrons and positrons with c.m. energy of about 2 TeV and luminosity above 10³³ cm⁻² s⁻¹. However, the latter option will not be available unless substantial and detailed linear collider studies are organised rather soon.

Electron-positron colliders have a few clear advantages over hadron colliders. The main ones are: the e⁺e⁻ collisions are much cleaner than hadron collisions, and a much higher luminosity can be exploited, if achievable. In addition, an order of magnitude less energy is required since the colliding particles are themselves "constituents". Energy-wise, an e⁺e⁻ collider with centre-of-mass energy of 2 TeV is roughly equivalent to a hadron collider with 20 TeV centre-of-mass energy.

*) Membership: U. Amaldi, K. Johnsen (chairman), J.D. Lawson, B.W. Montague, W. Schnell, S. van der Meer, W. Willis.

However, the disadvantage is that the radiation makes it so much harder to make very high energy e⁺e⁻ colliders. It is therefore believed that LEP, presently under construction at CERN with a circumference of 27 km, will be the largest circular e⁺e⁻ collider ever to be built.

The radiation problem in a circular e⁺e⁻ collider is fundamental and can only be counteracted or solved by increasing the circumference of the machine, hence the very large circumference of LEP. The ultimate limit of this approach is to go to infinity with the bending radius, in other words to consider colliding beams from linear accelerators. The first speculations on this kind of approach, with a host of new problems, started more than 10 years ago 1,2) and have recently gained considerable momentum.

The Long Range Planning Committee, under the Chairmanship of C. Rubbia, considered it very desirable that the community have both these options to choose from in the future. Therefore, one of the Advisory Panels to this Committee was charged with analysing the linear collider possibility. At the time of preparing this Report the activities of the Panel have given considerable insight into many aspects of a possible future CLIC project. There are, however, still important areas of uncertainty and lack of knowledge that will need studies beyond the existence of this Panel. It is believed that the Panel has created a good starting base for such studies.

The main conclusions and recommendations of the Panel can be summed up as follows.

At the present state of knowledge one approach to a TeV collider seems to hold the promise of leading to a real project within the foreseeable future. This approach is based on a normal conducting radio-frequency linear accelerator with a resonant frequency one order of magnitude above that of present-day linacs. The drive power can be derived from an auxiliary beam, which is in turn powered by a super-conducting structure. Section 3.2 gives an outline. Even for this seeming extrapolation from present-day technology several fundamental problems remain as yet unsolved while few of the innumerable details have been studied so far. It is recommended, therefore, that problems related to this kind of scheme should in the future be given first priority by a study team, in addition to general problems such as the injectors, the final focus system and tolerances along the main accelerator.

On a much longer time scale more exotic schemes of acceleration might lead to solutions offering higher performance, in particular a higher energy for a given total length of collider. Sufficient effort should, therefore, go into a continuation of these approaches, so as to keep the corresponding options open.

2. GENERAL CONSIDERATIONS

At the time the CLIC Advisory Panel was established J. Lawson had, as a visitor to CERN, worked on some general considerations related to linear colliders3). An analysis had also been done by U. Amaldi4), and one finds similar analyses in recent publications and conference contributions from SLAC and others. The general idea is to single out those collider parameters that do not depend on a particular accelerating structure, to establish their interrelation and to see how these interrelations constrain the choice of parameters.

The first conclusion to be drawn from such analyses is that the relations between the parameters set by general accelerator theory impose important constraints on the parameters. However, there are a few parameters that do not have hard limits, like acceptable power consumption of the facility, and also the beam radiation limit may not be a hard one but may depend on the acceptable experimental conditions.

Another important observation is that the physics demand of a very high luminosity, compared with what is presently achievable even in circular colliders, results in beam properties that require extensive studies to see if they can be realised. As will be seen in later examples, beam sizes well below a micron are required in the collision area. A combination of a very small beam emittance and a very strong final focus will be needed.

Studies of beam emittance problems show that emittance damping is of paramount importance for high-luminosity TeV colliders both because of their length and their cost4). Various ways of achieving a very strong final focus are being studied, which may need to include strong plasma lenses in addition to ordinary focusing elements, and may also make use of the focusing properties of trains of oncoming bunches5). In general, final focus considerations will be of tremendous importance not only for the performance of the collider but also for experimental conditions and detailed design of the detectors.

Finally, it should be noted that many of the constraints arrived at for the general parameters do not depend directly on the accelerating structure.

Indirectly there is a connection e.g. through power considerations. The beam power enters directly in the relations, whereas a much more important parameter is the power taken from the power grid.

3. VARIOUS APPROACHES TO LINEAR COLLIDERS

The first attempt at colliding beams from a linear accelerator will be made with a facility now under construction at SLAC, the so-called SLAC Linear Collider (SLC) 6), with a beam energy of 50 GeV and a luminosity of 6 × 10³⁰ cm⁻² s⁻¹. In contrast to circular colliders, the beams pass through each other only once, which means that beam parameters must be optimised so as to make maximum use of this one crossing. One consequence of this is the very small beam sizes that have to be achieved in the collision region, with correspondingly difficult requirements on the quality of the system for the final focus. The project will provide crucial information in such areas when it starts its operation in 1987.

The CLIC Advisory Panel has been considering the possibilities of a linear collider (CLIC) with energies in the TeV range, and luminosities above 10³³ cm⁻² s⁻¹, i.e. more than an order of magnitude higher energy than the SLC, and about three orders of magnitude higher luminosity. This requires more than a simple extrapolation of present-day techniques.

Some years ago it was felt that the main problem would be to create very high accelerating fields, orders of magnitude higher than in present-day linear accelerators. This certainly called for rather exotic ideas, some of which died quickly, while some still stay in the picture with varying degrees of promise. The Panel synthesised some of the more promising ones, and studied a few in some more detail. Below, a short overview of a few of the most interesting schemes is given first, and then some more information is given on the most promising one, based on normal-conducting RF linacs.

3.1 Overview of interesting accelerating schemes

Many ideas have been put forward for dealing with the problems of acceleration to very high energies, and several workshops on these topics have been held in Europe and in the USA. Some of the schemes are based on more or less conventional accelerator structures but for shorter wavelengths. The variants are characterised by the methods used for generating the RF power, which could be by a Free Electron Laser (FEL) or through wake-fields induced by a driving beam consisting of very short bunches. Most of these methods are in fact based on the extraction of power from a low-to-medium energy, high-current driving beam, the extracted energy being restored at intervals either by a second superconducting linac, acting as an energy storage device, or by a pulsed induction linac. Superconducting main-linac structures have also been considered, then in a more moderate frequency range (say ~1 GHz) 7). A less conventional proposal is to excite the accelerating structure by high-voltage pulses of a few picoseconds duration, produced by photo-diode switches triggered from laser pulses 8).

The semi-conventional schemes could offer accelerating fields up to a few hundred MV/m. A much less conventional approach is to excite accelerating fields directly in the same structure as the accelerated beam, from the wake-fields of an intense bunched driving beam 9). Although such wake-field accelerators might give energy gains up to around 1 GeV/m, there is still some concern about the beam stability in such a tightly-coupled system.

The only means so far proposed to achieve energy gains substantially above 1 GeV/m involve the use of plasmas. The principle is based on charge separation in a fully-ionised plasma with a density in the range of 10¹⁷-10¹⁸ cm⁻³, which can in principle produce fields of several GV/m. In the Plasma Beat-Wave Accelerator 10) the charge separation is obtained by excitation from two coincident laser beams whose frequency difference resonates

with the plasma frequency. The Plasma Wake-Field Accelerator 11,12) obtains the charge separation from the wake-fields of an intense bunched driving beam. Although plasmas have a bad reputation for instabilities, the time scale of interest for accelerator applications is only a few tens of picoseconds, which is expected to be too short for instabilities to develop. Nevertheless it will probably be many years before a full-scale project based on plasma waves can be contemplated. On the other hand, the exploitation of these extremely high fields on a smaller scale, for focusing near the interaction region, could perhaps become feasible much earlier.

3.2 Normal conducting RF linacs

With normal-conducting radio-frequency structures, accelerating gradients of several hundred megavolts per metre are possible in principle. In practice, maximum attainable gradients are given by considerations of efficiency and limitations of peak power rather than by electrical breakdown. Another fundamental problem is presented by self-deflection and self-deceleration due to the electromagnetic wake-fields left behind by the particles. A short qualitative review of these problems and possible solutions is given here, based on a proposal by W. Schnell 13).

Travelling-wave structures offer the important advantage of presenting a matched load to a short pulse of RF power at a single feed point per section. It is proposed, therefore, that the accelerator be made of travelling-wave sections, each one of length L, group velocity v_g and fill time τ_F = L/v_g for electromagnetic energy.

The enormous dissipation per unit length associated with accelerating gradients E_0 of the order of 100 MV/m or more requires the RF power to be applied in the form of very short pulses with low duty cycle. The duration of each power pulse is made approximately equal to the fill time τ_F, and a beam pulse (consisting of a bunch of particles or a train of several bunches) is made to pass at the end of the power pulse. As the decay time of stored energy will be much shorter than the repetition period, any energy not extracted by the beam is lost. Therefore, the efficiency of transferring power from the RF feed point to the beam approaches, at best, the fraction η of energy extracted. On the one hand this extraction efficiency is limited to about 10% at most by the concomitant energy spread (about η/2), which must remain correctible before the final-focus system is reached. On the other hand η is proportional to the charge per beam pulse, the square of the resonant frequency and the inverse of the accelerating gradient. The charge per bunch of particles is limited by the wake-fields and by beam-beam radiation in the final focus. Therefore, the price for reaching a high value of accelerating gradient at acceptable efficiency is a very high frequency, much higher than the customary 3 GHz of present-day electron linacs. A value of about 30 GHz, corresponding to 1 cm wavelength, appears to be a limit imposed by transverse wake-fields and by constructional problems of travelling-wave accelerating structures. Test structures for about 1 cm wavelength have, indeed, been manufactured and tested 14). It is proposed, therefore, that about 1 cm wavelength should be used in spite of the considerable extrapolation from present-day technology implied by this choice.

If the RF-to-beam efficiency is to approach the energy extraction η, the dissipation during the fill time has to be made as small as possible. The only way to do this is to make the fill time very short in spite of the concomitant increase of peak power. A reasonable compromise may be a choice of fill time that makes the peak power per metre of section length twice the classical minimum. The corresponding dissipation during the structure fill time amounts to 28% of the input energy. With the typical Q-factor of a copper structure at 1 cm wavelength this fill time amounts to only 11 ns.
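The 11 ns figure can be checked from the Table 1 parameters if one assumes (our assumption about the convention) that the quoted attenuation constant is the power e-folding exponent of a section, so that T = αQ/ω:

    # Fill time of a travelling-wave section from the Q-factor, the RF frequency
    # and the quoted power attenuation constant (T = alpha * Q / omega assumed).
    import math

    f     = 29e9      # RF frequency [Hz]
    Q     = 4150      # quality factor of the copper structure
    alpha = 0.5       # attenuation constant for power (Table 1)

    T = alpha * Q / (2 * math.pi * f)
    print(f"fill time = {T*1e9:.1f} ns")   # ~ 11.4 ns, as quoted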

Column A of Table 1 represents a fairly conservative choice of parameters resulting from the arguments outlined above. There is only one bunch of electrons or positrons per pulse, extracting 8% of the stored energy. The accelerating gradient is 80 MV/m, giving the accelerator a total active length of 2 × 12.5 km for 2 × 1 TeV. The efficiency of energy transfer from the RF input to the beam is a little over 6%, yielding 5 MW beam power (and a luminosity of 10^33 cm^-2 s^-1) for 80 MW average RF power per linac. Beam power and luminosity may be doubled, or the input power halved, if the electromagnetic energy reappearing at the output end of each accelerating section after the beam passage can be recovered. The superconducting drive system described below appears to permit just this, but the details remain to be studied.

The accelerating gradient could be doubled and the total active length reduced to 2 × 6.25 km if two bunches per beam pulse could be used (Column B of Table 1). Moreover, at the price of a 20% reduction in average accelerating gradient, an RF-to-beam efficiency of as much as 30% may be reached by using a larger number of bunches whose interval is adjusted so as to make the fresh influx of RF power cancel the bunch-to-bunch depletion of energy due to beam loading 15,16). This is what is shown in the two columns of Table 2. The corresponding luminosity of 6 × 10^33 cm^-2 s^-1 represents an optimistic prediction, since a final focus system accepting multiple bunches at close interval has yet to be designed and the effects of bunch-to-bunch wake-fields remain to be studied in detail. Note, however, that the actual hardware is the same for both Table 1 and Table 2 (apart from an extra 25% in overall length in the second case). At the present state of knowledge it would, therefore, be a safe plan to design the collider so as to yield the minimum required luminosity with single-bunch operation. It would then hold the potential of a five- to six-fold increase in luminosity by compensated multibunching in a later stage of development.

The problems related to the design of an accelerating structure for 1 cm wavelength and to the preservation of the transverse emittance of the beam in the face of intense deflecting wake-fields have been discussed elsewhere 16,17,18).

TABLE 1

Main linac parameters for two accelerating gradients. (The parameters are for one linac)

Case                                           A        B
Final energy                       E           1        1        TeV
Frequency                          f           29       29       GHz
Average accelerating gradient      E0          80       160      MV/m
Total active length                Ltot        12.5     6.25     km
Shunt impedance per unit length    R'          170      170      MΩ/m
Quality factor                     Q           4150     4150
R'/Q = r'                                      41       41       kΩ/m
Attenuation constant for power                 0.5      0.5
Fill time                          T           11.4     11.4     ns
Peak power per section length      P/L         96       384      MW/m
Bunch population                   N           5.35     5.35     × 10^9
Energy extraction per pulse        η           0.08     0.08
Number of bunches per pulse                    1        2
Repetition rate                    frep        5.8      5.8      kHz
Average RF power                   <P_RF>      80       80       MW
Beam power                         <P_beam>    5        5        MW
Beam radius at collision           σy          65       65       nm
Disruption                         D           0.91     0.91
Pinch enhancement                  H           3.5      3.5
Beam-beam radiation loss           δ           0.19     0.19
Bunch length                       σz          0.5      0.5      mm
Luminosity                         L           1.1      1.1      × 10^33 cm^-2 s^-1
Quantum parameter                  Υ           0.28     0.28
Normalized emittance (β* = 3 mm)   εN          2.8      2.8      × 10^-6 rad m
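The quoted beam power and luminosity follow from the Table 1 (column A) entries with the usual expressions; the sketch below assumes round Gaussian beams of radius σ_y and the standard formula L = H f_rep N²/(4π σ²), which is our choice of convention:

    # Cross-check of beam power and luminosity from the Table 1 (column A) values.
    import math

    E_beam = 1e12 * 1.602e-19   # 1 TeV per particle, in joules
    N      = 5.35e9             # particles per bunch (one bunch per pulse)
    f_rep  = 5.8e3              # repetition rate [Hz]
    sigma  = 65e-9 * 100        # beam radius at collision, in cm (65 nm)
    H      = 3.5                # pinch enhancement factor

    P_beam = N * E_beam * f_rep                            # average beam power [W]
    L      = H * f_rep * N**2 / (4 * math.pi * sigma**2)   # luminosity [cm^-2 s^-1]

    print(f"beam power = {P_beam/1e6:.1f} MW")     # ~ 5 MW
    print(f"luminosity = {L:.2e} cm^-2 s^-1")      # ~ 1.1e33, as in Table 1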

TABLE 2

Table 1 modified for compensated multibunch operation (parameters for one linac)

Case                                          A            B
Final energy                     E            1            1            TeV
Frequency                        f            29           29           GHz
Accelerating field               E0           80           160          MV/m
Filling factor for first bunch   λ            0.8          0.8
Average accelerating gradient    λE0          64           128          MV/m
Total active length              Ltot         15.6         7.8          km
Peak power per section length    P/L          96           386          MW/m
Bunch population                 N            5.35 × 10^9  5.35 × 10^9
Energy extraction per bunch      η            0.08         0.04
Energy spread within bunch                    5%           2.5%
Number of bunches per pulse      b            6            11
Repetition rate                  frep         5.8          3.2          kHz
Average RF power                              100          100          MW
Beam power                       Pbeam        30           30           MW
Structure fill time              T            11.4         11.4         ns
Bunch interval (not adjusted
  for integer τD·f)              τD           0.456        0.228        ns
RF cycles between bunches        τD·f         approx. 13   7
Beam pulse duration              (b-1)·τD     2.28         2.28         ns
Luminosity                       L            0.6 × 10^34  0.6 × 10^34  cm^-2 s^-1

Table 2 shows the two linacs of Table 1 extended by 25% in length and operated in compensated multibunch mode. Case B illustrates the flexibility of this scheme in accommodating higher accelerating gradients without increase of operating frequency or decrease of RF-to-beam efficiency, which is as high as 30% in both cases listed.
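The bookkeeping behind Table 2 can be checked with a few lines of arithmetic on the column-A entries (nothing beyond the numbers quoted in the table is assumed):

    # Consistency checks on the compensated multibunch parameters (Table 2, case A).
    f      = 29e9       # main linac frequency [Hz]
    tau_D  = 0.456e-9   # bunch interval [s]
    b      = 6          # bunches per pulse
    P_RF   = 100e6      # average RF power per linac [W]
    P_beam = 30e6       # average beam power per linac [W]

    print(f"RF cycles between bunches : {tau_D * f:.1f}")                  # ~ 13
    print(f"beam pulse duration       : {(b - 1) * tau_D * 1e9:.2f} ns")   # 2.28 ns
    print(f"RF-to-beam efficiency     : {P_beam / P_RF:.0%}")              # 30 %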

It turns out that the wake-fields can, indeed, be stabilized by a combination of very strong external focusing and the "Landau damping" due to the rather large momentum spread which would be difficult to avoid anyhow. In spite of this, very tight alignment tolerances for focusing quadrupoles and accelerating structures will be needed. The fast repetition rate of many kilohertz, required in any case, will help to build active feedback systems for steering the beam.

The main remaining problem is the generation of the enormous peak power required. All known power converters contain space-charge-limited electron guns, which limit the current density of the electron beam used to transfer energy from d.c. to RF. It follows that the output power decreases as the square of the wavelength if a given design is scaled. The kilohertz repetition rate poses another very serious problem. No suitable power converter at 1 cm wavelength is available at present, and even if it could be developed the very large number of units required is likely to make this solution economically unattractive.

Instead of the multitude of d.c.-to-RF power converters, a continuous drive beam running along the main linac may be employed. The drive beam supplies energy to the main linac at regular intervals via transfer structures. The drive beam energy is restored by accelerating structures forming a "drive linac". Free-electron lasers 19,20,21) and direct RF deceleration sections 22) have been proposed as transfer structures, and induction units 19,20) and superconducting RF accelerating cavities 21,22) as drive linacs.

A drive linac formed by superconducting cavities, combined with decelerating RF transfer structures, opens the possibility of a fully relativistic drive beam, thus eliminating all phasing problems. The mains input is converted to RF power at UHF frequency by large CW klystrons. Such klystrons of over 1 MW output and nearly 70% transfer efficiency are available today. The CW operation of the drive linac, made possible by the high Q-factor of the superconducting cavities, means that the main linac repetition rate is limited by pre-injector considerations only.

Drive beam pulses of a duration equal to the main linac fill time T have their energy periodically restored by being passed through the superconducting cavities. Energy conservation along the drive beam demands that the "transformer ratio", i.e. the ratio of the accelerating gradient E0 in the main linac to that, E1, in the drive linac, be proportional to the ratio of frequencies. The resulting choice of drive linac frequency in the low UHF range is quite suitable for superconducting cavities. In fact, the 350 MHz superconducting cavities developed for the second stage of LEP could be used in their present state without any change.

Table 3 gives parameters of superconducting drive linacs. The first column is for the main linac of column A, Table 1. The corresponding drive linac parameters (E1 = 6 MV/m and Q1 = 5 × 10^9 at 350 MHz) represent present-day performance. The second and third columns correspond to E1 = 15 MV/m, a development that is expected to occur in a few years' time. In case B, 2 × 6.25 km of main linac are powered by only 2 × 800 m of superconducting drive linac. In case C (admittedly an extreme example) the entire installation is compressed to only 2 × 2.24 km active length, main linac and drive linac alike. This would, however, require multiple bunches from the start.
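A quick consistency check on the Table 3 entries below is the product of active length and gradient; the sketch assumes nothing beyond that product:

    # Active length x gradient: a consistency check of the Table 3 entries.
    cases = {
        # case: (main E0 [MV/m], main length [km], drive E1 [MV/m], drive length [km])
        "A": (80, 12.5, 6, 2.5),
        "B": (160, 6.25, 15, 0.8),
        "C": (445, 2.24, 15, 2.24),
    }
    for name, (E0, L_main, E1, L_drive) in cases.items():
        main_gain_TeV = E0 * L_main / 1000.0   # MV/m x km = GV; /1000 gives TeV
        drive_gain_GV = E1 * L_drive           # MV/m x km = GV
        print(f"case {name}: main linac {main_gain_TeV:.2f} TeV, "
              f"drive linac voltage gain {drive_gain_GV:.1f} GV")

The printed voltage gains reproduce the 15, 12 and 33.6 GV quoted in the table, and each main linac delivers the nominal 1 TeV.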

Energy transfer to the main linac may be via free-electron laser units or by RF deceleration in short sections of travelling-wave structures, each one coupled to the input of a main section via a short run of waveguide. The latter scheme requires the drive beam to be tightly bunched at the main linac frequency. It has, however, the great advantage of permitting drive beams of several GeV energy.

TABLE 3

Case                                         A          B          C
Main linac energy                 E          1          1          1         TeV
Main linac frequency              f          29         29         29        GHz
Main linac accelerating gradient  E0         80         160        445       MV/m
Main linac active length          Ltot       12.5       6.25       2.24      km
Drive linac voltage gain          U1         15         12         33.6      GV
Drive linac frequency             f1         350        350        350       MHz
Drive linac R-over-Q parameter    r1         270        270        270       Ω/m
Drive linac accelerating gradient E1         6          15         15        MV/m
Drive linac active length         L1,tot     2.5        0.8        2.24      km
Drive linac quality factor        Q1         5 × 10^9   5 × 10^9   5 × 10^9
Cryogenic input power (ηcr = 0.2%)  Pcr      33         67         186       MW

This assures rigid drive bunches and the absence of any phase slip between the beams, thus eliminating all phasing problems for the tens of thousands of main linac sections. The required impedance of the transfer structure is very low. This will permit a design with a large enough aperture to cope with the longitudinal and transverse wake-fields due to the intense drive beam. The required drive charge is rather large. For the parameters of the first columns of Tables 1 to 3 each drive bunch has to contain 4 x 10^- electrons and there are forty such bunches per main linac pulse. Generation and acceleration to relativistic energies of these drive bunches appears to be the main difficulty with this scheme. At least this difficulty is confined to the injector.

If the output of each accelerating section is connected to an input of the following transfer section, a suitably timed and phased recovery pulse, following the drive beam pulse, permits transfer of the energy left after the beam passage back into the superconducting cavities. This means a factor of two in power economy for single-bunch operation, at the cost of extra complication but little additional hardware cost.

4. CONCLUDING REMARKS

When demands for e+e− collisions first moved into the TeV energy region, with corresponding increases in desired luminosity, conventional approaches to acceleration seemed unpromising, and this gave a strong incentive to consider more exotic ideas. In particular it seemed desirable to create extremely high accelerating fields in order to reduce the physical size of the accelerator. A large number of ideas were born, many more than those listed earlier. Some were rather short-lived, but quite a few are sufficiently promising that it is well worth putting effort into further studies and development.

As stated already in the Introduction, however, at the present state of knowledge only one approach to a TeV collider seems to hold the promise of leading to a real project within the foreseeable future. As was shown in Section 3.2, this approach is based on a normal conducting radio-frequency linear accelerator with a resonant frequency one order of magnitude above that of present-day linacs. The drive power can be derived from an auxiliary beam, which is in turn powered by a superconducting structure. Even for this seemingly modest extrapolation from present-day technology, several fundamental problems remain as yet unsolved, while few of the innumerable details have been studied so far.

The CLIC Advisory Panel has advised that CERN urgently create a self-contained team, consisting of a few full-time staff, some part-time staff and visitors, to study in depth the problems related to linear colliders. It is recommended that problems related to the scheme described in Section 3.2 should be given first priority by the study team, in addition to general problems such as the injectors (emittance shaping), the final focus system, disruption and beam radiation, and tolerances in all parts of the system. On a much longer time scale more exotic schemes of acceleration might lead to solutions offering higher performance, in particular a higher energy for a given total length of collider. Sufficient effort should, therefore, go into a continuation of these approaches, so as to keep the corresponding options open. It is important to keep close contact with similar studies in other laboratories, where expertise exists in laser and plasma physics. This could, for instance, be through the activities organised by ECFA.

With such an approach CERN and Europe could create a balanced research effort to get answers to the most urgent questions, and thus hopefully prove that an e+e− linear collider is a valid option for a future exciting high-energy accelerator facility.

REFERENCES

1. Tigner, M. (1965): Nuovo Cim. 37, p. 1228.

2. Amaldi, U. (1976): Phys. Lett. B61, p. 313.

3. Lawson, J.D. (1985): CERN 85-12 (and CLIC Note 1).

4. Amaldi, U. (1985): CERN CLIC Note 2 (also in Nucl. Instr. & Meth. A243, p. 312).

5. Montague, B.W. (1986): CERN CLIC Notes 11 and 17.

6. SLAC Linear Collider Conceptual Design Report (1980): SLAC Rep. 229.

7. Amaldi, U.; Lengeler, H. and Piel, H. (1986): CLIC Note 15.

8. Willis, W. (1984): Proc. CAS-ECFA-INFN Workshop, Frascati, p. 166.

9. Voss, G. and Weiland, T. (1982): DESY Report M 82-10.

10. Tajima, T. and Dawson, J.M. (1979): Phys. Rev. Letters 43, p. 267.

11. Chen, P.; Huff, R.W. and Dawson, J.M. (1984): Bull. Am. Phys. Soc. 29, p. 1355.

12. Van der Meer, S. (1985): CLIC Note 3.

13. Schnell, W. (1985 and 1986): CLIC Notes 4, 7, 13, 24.

14. Hopkins, D.B. and Kuenning, R.W. (1985): IEEE NS-32-5, p. 3476.

15. Wilson, P. (1985 and 1986): SLAC-PUB-3674 and 3985.

16. Balakin, V.E.; Novokhatsky, A.V. and Smirnov, V.P. (1983): Proc. 12th Intern. Conf. High En. Acc., p. 119.

17. Bane, K.L.F. (1985): SLAC-PUB-3670.

18. Henke, H. and Schnell, W. (1986): CLIC Note 22.

19. Sessler, A.M. (1982): Proc. Laser Acceleration of Particles AIP Conference, p. 163.

20. Sessler, A.M. and Hopkins, D.B. (xxxx): LBL 21613.

21. Amaldi, U. and Pellegrini, C. (1986): CLIC Note 16.

22. Schnell, W. (1986): CLIC Note 13.

STATUS OF THE SUPERCONDUCTING SUPER COLLIDER

M. Gilchriese, SSC Central Design Group, Lawrence Berkeley Laboratory, Berkeley, Calif., USA

Professor M. Gilchriese gave a brief report on the status of the SSC proposal at the Summary Talks Session at CERN. His transparencies are reproduced on the following pages. Shortly afterwards, the SSC proposal was approved by the President of the USA for inclusion in the FY1988 budget submitted to Congress in January 1987. Congressional hearings are now taking place, and the site selection process has begun.

[Figure: SSC collider ring layout, 20 × 20 TeV (scale 0-10 km)]

Origins of the SSC

1978-79              International workshops on ideas for a VBA (Very Big Accelerator)
1982                 American Physical Society DPF Summer Study, Snowmass
1983                 HEPAP recommendation to DOE
1984                 Reference Designs Study; initiation of SSC R&D by DOE (SSC Phase 0); numerous workshops and studies
Summer & Fall 1984   DOE contracts with URA for SSC, Phase 1; URA founds SSC Central Design Group
Oct. 1, 1984         Central Design Group commences officially; LBL acts as host institution
FY 1985 & FY 1986    National R&D effort at the CDG and four R&D Centers (BNL, FNAL, LBL, TAC)

Major Recent Dates

October 1984       Central Design Group begins
November 1984      Establish basic SSC design features; delimit magnet R&D objectives
April 1985         Siting Parameters Document (preliminary)
June-July 1985     Review magnet designs and R&D
September 1985     MAGNET TYPE SELECTION
October 1985       Focus R&D on chosen magnet prototypes
November 1985      Begin conceptual design
March 1986         CONCEPTUAL DESIGN REPORT, other documentation
April-May 1986     DOE reviews of conceptual design and cost estimate
August 1986        Begin full-scale magnet tests at FNAL

Possible SSC Time Scenario (unofficial)

November 1986      Favorable DOE decision
January 1987       President's budget message
February 1987      Start of Congressional hearings
Summer 1987        Initiation of site search
October 1987       First construction funds available
September 1988     Site selected
July 1989          Start of on-site activities
November 1989      Start of conventional construction
December 1993      Completion of tunnel
February 1995      Completion of installation of technical components
Summer 1995        Start of experimental program

Cost Summary (FY 86 k$)

Superconducting Super Collider           3,010,318
  Technical components                   1,424,161
    Injector systems                       189,252
    Collider ring systems                1,234,909
  Conventional facilities                  576,265
    Site and infrastructure                 85,433
    Campus area                             42,860
    Injector facilities                     39,758
    Collider facilities                    346,803
    Experimental facilities                 61,412
  Systems engineering and design           287,607
    EDI                                    195,404
    AE/CM services                          92,203
  Management and support                   192,334
    Project management                     114,749
    Support equipment                       52,635
    Support facilities                      24,950
  Contingency                              529,951

PRESENT STATUS

DOE support of R&D
Funding level of $20M/year
No commitment to construction as of yet
Possible inclusion in FY88 budget; decision expected and hoped for by end of January
Open competition for site when DOE decides to go ahead
New and final site parameters document ready now

Groups Represented at the Third Annual (?) Sites Meeting

Arizona, California, Colorado, Florida, Georgia, Idaho, Illinois, Michigan, Nevada, New Mexico, New York, North Carolina, Ohio, Oregon, Texas, Utah, Washington ... and Canada

[Map: Superconducting Super Collider, states with proposed sites (12/18/86)]

THE STANDARD THEORY GROUP

Conveners: G. Altarelli and D. Froidevaux

Experimental physicists: P. Bloch, D. Denegrí, L. DiLella, P. Igo-Kemenes,

J.P. Mendiburu, M.N. Minard, F. Richard and J. Sass

Theoretical physicists: E. Franco, E. Gabrielli, M. Greco, B. Mele and F. Pitolli

Contributions were also made to the work of this group by: A. Maggi, G. Kane, Z. Kunszt, J.J. Scanio and W.J. Stirling

ABSTRACT

The Standard Theory Group has concentrated its activity on 'discovery' physics at the Large Hadron Collider (LHC) and at the CERN Linear Collider (CLIC) within the scope of the electroweak model, purposely leaving aside precision experiments (which, however, are clearly important). Thus the main part of its work was devoted to the search for the Higgs boson and related subjects (charged Higgs, weak interactions becoming strong, etc.). Other problems considered were the search for new quarks and leptons and the production of intermediate vector bosons.

THE STANDARD THEORY GROUP: GENERAL OVERVIEW

Presented by G. Altarelli, Dipartimento di Fisica, Università 'La Sapienza', and INFN-Sezione di Roma, Rome, Italy, and CERN, Geneva, Switzerland

1. INTRODUCTION
The activity of our working group was concentrated on 'discovery' physics within the framework of the Standard Model. Thus the main emphasis was placed on the search for the standard Higgs particle, which is certainly the most important open issue, as well as on the study of the production and detection of charged Higgs scalars, heavy leptons, and quarks. The cross-sections for single and double W/Z production were also studied, not only as a background to new particle detection, but also for their own sake, e.g. for the experimental determination of the three gauge vector couplings or the search for new heavy weak bosons. Many important aspects of physics within the Standard Model were deliberately left aside (for example b- and t-flavour physics, two-photon processes, etc.) as we wanted to focus on the discovery potential of the new accelerators under examination. The energy and luminosities of the relevant colliders are summarized in Fig. 1. Although our main interest is, of course, focused on CLIC and the LHC, the comparison with the Superconducting Super Collider (SSC) is always kept in mind. Whilst the experimental possibilities of pp supercolliders have been extensively studied in recent years [1], not only in connection with the SSC [2] but also with respect to the LHC [3], the present study is the first one where the physics of e+e− annihilation in the TeV energy domain is systematically considered. As a consequence, most of our original work is concentrated on e+e− colliders, in order to achieve a balanced overview of e+e− and pp supercolliders.

[Fig. 1: Energies and luminosities of the colliders considered.
 SSC (pp): √s = 40 TeV, L = 10^33 cm^-2 s^-1.
 LHC (pp): √s = 16 TeV, L = 10^33 cm^-2 s^-1.
 CLIC (e+e−): √s = 2 TeV, L = 10^33-10^34 cm^-2 s^-1.
 ep: √s = 1.3-1.7 TeV, L = 10^31-10^32 cm^-2 s^-1.]

The possibility of studying ep collisions in the LEP tunnel will be a very important by-product of the LHC realization. A detailed and still up-to-date analysis of ep physics with √s ≈ O(1 TeV) was reported at the Lausanne Workshop [3, 4] (see also Ref. [5]). The conclusion was that ep experiments are very powerful, especially for studying three important domains of physics: i) the proton (and electron) structure (QCD scaling violations, precise measurement of parton densities in the proton, study of proton and electron compositeness, etc.);

ii) new currents (new left-handed currents mediated by heavy WL/ZL , right-handed currents induced by WR/ZR , possible new currents connecting light fermions with heavy ones, effective four-fermion interactions, etc.);

iii) particles with the electron lepton number (excited electrons, supersymmetric partners of the electron, heavy νe, Majorana νe and, of course, leptoquarks).
In the following we shall add some new results for ep physics, mainly in the search for the Higgs. The experimental clarification of the electroweak symmetry-breaking mechanism can be considered as The Problem in particle physics today. Not only the missing part of the experimental proof of the Standard Model but also essentially all unnatural features of that theory, and the consequent quest for new physics, are connected in one way or another with this crucial problem. It is of the utmost importance to know whether a Higgs particle exists or not. If it is found, one would like to study its properties in order to establish whether it is a composite object or a fundamental particle like, and to the same extent as, quarks, leptons, and gauge bosons. If it is composite, one should look for the signals associated with the existence of the new interaction responsible for the binding. If it is elementary, then supersymmetric particles with masses m_SUSY < O(1 TeV) should probably be found, if the hierarchy problems connected with fundamental scalar fields are indeed to be solved by supersymmetry. If the Higgs does not exist, then possible scenarios include: technicolour, the onset of weak interactions becoming strong, composite quarks and leptons and/or composite W/Z. Clearly, a rich phenomenology should appear in all cases at some level, and a large amount of experimental work will be needed in order to clarify these complicated sets of possibilities. A reasonable and efficient way to organize the experimental attack on the electroweak symmetry-breaking sector is to set up the search for the standard Higgs particle. What is the interesting mass region for this search? Little is known about the Higgs mass m_H. The experimental lower bounds on m_H are surprisingly small. From the absence of long-range forces in atomic and nuclear physics experiments, one obtains [6] m_H > 15 MeV. According to a recent reanalysis [7], from the experimental limits on K± → π±ℓ+ℓ− one can rule out a Higgs boson in the mass range 50 MeV < m_H ≲ 211 MeV. But some model dependence is necessarily associated [8] with the evaluation of the hadronic matrix elements of the weak current. From the decay Υ → Hγ, one would obtain a limit m_H > 4 GeV [9] if the Wilczek [10] expression for the relevant branching ratio in the Born approximation were adopted. However, the QCD corrections, first computed by Vysotsky [11] and recently confirmed by Nason [12], are large, and therefore the decay rate is not reliably computed in perturbation theory. Unfortunately, the corrections appear to go in the direction of substantially reducing the theoretical branching ratio, so as to make all meaningful limits on m_H evaporate.

On the other hand, theoretical lower limits on m_H based on stability against quantum fluctuations of the unsymmetric vacuum [13] depend on the assumption that the top quark is light and that there are no heavy generations. For example, the limit m_H > 7 GeV, which is obtained [14] in the light-top case from the requirement that the vacuum be stable (which is not really necessary: it could be almost so), is in fact completely washed out if m_t ≈ m_W. In view of the smallness of all lower limits on m_H, it is quite plausible that the Higgs boson will be relatively light, say m_H < m_W,Z, and that it will be discovered at LEP 100 or LEP 200. The best channels for Higgs discovery at LEP 100/200 [15] are e+e− → ZH (where the intermediate Z can be either real or virtual, according to the value of √s) and toponium → Hγ, provided that the toponium mass is in the LEP range. Thus, for the sake of the following discussion, we shall assume that m_H > m_W,Z.

Upper bounds on the Higgs mass are obviously important for fixing the accelerator energy. All theoretical upper bounds on m_H are based, in one form or another, on requiring perturbation theory to be valid up to some large energy scale Λ. Clearly this perhaps appealing requirement is in no way necessary. As is well known, the coupling constant λ of the quartic term λ(φ†φ)² in the Higgs potential increases with m_H, because of the tree-level relation m_H² ∝ λ/G_F. Also, for a given m_H, λ increases logarithmically with energy, since the theory is not asymptotically free in the Higgs sector. Thus the requirement that λ does not grow too large, so that perturbation theory is valid up to the Planck mass m_P, leads [16] to m_H < 200 GeV. Similarly, from the need to avoid problems (up to m_P) due to the possible triviality of the λ(φ†φ)² theory, the limit m_H < 125-200 GeV was obtained [17] (depending on m_t). However, it must be stressed that if m_H is made to increase, no physical contradiction is actually met. All that happens is that at m_H > O(1 TeV) the Born scattering amplitudes for longitudinal gauge bosons violate unitarity [18], clearly showing the breakdown of perturbation theory [19]. The helicity-zero state of a gauge boson is obviously connected with symmetry breaking because it does not exist for massless vector bosons. So it is not surprising that the first signals of the non-perturbative regime appear in the sector of amplitudes involving helicity-zero states of vector bosons. Also, for m_H > 1 TeV, where the weak interactions become strong, the Higgs resonance becomes very broad [1],

    Γ_H ≈ (1/2) m_H³ (in TeV)   for m_H ≫ 2m_W ,      (1)

and eventually is dissolved in a continuum. In conclusion, if an experiment is designed such that it can detect the Higgs boson (if such a particle exists) with mass m_H < O(1 TeV), then either the Higgs boson will be discovered or new physics will be found or, at the very least, one will be able to study the onset of the new regime where the weak interactions become strong [20].
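Equation (1) is easily evaluated; the short sketch below simply applies the approximate formula quoted in the text to a few masses, to show how rapidly the resonance broadens:

    # Heavy-Higgs total width from the approximate relation Gamma_H ~ 0.5 * m_H**3
    # (both in TeV), quoted in Eq. (1) for m_H well above the WW threshold.
    for m_H in (0.5, 0.8, 1.0):              # Higgs mass in TeV
        gamma = 0.5 * m_H**3                 # width in TeV
        print(f"m_H = {m_H:.1f} TeV  ->  Gamma_H ~ {gamma*1000:.0f} GeV")
    # At m_H ~ 1 TeV the width is ~ 500 GeV: the resonance dissolves into a continuum.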

2. HIGGS PRODUCTION IN pp COLLISIONS
The most important Higgs production mechanisms in pp reactions in the multi-TeV energy domain lead to the cross-sections shown in Fig. 2 (taken from the Proceedings of the Lausanne Workshop).

[Fig. 2: Higgs production cross-sections, pp → H + X at √s = 20 TeV, as a function of m_H (100-800 GeV)]
[Fig. 3: the dominant Higgs production mechanisms in pp collisions at √s = 40 TeV]
[Fig. 4: total cross-section for Higgs production by WW fusion in pp collisions, as a function of m_H]

Two dominant

mechanisms clearly emerge (Fig. 3): WW fusion (at large m_H) and gg fusion [21] (at relatively small m_H). The WW (we actually mean WW plus ZZ) fusion cross-section and distributions can be computed almost without ambiguity. The total cross-section for WW fusion in pp reactions is shown in Fig. 4 (taken from Ref. [27]), as a function of m_H at different values of √s. On the other hand, the gluon fusion cross-section is uncertain [1], not only because of our relatively rough knowledge of the gluon density but also, and especially, because of our present ignorance of m_t and of the spectrum of heavy coloured particles that could contribute in the loop. In particular, the dependence of the quark-loop form factor |η|² on the quark mass m_q is shown [1] in Fig. 5. The form factor |η|² reaches a maximum for m_q ≈ 0.4 m_H. Thus for large m_H the cross-section increases very rapidly with m_t. The gg cross-section for √s = 20 TeV, and m_t = 40 GeV and 100 GeV, is compared with the WW cross-section in Fig. 6 (obtained by B. Mele). The WW process is dominant for m_H > 300 GeV or m_H > 700 GeV if m_t = 40 GeV or 100 GeV, respectively. The effect of the possible existence of heavy scalar quarks, as predicted by supersymmetry, is sizeable but not as large as might naively be imagined. In fact it has been shown [22] that the squark contribution in the loop is proportional to quark (and not to squark) masses.

[Fig. 5: the quark-loop form factor |η|² as a function of the quark mass. Fig. 6: gg-fusion and WW-fusion cross-sections as functions of m_H at √s = 20 TeV, for m_t = 40 and 100 GeV]

In conclusion, the WW cross-section is a guaranteed lower limit for Higgs production in pp reactions. The additional gg contribution is certainly important for m_H < 300 GeV. Its exact value cannot be given at present. In any case there is plenty of cross-section in pp reactions. At √s = 15 TeV (40 TeV) and ∫L dt = 10^40 cm^-2 there are ≈ 10^3 (10^4) events at m_H ≈ 1 TeV, and > 7 × 10^3 (> 4 × 10^4) events at m_H ≈ 0.5 TeV. As is well known, in pp reactions the main problem for Higgs detection is the formidable background [1-3, 23].
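These event numbers are simply cross-section times integrated luminosity; the sketch below shows the conversion for ∫L dt = 10^40 cm^-2 with an illustrative cross-section (the value is not taken from the figures):

    # Events per year = cross-section x integrated luminosity.
    # 10^40 cm^-2 corresponds to 10^4 pb^-1 (1 pb = 1e-36 cm^2).
    int_lumi_cm2 = 1e40                  # integrated luminosity [cm^-2]
    int_lumi_pb  = int_lumi_cm2 * 1e-36  # same, in pb^-1

    sigma_pb = 0.1                       # illustrative Higgs cross-section [pb]
    events   = sigma_pb * int_lumi_pb
    print(f"{sigma_pb} pb x {int_lumi_pb:.0e} pb^-1 = {events:.0f} events")   # 1000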

3. HIGGS PRODUCTION IN e+e− ANNIHILATION
All production cross-sections for the Higgs particle in e+e− annihilation can be precisely predicted. The dominant production process at E_beam ≈ O(1 TeV) is WW fusion. The corresponding cross-section for √s = 2 TeV is plotted in Fig. 7 as a function of m_H (from Ref. [27]). The ZZ fusion contribution is almost negligible. The WW fusion cross-section at different centre-of-mass energies is shown in Fig. 8 (from Ref. [27]). The corresponding cross-sections in pp reactions are also plotted for comparison. As seen from Fig. 8, the cross-sections in e+e− annihilation are smaller than the corresponding ones in pp reactions. However, the production rate is quite good also in the e+e− case. For ∫L dt ≈ 10^40 cm^-2 and at √s = 2 TeV, ≈ 4000 (≈ 300) events are predicted for m_H = 100 GeV (1 TeV).

[Fig. 7: cross-section for Higgs production by WW fusion in e+e− annihilation at √s = 2 TeV, as a function of m_H. Fig. 8: WW-fusion cross-sections at different centre-of-mass energies, for e+e− and pp collisions, as functions of m_H]

As we shall see, the smaller cross-section in e+e− is compensated by the absence of the very large QCD backgrounds that make Higgs detection difficult in pp collisions. We shall first briefly discuss some other less prominent Higgs production processes in e+e−. The reaction e+e− → Z → ZH, which is dominant at LEP 200 energies, dies off as 1/s, as is the case for all processes that proceed via s-channel exchange. Thus its cross-section is quite small at √s ≈ O(1 TeV): at √s = 2 TeV it is of order 2 × 10^-39 cm² for m_H < 1 TeV, as shown in Fig. 9. In Fig. 10 the HZ cross-section for m_H = 400 GeV is compared with the e+e− Higgs cross-section from WW fusion at different values of √s. One finds that the WW fusion process is dominant over e+e− → ZH for √s > 0.5 TeV. We also studied Higgs production via γγ fusion (Fig. 11), which in e+e− reactions is analogous to gg fusion for pp reactions. The γγH coupling is induced by a quark or W loop. For m_t ≈ 40 GeV the W-loop contribution is dominant. For larger m_t the cross-section actually decreases because of the negative interference between the quark and W loop diagrams. The resulting cross-section is plotted in Fig. 12 (obtained by A. Maggi). It is clear from this figure that the γγ fusion process can in no way compete with the WW fusion reaction.

[Fig. 9: σ(e+e− → ZH) for E_beam = 500, 750 and 1000 GeV, as a function of m_H. Fig. 10: comparison of the ZH and WW-fusion cross-sections as functions of √s for m_H = 400 GeV]

The WW fusion process has been extensively studied. The first estimates were obtained [24] from the equivalent W/Z approximation, modelled after the Weizsäcker-Williams approximation for the photon case. This approximation is valid for energies which are large in comparison with m_W (e.g. in the case of Higgs production, for m_H ≫ m_W). A number of numerical evaluations of the exact matrix elements also exist [25]. More recently, some exact analytic calculations of double-differential distributions were produced. In Ref. [26], Cahn has derived the exact double-differential distribution in the energies of the two final fermions. In Ref. [27] our group, by performing a different set of integrations from those in the work of Cahn, derived the exact expression of the differential cross-section with respect to the Higgs three-momentum. In fact, the final fermions are in general not observed (in e+e− annihilation the final leptons associated with the Higgs are mostly neutrinos) and in any case are not the primary object of interest. Instead, the differential cross-section in the Higgs variables directly leads to the rapidity (y) and transverse momentum (k_T) distributions of the Higgs. The y and k_T are the important variables for computing the relevant distributions of final states from Higgs decay and for background separation. In particular, an essential feature of the Higgs k_T distribution is that dσ/dk_T is concentrated near k_T ≈ m_W for all values of √s and m_H of interest here. This is illustrated in Figs. 13 and 14 [27], which show the k_T distribution for m_H = 200-500 GeV and √s = 1-4 TeV.

[Fig. 13 and Fig. 14: k_T distributions of the Higgs produced by WW fusion, for m_H = 200-500 GeV and √s = 1-4 TeV]

This feature plays an important role in the separation of the Higgs signal from the γγ background (e.g. γγ → WW, etc.), which is concentrated at k_T ≈ 0 (k_T being the total transverse momentum of the pair of W's). This issue will be discussed in more detail in the following.

4. HIGGS PRODUCTION IN ep REACTIONS
In this section, we also report for completeness on the production of Higgs particles in ep collisions, already discussed in Ref. [28]. In this case the dominant production mechanism is again WW fusion. The production cross-section for √s = 1-2 TeV as computed in Ref. [27] is reported in Fig. 15. (For comparison, the cross-section at HERA energies is shown in Fig. 16.) It is obvious that with respect to the Higgs problem the situation in the ep case is much worse than for e+e− collisions at √s ≈ O(1 TeV). In fact the cross-section is smaller (because only a fraction of the proton energy is available to the interacting quarks), the luminosity is smaller (recall that for the LHC the luminosity of ep collisions is expected to be ≈ 10^31 cm^-2 s^-1 at large √s and ≈ 10^32 cm^-2 s^-1 at small √s), the background is larger (whilst quark pairs are produced by γγ reactions in e+e− annihilation, they are obtained by γg fusion in ep collisions), and finally the unbalanced kinematics of electrons and protons in the laboratory frame makes the analysis a lot more difficult.

[Fig. 15: cross-section for Higgs production in ep collisions at √s = 1-2 TeV, as a function of m_H. Fig. 16: the same at HERA energy, √s = 0.32 TeV]

Thus there is no comparison between the discovery potential of CLIC and that of the ep collider of the LHC. The really interesting question is, however, the following. It is well known [23] that the detection of an intermediate-mass Higgs with m_Z < m_H < 200 GeV is virtually impossible at pp colliders because of the overwhelming QCD background. This m_H interval where pp colliders are Higgs-blind is actually more important than its relative smallness would naively suggest. In fact the domain of masses near m_W,Z or (G_F)^-1/2 ≈ 300 GeV could be a very natural location for the Higgs particle, because it is precisely the Higgs mechanism that generates the Fermi scale. Thus if a Higgs boson with m_H in the window where the LHC pp collider cannot detect it could instead be visible in the ep mode of operation of the LHC, then this would lead to the important result that the LHC is altogether suitable for discovering an 'intermediate' Higgs. Unfortunately, we shall see in the next section, where the problem of the 'intermediate' Higgs is discussed in detail, that this is not the case. Actually detecting a Higgs boson in ep collisions is almost as difficult as in the pp case.

5. THE 'INTERMEDIATE' HIGGS: m_W,Z < m_H < 200 GeV
The dominant decay mode of the 'intermediate' Higgs is into heavy quarks: H → QQ. Actually the quark mode has a large enough branching ratio for detection up to m_H ≈ 300 GeV. Most likely the heavy quark Q is a t-quark (if m_H > 2m_t, i.e. for relatively light top) or a b-quark (if m_H < 2m_t, i.e. for heavy top). As already mentioned, it is well known that the detection of a Higgs particle with decay mode into a pair of heavy quarks is considered almost impossible in pp collisions (for recent analyses see, for example, Ref. [23]). In fact the signal is overwhelmed by the QCD background from gg → QQ. The associated production of H and W±, i.e. pp → HWX, might seem promising because of the possibility of triggering on the leptonic decay modes of the W±. However, also in this case the background from pp → QQWX (Fig. 17) is very large, and its rejection would require values of the QQ mass resolution and the b versus t discrimination which so far are unrealistic. The use of rare decay modes [29] of the Higgs (e.g. H → γγ or τ+τ−) does not appear to solve the problem of Higgs detection. The existence of a fourth generation with appropriate mass could perhaps better the situation [29]. For example, the decay H → L+L−, with L± a fourth-generation lepton, could provide a good signal. In addition, the speculative idea of looking for the Higgs in the decays of a paraquarkonium made up of charge -1/3 fourth-generation quarks has also been recently considered [30].

[Fig. 17: diagram for the pp → QQWX background]

It is interesting to remark [23] that the prospects of Higgs discovery in pp reactions are less hopeless if m_H < 2m_t and the dominant decay mode is H → bb. This is because the possibility of tagging the b may be better than for the t. Also, the branching ratios of rare decay modes of the Higgs increase because of its smaller total width in this case. The situation is entirely different in e+e− annihilation. The main advantage is the absence of QCD backgrounds. Here the most important background is given by the production of a QQ pair with invariant mass near m_H through γγ or γW fusion (Fig. 18) (the possibly radiative QQ production through s-channel exchange of γ and Z does not contribute appreciably at such small invariant masses). The corresponding analysis of the signal-to-background ratio has been worked out in detail by our study group. Some of our results have already been published in Ref. [31]. There it is shown that the detection of a Higgs particle of intermediate mass is indeed possible in e+e−, even without the need to tag the heavy quarks in the final state of H → QQ. Before all cuts and with perfect resolution, the comparison between the signal and the γγ background is shown in Fig. 19 (due to E. Franco) for m_H = 200 GeV and m_t ≈ 40 GeV at √s = 2 TeV. The upper background curve refers to the sum of all kinds of quarks (if light and heavy quarks are not experimentally distinguished), whilst the lower background curve is the contribution of tt pairs. Note that there is no appreciable interference between the signal and the background reactions. In fact, except for the small ZZ → H component, there is a νν pair in the final state of e+e− → HX and an e+e− pair in the background reaction through γγ fusion.

[Fig. 18: diagrams for QQ production via γγ and γW fusion. Fig. 19: Higgs signal and γγ background for m_H = 200 GeV and E_beam = 1 TeV, as functions of the QQ invariant mass M. Fig. 20: k_T distributions of the Higgs signal and of the γγ → WW background]

For realistic mass resolution, the γγ background can be sufficiently suppressed by imposing suitable cuts on the total transverse momentum of the produced QQ pair and on the angles of the outgoing quarks with respect to the beam direction in the laboratory frame.

The k_T distribution of a system of invariant mass M produced by γγ collisions is peaked at k_T ≈ m_e (m_e being the electron mass) and drops as 1/k_T apart from logarithmic corrections. According to Eq. (5.31) of Ref. [32], one has

    (1/Σ) dΣ/dk_T ≈ const. × (1/k_T) ln(2 k_T² E² / (m_e² M²)) ,

where Σ(M) = dσ/dM². As an example, Fig. 20 shows a comparison between the k_T distribution of the Higgs signal for M = m_H = 500 GeV and H → WW/ZZ, and the γγ background corresponding to γγ → WW [what is plotted is (dσ/dk_T dM) Γ_H, where Γ_H is the Higgs width]. The result is quantitatively similar for the intermediate-mass Higgs with decay H → QQ. By imposing a cut at small k_T, one can easily improve the signal-to-background ratio by a factor of about 3. An additional opportunity for suppressing the qq background is to make use of the sharp forward-backward peaking of the quark and the antiquark produced by γγ collisions.

For a given invariant mass of the qq pair, the transverse momentum of the individual quark is of order m_H for the signal and of order m_q for the background. Thus the angular cut is particularly effective for suppressing light quarks, which make the largest contribution to the background when heavy and light quarks are not experimentally distinguished. The relevant formulae for the angular distribution of the signal and background reactions are collected in Ref. [31], together with a description of the numerical results. Assuming for the mass resolution the value (in GeV) R = 0.8√m_H, we obtain the results of Figs. 21 and 22 for the signal-to-background ratio. The curves shown correspond to the resolution R with an angular cut (for at least one quark) of cos θ < 0.75. The cut on the total transverse momentum of the qq pair is not applied and could be used to deplete the background even further. Obviously, if one were able to tag the heaviest quark then the signal would stand out much more prominently, but this is not necessary. The possibility of enhancing the t-quark signal (with m_t ≈ 40 GeV) over that of light quarks by imposing an acoplanarity cut (the t events are in fact more spherical) has been studied by Bloch and Sass [33]. The conclusion is that a rejection factor of the order of 4-5 could be achieved, but at some cost to the signal. Also, note that for the purpose of the 'intermediate' Higgs search, the centre-of-mass energy can be considerably smaller than 2 TeV. We see from Fig. 22 that √s = 1 TeV is also good (with a luminosity of 10^33 cm^-2 s^-1).

[Fig. 22: signal-to-background ratio for the intermediate-mass Higgs in e+e− at E_beam = 1000 GeV, as a function of m_H]

The background due to the process γW → qq' was also studied in detail. It can be particularly dangerous because in this case the total transverse momentum of the qq' pair is of order m_W, exactly like the signal. Thus for its suppression we can only rely on the angular cuts, which, however, are less effective than for the γγ background (just because of the large k_T of the qq' pair as a whole). The calculation of this process is not as simple as for the γγ reaction, because the equivalent W approximation is not applicable for m_H not much larger than m_W. Thus one must restrict the Weizsäcker-Williams approach to the photon and compute the cross-section for γe → qq'ν. The calculation is described in detail in a paper contributed to this Workshop [34]. The result is that this additional background never exceeds the γγ contribution (for m_H > m_W), and thus does not alter the previous conclusions. The situation is summarized in Fig. 23.

In conclusion, an e+e− collider with E_beam ≈ O(1 TeV) and L ≈ 10^33 cm^-2 s^-1 is an ideal tool to use in the search for, and discovery of, the 'intermediate'-mass Higgs boson.

[Fig. 23: summary of the intermediate-mass Higgs signal and backgrounds for cos θ < 0.75 and E_beam = 1 TeV, as a function of m_H]

We now return to the question of whether an intermediate-mass Higgs could be detected in ep collisions at the LHC. As already mentioned, this is a very important question because a positive answer would imply that the LHC, taking the ep and pp modes together, can cover the whole region of Higgs mass from m_H > m_W,Z up to some large mass (the upper limit is discussed in the following section). According to the cross-section for Higgs production in ep collisions plotted in Fig. 15 (from Ref. [27]), at √s = 1.5 TeV and L = 10^32 cm^-2 s^-1 (corresponding to ∫L dt ≈ 10^39 cm^-2 per effective year) there are 180 (100) Higgs events per year for m_H = 100 GeV (m_H = 150 GeV). The qq background from γg fusion is displayed in Fig. 24 (obtained by Z. Kunszt) and is also in agreement with Ref. [28]. First of all, it is completely clear that no possibility of detecting the Higgs exists if the heaviest quark into which the Higgs decays is not experimentally distinguished from all lighter quarks. We thus assume that this is the case and that the mass resolution is R = 0.6√m_H (in GeV).

[Fig. 24: qq background from γg fusion in ep collisions, as a function of the qq invariant mass M]

From Fig. 24 we then obtain the integrated cross-section ≈ (dσ/dM)·R. The number of background events per effective year is roughly 10^4 (5 × 10^3) for m_H = 100 GeV (m_H = 150 GeV). Thus even for total rejection of light quarks, the signal-to-background ratio is about 1:50. A combination of angular and total transverse momentum cuts can still be applied. They are less effective here than in the γγ case because an average k_T of order α_s m_H can be expected from QCD effects. A reasonable estimate is to foresee a gain of a factor of 5 in the signal-to-background ratio from the application of these cuts. Thus, even with very optimistic assumptions regarding the luminosity and the heavy-quark versus light-quark rejection, we are still left with a difference of one order of magnitude between the signal and the background. We conclude that the ep mode of the LHC does not seem to help solve the intermediate Higgs problem at that machine.
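The numbers in this paragraph combine as follows (plain arithmetic on the figures quoted above; the factor of 5 is the estimated gain from the cuts):

    # Signal-to-background bookkeeping for the intermediate-mass Higgs in ep collisions.
    signal     = 180      # Higgs events per effective year at m_H = 100 GeV
    background = 1e4      # qq background events in the mass window
    cut_gain   = 5        # estimated improvement from angular and p_T cuts

    print(f"before cuts: S/B = 1:{background/signal:.0f}")              # about 1:50-1:55
    print(f"after cuts : S/B = 1:{background/(signal*cut_gain):.0f}")   # ~ 1:11, still one order of magnitude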

6. HEAVY HIGGS: H → WW/ZZ
In this section we consider a Higgs of mass m_H > 2m_W with main decay modes H → WW or ZZ. The branching ratio for H → WW is twice as large as that for H → ZZ. For m_H ≫ 2m_W, the width of the Higgs is given approximately [1] by Γ_H ≈ (1/2) m_H³, where Γ_H and m_H are expressed in TeV. We start by summarizing the state of the art for Higgs searches in pp reactions as we have analysed it in our working group, after taking into account the large body of existing studies on the subject. This problem is also discussed in great detail in the accompanying paper by Froidevaux [35]. In Section 2 we have seen that the total production cross-section for a heavy Higgs in pp reactions at √s = 10-40 TeV is comfortably large. Thus the problem is the background. There are two types of background. The first type is the direct production of a pair of weak bosons. Denegrí [36] has analysed in detail the case of m_t > m_W, with the consequent decay mode for the t-quark, t → b + W. The number of WW events from tt production and subsequent decay is in this case larger than the ordinary WW continuum by about two orders of magnitude (for m_t < 150 GeV). Leaving aside this possibility, which would certainly pose an additional problem, the second type of background is represented by the production of quarks and/or leptons so as to simulate the decay products of WW or ZZ. In particular, the QCD backgrounds in the four-jet channel or the mixed channels Wjj or Zjj are terribly large [37-39]. In fact, according to a detailed analysis carried out in our working group by DiLella and Froidevaux and discussed in the already mentioned contribution by Froidevaux [35], the conclusion was reached that no convincing proof exists so far that the channel H → ℓνjj is detectable (and the channel H → jjjj is hopeless). The existing claims that the signal from H → ℓνjj can be rescued from the background [39] appear too optimistic regarding the resolutions and efficiencies when confronted with the experience collected at the CERN pp Collider. The only hope of detecting the heavy Higgs in the hadronic final states appears to be restricted to the possibility of implementing an efficient system of quark tagging, as discussed in Ref. [40]. If the Higgs boson is sufficiently heavy, the dominant production mechanism is WW fusion. The outgoing quarks, after emission of the bremsstrahlung W's or Z's, have transverse momentum of order m_W and large longitudinal momentum. Typical angles for these outgoing quarks are θ ≈ (5-10)°. Background processes such as qq → VV or qq, gg → Vjj (V = W, Z) can then be suppressed. Other backgrounds such as qq → jjVV or jjjjV have been roughly estimated [40]. The conclusion is that double tagging could indeed work. This is a formidable challenge for the calorimetry, but the Calorimeter group appears to have reacted positively. The problem is really important because the large branching ratios of the hadronic decay modes of the W or Z would allow a more extended reach in m_H.

Considering that the possibility of detecting a heavy Higgs in the hadronic modes is not really established, we are left with the leptonic modes. In particular, the best possibility, which is certainly available, is to observe the Higgs in the channel H → ZZ → ℓ+ℓ−νν, with ℓ = e, μ. As studied in Refs. [40] and [41], and also by DiLella and Froidevaux in our group, this mode allows the Higgs to be discovered up to masses which we estimate as

    m_H ≲ 0.6 TeV (LHC).

The corresponding reach at CLIC is given in Table 1, whose entries are the values of m_H corresponding to a number of events of not less than 50 per year in the different decay modes. The first column refers to L ≈ 10^33 cm^-2 s^-1, whilst the second one refers to L = 5 × 10^33 cm^-2 s^-1.

Table 1
Higgs mass (TeV) reach at CLIC

Mode a)               ∫L dt = 10^40 cm^-2     ∫L dt = 5 × 10^40 cm^-2
WW → jj-jj                  1.2                      > 1.2
     jj-ℓν                  1.1                      > 1.2
     ℓν-ℓν                  0.4                        0.9
ZZ → jj-jj                  1                        > 1.2
     jj-ℓℓ                  0.4                        0.9
     jj-νν                  0.8                      > 1.2
     ℓℓ-νν                  -                          0.5
     ℓℓ-ℓℓ                  -                          -

a) ℓ = e, μ; νν denotes a sum over all neutrino species.

Fortunately, it turns out that discovery of the Higgs by using the hadronic decay channels of W's and Z's is indeed possible in e+e− annihilation. The experimental problems connected with heavy-Higgs detection in e+e− annihilation have been studied in our working group by Richard [42]. The reader is referred to his paper for a complete discussion (see also the talk by Froidevaux [35]). Here only the main points will be described and the results given. The problem of heavy-Higgs production and detection in e+e− annihilation has been studied in Refs. [25] and [43]. The main backgrounds (see Fig. 25) turn out to be the reactions γγ → WW and γW → WZ (the latter applies to the case of interest when at least one of the weak bosons decays into a pair of jets). Whilst the γγ → WW reaction has been extensively studied in the past [44], the channel γW → WZ was studied in our working group for the first time (to our knowledge) by Mele (Fig. 26) [45] and, independently, by Kunszt (Fig. 27).

[Fig. 25: diagrams for the main background reactions γγ → WW and γW → WZ.
 Fig. 26: signal H → (WW), (ZZ) for m_H = 300, 500, 700 GeV and the γγ → WW and γW → ZW backgrounds at √s = 2 TeV, as functions of the VV invariant mass.
 Fig. 27: four-jet invariant-mass distributions for the WW-fusion signal (m_H = 300 GeV) and for the ee → eeWW → ee 4j, ee → eeWZ → ee 4j and γγ → 4-jet backgrounds after acceptance cuts, at √s = 2 TeV.
 Fig. 28: cross-sections for the four-jet signal and background processes as functions of √s.]

Note that the equivalent W approximation is quite accurate at large m_H. The direct production of four jets by γγ fusion was considered by Kunszt and found to be small (Fig. 28). Similarly, the radiative production of a pair of W's, i.e. the process e+e− → W+W−γ, has been studied by Greco et al. [46], and shown not to pose any problem. The comparison of the signal and the main backgrounds before all cuts is shown in Fig. 26 for √s = 2 TeV and m_H = 0.3-0.7 TeV (see also Fig. 27). Note that the Higgs production cross-section in Fig. 26 is evaluated in the resonant approximation. As m_H increases up to about 1 TeV, the width of the Higgs becomes large, and one should consider Higgs exchange as a particular contribution to WW → WW + ZZ scattering [47]. The most effective way to deplete the background is once again to impose acollinearity cuts (which suppress both γγ → WW and γW → WZ) and transverse momentum cuts for the VV pair as a whole. This second cut is effective against the largest background, i.e. γγ → WW (recall Fig. 20), whilst γW → WZ has a k_T distribution which is quite similar to that of the signal. It is important to stress that it should be possible to remove the γγ → WW background completely by rejecting events with electrons at small angles with respect to the beam [42].

[Fig. 29: reconstructed total mass for H → WW or ZZ → 4 jets with m_H = 500 GeV/c² at √s = 2 TeV, together with the background from γγ and γW after cuts]

One can also implement polarization tests [43], using the fact that vector bosons from Higgs decay are longitudinally polarized. The conclusion was that H → 4 jets appears to be the best mode for the detection of a heavy Higgs. The reach in one effective year (10^7 s) was found to be

    L = 10^33 cm^-2 s^-1 :  m_H ≲ 0.6-0.8 TeV (CLIC),
    L = 10^34 cm^-2 s^-1 :  m_H ≲ ... TeV (CLIC).

The luminosity is the crucial parameter that fixes the upper boundary of the discovery region in e+e− annihilation. The signal-to-background ratios obtained by Richard for m_H = 0.5 TeV and m_H = 0.8-1 TeV at √s = 2 TeV are shown in Figs. 29 and 30a,b.

7. CHARGED HIGGS
Charged Higgs bosons appear as soon as more Higgs doublets are added to the standard electroweak theory. In principle, different doublets could separately give masses to u-quarks, d-quarks, and charged leptons without destroying the natural cancellation of flavour-changing neutral currents induced by Higgs exchange. A very interesting fact is that, in general, supersymmetry (whether broken or exact) requires at least two Higgs doublets. As supersymmetry appears to be a reasonable completion of the Standard Model if fundamental scalar Higgses are present and the theory is to be natural, the possible existence of charged Higgses is made more plausible. Charged scalars are also expected in technicolour and compositeness scenarios. For a 'normal' doublet charged Higgs [47]: a) couplings to fermions are in proportion to masses and mixing angles; b) there are no H+W−Z and H+W−γ vertices at the tree level. Thus, in first approximation, the above couplings can be safely ignored. One important consequence is that a charged Higgs decays into a pair of heavy quarks or leptons. As for the case of the neutral Higgs, one assumes here that the charged Higgs has a large mass, above the LEP 200 range, i.e. m_H > m_W,Z. Then, assuming m_H > m_t + m_b, the typical decay mode to be expected is H+ → tb. The discovery of a heavy charged Higgs in pp collisions is nearly hopeless. The production cross-section is relatively small, the main mechanisms being the Drell-Yan process with γ and Z exchange or tb fusion. Then, as for everything that is produced by electroweak interactions and decays into hadron jets, the detection is made prohibitively difficult by the QCD background. Thus we conclude that presumably charged Higgs bosons cannot be detected in pp reactions. The only possibility is the existence of a heavy quark Q with m_Q > m_H. In this case the decays Q → qW and Q → qH+ have comparable branching ratios [49]. Since the cross-section for QQ production is large, we could try to observe the chain pp → QQX → qH qW X. We did not study this possibility in itself, but we refer to the next section where heavy-quark detection (in the channel QQ → 4 jets + ℓν) is discussed.

The main production mechanism for heavy charged Higgs bosons in e+e− annihilation is by exchange of a single γ or Z in the s-channel which, at √s = 2 TeV, is dominant over production by γγ fusion if m_H > 130 GeV (as seen in Fig. 31, computed by Mele). The resulting H+H− cross-section is relatively small (Fig. 32). In e+e− collisions the production cross-section of heavy quarks is not significantly larger than the H+H− cross-section. Consequently, the rate of charged Higgs production from heavy-quark decay can at most be of the same order as that from direct pair production (although the detection problem would be somewhat different). Thus in the following we do not consider the possibility of H+ production by Q decay. For a pair of heavy charged Higgs bosons with m_H < 0.8 E_beam, the production cross-section at √s ≈ 2 TeV from single γ/Z exchange is computed to be σ(H+H−) ≈ 6.5 × 10^-3 pb. For 0.1 < m_H < 0.2 TeV there is an additional γγ contribution of the same order of magnitude. The final state is made up of four jets: H+H− → tb tb. If m_H is too close to m_W, the backgrounds from e+e− → W+W− or e+e− → e+e−W+W− are impossible to beat [at √s = 2 TeV, σ(e+e− → W+W−) is two orders of magnitude larger than σ(e+e− → H+H−), and σ(e+e− → e+e−W+W−) is even bigger, as seen from Fig. 32].

H+H"

' ' 1 1 1 1 1 ' 1 ' 1 ' 1 1 1 • ' • i - -

mH=50,100,150.200,250 GeV so lu"1 \

-

/ \ w _ -

îcr2

• / -

-3 10 DJ i . . Ii i. .X. . i Li _j . Lf<\_i i \ I .i i. i . i . L 1 0 12 3 4 y/S (TeV)

Fig. 31

E+E" CROSS SECTIONS

VS(TeV)

Fig. 32

In conclusion, the detection of charged Higgs bosons is generally impossible in pp reactions. At CLIC the production cross-section is small, so the luminosity is crucial for this problem. At √s = 2 TeV and L = 10³³ cm⁻² s⁻¹ the problem of detecting the charged Higgs is a difficult one. As a consequence, the detection of charged Higgs bosons is only possible for large enough MH (the larger the better), up to mH < 0.8Ebeam. A signal given by two sufficiently heavy particles carrying the totality of the c.m. energy, each decaying into a pair of heavy-flavoured jets, should be visible without problems. As for all processes dominated by s-channel exchange, it may be convenient to reduce the c.m. energy to Ebeam ≈ mH (TeV)/0.8. The discovery range at CLIC is given by

2mW < mH < 0.8Ebeam.

8. HEAVY QUARKS AND LEPTONS

The production and detection of sequential heavy quarks and leptons at the LHC and CLIC has been studied by several subgroups of our team. We recall that the mass splitting within an isospin doublet is limited by the experimental value of ρ, the ratio of neutral-to-charged weak-current processes, and by the observed agreement between the low-energy determinations and the collider measurements of sin²θW [50]. The maximum allowed splitting is about 300 GeV for quarks and about 500 GeV for leptons. However, there is no limit on almost-degenerate doublets. This is also the reason why an upper bound on the masses of squarks and sleptons (the supersymmetric partners of quarks and leptons) cannot be derived from the experimental smallness of radiative corrections [51].

Heavy charged leptons at the LHC have been studied by Froidevaux and Mendiburu. The reader is referred to the paper by Froidevaux [35] for a complete discussion. The main signal is from the subprocess qq → W → LνL, with subsequent decay L → Wν → jjν. The background is from qq → Z + jj followed by Z → νν. The conclusion is that the discovery of heavy leptons in pp reactions is possible within the range (for L = 10³³ cm⁻² s⁻¹ and one year of 10⁷ s):

mL < 0.5 TeV (LHC),

mL < 0.7 TeV (SSC).

The problem of heavy quarks at the LHC has been analysed by Minard [52]. The heavy quarks can be produced in pairs by gg fusion. Assuming the decay Q → qW, the six-jet mode is not suitable for detection because of the QCD background. More promising is the decay QQ → 4 jets + lν, with two of the jets tagged at the W mass. We refer to the papers by Froidevaux and Minard for details and simply quote here the result. For L = 10³³ cm⁻² s⁻¹ and one effective year, the range of masses explored is given by

mQ < 0.8 TeV (LHC),

mQ < 1 TeV (SSC).

In e+e− annihilation there are two competing contributions to the fermion-antifermion cross-section: single γ/Z exchange in the s-channel, and the γγ fusion term. At fixed √s the γγ contribution is dominant for small fermion masses. For example, at √s = 2 TeV, the single γ/Z term is dominant for m > 150, 100, and 50 GeV for charged leptons, u-quarks, and d-quarks, respectively, as obtained from Figs. 33, 34, and 35 (computed by Franco).

Fig. 33 (charged leptons: cross-sections versus Ebeam in GeV)

Fig. 34 (up quarks: cross-sections versus Ebeam in GeV)

Fig. 35 (down quarks: cross-sections versus Ebeam in GeV)

Thus, for most of the range of masses of interest here, the s-channel mechanism is dominant, and the cross-sections are not large (decreasing the energy may be an advantage also in this case) but are almost independent of the fermion mass up to masses of about 0.8Ebeam. The experimental aspects of the search for heavy quarks and leptons at CLIC were studied by Igo-Kemenes [53]. The main backgrounds for the heavy-lepton signal e+e− → L+L− → W+ν W−ν are, as usual, γγ → WW and γW → WZ. The missing energy carried by the neutrinos in the signal is replaced, in the background processes, by the energy of the final-state leptons lost in the beam pipe. The signal-to-background ratio improves at smaller √s for kinematically allowed values of mL. The conclusion for heavy leptons is that the discovery reach at CLIC is given by mL < 0.8Ebeam at L = 10³³ cm⁻² s⁻¹. However, it must be noted that the search for heavy leptons is difficult at √s = 2 TeV. It would be simpler at √s = 1 TeV, provided that the luminosity can be maintained at L = 10³³ cm⁻² s⁻¹. The analysis for heavy quarks was also carried out by Igo-Kemenes [53] and is reported in the experimental papers. The signal leads to at least six jets in the final state, with a total energy equal to the c.m. energy. The selection is done by imposing the same invariant mass in two back-to-back hemispheres. The heavy-quark peak stands out of the multijet background up to almost the kinematical maximum for mQ. Thus the conclusion for the discovery range in this case is also mQ < 0.8Ebeam.

9. PRODUCTION OF WEAK BOSONS AT CLIC

This section summarizes the results of some new calculations by Gabrielli [54] on weak-boson production in e+e− annihilation at CLIC energies. Pair production of W+W− was studied by many authors [25, 43, 44, 45]. Here we concentrate on single W± or Z production: e+e− → e∓νW±, e+e−Z. The rate of single W/Z production was computed in the Weizsäcker-Williams approximation by keeping only the diagrams with quasi-real photon exchange. As seen from Fig. 32 (and in the article by Gabrielli [54]), the resulting cross-sections are quite large (corresponding to about 2 × 10⁵ W± and 6 × 10⁴ Z at √s = 2 TeV and ∫L dt = 10⁴⁰ cm⁻²). At LEP energies the present calculations reproduce the results previously obtained [55]. The W cross-section is quite sensitive to the anomalous magnetic moment of the W. By varying the W mass at fixed couplings, the result in Fig. 36 was obtained. Similarly, Fig. 37 shows a plot of the cross-section for Z production divided by the factor ve² + ae² = (1/4)[1 + (1 − 4 sin²θW)²], which describes the effect of the Z coupling to e±, as a function of mZ. From these results it is apparent that the discovery reach of CLIC for heavy weak bosons may extend up to mW < 1 TeV and mZ < 2 TeV. A complete discussion of these results can be found in the paper by Gabrielli [54].
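The Weizsäcker-Williams computation mentioned above amounts to folding a hard subprocess cross-section with the flux of quasi-real photons radiated from the electron. The sketch below only illustrates that procedure, using the standard leading-logarithm photon spectrum; the scale choices and the placeholder subprocess cross-section are assumptions of ours, not the inputs of Ref. [54].

```python
# Minimal Weizsacker-Williams (equivalent photon) convolution:
#   sigma = integral dx f_gamma/e(x) * sigma_hat(x * s),
# with the leading-log photon spectrum; sigma_hat below is a placeholder.
import math

ALPHA = 1.0 / 137.036

def photon_flux(x, q2_max, q2_min):
    """Leading-log equivalent-photon spectrum f_gamma/e(x)."""
    return ALPHA / (2.0 * math.pi) * (1.0 + (1.0 - x) ** 2) / x \
        * math.log(q2_max / q2_min)

def convolve(sigma_hat, s, x_min, q2_max, q2_min, n_steps=2000):
    """Numerically fold sigma_hat(s_hat) with the photon flux."""
    total, dx = 0.0, (1.0 - x_min) / n_steps
    for i in range(n_steps):
        x = x_min + (i + 0.5) * dx
        total += photon_flux(x, q2_max, q2_min) * sigma_hat(x * s) * dx
    return total

# Placeholder subprocess cross-section: arbitrary shape above a W-mass threshold.
sigma_hat = lambda s_hat: 1.0 / s_hat if s_hat > 80.0 ** 2 else 0.0

s = 2000.0 ** 2  # (2 TeV)^2 in GeV^2; Q^2 limits are illustrative choices
print(convolve(sigma_hat, s, x_min=1e-3, q2_max=s, q2_min=0.25))
```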

Fig. 36 (e+e− → e+νW−, √s = 2 TeV: cross-section versus the W mass)

Fig. 37 (cross-section for Z production versus MZ, 500-1500 GeV)

10. CONCLUDING REMARKS

A concise summary of the results obtained by our study group is presented in Table 2. It clearly emerges from the above synthesis that the discovery potential of the LHC (taking into account both the pp and ep options) is quite large and exciting. The LHC presents itself as the most natural and economical continuation of the physics program of CERN in the next decade, together with LEP 100 and LEP 200. Certainly the LHC cannot compete with the SSC on equal terms: if the SSC is built, then the LHC must necessarily be operational three to five years before the SSC. This calls for a rapid decision. On the other hand, from a purely scientific point of view, the CLIC e+e− facility, especially with a luminosity L = 5 × 10³³ cm⁻² s⁻¹, is complementary to, and competitive with, even the SSC. We think that it is urgent, whatever the decision on hadron colliders, to invest the necessary manpower and financial support in a strong R&D program for CLIC.

Table 2

Summary of results

Intermediate-mass Higgs (mZ < mH < 200 GeV, H → QQ):
  CLIC: Yes; √s ≈ 1 TeV also good (L = 10³² cm⁻² s⁻¹ good up to mH < 300 GeV, L ≈ 10³³ OK).
  LHC: No, marginal (SSC: No).

Heavy Higgs (mH > 200 GeV, H → WW, H → ZZ):
  CLIC: Yes; H → 4 jets, mH < 0.6-0.8 TeV (if L = 10³⁴ cm⁻² s⁻¹, mH < 1-1.2 TeV: luminosity crucial).
  LHC: Yes; H → ZZ → νν + e+e−, μ+μ−, mH < 0.6 TeV (mH < 1 TeV with quark tagging); SSC: mH < 1-1.2 TeV, √s crucial.

Charged Higgs (H⁺ → tb):
  CLIC: Difficult: √s = 2 TeV, 60 ev. per year; √s = 1 TeV, 250 ev. per year. May be possible for 2mW < mH < 0.8Ebeam; large mH better; luminosity crucial.
  LHC: No (SSC: No).

Heavy leptons (L → νW):
  CLIC: mL < 0.8Ebeam possible; √s = 1 TeV: better S/B.
  LHC: Possible, mL < 0.5 TeV (SSC: 0.7 TeV).

Heavy u,d quarks (Q → qW):
  CLIC: Yes (easy), mQ < 0.8Ebeam; large mQ better.
  LHC: 6 jets: No; 4 jets + lν promising, mQ < 0.8 TeV (SSC: 1 TeV).

REFERENCES

[1] E. Eichten, I. Hinchliffe, K. Lane and C. Quigg, Rev. Mod. Phys. 56 (1984) 579.
[2] Proc. 1984 Summer Study on the Design and Utilization of the Superconducting Super Collider, Snowmass, Colo., 1984, eds. R. Donaldson and J. Morfin (AIP, New York, 1985). Proc. 1986 Summer Study on the Physics of the Superconducting Super Collider, Snowmass, Colo., 1986, eds. R. Donaldson and J. Marx, in preparation. Supercollider Physics, Proc. Oregon Workshop on Super High Energy Physics, Eugene, Oregon, 1985, ed. D.E. Soper (World Scientific, Singapore, 1986). Proc. UCLA Workshop on Observable Standard Model Physics at the SSC, Los Angeles, 1986, eds. H.U. Bengtsson et al. (World Scientific, Singapore, 1986).
[3] Proc. ECFA-CERN Workshop on a Large Hadron Collider in the LEP Tunnel, Lausanne and Geneva, 1984 (ECFA 84/85, CERN 84-10, Geneva, 1984).
[4] G. Altarelli, B. Mele and R. Rückl, Ref. [3], p. 551.
[5] J. Bagger and M. Peskin, Phys. Rev. D31 (1985) 2211.
[6] J. Ellis, M.K. Gaillard and D. Nanopoulos, Nucl. Phys. B106 (1976) 292. R. Barbieri and T.E.O. Ericson, Phys. Lett. B57 (1975) 270.
[7] R.S. Willey, Phys. Lett. B173 (1986) 480.
[8] T.N. Pham and D.G. Sutherland, Phys. Lett. B151 (1985) 444. R.S. Willey and H.L. Yu, Phys. Rev. D26 (1982) 3287.
[9] J. Lee-Franzini, in Physics in Collision 5, eds. B. Aubert et al. (Ed. Frontières, Gif-sur-Yvette, 1985), p. 145.
[10] F. Wilczek, Phys. Rev. Lett. 39 (1977) 1304.
[11] M. Vysotsky, Phys. Lett. 97B (1980) 159.
[12] P. Nason, Columbia Univ. preprint CU-TP-346 (1986).
[13] A.D. Linde, Sov. Phys.-JETP Lett. 23 (1976) 64. S. Weinberg, Phys. Rev. Lett. 36 (1976) 294.
[14] A.D. Linde, Phys. Lett. B92 (1980) 119. See also A.A. Ansel'm et al., Sov. Phys.-Usp. 28 (1985) 113.
[15] J. Ellis and R. Peccei (eds.), Physics at LEP, CERN 86-02 (1986).
[16] L. Maiani et al., Nucl. Phys. B136 (1978) 115. N. Cabibbo et al., Nucl. Phys. B158 (1979) 295.
[17] M.A.B. Beg et al., Phys. Rev. Lett. 52 (1984) 883. M. Lindner, Z. Phys. C31 (1986) 295.
[18] M. Veltman, Acta Phys. Pol. B8 (1977) 475. B.W. Lee, C. Quigg and H.B. Thacker, Phys. Rev. D16 (1977) 1519.
[19] J.J. Van der Bij and M. Veltman, Nucl. Phys. B231 (1984) 205. J.J. Van der Bij, Nucl. Phys. B161 (1985) 341. M.B. Einhorn, Nucl. Phys. 264B (1984) 75. P.Q. Hung and H.B. Thacker, Phys. Rev. D31 (1985) 2866. R. Casalbuoni, D. Dominici and R. Gatto, Phys. Lett. 147B (1984) 419 and 155B (1985) 95; Nucl. Phys. B282 (1987) 235. M.S. Chanowitz and M.K. Gaillard, Phys. Lett. 142B (1984) 85; Nucl. Phys. B261 (1985) 379.
[20] M.S. Chanowitz, Berkeley preprint LBL-21973 (1986), presented at the 23rd Int. Conf. on High-Energy Physics, Berkeley, 1986. M.C. Bento and C.H. Llewellyn Smith, Higgs boson production and the scattering of longitudinally polarized vector bosons at very high energy electron-positron colliders, Oxford preprint (1986).
[21] H.M. Georgi et al., Phys. Rev. Lett. 40 (1978) 692.

[22] J.F. Gunion and H.E. Haber, Nucl. Phys. B272 (1986) 1.
[23] See, for example, N.G. Deshpande, Proc. Oregon Workshop on Super High-Energy Physics, Eugene, Oregon, 1985, ed. D.E. Soper (World Scientific, Singapore, 1986), p. 148. J.F. Gunion, Univ. California Davis preprint UCD-86-39 (1986).
[24] M.S. Chanowitz and M.K. Gaillard, quoted in Ref. [19]. G.L. Kane, W.W. Repko and W.B. Rolnick, Phys. Lett. 148B (1984) 367. S. Dawson, Nucl. Phys. B249 (1985) 42. See also R.N. Cahn and S. Dawson, Phys. Lett. 136B (1984) 196.
[25] D.R.T. Jones and S.T. Petcov, Phys. Lett. 84B (1979) 440. K. Hikasa, Phys. Lett. 164B (1985) 385.
[26] R.N. Cahn, Nucl. Phys. B255 (1985) 341.
[27] G. Altarelli, B. Mele and F. Pitolli, Nucl. Phys. B287 (1987) 205.
[28] D.A. Dicus and S. Willenbrock, Phys. Rev. D32 (1985) 1642.
[29] J.F. Gunion et al., Phys. Rev. D34 (1986) 101.
[30] V. Barger et al., Univ. Wisconsin, Madison, preprint MAD/PH/297 (1986). J.F. Gunion and Z. Kunszt, Univ. California Davis preprint UCD-86-21 (1986).
[31] G. Altarelli and E. Franco, Mod. Phys. Lett. A1 (1986) 517.
[32] V.M. Budnev et al., Phys. Rep. 15 (1975) 181.
[33] P. Bloch and J. Sass, these Proceedings.
[34] E. Franco et al., these Proceedings.
[35] D. Froidevaux, these Proceedings.
[36] D. Denegri, these Proceedings.
[37] J.F. Gunion, Z. Kunszt and M. Soldate, Phys. Lett. 163B (1985) 389; E: 168B (1986) 427.
[38] W.J. Stirling, R. Kleiss and S.D. Ellis, Phys. Lett. 163B (1985) 261.
[39] J.F. Gunion and M. Soldate, Phys. Rev. D34 (1986) 826.
[40] R.N. Cahn, S.D. Ellis, R. Kleiss and W.J. Stirling, Berkeley preprint LBL-21649 (1986). See also R.N. Cahn, preprint LBL-22920 (1987).
[41] M.S. Chanowitz and M.K. Gaillard, Nucl. Phys. B261 (1985) 379. R.N. Cahn and M.S. Chanowitz, Phys. Rev. Lett. 56 (1986) 1327.
[42] F. Richard, these Proceedings.
[43] G.L. Kane and J.J.G. Scanio, CERN-TH.4532/86 (1986). See also M.C. Bento and C.H. Llewellyn Smith, quoted in Ref. [20].
[44] See, for example, M. Katuya, Phys. Lett. 124B (1983) 421.
[45] B. Mele, these Proceedings.
[46] M. Greco et al., these Proceedings.
[47] M.J. Duncan, G.L. Kane and W.W. Repko, Nucl. Phys. B272 (1986) 517. M. Chanowitz and M.K. Gaillard, Nucl. Phys. B261 (1985) 379.
[48] See, for example, A.A. Ansel'm et al., Sov. Phys.-Usp. 28 (1985) 113.
[49] I. Bigi et al., Phys. Lett. 181B (1986) 157.
[50] See, for example, G. Altarelli, Rome Univ. preprint 529/1986, presented at the 23rd Int. Conf. on High-Energy Physics, Berkeley, 1986.
[51] R. Barbieri and L. Maiani, Nucl. Phys. B224 (1983) 32.
[52] M.N. Minard, these Proceedings.
[53] P. Igo-Kemenes, these Proceedings.
[54] E. Gabrielli, these Proceedings.
[55] H. Neufeld, Z. Phys. C17 (1983) 145. O. Cheyette, Phys. Lett. 137B (1984) 431.

EXPERIMENTAL STUDIES IN THE STANDARD THEORY GROUP

Presented by D. Froidevaux Laboratoire de l'Accélérateur linéaire, Orsay, France

1. INTRODUCTION

This summary is not an exhaustive overview of all the work contributed to the Standard Theory Group by experimentalists before and during the La Thuile Workshop. It is designed rather as a complementary part of the theoretical review by G. Altarelli [1], with the main emphasis on the experimental problems in the search for the Higgs. The experimental studies were divided into seven topics:
1. Search for an intermediate-mass Higgs at CLIC (P. Bloch, J. Sass)
2. Search for a heavy Higgs at CLIC (F. Richard)
3. Search for a heavy Higgs at LHC (L. DiLella, D. Froidevaux)
4. A heavy top-quark as a source of W pairs (D. Denegri)
5. Heavy quarks and leptons at CLIC (P. Igo-Kemenes)
6. Heavy quarks at LHC (M.-N. Minard)
7. Heavy leptons at LHC (D. Froidevaux, J.-P. Mendiburu)
Most of these items will be the subject of detailed separate contributions in this or the forthcoming Volume II. This summary deals extensively with items 2 and 3, and in somewhat less detail with items 4 to 7. Unless otherwise stated, it is assumed that CLIC would operate at √s = 2 TeV and L = 10³³ cm⁻² s⁻¹, and the LHC at √s = 20 TeV and L = 10³³ cm⁻² s⁻¹. It is also assumed that both machines would deliver an integrated luminosity of 10⁴ pb⁻¹ per year.
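For orientation, the quoted yearly integrated luminosity follows directly from these running assumptions; the short arithmetic sketch below (ours, not part of the Workshop studies) reproduces the 10⁴ pb⁻¹ figure from L = 10³³ cm⁻² s⁻¹ and an effective year of 10⁷ s.

```python
# Integrated luminosity for one effective year of running,
# assuming L = 1e33 cm^-2 s^-1 and 1e7 s of live time (as stated above).
L_inst = 1e33          # cm^-2 s^-1
t_year = 1e7           # s (one "effective year")
cm2_per_pb = 1e-36     # 1 pb = 1e-36 cm^2, so 1 pb^-1 = 1e36 cm^-2

L_int_cm2 = L_inst * t_year          # 1e40 cm^-2
L_int_pb = L_int_cm2 * cm2_per_pb    # 1e4 pb^-1
print(L_int_pb)                      # -> 10000.0 inverse picobarns
```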

2. SEARCH FOR A HEAVY HIGGS AT CLIC
2.1 The physics generator
As described in detail in Ref. [1], a signal from a heavy Higgs at the CERN Linear Collider (CLIC) would be identified through the purely hadronic decays of the two bosons resulting from the Higgs decay, since, as will be shown below, this final state corresponds to the largest rate and to a nicely constrained reconstruction of the Higgs mass. Roughly 50% of the Higgs decays will result in four-jet final states, more or less independently of the top-quark mass. We recall that the final-state system will be fairly central and have a large total transverse momentum, ⟨pT⟩ = 100 GeV/c (see Fig. 20 of Ref. [1]). The two main sources of background that have been studied are γγ → WW and γW → WZ, which lead to strongly forward/backward peaked angular distributions for the final-state bosons. In the case of γγ → WW, the transverse momentum of the boson pair will in general be small with, however, a very long tail (see Fig. 20 of Ref. [1]). There are two other striking differences between the signal and the background, which have not been used explicitly in our experimental studies because of their strong dependence upon the exact detector characteristics:
i) The first is the distribution of the angle θ* of the final-state jets in the boson-pair centre of mass, which will be proportional to sin²θ* for the Higgs signal (longitudinally polarized bosons produced dominantly from scalar Higgs decay) but proportional to (1 + cos²θ*) for the background (transversely polarized bosons produced dominantly through γγ). The measurement of this angular distribution must, however, rely heavily on the ability of the calorimeter to separate the two jets from the decay of a high-momentum W or Z boson, in order to measure their directions accurately. For a Higgs mass of 1 TeV/c², one would need to separate jets down to angles of about 15° between the two jets in order to measure the cos θ* distribution with the required accuracy.
ii) The second difference arises from the fact that, contrary to the Higgs signal, there is always an e+e− pair in the final state for the γγ → WW background. These electrons will emerge at angles from 2° to 10° with respect to the beam, for events where the final-state system has large transverse momentum. The possibility of tagging these outgoing electrons depends of course upon the exact shape of the interaction region, and, in particular, on the position and shape of the final focusing quadrupoles. Preliminary thoughts from the CLIC Machine Study Group indicate that a small calorimeter for electron tagging could fit into a region where enough space is available and where the radiation expected from the beams will be negligible.

2.2 Detector simulation
The hadronic decays of the boson pairs include gluon bremsstrahlung from the final-state quarks, which, in this study, are always taken to be light (i.e. we have not looked at the reconstruction of W → tb decays). The final-state partons are then naively assumed to be measured with an energy loss of average value 10% and r.m.s. 5%, and with an energy resolution σE = 0.5√E. Because of the uncertainties on the exact shape of the interaction region, final-state partons at polar angles smaller than 15° with respect to the beams are assumed to be lost. Finally, in order to obtain a result that is more or less independent of the exact granularity of the calorimeter, partons which are separated by less than 30° in space are conservatively assumed to merge into a single jet. In this way, a set of measured jet energies and directions is obtained. If at least four such jets are measured, we reconstruct the best boson pair by minimizing the quantity (mij − mW)² + (mkl − mW)² over all possible sets of four jets (i, j, k, l). The W-boson mass constraint is then used to rescale the jet energies, which corrects for the energy losses but does not improve much on the Higgs mass resolution.
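As an illustration of the pairing step just described, the sketch below implements the minimization of (mij − mW)² + (mkl − mW)² over all ways of grouping four jets into two pairs. The jet four-momenta are made-up numbers, and the code is only a schematic rendering of the procedure, not the simulation program used for this study.

```python
# Toy sketch of the W-pair reconstruction described above: among all ways of
# grouping four jets into two pairs, pick the grouping minimizing the summed
# (m - m_W)^2.  Jet four-momenta (E, px, py, pz) in GeV are placeholders.
from itertools import combinations
import math

M_W = 80.0  # GeV, nominal W mass used in the pairing

def inv_mass(jets):
    E  = sum(j[0] for j in jets)
    px = sum(j[1] for j in jets)
    py = sum(j[2] for j in jets)
    pz = sum(j[3] for j in jets)
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

def best_w_pairing(jets):
    """Return ((pair1, pair2), chi2) minimizing (m1 - m_W)^2 + (m2 - m_W)^2."""
    best = None
    for quad in combinations(range(len(jets)), 4):   # choose 4 jets
        for pair1 in combinations(quad, 2):          # split them into two pairs
            pair2 = tuple(i for i in quad if i not in pair1)
            if pair1 > pair2:                        # avoid double counting
                continue
            m1 = inv_mass([jets[i] for i in pair1])
            m2 = inv_mass([jets[i] for i in pair2])
            chi2 = (m1 - M_W) ** 2 + (m2 - M_W) ** 2
            if best is None or chi2 < best[1]:
                best = ((pair1, pair2), chi2)
    return best

# Example with four made-up jets (would come from the detector simulation):
jets = [(210., 120., 80., 100.), (150., -60., 90., -70.),
        (250., -130., -110., 120.), (180., 70., -100., -90.)]
print(best_w_pairing(jets))
```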

2.3 Description of results
Table 1 shows in some detail how the signal and background events survive the detector simulation. Also shown are the effects of the angular cut (cos θW is the boson angle in the boson-pair centre of mass) and of the boson-pair transverse-momentum cut (pT of the WW pair).

Table 1

Heavy-Higgs rates per year at CLIC

                                          mH = 500 GeV/c²               mH = 800 GeV/c²
                                       Signal     Background         Signal     Background
                                                  (mWW = 450-                   (mWW = 600-
                                                  550 GeV/c²)                   1000 GeV/c²)

Produced                                 1400         3000              600         4650
Purely hadronic final state               660         1390              260         2140
After detector acceptance and
  jet reconstruction                      530          460              240          500
Angular cut: |cos θW| < 0.8               480          260              210          160
pT cut: pT(WW) > 20 GeV/c                 420          160              190          130

The background calculation [2] includes both γγ → WW and γW → WZ, where the events are integrated for boson-pair masses within ±ΓH of the Higgs mass, ΓH being the theoretically expected width of the Higgs. The signal-to-background ratio at production is therefore much worse for mH = 800 GeV/c² (≈ 1/8) than for mH = 500 GeV/c² (≈ 1/2). After detector simulation the signal-to-background ratio improves considerably, essentially because events with jets at small polar angles (θ < 15°) are lost. After the angular cut and the pT cut, the signal-to-background ratio is improved still further, and the resulting boson-pair mass distributions are shown in Figs. 1 and 2 for mH = 500 GeV/c² and 800 GeV/c², respectively.

The signal for mH = 500 GeV/c² is quite clean and its width is already dominated by the Higgs width (ΓH = 62 GeV), the experimental boson-pair mass resolution being about 20 GeV/c². This result justifies the choice of the purely hadronic final state for heavy-Higgs detection at CLIC. In particular, if we select the H → WW → lνjj decay modes, where l is an electron or a muon, the mass resolution becomes much worse because the neutrino transverse momentum is not much larger than the Higgs transverse momentum.

Figure 2 shows, however, that because of the rapid increase of ΓH as a function of mH, the Higgs signal is not overwhelmingly convincing for mH = 800 GeV/c². We recall that the background shape has not been accurately computed theoretically, nor will it be directly measurable experimentally; therefore, some additional handle with which to extract the signal would be welcome. As was discussed in subsection 2.1, this handle could be the polarization measurement, i.e. the measurement of the jet angular distributions in the WW centre of mass, to the extent that the calorimeter allows jet separation down to angles of 15°. This would provide clean evidence for a signal, since Higgs production is the only process dominated by pair production of longitudinally polarized bosons, resulting in a sin²θ* distribution for the final-state partons.
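The angular handle mentioned above can be illustrated with a toy Monte Carlo: cos θ* distributed as sin²θ* (longitudinally polarized bosons, Higgs signal) is markedly more central than 1 + cos²θ* (transversely polarized bosons, γγ background). The numbers below are purely illustrative and are not taken from the Workshop studies.

```python
# Toy comparison of the decay-jet angular distributions quoted above:
# sin^2(theta*) for longitudinal bosons vs. 1 + cos^2(theta*) for transverse.
import math
import random

def sample_cos_theta(pdf, n, pdf_max):
    """Rejection-sample cos(theta*) in [-1, 1] from an unnormalized pdf."""
    out = []
    while len(out) < n:
        c = random.uniform(-1.0, 1.0)
        if random.uniform(0.0, pdf_max) < pdf(c):
            out.append(c)
    return out

longitudinal = lambda c: 1.0 - c * c        # sin^2(theta*)
transverse   = lambda c: 1.0 + c * c        # 1 + cos^2(theta*)

random.seed(1)
sig = sample_cos_theta(longitudinal, 20000, 1.0)
bkg = sample_cos_theta(transverse, 20000, 2.0)

central = lambda xs: sum(abs(c) < 0.5 for c in xs) / len(xs)
# Signal is noticeably more central: ~0.69 vs ~0.41 expected analytically.
print(central(sig), central(bkg))
```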

We have also illustrated in Fig. 2 the effect of a higher luminosity at CLIC (L = 10³⁴ cm⁻² s⁻¹). In this case, for mH = 800 GeV/c², the H → ZZ → lljj channel would provide about 160 reconstructed events per year, a number similar to the reconstructed H → WW → jjjj event rate at the standard luminosity. The background from γγ → WW is completely suppressed, however, for this decay channel, and we are left with only the γW → WZ background, which means an improvement of a factor of 2.5 in the signal-to-background ratio. Finally, as an example of the gain obtained by eliminating γγ → WW background events through tagging of one of the outgoing electrons, Fig. 3 shows, for a Higgs mass of 1 TeV/c², the expected signal over the remaining γW → WZ background, for a total integrated luminosity of 50 000 pb⁻¹. From these studies, we conclude that a signal from a heavy Higgs would be observable at CLIC, for Higgs masses ranging from 200 to 800 GeV/c². For larger Higgs masses, the signal could be extracted if the luminosity were increased to 5 × 10³³ cm⁻² s⁻¹ and if the outgoing electrons from the γγ → WW background could be rejected by a small-angle calorimeter. More details on this subject can be found in the contribution by F. Richard.

Fig. 3 (expected signal versus MWW in GeV)

3. SEARCH FOR A HEAVY HIGGS AT THE CERN LARGE HADRON COLLIDER
3.1 Introduction
A great amount of work has gone into the study of heavy Higgs production at high-energy hadron colliders, mainly at the Superconducting Super Collider (SSC) Workshops [3]. A large fraction of this work was devoted to the search for H → WW → lνjj decays (l will stand for electrons or muons throughout this section) [4, 5]. However, most of it did not include either the effects of the large transverse momentum pT expected for Higgs production through WW fusion, or, to a large extent, the experimental effects. In our study we have compared, where it is important, the expected signal and background rates with and without reconstruction of pT; we have also included, in a general way, detector effects.

3.2 The physics generator
We have used a modified version of an event generator*) which produces the Higgs signal through gg fusion and WW fusion, and also produces the WW (or ZZ) continuum and W + 2-jet background processes.

3.2.1 Higgs production and transverse momentum
We have conservatively used a t-quark mass of 30 GeV/c², which minimizes the Higgs production through gg fusion. In particular, for Higgs masses above 300 GeV/c², the Higgs production is dominated by WW fusion, a process which has been accurately computed to first order, as discussed in Ref. [1]. For Higgs masses below 300 GeV/c², the large uncertainties on the gluon structure function at low x (we have used the EHLQ1 parametrization [6]) result in a rather large theoretical uncertainty on the total Higgs production cross-section; but for this mass range (200 GeV/c² < mH < 300 GeV/c²) we shall see that Higgs detection at the LHC is fairly straightforward, and the result would not be changed if the production cross-section were too high by a factor of 2. The first line of Table 2 shows the Higgs production cross-sections, with their gg and WW fusion contributions, for various Higgs masses, at √s = 20 TeV. We have checked that these numbers are in rough agreement with previous computations [6, 7]. We would like to stress here that the Higgs production cross-section would be five times larger at the SSC (√s = 40 TeV), which means that √s is a crucial design parameter for a hadron collider in the particular case of the search for the Higgs.
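As a rough cross-check of the yearly rates quoted in Table 2 below, one can simply multiply these cross-sections by the assumed integrated luminosity of 10⁴ pb⁻¹ and by asymptotic heavy-Higgs branching fractions BR(H → WW) ≈ 2/3 and BR(H → ZZ) ≈ 1/3 (our simplification, valid only well above the ZZ threshold); the sketch below reproduces the Table 2 numbers reasonably well for mH ≥ 400 GeV/c².

```python
# Rough cross-check (ours, not the Workshop code) of the yearly event numbers
# in Table 2: sigma x integrated luminosity, split 2/3 : 1/3 into WW and ZZ.
int_lumi_pb = 1.0e4  # pb^-1 per year, as assumed in Section 1

# gg fusion + WW fusion cross-sections (pb) from Table 2, keyed by mH (GeV/c^2)
sigma_pb = {200: 15 + 4, 400: 0.5 + 1.6, 600: 0.1 + 0.8, 800: 0.0 + 0.5}

for m_h, sigma in sigma_pb.items():
    n_total = sigma * int_lumi_pb
    # e.g. mH = 400 GeV/c^2: ~14000 H->WW and ~7000 H->ZZ, close to Table 2
    print(m_h, round(n_total * 2 / 3), round(n_total * 1 / 3))
```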

Table 2

Heavy Higgs at the LHC (Note: all lepton detection efficiencies are assumed to be 100%)

Higgs mass (GeV/c²) [width (GeV)]            200 [1.6]    400 [23.8]    600 [91.5]    800 [227]

Higgs production cross-section (pb):
  gg fusion                                      15           0.5           0.1           0.0
  WW fusion                                       4           1.6           0.8           0.5

No. of events per year:
  H → WW                                     117000         14000          6000          3300
  H → ZZ                                      40000          6500          3000          1650
  WW continuum                               900000
  ZZ continuum                               100000

No. of events per year after acceptance cuts:
  H → ZZ → llll                                 110            19            10             6
  ZZ → llll continuum                           200
  H → ZZ → llνν or llll                         650           120            55            33
  ZZ → llνν or llll continuum                   980
  H → WW → lνlν                                2550           330           150            80
  WW → lνlν continuum                         14600

*) We thank W.J. Stirling for providing us with the original version of this event generator.

We have then generated the Higgs transverse momentum, pT(H), according to Fig. 20 of Ref. [1] for the WW fusion contribution. In the case of gg fusion, pT(H) is expected to be much smaller on average, and we have generated it according to the recipe of Ref. [8]. As will often be shown in the following sections, the large values expected for pT(H) (when the cross-section is dominated by WW fusion) distort the observed experimental distributions considerably. We have therefore looked into the feasibility of measuring pT(H) through detection of the small-angle jets expected from fragmentation of the quarks radiated by the W bosons, in the case of the WW fusion mechanism.

3.2.2 Measurement of pT(H) through the detection of outgoing quark jets
This method was first suggested in Ref. [9]. Figure 4, taken from this reference, shows the pseudorapidity distribution of the outgoing quark jets which, similarly to the outgoing electrons in γγ → WW discussed in Section 2, are emitted at angles between 1° and 15° with respect to the beams. Typically, these jets will have transverse momenta of about 50 GeV/c and energies of about 1 TeV. They will therefore be relatively easy to separate from the underlying spectator hadrons, which are present in hadron-hadron collisions. The feasibility of a long-lived calorimeter at such small angles in the LHC environment is not within the scope of our work, and we refer the reader to the report of the Calorimeter Study Group [10] for further discussion of this topic. In our studies we have incorporated the possibility of reconstructing the Higgs transverse momentum through the measurement of these two outgoing quark jets, assuming that pT(H) is shared at random between two jets generated with a flat pseudorapidity distribution for 2 < |η| < 4. Figure 5 illustrates the effect of this procedure, for the decay mode H → WW → lνjj with mH = 600 GeV/c². As we shall see, this is the only Higgs decay mode which would allow us to reach Higgs masses of order 1 TeV/c² at the LHC. Unfortunately, this mode is the one that would be most affected by the absence of a pT(H) measurement, since the transverse momentum of the neutrino is not much larger than pT(H). In order to reconstruct the total (lνjj) invariant mass, we have to use the by now standard procedure of imposing the W mass constraint, in order to extract pL, the neutrino longitudinal momentum.

As shown in Fig. 5, if pT(H) is reconstructed through the measurement of the forward quark jets, this procedure does not distort the generated distribution too much (ΓH = 92 GeV). If pT(H) is not reconstructed, pL is very badly measured, and this results in a large fraction of non-physical solutions for pL: almost 40% of the events are lost. The overall effect is a decrease by a factor of 4 in the peak value of the signal.

Fig. 4 (from Cahn et al., LBL-21649: pseudorapidity |η| of the tagged quark; typically pT of the jet ≈ 50 GeV/c, Ejet ≈ 1 TeV)

Fig. 5 (total (lνjj) mass in GeV/c²)
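The W-mass constraint referred to above reduces to a quadratic equation for the neutrino longitudinal momentum; when the discriminant is negative one obtains the 'non-physical solutions' responsible for the quoted event loss. The sketch below shows the standard algebra with made-up lepton and missing-momentum values; it is illustrative only and is not the Workshop code.

```python
# W-mass constraint for the neutrino longitudinal momentum:
# with massless lepton and neutrino, m_W^2 = 2 (E_l E_nu - p_l . p_nu)
# is a quadratic equation in p_z(nu).
import math

M_W = 80.4  # GeV, nominal W mass used in the constraint

def neutrino_pz(lep_p, met):
    """lep_p = (px, py, pz) of the charged lepton, met = (px, py) missing pT.
    Returns the two solutions for p_z(nu), or None when the discriminant is
    negative (the 'non-physical' case mentioned in the text)."""
    px, py, pz = lep_p
    nx, ny = met
    E = math.sqrt(px * px + py * py + pz * pz)
    pt2_l = px * px + py * py
    pt2_n = nx * nx + ny * ny
    mu = 0.5 * M_W ** 2 + px * nx + py * ny
    disc = mu * mu - pt2_l * pt2_n
    if disc < 0.0:
        return None
    root = E * math.sqrt(disc)
    return ((mu * pz - root) / pt2_l, (mu * pz + root) / pt2_l)

# Example with made-up momenta in GeV:
print(neutrino_pz((40.0, 30.0, 109.0), (-35.0, 25.0)))
```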

3.2.3 Higgs decays
The partial width of the Higgs decaying into boson pairs increases as mH³, whereas its decay width into any fermion pair ff increases only as mH mf². This means that, regardless of the t-quark mass, or, for that matter, of the existence of any other heavy quark, the Higgs will decay predominantly into boson pairs for mH larger than 300 GeV/c². Table 2 shows the expected rates for one year of running at the LHC (integrated luminosity of 10⁴⁰ cm⁻²), for H → WW and H → ZZ decays and for various values of mH. We have studied most of the possible final states for Higgs decay:
i) H → ZZ → llll, the least favourable mode in terms of rate, but the cleanest in terms of experimental signature and background;
ii) H → ZZ → llνν;
iii) H → WW → lνlν;
iv) H → WW → lνjj, which is the most favourable of these modes in terms of rate, but unfortunately is the most difficult one to extract from the background, as we shall see below.
Other possible decay modes were not studied, because we expect them either to be swamped by background (H → WW → jjjj) or to provide less convincing evidence for a Higgs signal (H → ZZ → lljj) than one of the cleaner or more copious modes already studied.
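For orientation, the mH³ growth quoted above corresponds to the standard asymptotic widths Γ(H → W⁺W⁻) ≈ GF mH³/(8√2 π) and Γ(H → ZZ) ≈ GF mH³/(16√2 π), valid well above threshold. The short sketch below compares this estimate with the widths listed in Table 2; only agreement at the 10-30% level should be expected, since threshold factors are neglected here.

```python
# Asymptotic heavy-Higgs width into boson pairs,
#   Gamma(H -> WW) + Gamma(H -> ZZ) ~ 3 G_F m_H^3 / (16 sqrt(2) pi),
# neglecting phase-space (threshold) factors.
import math

G_F = 1.166e-5  # GeV^-2

def gamma_vv(m_h):
    return 3.0 * G_F * m_h ** 3 / (16.0 * math.sqrt(2.0) * math.pi)

for m_h, table_width in [(400, 23.8), (600, 91.5), (800, 227.0)]:
    # e.g. mH = 600 GeV: ~106 GeV from the formula vs. 91.5 GeV in Table 2
    print(m_h, round(gamma_vv(m_h), 1), table_width)
```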

3.2.4 Background processes
The first obvious source of background to the decay modes discussed above arises from WW or ZZ continuum production with the same final state. The cross-sections for these processes [6, 11] are about ten times larger than the Higgs production cross-sections, but, as we shall see, they will not give rise to severe backgrounds, because of the substantial differences in kinematics between the Higgs signal and the boson-pair continuum. Table 2 shows the WW and ZZ continuum rates at the LHC, integrated over the whole range of boson-pair masses. These are the only backgrounds to the first three (purely leptonic) Higgs decay modes that we have studied. As is well known by now [4], the main background to the semihadronic H → WW → lνjj decay mode arises from (W → lν) + jj production, where the dijet invariant mass is compatible with the W mass within the experimental mass resolution. The cross-section for this process is almost two orders of magnitude larger than the (W → lν) + (W → jj) continuum cross-section. Possible ways of reducing this background will be discussed in more detail in subsection 3.7.

3.3 Detector simulation
The Higgs signal and the background processes discussed above have all been run through a detector simulation, which includes the following:

3.3.1 Acceptance cuts
Leptons and jets are required to be emitted at angles larger than 5.7° with respect to the beams (|η| < 3), with the exception of the outgoing quark jets in Higgs production, which are emitted at even smaller angles. The Higgs signal does tend to be more centrally produced than the background processes, but the losses in rate resulting from a more stringent cut (e.g. |η| < 1.5) are not compensated by very large background rejection factors. Leptons and jets are required to have transverse momenta larger than 20 GeV/c, which allows for fairly straightforward schemes for triggering on the interesting Higgs decay modes, at the expense of a negligible reduction in event rate.

3.3.2 Particle separation
If two jets are present in the final state with an angular separation of less than 30°, they are merged into one single jet. In most of these cases the two jets would have been required to be compatible with W → jj decay. The event is therefore rejected because, as shown by work done by the Large Cross-Section Study Group [12], a single light-quark jet at the LHC will have a reconstructed mass close to the W mass, and will therefore easily fake a W → jj decay where the two jets were not separated in the calorimeter. We have also required a minimum angular separation of 20° between any electron and any jet in the final state. Muons inside jets have been assumed to be reconstructed with full efficiency. At this point we would like to stress that previous work on the subject of jet-jet mass resolution at the LHC [13] has shown that, contrary to CLIC, a jet-jet mass resolution of about 5% can only be achieved through jet-clustering algorithms within a finite cone, in order to eliminate the effect of the spectator particles and, above all, of event pile-up in the calorimeter electronics. We shall come back to this crucial point of W → jj reconstruction in subsection 3.7.

3.3.3 Energy resolution
Electron energies were smeared with an energy resolution σE = 0.1√E + 0.01E, and muons were assumed to be reconstructed in a large magnetized iron detector [14], resulting in a momentum resolution

The jet energies were assumed to be measured with an accuracy σE = 0.5√E + 0.05E, unless otherwise stated. Jet directions were assumed to be measured with an accuracy of 10 mrad, which affects the W → jj mass reconstruction, the measurement of pT(H), and especially the measurement of the jet angular distribution in the WW centre of mass in the case of H → WW → lνjj. Table 2 shows the expected event rates per year for the three purely leptonic Higgs decay modes which we have studied and for their respective backgrounds after all detector effects have been included.
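The resolution parametrizations above translate into a simple Gaussian smearing of the generated energies. The toy function below (ours; E in GeV, using the jet parametrization σE = 0.5√E + 0.05E quoted in the text) indicates the kind of procedure meant here by 'detector simulation'.

```python
# Toy Gaussian smearing with the jet resolution quoted above,
# sigma_E = 0.5 * sqrt(E) + 0.05 * E  (E in GeV).
import math
import random

def smear_jet_energy(e_true, rng=random):
    sigma = 0.5 * math.sqrt(e_true) + 0.05 * e_true
    return max(rng.gauss(e_true, sigma), 0.0)

random.seed(2)
# A 300 GeV jet is measured with a spread of roughly 24 GeV:
print([round(smear_jet_energy(300.0), 1) for _ in range(5)])
```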

3.4 H → ZZ → llll decay mode
This mode clearly provides the cleanest and most accurate reconstruction of the Higgs mass. As shown in Table 2, the small branching ratio results, however, in insufficient event rates for Higgs masses above 300 to 400 GeV/c². The rates of Table 2 are further reduced by requiring that two pairs of final-state leptons have a reconstructed mass compatible with the Z mass and, mainly, by requiring that the transverse momentum of each Z be large, as expected from Higgs decay. If we apply a cut requiring that the sum of the transverse momenta of the two Z bosons be larger than 100 GeV/c, we are left with 70 events per year for mH = 200 GeV/c² and 33 events per year for mH = 300 GeV/c². The ZZ continuum background is reduced, by more than a factor of 2, to about 90 events per year. Figure 6 shows the resulting (llll) invariant mass distributions for the background and the signal corresponding to mH = 200 GeV/c² and mH = 300 GeV/c². Clearly, a Higgs of mass larger than 300 GeV/c² could not be observed in this decay mode in one LHC year. Here we would like to stress, using this decay mode as the most striking example, the effects of luminosity and centre-of-mass energy on this result:
1) Effect of luminosity: if the luminosity of the machine were increased by a factor of 2, the accessible mass range would increase from 300 GeV/c² to 500 GeV/c², ignoring the problems which undoubtedly will arise when the Higgs width becomes large. An extreme case, discussed in these Proceedings [15], would be that of a special intersection region with a luminosity of 5 × 10³⁴ cm⁻² s⁻¹, 50 times larger than the standard LHC luminosity. In this high-luminosity region, only a 'beam-dump' experiment with muon detection would probably survive. The net gain would then be only a factor of 12.5 (no electrons detected), resulting in 75 H → ZZ events per year for mH = 800 GeV/c².

Fig. 6 (pp, √s = 20 TeV, H → ZZ → llll: total (llll) mass in GeV/c²; ZZ continuum background, 90 events/year; mH = 200 GeV/c², 70 events/year; mH = 300 GeV/c², 33 events/year)

2) Effect of √s: at the SSC (√s = 40 TeV), the accessible mass range will increase from 300 GeV/c² to 800 GeV/c², again ignoring the problems due to the Higgs width. It should be noted here that the signal-to-background ratio is better at √s = 40 TeV, because the WW and ZZ continuum cross-sections increase by a factor of 2.5, whereas the Higgs production cross-section increases by a factor of 5. Clearly, the LHC would benefit from the maximum energy available.

3.5 H → ZZ → llνν decay mode
Owing to the large Z → νν branching ratio (18%), the H → ZZ → llνν (or llll) decay mode can be used to extract a clean Higgs signal for Higgs masses beyond 300 GeV/c². In this decay mode, however, we can no longer reconstruct the total Higgs mass. We have not used any transverse mass, contrary to what was suggested in Ref. [16], because of the problems linked to neutrino reconstruction when pT(H) is large (see subsections 3.2.1 and 3.2.2), and because the use of this variable does not in any way improve on the directly measured quantity, which is the transverse momentum pT(Z) of the reconstructed Z → ll decay. As shown in Fig. 7, the situation becomes much worse than in the H → ZZ → llll decay mode, because the large value of pT(H) leads to a large distortion of the expected pT(Z) shape: the combined effect of the Higgs width and transverse momentum decreases the Jacobian peak by a factor of 2.5. The signal, therefore, does not appear as a peak on top of the ZZ continuum background, but rather as a wide shoulder, as shown in Fig. 8 for mH = 500 GeV/c² and mH = 600 GeV/c²; the latter is clearly the largest accessible Higgs mass in this particular decay mode. The exact shape of pT(Z) for the ZZ continuum background is not exactly known theoretically (in our studies we have assumed the transverse momentum of the ZZ pair to be small, i.e. of order 20 GeV/c), nor will it be easily measurable experimentally in the presence of a Higgs signal, unless we use the angular distribution of the leptons in the ZZ centre of mass. As in the case of a heavy Higgs at CLIC, these angular distributions help to discriminate the signal from a scalar Higgs particle against the boson-pair continuum background. We have also checked that the background to H → ZZ → llνν from WW → lνlν continuum events is negligible, once one requires the (ll) invariant mass to be compatible with the Z mass, and pT(Z) to be larger than 100 GeV/c, where, as shown in Fig. 8, most of the Higgs signal is expected.

Fig. 7

Fig. 8

Fig. 9 (pp, √s = 20 TeV, H → WW → lνlν, mH = 600 GeV/c²: total (lνlν) mass in GeV/c²; background from WW continuum, 7800 events/year; signal × 10, 106 events/year)

3.6 H → WW → lνlν decay mode
For completeness, we have also studied the H → WW → lνlν decay mode, which is the most abundant purely leptonic Higgs decay mode, but unfortunately the least kinematically constrained one. In addition, the WW continuum background has a much larger cross-section than the ZZ continuum (see Table 2). All this results in a total background of 7800 WW → lνlν events, after some kinematic cuts to improve the signal-to-background ratio, for an expected signal of 106 events for mH = 600 GeV/c². Figure 9 shows the distribution of the total (lνlν) mass for signal and background, using the total transverse momentum of the neutrino pair to calculate the mass. It is obvious that it is hopeless to try to extract a Higgs signal in this decay mode. This pessimistic conclusion should be alleviated by the fact that, in these studies of purely leptonic Higgs decay modes, we have not mentioned the effect of measuring pT(H). As shown by a crude theoretical calculation [9], the WW and ZZ continuum backgrounds are expected to drop by a factor of about 100 if one requires the presence of two small-angle jets, which measure pT(H). If this were indeed so, all the backgrounds discussed above would become negligible, including in the H → WW → lνlν case, and we could probably extract a Higgs signal for mH up to about 800 GeV/c², combining the three leptonic channels discussed in subsections 3.4 to 3.6.

3.7 H → WW → lνjj decay mode
We now turn to the most difficult Higgs decay mode, for which the background is predominantly (W → lν) + jj production. As illustrated in Fig. 10, the cross-sections for (W → eν) + X, (W → eν) + jj (with mjj = mW ± 10 GeV/c²), (WW → eνjj) + X, and H → WW → eνjj are respectively 10 nb, 0.7 nb, 0.01 nb, and 0.00006 nb, for mH = 800 GeV/c². This means that we have to extract a signal of about 1000 events per year from a background of about 5 × 10⁶ events per year. As discussed extensively in the literature [3-5], kinematic cuts and the measurement of angular distributions provide large background rejections. However, we have found that detector effects tend to reduce these large background rejection factors. At the same time, they reduce the efficiency for the signal, sometimes in such a drastic way that some of the cuts advocated by theoretical studies can simply not be applied at all. The reconstruction of the (jj) invariant mass, in the case W → jj, is a crucial point in these studies.

Fig. 10 (cross-sections versus √s in TeV)

Fig. 11 ((W → jj) reconstructed mass in GeV/c², √s = 20 TeV: generated shape for H → WW → lνjj compared with mass resolutions of 4.0 GeV and 5.7 GeV (see text))

Fig. 12 ((lνjj) mass in GeV/c², mH = 400 GeV/c²: Wjj background before cuts; Wjj background and Higgs signal after cuts, without and with tagging)

Present experience at the CERN pp Collider, and extrapolations thereof to LHC energies, have led us to conclude that the requirement that the (jj) mass differ from the nominal W mass by less than ±10 GeV/c² introduces large inefficiencies. This point is illustrated in Fig. 11, where the (W → jj) reconstructed mass is compared with the generated shape for H → WW → lνjj and mH = 600 GeV/c². We have considered two cases:
i) our standard assumptions (see subsection 3.3.3) regarding the jet energy and angular measurement errors, which lead to a mass resolution of 5.7 GeV/c²: clearly, a cut of ±2 GeV/c² around mW, as done in Ref. [5], would eliminate most of the signal;
ii) a measurement accuracy σE = 0.35√E + 0.02E for the jet energies and a perfect measurement of the jet directions: in this particular case the mass resolution obtained is 4.0 GeV/c².
Figure 12 shows a summary of the detailed studies made by us in the case of H → WW → lνjj. We have concluded that kinematic cuts of the type quoted in Ref. [5] lead at best to a factor of 20 in background rejection for a reasonable efficiency for the Higgs signal. The dashed curves in this figure show that, for mH = 400 GeV/c² and in the absence of a measurement of pT(H), the signal-to-background ratio is less than 1% at the peak value of the total (lνjj) invariant mass. We conclude that, without calorimetry at small angles to measure pT(H), the detection of the H → WW → lνjj decay mode is hopeless. However, as shown in Fig. 12, the measurement of pT(H) considerably improves the shape of the signal and, as mentioned previously, decreases the background by about two orders of magnitude [9]. We can then reasonably hope to extract the H → WW → lνjj decay mode from the (W → lν) + jj background, and therefore reach Higgs masses of 1 TeV/c² in one year of running at the LHC. For lack of a theoretical computation of (W → lν) + jjjj final states at the LHC, we have not made a more quantitative study of the signal-to-background ratios as a function of the Higgs mass for this decay mode.

3.8 Another possible source of background to H → WW → lνjj

In this section we show that a t-quark with mass mt just above the W mass would give rise to a severe background to the H → WW → lνjj decay mode discussed in the previous section. This is due to the fact that the t-quark, produced strongly through gg → tt fusion, would then decay into a W boson and a light-quark jet:

gg → tt → WWbb.

The final state contains two W bosons and two jets of small average pT, especially if mt is not much larger than mW. In fact, as shown in Fig. 13 by the WW invariant mass spectra from tt with mt = 100 GeV/c² and mt = 150 GeV/c², the kinematics of this WW final state are very similar to those of the WW continuum, but the production rate is more than two orders of magnitude larger. Clearly, it would be very difficult to extract the WW continuum from this background. Since the rate of this tt → WWbb background is of the same order as that of the (W → lν) + jj background to the H → WW → lνjj decay mode, the results of subsection 3.7 remain essentially unchanged. This is true provided the pT(H) measurement results in a rejection of about 100 against the tt background, as for the (W → lν) + jj case. If the t-quark mass becomes larger than 150 GeV/c², the b-quark jets are clearly observable in the calorimeter and will therefore help to discriminate against this background. More details on this subject can be found in the contribution by D. Denegri to these Proceedings.

Fig. 13 (MWW in GeV/c²)

4. HEAVY QUARKS AND LEPTONS AT CLIC
In this section we briefly summarize the work done by P. Igo-Kemenes on this subject, and we refer the reader to his contribution for more details.

4.1 Heavy quarks at CLIC
For heavy-quark masses mQ larger than 100 GeV/c², heavy-quark production at CLIC is dominated by s-channel pair production, and the cross-section is therefore roughly constant for values of mQ between 100 GeV/c² and 800 GeV/c². This results, however, in only 400 t't' events and 200 b'b' events expected per year, where t' (resp. b') are up-like (resp. down-like) heavy quarks. If we assume the t' to decay into Wb', with a subsequent b' → Wt decay, we expect WWWWtt final states from t't' production and WWtt final states from b'b' production. We have chosen, as in the case of the heavy Higgs at CLIC, to use hadronic W decays, which allow full event reconstruction. We have required all final-state partons to be emitted at angles larger than 20° with respect to the beam line. Once the W → jj branching ratios are included, we are left with 100 reconstructed events per year for both t't' and b'b' production. The generated events are reconstructed by measuring the total invariant mass in each appropriate hemisphere, assuming that we have a fine-grained calorimeter with good jet energy resolution.

The resulting invariant mass per hemisphere is shown in Figs. 14 and 15 for various values of mb', compared with the expected shape from the light-quark background, which peaks at masses close to mW, as we have already mentioned previously (see Ref. [11] for more details on this subject). From Fig. 15 we conclude that, for 150 GeV/c² < mQ < 800 GeV/c², heavy quarks are clearly visible as a clean signal above background at CLIC. The rates are marginal, however, and this type of physics would, of course, greatly benefit from a higher luminosity, but also from the possibility of running at √s = 1 TeV, where the rates are four times larger for heavy-quark masses below 400 GeV/c².
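The hemisphere-mass reconstruction described above can be sketched as follows: divide the reconstructed jets into two hemispheres and compute the invariant mass of each. In the sketch below the division is made by the sign of the momentum component along a fixed axis, a simplification of a thrust-axis division; the jets and numbers are illustrative only.

```python
# Illustrative hemisphere-mass reconstruction (not the Workshop code):
# jets are divided into two hemispheres by the sign of their momentum along
# a reference axis (a stand-in for the thrust axis), and the invariant mass
# of each hemisphere is computed.
import math

def hemisphere_masses(jets, axis=(0.0, 0.0, 1.0)):
    """jets: list of (E, px, py, pz) in GeV. Returns (m_plus, m_minus)."""
    sums = {+1: [0.0, 0.0, 0.0, 0.0], -1: [0.0, 0.0, 0.0, 0.0]}
    for E, px, py, pz in jets:
        proj = px * axis[0] + py * axis[1] + pz * axis[2]
        side = +1 if proj >= 0.0 else -1
        for i, v in enumerate((E, px, py, pz)):
            sums[side][i] += v
    masses = []
    for E, px, py, pz in (sums[+1], sums[-1]):
        masses.append(math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0)))
    return tuple(masses)

# Example with made-up jets (GeV):
jets = [(300., 50., 40., 280.), (250., -60., 30., 230.),
        (280., 40., -50., -260.), (270., -30., -20., -250.)]
print(hemisphere_masses(jets))
```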

Fig. 14 (e+e−, √s = 2 TeV, t't' → WWWWtt with W → jj, with rescaling: invariant mass per hemisphere in GeV/c² for mt' = 500 GeV/c²)

Fig. 15 (e+e−, √s = 2 TeV, b'b' → WWtt with W → jj, with rescaling: invariant mass per hemisphere in GeV/c² for mb' = 200 and 500 GeV/c², compared with the light-quark background (Webber MC))

4.2 Heavy leptons at CLIC
For charged-heavy-lepton masses mL larger than 150 GeV/c², heavy-lepton production at CLIC is dominated by s-channel pair production, with an expected rate of 330 events per year for mL < 800 GeV/c². Each heavy lepton is assumed to decay into a W boson and a light neutrino, with the subsequent decay W → jj, in order to fully reconstruct the boson-pair mass mWW. The event rate is then reduced to 170 events per year, with a distribution of mWW which is almost independent of mL, because of the two neutrinos which escape detection in the final state. For this reason we encounter a severe background, in this study, from γγ → WW and γW → WZ processes, which amount to about 20 000 events per year for all values of mWW (or mWZ). We therefore need to achieve a gain of about two orders of magnitude in the signal-to-background ratio. As in Section 2, we use the angular distribution of the W bosons in the WW centre of mass, cos θW, and the transverse momentum distribution of the WW pair, pT(WW). If we require |cos θW| < 0.7 and pT(WW) > 100 to 200 GeV/c, depending on the value of mL, we achieve a rejection of 20 to 30 against the background, but we are left with only about 60 events per year. Figure 16a shows the distributions of mWW for the signal and background, after these cuts. Further possible cuts are discussed in the contribution by P. Igo-Kemenes, but it remains that the heavy-lepton signal at CLIC, for √s = 2 TeV, is marginal. Figure 16b shows that, at √s = 1 TeV, the situation is much better, because the signal has increased by a factor of 4, to about 250 events per year, whereas the background has decreased by about 30%. This results in a total gain of a factor of 6 in the signal-to-background ratio, and a heavy-lepton signal should be clearly visible at CLIC, with √s = 1 TeV, for 100 GeV/c² < mL < 400 GeV/c². As for the heavy quarks, a heavy-lepton search would, of course, greatly benefit from higher luminosity at CLIC.

5. HEAVY QUARKS AT THE LHC
Heavy-quark production at the LHC will be dominated by pair production through gg fusion and will therefore be very copious, as shown in Fig. 17, where the total production cross-section for heavy-quark pairs is shown as a function of the heavy-quark mass mQ. There are still 100 events per year produced for mQ = 1.5 TeV/c². Our calculations are in agreement with previous theoretical calculations (see, for example, Ref. [6]). We stress that in the heavy-quark case, even more than in the Higgs case, the production cross-section depends crucially upon √s: for mQ = 500 GeV/c², it decreases by a factor of 10 if √s decreases from 20 TeV to 10 TeV. Also shown in Fig. 17 is the cross-section for pp → QQ → (W → lν) + 4 jets, assuming that each heavy quark decays into a W boson and a light quark. We have studied the possibility of extracting this heavy-quark signal for final states where both W bosons decay hadronically (6-jet final state), and where one W boson decays leptonically and the other hadronically ((W → lν) + 4-jet final state).

Fig. 17 (√s = 20 TeV: cross-sections versus MQ in TeV/c² for QQ pair production and for the (W → lν) + 4-jets final state)

5.1 Study of pp → QQ → 6 jets
This final state is of course expected to be heavily contaminated by QCD background. A complete theoretical calculation of 6-jet final states in hadronic collisions will probably never be done, but we have used an approximate calculation of W.J. Stirling, normalized to the 3-, 4-, 5-, and 6-jet events observed in the UA1 detector, where each jet is required to have a transverse energy larger than 15 GeV (or 50 GeV) and to be separated by ΔR = √(Δη² + Δφ²) > 1 from any other jet in the event. Figure 18 shows that the QCD 6-jet background, for both jet transverse-energy thresholds of 15 GeV and 50 GeV, completely swamps any heavy-quark signal. The situation is therefore hopeless in this case, given that the additional topological requirement of finding two W → jj decays will not reduce the 6-jet background by orders of magnitude.
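The jet-separation criterion used in this normalization is the usual cone distance in pseudorapidity-azimuth; a minimal sketch (with made-up jet directions) is given below.

```python
# Jet separation Delta R = sqrt(Delta eta^2 + Delta phi^2) > 1, as used above.
import math

def delta_r(eta1, phi1, eta2, phi2):
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:                 # wrap the azimuthal difference
        dphi = 2.0 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def well_separated(jets, r_min=1.0):
    """jets: list of (eta, phi). True if every pair has Delta R > r_min."""
    return all(delta_r(*jets[i], *jets[j]) > r_min
               for i in range(len(jets)) for j in range(i + 1, len(jets)))

# Example with made-up jet directions:
print(well_separated([(0.2, 0.1), (1.5, 2.8), (-0.9, -2.0)]))
```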

Fig. 18 (pp, √s = 20 TeV, pp → QQ → 6 jets: 6-jet invariant mass in GeV/c²; QCD 6-jet background for pT > 15 GeV/c and pT > 50 GeV/c thresholds, compared with the QQ signal (× 10³) for MQ = 600 GeV/c²)

5.2 Study of pp → QQ → (W → lν) + 4 jets
The main background to this final state is from W production, where the W boson is accompanied by four jets. A reliable theoretical calculation for this process does not exist at present, as we already mentioned in subsection 3.7. We have therefore compared the inclusive electron transverse-momentum spectrum from W → eν decay, as obtained from ISAJET, with the expected electron pT distribution for heavy-quark masses of 200 GeV/c² and 500 GeV/c². The event rates are very large, as shown in Fig. 19, and for all quark masses there is a range of electron pT values for which the signal exceeds the inclusive W → eν background. Given that the presence of four jets in the final state will enhance this already favourable signal-to-background ratio, we can be reasonably sure that heavy quarks can be detected at the LHC for masses below 800 GeV/c², a value which corresponds to the limit in rate for the (W → lν) + 4-jet final state. More details on this subject can be found in the contribution by M.-N. Minard.

Fig. 19 (pp, √s = 20 TeV, pp → QQ → (W → eν) + 4 jets: rates versus electron pT in GeV/c)

6. HEAVY LEPTONS AT LHC
Cross-sections for heavy-lepton production at high-energy hadronic machines, through quark fusion and boson fusion, have been calculated recently [6, 17]. Heavy-lepton pair production is expected to be swamped by W-boson pair production and/or W + 2-jet production. In addition, as shown in Fig. 20 (taken from Ref. [17]), single charged-heavy-lepton production is larger than heavy-lepton pair production by an order of magnitude, if the associated neutrino is light, for the obvious reason of available phase space. In our study we have restricted ourselves to the case where the heavy charged lepton is produced in association with a light neutrino and subsequently decays into a W boson and a neutrino. If we require the W to decay into two jets, we will measure a final state with a reconstructed, high-pT, unbalanced W, as expected from the L → WνL decay. Taking into account detector effects, which are dominated by the 20° angular separation required between the two final-state jets, we expect 14 000, 570, and 73 events per year for mL = 200, 500, and 800 GeV/c², respectively.

Fig. 20 (pp, √s = 40 TeV, qq → LνL; from S. Dawson et al., LBL-22087)

Fig. 21 (pp, √s = 20 TeV, pp → LνL → WνLνL with W → jj: rates versus pT of the W → jj system in GeV/c, for mL = 200, 500, and 800 GeV/c²)

The backgrounds to this signal arise from WZ or ZZ continuum production where one Z decays into a neutrino pair and escapes detection (40 000 events per year with, however, a pT distribution peaked at small values), but mainly from (Z → νν) + jj production, where mjj is compatible with the W mass (720 000 events per year). This dominant background is also characterized by a dijet pT distribution peaked at small values. Figure 21 shows the expected heavy-lepton signal after background subtraction as a function of the pT of the reconstructed W → jj system, for mL = 200, 500, and 800 GeV/c². Even though this figure is an optimistic way of presenting the results, we have to stress that this crude study has not used any of the kinematic cuts used in subsection 3.7 to discriminate the H → WW → lνjj signal from the overwhelming (W → lν) + 2-jet background. These cuts will clearly also improve the signal-to-background ratio in the search for heavy leptons, and we conclude that heavy leptons can be observed at the LHC for masses between 100 and 500 GeV/c².

7. CONCLUSIONS
We refer the reader to the summary of Altarelli [1] for general conclusions, and we shall not repeat them here. We prefer to conclude this experimental summary with some remarks which are related to the work done in the other study groups:
1) From a theoretical point of view, the most urgent need, in order to progress further, is for a reliable perturbative calculation of W + 4-jet processes in pp collisions, both for the measurement of the Higgs transverse momentum and for the associated rejection obtained against background processes, and also for the study of (W → lν) + 4-jet final states arising from heavy-quark pair production.
2) From the machine point of view, we would again like to stress that, at the LHC, the energy in the centre of mass is crucial, whereas the luminosity is less important and should be balanced against detector feasibility. At CLIC, the heavy-lepton study has shown that flexibility in √s is important, provided the luminosity does not drastically decrease. In addition, given the small cross-sections for s-channel processes and for Higgs production at Higgs masses beyond 1 TeV/c², luminosity is clearly an essential point of the CLIC design.
3) Finally, from the detector point of view, we conclude that both at CLIC and at the LHC we need to separate jets from W decay down to angles of 15° with the calorimeter, in order that a Higgs signal can be reliably extracted for Higgs masses above 1 TeV/c². A crucial point for the LHC is whether or not we believe we can build a long-lived calorimeter for measuring the Higgs transverse momentum at angles between 2° and 15° with respect to the beams. Similarly, at CLIC, the feasibility of an electron-veto calorimeter reaching angles as small as 2° with respect to the beams is very important for detecting a Higgs of mass larger than 1 TeV/c².

REFERENCES

[1] G. Altarelli, these Proceedings. [2] B. Meie, these Proceedings. [3] J.F. Gunion, Univ. California Davis preprint UCD-86-39 (1986) and references therein. For earlier references, see also Ref. 2 of [1]. [4] W.J. Stirling et al., Phys. Lett. 163B (1985) 261. [5] J.F. Gunion et al., Phys. Lett. 163B (1985) 389. J.F. Gunion and M. Soldate, Phys. Rev. D34 (1986) 826. [6] E. Eichten et al., Rev. Mod. Phys. 56 (1984) 579. [7] R.N. Cahn, Nucl. Phys. B255 (1985) 341. [8] M. Greco, Phys. Lett. 156B (1985) 109. [9] R.N. Cahn et al., Berkeley report LBL-21649 (1986). [10] T. Ákesson, these Proceedings. [11] R.W. Brown and K.O. Mikaelian, Phys. Rev. D19 (1979) 922. [12] Z. Kunszt, these Proceedings. [13] P. Jenni (LHC Jet Study Group, T. Ákesson et al.), Proc. ECFA-CERN Workshop on a Large Hadron Collider in the LEP Tunnel, Lausanne and Geneva, 1984 (ECFA 84/85, CERN 84-10, Geneva, 1984), p. 165. [14] W. Bartel (Muon Group, A. Ali et al.), ibid., p. 209. [15] W. Kienzle, these Proceedings. [16] R. Cahn and M. Chanowitz, Phys. Rev. Lett. 56 (1986) 1327. [17] S. Dawson et al., Berkeley report LBL-22087 (1986). - 80 -

BEYOND THE STANDARD MODEL

SUMMARY REPORT OF THE PHYSICS-2 WORKING GROUP

V. Angelopoulos, P. Bagnaia, G. Barbiellini, R. Batley, D. Bloch, A. Blondel, W. Buchmüller, G. Burgers, R. Cashmore, P. Chiappetta, F. Cornet, C. Dionisi, M. Dittmar, J. Ellis, J.-Ph. Guillet, N. Harnew, P. Igo-Kemenes, R. Kleiss, H. Komatsu, H. Kowalski, B. Mansoulié, P. Méry, A. Nandi, F. Pauss, M. Perrottet, F. Renard, R. Rückl, A. Savoy-Navarro, D. Schaile, D. Schlatter, B. Schrempp, F. Schrempp, K. Schwarzer, N. Tracas, D. Treille, N. Wermes, D. Wyler, N. Zaganidis, P. Zerwas and F. Zwirner

Presented by J. Ellis and F. Pauss CERN, Geneva, Switzerland

1. INTRODUCTION This Working Group has tried to compare, on a similar basis, the possible contributions of all the different proposed accelerators (pp, ep, and e+e~) to the investigation of a selection of physics topics. Theorists have proposed a large number of ideas going beyond the Standard Model, not all of which could be explored within the constraints of time and people available. Table 1 shows the physics matrix of topics and accelerators which we have studied. Listed against each entry in the matrix are the persons who have worked on the corresponding accelerator. In this report we summarize their work, details of which can be found in individual contributions to appear in vol. II of these proceedings. Among the subjects not discussed by our group are: technicolour; 'classical' additional neutral Z' bosons as

found in SU(2)L x SU(2)R X U(l) models; right-handed currents; and additional charged W' bosons. Our selection of topics has been guided in part by a quest for complementarity to previous studies [1,2], and in part by a general theoretical perspective on Physics beyond the Standard Model which we now describe. There is no confirmed experimental result that conflicts with the Standard Model, but this is well known to have many theoretical defects. Outstanding problems left unanswered by the Standard Model can be sorted into three main categories: Unification, Flavour, Mass. The Unification Problem consists in finding a simple mathematical framework that encompasses all the non-gravitational interactions [3]. A favoured approach is to

look for a single non-Abelian group which includes the SU(3)C x SU(2)L X U(1)Y factors of the Standard Model.

2 Such a Grand Unified Theory (GUT) makes predictions for sin 0w, and perhaps for the bottom quark mass, exotic new phenomena such as proton decay, and magnetic monopoles. Unfortunately, the scale of Grand Unification must be at least 1015 GeV, which often makes the predictions for low-energy phenomena rather ambiguous, and renders GUTs difficult to test at accelerators. Consequently, our Working Group has not pursued this physics topic. The Flavour Problem is that of understanding the number of matter species, and the ratios of their masses and of their weak charged-current couplings. A favoured approach is to suggest that quarks and leptons are composite, being made of more elementary constituents [4]. These preons would be bound on a distance of 10"16 cm or less, corresponding to an excitation energy of 1 TeV or more. Composite models of the massive vector bosons W* and Z° have also been proposed by analogy with the Q, <£> etc. vector mesons of QCD, the motivation being to understand why these bosons are massive, unlike the photon y and the gluon g. Among the signatures of such composite models would be new contact interactions with scale parameters A > 1 TeV, form factors, excited states, and new composite ground states, many of them exotic. The majority of such models predict new effects with A < 102 TeV, within the physics reach of the accelerators under discussion. Accordingly, composite models are extensively discussed in this report. - 81 -

Table 1

Physics matrix and contributors'0

PP ep e+e-

Supersymmetry Batley Komatsu Dionisi Ellis Rückl Dittmar Kowalski Mansoulié Pauss Savoy-Navarro Zaganidis

Z' Bagnaia Angelopoulos Barbiellini Chiappetta Cornet Blondel Guillet Rückl Ellis Zwirner Tracas Schlatter Tracas Treille Zerwas

Leptoquarks Ellis Buchmüller Igo-Kemenes Kowalski Cashmore Schaile Rückl Harnew Schrempp B. and F. Zerwas Rückl Zerwas Tracas Wyler

Compositeness Bagnaia Cornet Bloch Kleiss Rückl Kleiss Nandi Méry Wermes Perrottet Zerwas Renard Schrempp B. and F. Schwarzer Treille Wermes Zerwas

a) In addition, G. Burgers has compared the cross-sections for new particle production by e+e annihilation and 77 collisions.

The problem of Mass is that of understanding the origins of the different elementary particle masses, and why the quark, lepton, W, and Z masses are all so much smaller than the Planck mass of order 1019 GeV, which is the only serious candidate we have for a fundamental mass scale in physics. In the Standard Model it is believed that the source of particle masses is a Higgs boson with mass s 1 TeV [5], and the searches for this are discussed by the Physics-1 Working Group [6]. There is a general theoretical consensus that an elementary Higgs boson by itself has - 82 - insuperable technical problems, which can only be resolved by making the Higgs boson composite (the technicolour idea not discussed here) or else by protecting the Higgs with supersymmetry. Supersymmetric theories predict a large number of new particles which should all weigh < 1 TeV if they are to do their job of stabilizing the mass of the Higgs boson [7]. Accordingly, supersymmetry suggests a rich domain of new physics at the next generation of accelerators. We discuss this extensively in the following. The above list of problems is not exhaustive, but they and all other problems are by definition resolved in the Theory of Everything (TOE). In addition to solving the Unification, Flavour, and Mass Problems mentioned above, the TOE should also include gravity and reconcile it with quantum mechanics. It should probably be finite, may explain the origin of space and time, etc. The first serious candidate for such a TOE is the superstring [8]. It is, unfortunately, even further removed from everyday energies than are the GUTs mentioned above, and its predictions for current and forthcoming experiments are correspondingly even more ambiguous. Nevertheless, some preliminary attempts to relate the superstring to experiments have suggested the possible existence of one or more additional neutral Z' bosons, and/or additional matter particles which might have leptoquark signatures [9]. Such speculative possibilities are also studied in this report. Having revealed our theoretical prejudices, we now state the detector characteristics which we have assumed in a consistent way for the different accelerators. We assume energy resolutions

ÔE/E = 10%/VË + 1% fore/7 50«%/VE + 5% for hadrons (1.1)

or

5E/E = 15%/VË + 2% fore/7 35%/Vl + 2% for hadrons, (1.2)

and momentum resolution

ôp/p=10_4p for charged-particle tracking (1.3)

or

ôp/p = 0.10 for muons passing through magnetized iron . (1.4)

The granularity of the detector is assumed to be

Ôîj = 60 = 0.03 fore/7, ôij = 00= 0.06 for hadrons . (1.5)

The angular coverage for track detection and calorimetry is assumed to be

< 5 in pp collisions, (1.6a)

10° < 6 < 170° ine+e_ collisions , (1.6b)

and a combination of the two [(1.6a) in the p beam direction, (1.6b) in the e beam direction] in ep collisions. We believe it may be possible to install luminosity monitors down to 2° in e+e~ collisions; but this is not essential to our subsequent discussion, and an adequate measurement of the luminosity may be possible using only detectors in the angular range (1.6b). The above numbers (1.1) to (1.6) are to be regarded as default options which apply to all the subsequent analyses unless a different assumption is clearly stated. Finally, we recall the accelerator energies and luminosities which we have assumed. For the Large Hadron

33 -2 1 Collider (LHC) pp option, we generally take Ecm = 17 TeV and L = 10 cm s" corresponding to an integrated luminosity of 1040 cm-2 = 10 fb_1 per year [10]. We also make some comments in the concluding section on

34 -2 the additional physics available if L > 10 cm s~ For the LHC ep option, we mainly consider Ecm =1.4 TeV and L = 1032 cm-2 s- corresponding to an integrated luminosity of 1039 cm-2 = 1 fb~1 per year [10]. We also

31 2 1 compare its capabilities with those made available by running at Ecm =1.8 TeV and L = 10 cm" s~ , bearing in - 83 - mind that longitudinal electron or positron beam polarization is only likely to be available at the lower energy. In

+ _ view of the small e e cross-sections discussed later, for the CERN Linear Collider (CLIC) with Ecm= 1 to 2 TeV,

33 2 -2 _1 2 _1 we assume L = 10 (ECm/l TeV) cm s , corresponding to an integrated luminosity of 10 (Ecm/1 TeV) fb per year [11]. In this case, we comment explicitly on the physics one loses if only L = 1033 cm-2 s~1 is available at

Ecm = 2 TeV.

2. PHYSICS MATRIX In this section we go row by row through the matrix of subjects in Table 1, treating in parallel the capabilities of the different accelerators for each physics topic. The numbering of the subsections (sub-subsections) corresponds to the rows (elements) of the matrix.

2.1 Supersymmetry The most important searches for supersymmetric particles have been conducted at the CERN pp Collider and in e+e" collisions. The former is more powerful in the search for strongly interacting spanieles, the squarks q and gluinos g, whilst PETRA and PEP establish the cleanest lower limits on the masses of electroweakly interacting

sparticles such as sleptons ? and winos W. No definite lower limit from the pp Collider on m9 or mg has yet been published, but indications are that the UA1 Collaboration can probably establish [12]

niq > 65 GeV and mg > 55 GeV (2.1)

+ if one assumes that q -» q*y and g -» qc¡7, respectively, with m- < m9,mê. The ensemble of the e e" experiments establishes [13]

m¿ > 20 GeV and mw > 20 GeV . (2.2)

For a complete list of current limits, some of which are stronger but more model-dependent, see Ref. [13]. The above limits are likely to be significantly improved before the LHC or CLIC comes into operation. In the field of pp collisions, the FNAL Tevatron Collider should be able to reach [14]

m5, mg = 200 GeV , (2.3) whilst LEP 200 should be able to reach [15]

mf, mÄ = 90 GeV . (2.4)

Our task will be to see how far beyond the limits [Eqs. (2.3) and (2.4)] the LHC and/or CLIC could reach, remembering also that theory expects sparticle masses < 1 TeV.

2.1.1 Supersymmetry in pp collisions This sub-subsection first summarizes the results of a new analysis [16] of q and g production in pp collisions based on a Monte Carlo evaluation of the signals and backgrounds using ISAJET version 5.23, which includes initial- and final-state gluon bremsstrahlung and the underlying event [17]. These results are then compared with those of an on-going analysis [14] that applies a uniform analysis technique to q and g production at the CERN pp Collider, the FNAL Tevatron Collider, and the Superconducting Super Collider (SSC). Finally, we discuss the possibilities of looking for electroweak sparticles such as ê and W at the LHC [18]. In the first analysis, the reactions

pp - qq + X , pp - gg + X U q + y (2.5) —» q + y —> qq-y - 84 - are studied, assuming m- < m^.nig, giving the signature of jets + missing transverse momentum pp. The signatures are unaffected if the 7 is replaced by some other light neutral sparticle such as an H, and other studies [19] have indicated that the cross-sections for these signatures are not greatly reduced for massive photinos in the range m- <

l/2(mq or mê). However, we have not studied the effects of q -> q(W,Z) or g -• qq(W,Z) decays, which would tend to reduce the missing transverse energy Et signature [20]. The backgrounds we have evaluated include i) QCD jets, which can give real Et due to heavy flavour c,b,t -+ qiv decays, or fake Et owing to detector effects; ii) (W -* iv) + X and (Z -+ vv) + X, which give real Et; and iii) WW, WZ, and ZZ + X events. The first two sources of background have been studied quantitatively as discussed below. The third class of source seems to have a much smaller cross-section [21]: a quantitative study will be given elsewhere [16]. The jet-finding algorithm used was based on the one adopted by UA1 [22]: one takes an initiator cell with Et> 5 GeV, and then

2 adds to the jet other cells with Et > 0.5 GeV if AR = V(Atj) + (At/>)2 < 1. Figure la shows the distribution in true

a) b) pp->qïj • X at /s = 17TeV 10 pp-».gg + X at /s = 17TeV

10

10 m5 = 1TeV ms = 1TeV I0

1

10"

10"

10~

10~

10~

10~

10"

10~

10~

10" 0.8 1.6 2.4 2.4

TRUE ET (TeV) TRUE ET (TeV)

c) 10

10 BACKGROUND 10 laCD • W • Z)

10

1

10"

10"

10"

10

10'

10'

10'

10' Fig. 1 Distribution [16] of true missing Et for a) qq 10' production with m« = 1 TeV, b) gg production with mg = 10' 1 TeV, and c) total background contribution (QCD jets and 0.8 1.6 2.4 W,Z decays)

TRUE ET (TeV) - 85 -

JÍT due to v and 7 from qq production with mg = 1 TeV, assuming mg >• m^. Figure lb is the same for gg

production with mê = 1 TeV assuming mg < m^, and Fig. lc shows the true missing energy distribution from background sources (i) and (ii) above, assuming a top quark mass of 40 GeV (the backgrounds due to a heavier top quark or to fourth-generation quarks are discussed elsewhere [16]). We see that the signal and background

œ distributions 'kiss' at JÎT 0.8niq,mê, which is also true for smaller values of m^.m^. As can be seen in Figs. 2a and 2b, the signal distributions are not greatly affected by detector smearing, whilst the steeply falling background distribution of Fig. lc is significantly broadened by the detector as seen in Fig. 2c, resulting in a signal-to-background ratio of 1 to 5 or 10.

10

10 BACKGROUND (QCD+W+Z0) 1 AFTER DETECTOR SMEARING > 10" 13 _ Lft ^ 10~

10~ + I 1°" .tttt i TD _ S. 10

10~ 10" 10~ 1. Fig. 2 Distribution [16] of missing ET after detector t smearing for a) qq production with m^ = 1 TeV, b) gg 0.4 0.8 1.2 1.6 2.4 production with nig = 1 TeV, and c) total background contribution (QCD jets and W,Z decays) E" (TeV) - 86 -

Possible ways to reduce the background include the following:

i) A cut in ET- We find that ET ^ 2/3m9 or mg is the most sensitive domain. ii) Remove events with an identified e or ¡i. (It might also prove possible to remove taus with some efficiency, but a detailed understanding of jet shape and charged multiplicity distribution would be required.) The lepton veto is necessary to reduce W -» Iv and semileptonic heavy-flavour decay backgrounds, although it does remove some of the sparticle events. Quantitatively, we have rejected events with PT (e or ¡i) > 30 GeV in < 5, and EpT < 5 GeV in a cone of AR < 0.4 about an electron. iii) Cuts in event topology. These are based on the observations that qq events give mainly dijets + ET final states, where the jets are not back-to-back and the ET vector is isolated in azimuthal angle, and gg events typically give multijet, quasi-isotropic final states, whilst QCD typically gives back-to-back dijet events with the ET vector aligned with the jet axes, apart from QCD radiative corrections. iv) A top quark tag. This may be possible using, for example, a vertex detector and making a cut in apparent impact parameter. Such a tag would be very useful, because after the above cuts 1% of the background is due to b and c jets, 0(1/3) is due to the tt jet-pair production, and 0(2/3) is due to gluon jets splitting into tt pairs.

(These numbers are for mt = 40 GeV, and are expected to diminish for the case of larger mt, which is studied elsewhere [16].) As examples of possible topological cuts, Fig. 3a shows the distribution in azimuthal angle difference A#(ET, jet 1) and the cut A<4 < 130° which is used subsequently in the qq analysis, and Fig. 3b shows the distribution in circularity

2 C = 1/2 min (EpT • n) /(£pT) (2.6) and the cut C > 0.25 which is used subsequently in the gg analysis. Finally, Fig. 3c shows the distribution in the multiplicities of calorimeter jets with Eï5' > 250 GeV for the qq case, and Fig. 3d shows the same distribution for

the gg case. We use a cut on jet multiplicity to separate qq candidates (N¡et s 2) and gg candidates (Njet ^ 3). The results of applying these cuts are shown in Fig. 4a for the qq signal and for the backgrounds, which in this case are mainly W and Z events. The corresponding graphs for gg events and their backgrounds are shown in Fig. 4b: in this case most of the backgrounds are QCD jets. In Table 2 we show the expected event rates in a 10 fb~1 run for squarks and gluinos of different masses, and the different backgrounds for the two sets of cuts. We see that the signals exceed the backgrounds for lower values of m^ and mg, falling to a ratio of one-to-one for

mg » 1 TeV , mg « 1 TeV , (2.7) which are plausible values to quote as a discovery limit [16]. In the case of qq production, the dominant W background could perhaps be reduced by improving the lepton cuts which are not optimized. In the case of gg production, the dominant QCD jet background could perhaps be reduced by the t quark tag mentioned earlier. In view of these possible improvements, we regard the limits (2.7) as being quite conservative. Nevertheless, the search for sparticles in pp collisions is a delicate affair, and would require a good understanding of (a) the Standard Model physics contributions and (b) the detector response to leptons and jets. Therefore we are fortunate to be able to compare two distinct analyses. A uniform approach has been developed over a period of time and applied to squark and gluino searches at the CERN pp Collider, the FNAL Tevatron Collider, the LHC, and the SSC [14]. This analysis uses ISAJET version 5.25 to generate the three processes pp -* gg, gq, qq + X, and simulates a 4ir fine-grain calorimeter. A simple cluster algorithm is used, which looks for the highest ET cell above 2 GeV, includes all nearest-neighbour cells above 500 MeV, and also determines if two or more clusters touch. Clusters whose centres have

2 2 AR = V(Aij) + (Ac/)) < 1 are merged and called jets if their ET > 20 GeV. - 87 -

a) 199 b) 174h

pp at £ = 17 TeV 174 150h . qq , mq= 1 TeV Background (QCD+W+Z) _ 150 125h

1251 100h

>> 100 CUT 75 ™ 75h-f

50 > 50

25

25 50/ 75 100 125 150 175

A

0

160h d) pp at <íi = 17 TeV

— qq lm¡¡ = 1 TeV) 239 at fl = 17 TeV 140h pp • Background — gg Img = 1 TeV)

(Q.CD • W • Z) ion ] ra • Background J ET 199 120h E 250 GeV (QCD . W.ZI T S N 73 E o 100h c 160 ra 80h ja ID 120

t—i/ï 60h ~z. > LU 80 40h

40 20h ^ l . L^SwL 12 16 20 24 12 16 20 24

"JETS

Fig. 3 Distributions [16] of a) azimuthal angle difference A between highest jet ET and missing ET for qq (m^ = 1 TeV) and total background, b) circularity for gg (mg = 1 TeV) and total background; calorimeter jet multiplicity, c) for the total background and qq" (m^ = 1 TeV), d) for the total background and gg (mê = 1 TeV). Only events with Er > 300 GeV are included. - 88 -

10 I a) 10

10 r 10 : MISSING E AFTER CUTS MISSING E T AFTER CUTS 1 r — gg IITeV) 10~ io~ • Background > > 10~ r qq (1 TeV) (QCD + W • Z) QJ 10 M 10~ |_ • Background RM ! (aCD . W . Z) 10 t- \ "tt 10~ r- -O 10~ tt _C 10~ p 3: 1- 10~

-O 10~ \ O 10~ -O 1(f r 10~ tl

10~ r 10~

10~ r 10~

10~ 1 L. i i i i 10~ 0.4 0.8 1.2 1.6 2.4 0.4 0.8 1.2 1.6 2.4

E (TeV) (TeV) T

Fig. 4 Missing-ET distribution [16] after all cuts a) for q^(m^ = 1 TeV) and total background predictions, b) for gg (mg = 1 TeV) and total background events, after the cuts described in the text

Table 2

Expected events rates [16] in a 10 fb~1 run for squarks, gluinos, and background contributions, computed from the IS A JET Monte Carlo version 5.23. The errors quoted are the statistical ones from the Monte Carlo generation.

Nje, i> 3, C > 0.25 Nje, < 2, A

Er > 600 GeV ET > 800 GeV ET > 600 GeV ET > 800 GeV

QCD 940 ± 300 320 ± 200 0 0 130 ± 45 30 ± 19 18 ± 5.1 0

Z -» vp 26 ± 8.8 7.3 ± 3.6 7.2 ± 5.1 0

EBGD 1096 ± 303 357 ± 201 25.2 ± 19 0

Sparticle masses Gluinos Squarks

2000 GeV 11 ± 0.6 7.4 ± 0.5 5.8 ± 0.8 3.8 ± 0.7 1500 GeV 100 ± 6.5 53 ± 4.7 21 ± 3.3 9.8 ± 2.3 1000 GeV 1000 ± 100 300 ± 53 64 ± 17 4.3 ± 4.3 800 GeV 2200 ± 290 520 ± 140 160 ± 47 14 ± 14 600 GeV 4000 ± 960 700 ± 410 660 ± 210 - 89 -

The ranges of sparticle masses studied in [14] have been restricted to those expected in a Minimal Supersymmetric Standard Model (MSSM), in which the gluino may only be slightly heavier than the squark—in

which case g -» qq and q -* qy, or mg < m^—in which case g -+ qq-y and q -+ qg, or mê < mg—in which case g -+ qqY and we restricted ourselves to the q -» q-y decays which have a branching ratio of a few percent. Most of the resulting final states at the Tevatron and higher energies have > 3 jets with pr, and the main Standard Model background is due to QCD jets containing light or heavy quarks. A detailed quantitative study [14] of this background indicates that for large ET jets it closely resembles the supersymmetric signal, and that a cut in pr alone is not sufficient in most of the cases considered, as seen in Table 3. Therefore we make cuts on the event topology using the variables

pr-ei |pT x eil

XE = ~W ' XOUT = ET where gi is a unit vector along the direction of the highest ET jet, or (almost equivalently) the sphericity axis. This is

sufficient to detach the signal from the background, as seen for the xout variable in Fig. 5. Details of the effects of

these cuts on the signal-to-background ratios for various choices of m^ and mg are shown in Table 3. More details, including a comparison of the Tevatron Collider with J Ldt = 1036 cm-2 s~1 or 1039 cm-2 s~ the LHC at VF =

40 2 17 TeV and the SSC at Vs" = 40 TeV with f Ldt = 10 cm" s" and choices of mg and mg from < 100 GeV to 1.5 TeV are given in Ref. [14]. The main conclusions of this analysis are as follows: 1) The Tevatron will be able to reach m^irig = 100 GeV when it achieves J Ldt = 1036 cm-2 s"1, and m^«

39 -2 350GeV,më « 200 GeV if it achieves f Ldt = 10 cm s~ 2) The LHC overlaps with the Tevatron for squarks and gluinos with masses between 200 and 400 GeV.

1 7 3) The rate at the LHC with 10 fb~ varies from about 2 x 10 events for m^ ~ mg ~ 300 GeV, compared with

8 4 about 3 x 10 QCD background events, to about 2 x 10 events for m9 => mê= 1 TeV, compared with about 2 x 107 QCD background events. After applying the proposed cuts, the signal-to-background ratio falls from 50

to 90 in the low-mass case, through about 10 to 20 when m^ « mg « 500 GeV, to about 1 when mq « mg = 1 TeV. A bound of about 1 TeV seems to be attainable at the LHC with effort. 4) The SSC could reach out to sparticle masses »1.5 TeV. It should be noted that neither of the above pp -> squark, gluino analyses includes contributions from jet fluctuations, which could be important for light sparticle masses. We conclude this subsection by reporting on an analysis [18] of pp -»• electroweak sparticles + X. The processes studied were

pp-»f+?"+X, pp-q + W+X ue + y evy (2.9) e+ + y

For the first process, the backgrounds studied were

+ i) Drell-Yan: pp -» (y,Z -» e e~) + X, which can be removed by simple cuts me+e- > 200 GeV and ET > 40 GeV, and ii) pp -> (W~ -* e" i>) + (W+ -• e+ v) + X, which can be removed by strengthening the ET cut to ET > 100 GeV. After these cuts one could see

më = 300GeV, (2.10)

under the traditional assumptions mw > më > m- [18]. However, these assumptions are not necessarily valid in realistic supersymmetric models based on supergravity theories [7], and the changes in the spectra have a significant effect on the bound (2.10). In minimal models of the supergravity type, the physical sparticle masses are given in terms of two bare mass parameters (mo,mi/2) by 90

xout lrela,'ve 10 sphericity axis)

Fig. 5 Distribution in do7dxoUt for the process pp -* gq assuming mg = 315 GeV, mg = 285 GeV and the decay modes g ~* qq, q -* q7 at Vs~ = 17 TeV (full line), compared with one obtained for the corresponding QCD background (i.e. where the QCD jets are required to have PT > 300 GeV). A fit to the QCD background (dashed line) has been made so as to estimate the tail of this background.

Table 3

Comparison [14] of supersymmetry signal and QCD background for pp collisions at Ecm = 17 TeV. The supersymmetry signal includes all three processes pp -> gg, gq and qq.

Sparticle masses a Signal/background ratio (% of signal retained) (GeV) (mb)

m No cut g Er > 200 GeV xE > 0.24 Xout > 0.08

210 483 0.59 x 10"5 0.21 13 (7.6%) 8.7 (6%) 94 (29%) 315 285 1.84 x 10"6 0.06 21 (42%) 14(31%) 58 (50%) 350 805 0.47 x 10"6 0.21 5 (27%) 4(11%) 17 (32%) 525 475 1.3 x 10-7 0.06 3.3 (63%) 4.2(41%) 9 (54%) 700 1610 0.74 x 10"8 3.4 x 10"3 0.27 (65%) 0.1(12%) 0.54 (40%) 1050 950 1.92 x 10"7 8.4 x 10-2 0.75 (80%) 0.1 (52%) 0.2 (68%)

For more details of the QCD background evaluation, see [14]. - 91 -

2 m!= ml + 7mf/2 , m = ml + (0.5, 0.15)m2/2 , L,R (2.11)

mg = 3mm , m~ OAlm^ , mw = 0.84mi/2 .

The domain of the (mo,mw) parameter plane which can be excluded by the search for pp -» è + è ~ + X is shown in

Fig. 6: it only extends out to m5 = 200 GeV if mm = 0 [18]. For the second process in (2.9), the main background considered has been pp -+ (W -» ev) + jets + X. This can be removed by cutting on mr(e, ET) > 150 GeV, after which one has sensitivity to

mw= 450GeV (2.12)

if mw = m^P- m-[18]. Again, the precise value of the reach (2.12) depends on the ratios of sparticle masses assumed, and the accessible domain of the (mo,mw) plane in the minimal supergravity model (2.11) is also shown in Fig. 6. This will be compared later with the capabilities of other present and future accelerators.

Fig. 6 Discovery limits [18] in the (m0, mw) plane for 0 100 200 300 400 500 600 electroweak sparticle production in pp collisions at vT =

17 TeV m0 (GeV)

2.1.2 Supersymmetry in ep collisions The following sparticle production processes in ep collisions have been considered [23] :

ep -• êq + X , vq + X , 67 + X, PW + X, egq + X , eqq + X . (2.13)

The first two processes have the largest cross-sections and are the only ones discussed here. These final states are produced by gaugino and shiggs exchanges in the crossed channel: tVZ/H?^ exchange in the case of the êq final state, and W*/fl* exchange in the case of the vq final state. The model for gaugino and shiggs mixing can be specified up to a possible discrete ambiguity by two masses, which we take to be the lightest neutral state and the lightest charged state, which we call the 7 and W *, respectively, although these are not pure states.

As can be seen in Fig. 7, the cross-sections are quite sensitive to the chosen values of m - and mw [24]. Taking as our detection limit a cross-section of 10"37 cm2 corresponding to 10 (100) events per year at a luminosity of 1031 (1032) cm-2 s-1, and adding together the êq and vq processes, we find that the following sparticle masses can be reached [23] :

equal mass: m¡¡ = = 350 GeV , (2.14)

unequal mass: m^ » 700 GeV , if m5 = 50 GeV . - 92 -

1 1 1 1 1 1 1 1 j— 1 1—1—1—lili 100 a (ep vq +X ï

^^•0/30(CeV)

— 10 **S s • s ^50/100(GeV) • s / / ^ s t / S ' A' x e~ ''/ / e + B- 1 - /// /

aî'1 i / 0.1 ni - ¡¡ / nil II 1 1 LHC options '•¡ / 0.01 / 1 . i i i 11 i " ' i i iiiiiI îI Lí i i i ''i' II 1 11 1 1 1 1 1 1 11 102 103 104

Fig. 7 Cross-section a(ep ?q + X) as a function of VF [23]. Assumed mass values for v and q are 50 GeV. The two sets of curves represent different choices of m- and mw.

We have not made an extensive study of all the possible backgrounds, since no ep event generator analogous to ISAJET is yet available. A careful study of heavy-flavour backgrounds such as ep etf + X , vtb + X is desirable. We have considered the possible backgrounds from conventional neutral- and charged-current events. In the case of neutral-current events, one can measure the kinematic variables x and y in two independent ways, using either the outgoing e or the outgoing hadronic jet, and also measure their azimuthal angles. In usual neutral-current events

xc = xj, ye = y.j, and <£e = ¡ + v\ but in (è e7)(q -> qy) events, Ax = xe - Xj, Ay = yc - yj, and A(¡> = c/»c - ¡t>i - 7T are all non-zero in general. Previous HERA studies [25] have shown that the neutral-current background can easily be removed with cuts |Ay| > 0.2 and \A\ > 0.2, which reduces the signal by at most 20%. We believe that similar arguments can be extended to the LHC ep option, and hence that the limits (2.14) are reasonably conservative.

2.1.3 Supersymmetry in e+e~ collisions An essential fact to remember is that all e+e" annihilation cross-sections are small [26]. It is useful to take as a standard of comparison the QED cross-section for e+e~ /i+fi~ :

+ 2 2 2 a = a(e e" -+y -> p*^) = 47ra /3Ecm = 87 fb/[Ecm (TeV)] (2.15)

7 33 -2 which gives 220 n*¡i~ pairs per year at Ecm = 2 TeV, in the canonical year of 10 s at L = 10 cm s"Other annihilation cross-sections, e.g. e+e_ -» qq, may be rather larger than (2.15), but many interesting sparticle pair-production cross-sections are actually significantly smaller than (2.15). For pair-production of a generic spin-0 pair ss,

2 3 R = — ; — = l/4Q Nc/3 , ß = p/E , (2.16) a(e+e -7 -* Ii- li )

where Qs is the charge of s, and Nc is the number of colours it has (one for sleptons, three for squarks). Using

3 3 Eq. (2.16), one has R^ =1/4|3 , RD0 = 1/12/3 (where D0 is the charge-1/3 colour triplet scalar discussed in - 93 -

sub-subsection 2.3.3) and j33 = 0.65 (0.22) for m/Ebeam= 0.5 (0.8). Figure 8 shows a compilation [27] of typical

supersymmetry cross-sections as a function of Ecln, including Z*as well as 7* exchange. To do some reasonable physics, we need about 103 /i~V~ P^rs Per year, corresponding to

33 2 2 1 L = 10 (ECIn/l TeV) cm" s" , (2.17)

33 2 1 which means L = 4 x 10 cm" s" at ECm= 2 TeV. The implications for physics if the target (2.17) is not reached are discussed in the Conclusions (Section 3). Many Monte Carlo studies of the search for supersymmetry have been made for planned e+e" machines at high centre-of-mass energies, such as LEP 100 and 200. Based on these studies, we tried to estimate the detector and machine requirements for a very high e+e~ linear collider in the TeV range. The goal was to see how far the energy range could be explored and investigated. As discussed below, our selection criteria are optimized for sparticle masses in the region of (0.3 to 1) times the beam energy, with emphasis on the high mass range. The characteristic signature of supersymmetric particle pair-production is events with missing energy, missing transverse momentum and total momentum, and dijet or dilepton final states which are acollinear and acoplanar. The subsequent analysis [27] emphasizes the use of missing transverse momentum and acoplanarity, which is insensitive to any longitudinal momentum imbalance occasioned by beamstrahlung. For the CLIC parameters [11] studied, the amount of this imbalance does not, in fact, seem large enough to be troublesome. We will discuss three sparticle searches in some detail: those for ß+ ß~, ë + ë ~, and W + W ~.

1) e+e" ->• ß+ji~ We select events with a n+¡i~ pair in which each ¡i is well contained in the detector with |cos0| < 0.87 (0 < 30°) and has an energy of more than 30 GeV. The background sources considered which can give two muons in the

+ + final state are e e~ -+ ^(7), 77(7) with both taus decaying to ¡ivv (4% of the total T+T" cross-section), and W W" pairs with both W's decaying to ¡iv (< 1% of the total W+W~ cross-section). The /u's from ^(7) and 77(7) are either back-to-back in the plane transverse to the beam, or have a high-energy 7 within the detector acceptance. Because of the high energy of the W (1 TeV) compared to its mass, the jt's from W's are also almost back-to-back in the plane transverse to the beam, and in addition are sharply forward peaked. The distributions of ¡ijj, nn(y), and 94 -

a) b)

e e —• u u —•uYuíf

at fs = 2 TeV

16 í' « i.

'i- e+e" WW-»-HVUv

at il = 2 TeV

-0.75 -0.5 0.25 0.5 -0.75 -0.5 -0.25 0. 0.25 0.5 0.75

Cos Cos 0 *mii2 MlMj

c)

Fig. 9 Scatter plot [27] of EEV¡S and cosc/v1)12 for a) e+e" -» /j+jT (signal), b) e+e" -+ WW -» /ivfiv -1. -0.75 -O,'S -0.25 O. 0.25 0.5 0.75 1. + (background), c) e e~ /i/i(y) (background) at Ecm =

Cos 0^1^^ 2 TeV

WW (-» niivv) events in the (EEViS, cosc>MlJl2) plane are shown in Fig. 9. There is a clear separation between the signal and the considered backgrounds. A signal with negligible background can be obtained by requiring that the

angle 4>w between the two n's fulfil cosw > - 0.9 and that the total visible energy be < 0.9Ecm. The acceptance is of the order of 50-60% for masses above half the beam energy. Figure 10 shows the accepted cross-section with the cuts described above. From this we conclude that with 10 fb_1 integrated - 95 -

n 1 1 1 r

100 jfs = 0.5 TeV ri}¡lp

0.2 0.4 0.6 0.8 1.0

m T (TeV)

+ + + Fig. 10 Accepted /t ¡i and T T cross-sections [27] in e e collisions as functions of m-f for different energies

luminosity, a clean signal can be detected up to masses of 800 GeV. An integrated luminosity of 50 fb 1 would allow this range to be extended up to

m- « 850-900 GeV . (2.18)

This luminosity would also allow detailed studies with the detected events, as will be described below.

2) e+e~ -» è + ë" Because of the t-channel photino exchange the cross-section is larger, as shown as the solid line in Fig. 11a. The signal can be isolated by selecting electron pairs with pr > 125 GeV. This cut gets rid of most of the QED background such as ee-y. The events from WW -»• evev and evrv (where T -* evv) have a product of cross-section and branching ratio of ~ 0.007 pb, which after the PT cut goes to 0.001 pb. The dependence of the accepted electron-pair cross-section on the selectron mass for m- = 40 GeV and m¿- = 1000 GeV is shown as the dashed line in Fig. 1 la. The selectron can easily be detected up to masses of 800 GeV with a luminosity of 10 fb-1, and up to

i ~L 1 1 1 r- a) r~ i i 1 1 1— b) 0.20 G = 2 TeV Mv = 1000 GeV M? = 40 GeV

o oTo, 0.15 100 w-IJ,)vï,w-jjï)

o o o 0.10 0 O i; 10 - o _ o 0.05

o

0.6 1.0 w*w m, (TeV) I 7 Fig. 11 Cross-sections [27] before and after selection cuts V + 1 (see text) a) of è ë~ production as a function of slepton 0.1 • i i 1 1 1 + i i mass, and b) of wino production in e e~ collisions as a 200 400 600 800 function of the wino mass M5 (GeV) - 96 -

m, = 900 GeV. (2.19)

1 with an integrated luminosity of 50 fb" at Ecm = 2 TeV [27].

3) e+e~ -» W+W" The cross-section as a function of is shown in Fig. lib. Here we base our analysis on the expected dominant decay mode W -» W-y, and investigate the final state where one W decays to an e(ji)v and the other to qq -* jets, corresponding to roughly 30% of the total W + W _ cross-section. Possible backgrounds for this could be W pair-production and the higher-order processes e+e~ -» e+e~W+W~ or e+e~ -» e^v + W±Z° followed by W* -+ v + Z° qq. Other new exotic particles such as a new heavy lepton or an extra W would give a signature similar to the expected signal; however, these backgrounds (being also a discovery) were not considered in the analysis described below. To obtain a signal we required the following selection criteria: each parton (jet) should have an energy of more than 100 GeV; E E(jets) < 0.7Ebeam (to eliminate the W+W_ pair background); cos d < 0.87 (leptons) and < 0.94 (jets). With these cuts we retain a visible cross-section of 1/10 of otot for the W signal. Applying the same cuts to the Monte Carlo simulation of the reaction e+e_ -» e+e"W+W_ and e+e~ -» e±»W±Z0 [21], the remaining background has a cross-section of roughly 10 fb. To remove it, one has to apply strong cuts, using the supersymmetry signature of missing transverse energy [ET > 200 GeV] and acoplanarity angle [cos0 (W-lepton) > -0.8]. With these cuts it is possible to reduce the background cross-section to below 0.2 fb,

while keeping a signal cross-section of 0.38 fb (m^ = 500 GeV) to 0.34 fb (mw = 800 GeV). Thus we conclude that a significant (about 5a) signal of about 17 events above a background of < 10 events can be seen with an integrated luminosity of 50 fb~1 for

mw = 850 GeV . (2.20)

It should be kept in mind that we have assumed a pessimistic scenario for the branching ratios and the cross-section for W +W ~ production. In addition, the significance could be increased by searching for a W signal in the total hadronic final-state W decays. Similar analyses of e+e~ -» TT, and qq lead to analogous bounds on these sparticles [27].

i i i i r b)

20 - e*e"—*¡T+ ¡i- , m¡¡ = 0.5 TeV

—i 1 1 1— a) e+e--»»¡I*íi-at f%= 2 TeV 7» 80 m ¡¡ = 0.5 TeV

sin20 Í 60

2 O (1+cos 0)

> LU 20

I I I l_

(COSQJJ) (TeV)

Fig. 12 a) Average angular distributions of muons at Ecm = 2 TeV. The solid line corresponds to a spin-0 smuon of mass 500 GeV, the dashed line to a spin-'A particle, b) Total cross-section as a function of beam energy. The solid (dashed) (dotted) lines correspond to smuons of mass 500 (±50) (± 10) GeV. The bars represent the statistical errors in cross-section measurements with the indicated luminosities (see Ref. [27]). - 97 -

It is important to note that the greater cleanliness of sparticle production in e+e" collisions compared with pp collisions allows one to verify the supersymmetric nature of the observed new phenomena, and also to investigate in detail the properties of the new particles. For example, the spin and the mass of the ji could be determined. Because they have spin 0, the ¡i are produced with sin20 angular distributions, and there is a significant correlation between

the angles of the parent ¡í and the daughter /¿ if m- /Ecm is small enough, which can be used to verify the spin of the

£. Figure 12a shows the distribution in (cos0„) = 1/2 (cos0„, - cosdn) for m- = 500 GeV at Ecm= 2 TeV, and the corresponding distribution that would be given by two-particle decay of a comparison particle of spin Vj. We find

1 that with 190 events detected, it would be possible to distinguish spin 0 from spin /2 at the 5a level [27], provided beamstrahlung effects can be corrected for. As for measuring m-, Fig. 12b shows that a measurement of the total

cross-section at ECm= 2 TeV can give an error of 10% on the mass if m- » 500 GeV. Once such a measurement was

made, one could reduce the centre-of-mass energy and measure the threshold rise at Ecm > 2m-, thereby determining m- with an error of 2%, as also seen in Fig. 12b [27].

2.1.4 Comparison of supersymmetry limits This is complicated by the facts that hadron and lepton accelerators tend to produce different types of sparticles—for example the LHC can produce heavy, strongly interacting sparticles such as gluinos, which do not have electroweak interactions, whereas CLIC can produce all heavy, electroweakly interacting sparticles, including sleptons. In general, it can be concluded from the above studies that CLIC, with an integrated luminosity of about 50 fb-1 at Ecm = 2 TeV, would allow detection of almost all sparticles with electroweak couplings in the mass range of 500 GeV to 850 GeV. In addition, the clean signature of the events would allow one to identify their supersymmetric nature and to measure the detailed properties of the newly found particles. However, since the gluino could be produced only via the decays q -» qg, only indirect information could be obtained. The LHC machine can only produce sparticles with masses O(l) TeV with observable cross-sections if they are strongly interacting, i.e. squarks or gluinos. Because of the high and difficult background situation for jets in such a machine, an analysis to prove the presence of squarks and gluinos is very complicated. However, we think that it is possible to detect or exclude these sparticles up to masses of about 1 TeV. Table 4 summarizes the possible mass limits from different accelerators for each sparticle type. Model-dependent assumptions used in obtaining some of the limits are also shown. We have indicated in italics those bounds which we believe can be established, but which need further evaluation before they can be confirmed. It is impossible to compare the significance of bounds on different sparticle species in a model-independent way, so we have tried to compare the different rows in Table 4 using two theoretical models [7]. One is the minimal supersymmetric Standard Model (MSSM), which has two bare supersymmetry-breaking mass parameters (mo, m 1/2) in terms of which the observable sparticle masses can be estimated as in Eq. (2.11). The other is a minimal superstring-inspired model (MSIM), which has one supersymmetry-breaking mass parameter m 1/2, and the physical masses are related to it in somewhat different ways [9]:

mq = 1.9mi/2, mê = mw, m- = 0.16mi/2, (2.21)

m m?L,R~ (°-7> °-4)mi/2 - w - 0.3mi/2 .

Within these model assumptions, it can be seen that a slepton mass of about 1 TeV corresponds to a squark mass of about 3 to 5 TeV. However, this and the previous model are used only as guides to the relative powers of the different accelerators. The significance of the present sparticle mass limits (2.1) and (2.2) as bounds on the parameters (mo, mi/2) of the MSSM is shown in Fig. 13a. We see that present-day pp and e+e" limits are almost equally powerful in constraining the model. Figure 13b shows the corresponding bounds which could be obtained from the new

accelerators discussed above, as compiled in Table 4. We see that CLIC at Ecm = 2 TeV reaches further into the

(m0,mi/2) plane than does the LHC at ECm= 17 TeV, although this conclusion only holds if CLIC attains the - 98 -

Table 4

Comparison of possible mass limits from the different accelerators

+ sparticle hh ep e e- type Vs~ = 17 TeV Vs~ = 1.8 TeV Vs" = 2 TeV (J L dt = 500 fb" ')

slepton ë, ¡i ë ë, ¡i, f 300 GeV 350 GeV 850 GeV

m5 = m^

squark 1 TeV 700 GeV 850 GeV

m^ < mg m-e = 50 GeV 350 GeV nig = m¡j

wino* 450 GeV No useful limit 850 GeV

mw = m9

gluino 1 TeV No useful limit No useful limit

The limits in ITALICS need more study

a) 1 1 1 b)

- EXPECTED LIMITS -

eV (TT, ww) - —PP Igg ,qq) ~ >n pp (Î , W) \ ep

\ WW

gg

i 3 1 2 3

m0 [TeV)

Fig. 13 Significance of sparticle mass limits as bounds on the parameters (mo, m^) of the MSSM a) for present sparticle mass limits, and b) for the accelerators discussed in this summary report

luminosity (2.17): otherwise the event rate is not sufficient to discover and study sparticles within one year. In this particular model, as shown in Fig. 13b, the physics reach in the (mo.mw) plane of the LHC turns out to be roughly

+ _ equivalent to that of an e e collider with Ecm = 1 TeV, whilst CLIC has a comparable reach to the SSC with ECm = 40 TeV. Although they are not shown, similar conclusions apply to the comparison in the MSIM. It should be noted that the overall scales in Fig. 13b are 50 times larger than those in Fig. 13a. Recall also that theory expects, on the basis of the hierarchy argument reviewed in Section 1, that sparticles should weigh < 1 TeV [7]. - 99 -

2.2 Z' physics Many different possible additional neutral gauge bosons Z' occur in different models with different couplings. 'Traditional' Z' bosons such as those appearing in SU(2)L X SU(2)R x U(l) models have often been considered in previous studies [1, 2], and some composite model Z'-like isoscalar vector bosons are discussed in subsection 2.4. Here we concentrate on novel and speculative Z' possibilities. It may be that the TOE is some superstring theory, which may have an underlying Es x Eg gauge group whose

2 observable part is some subgroup of E6. This subgroup may be SU(3)C x SU(2)L x U(1)Y x [U(l) or U(l) ], which

may break down to the Standard Model SU(3)C x SU(2)L x U(1)Y at relatively low energies s 1 TeV. If so, there would be one (or two) new Z' with masses in this range [9]. Clearly this scenario is uncertain and ambiguous, but there are three favoured possibilities for the new Z' and its couplings [28]. In model A, there is a unique extra U(l) gauge group in four dimensions, whilst in models B and C one starts from a theory with two extra U(l) gauge groups and reduces these down to one at high energies via a large vacuum expectation value (v.e.v.) for either B, an

c SU(3)c x SU(2)L x U(1)Y singlet field N or C, a conjugate sneutrino field ? . The extra hypercharge couplings to all the known particles are fixed in each of these models, as shown in Table 5, and the magnitude a' of their gauge coupling strength is fixed by renormalization effects: a' = 0.015.

Table 5

Possible neutral currents in superstring models [28]

V(V )Y T3L 3 V(Vj,Y' Y"

±72 0 UL v. 73

2 0 UÎ. 0 - /3 73

dt 0 -76 72

7 (».0L ±v2 -v2 -V6 2

ei 0 1 73 0

-V, 2 DL 0 - /3 0

DL 0 V, -72 • -7S

-7 XÏ. 0 0 % 2

5 NL 0 0 /« 72

+ 2 (H ,H°)L ±v2 v2 - /3 0

+ (H ,H°)L ±v2 -V, -V« -72

Model A: Z'couples to Y'

Model B: Z' couples to (V(V8) Y' - V(V8)Y")

Model C: Z' couples to (V(V8) Y' + V(V8)Y")

In each model the additional Z' can mix with the conventional Z, so the model parameters can be characterized as a new mass m' and a mixing angle 9. These parameters are determined in any given model by the v.e.v.'s of generalized Higgs fields. In model A, these v.e.v.'s are those of supersymmetric Standard Model Higgses: v = (0|H|0>, v = (0|H|0), and an additional v.e.v., x = (0|N|0). The squared mass matrix in model A is

2 2 2 2 1/3 sinw [(4v - v )/(v + v )] (2.22) 3112 = ml

2 2 2 2 2 2 2 2 2 2 1/3 sin0„ [(4v - v )/(v + v )] 1/9 sin 0w [(25x + 16v + v )/(v + v )] - 102 -

Fig. 16 Forward-backward asymmetry [34] for Model A on the Z' peak 0 10 20 30 UO 50 60

PT (GeV)

for mz. » 1 TeV. The advantage of this channel is that it can be studied even at higher pp luminosities such as

34 -2 1 10 cm s~ or more as proposed in some LHC options. This would allow one to explore mz, « 5 to 6 TeV [33]. The QÇD corrections to the cross-sections for pp ->• (Z' TV) + X are also available [34], including pt distributions and the forward-backward asymmetry. Figure 16 shows the forward-backward asymmetry for Model A in different ranges of pt and y. The asymmetry vanishes at y = 0 and is unfortunately too small to be

measured unless mz. is quite small, enabling large statistics to be gathered. 3) Z' -• qq: This is expected to be the dominant Z' decay but is subject to a large QCD background, which may make it impossible to see. (We recall that so far there is only a 3CT signal in the UA2 experiment [35] for the W and Z°-> qq, despite their intrinsically larger couplings to qq.) As an example of the problems to be faced in a

search for Z' -> qq, we record the following numbers. For mz. = 1 TeV in Model A, we expect crB(Z' -» qq) » 10 pb, corresponding to 5 x 104 events in a detector with efficiency e = 0.5 for an integrated luminosity of 10 fb_ This is to be compared with the QCD background in three bins of width Am = 100 GeV (the resolution taken from Fig. 15c), which we estimate at 6000 pb corresponding to 3 x 107 events (assuming e = 0.5 and J L dt = 10 fb -1 as before), with a statistical error of 6 x 103 events. Thus in this case statistics alone would allow an 8CT bump, but to realize this would require very good control of systematic errors and the ability to process O(108) complicated events.

4) Z' -* W+W_ : In many models this has a branching ratio comparable to that for Z' -> e+e~, yielding aB(Z' ->

+ W W~) « 0.5 pb if mz, = 1 TeV [30]. The best channel for detecting this decay seems to be via (Wi

jet + jet) (W2 -* IV), which has aBiB2 == 0.17 pb corresponding to 800 events in a detector with E = 0.5 and 10 fb_ 1 of luminosity. The W+W_ continuum background in three bins of Am = 100 GeV is 0.05 pb, giving

aBiB2 = 0.017 pb in the channel (Wi-» jet + jet)(W2 -* IV), but it is estimated that the rate for the QCD background events with indistinguishable topology is about 70 times higher [36], corresponding to a cross-section of 1.2 pb or 6000 ± 80 events in a detector with e = 0.5 and 10 fb- \ Thus it seems that the Z' may still be detectable in this channel, at least if it has previously been observed in Z' -* TT.

In summary, one can search for Z' with masses up to 4 TeV using the Z' e+e~ decay channel, and 5 TeV if one uses Z' n+n" running the LHC at L = (1034 to 1035) cm-2. If a Z' with mass < 4 TeV is found, its W+W~ decay mode can probably also be measured, but looking for Z' qq appears very difficult.

2.2.2 Z' in ep collisions Here the cross-sections for direct production of the Z' are too small to be observable, and we must rely on the search for indirect effects detectable by measuring different asymmetries. We assume that et,R are all available with

+ the same luminosity, which may only be true for e beams of energy < 50 GeV giving Ecm < 1.4 TeV, since - 103 -

Tableó

Asymmetries measured in ep collisions and their sensitivities [37] to Models A, B, and C

A B C

* * eL-eR

* et - eR

el - e£ *

* eR - eR

eE - eR

* * eR - et - 104 - polarized e+ beams are unlikely to be available at LEP 200. We can construct six different asymmetries between the four cross-sections with et,R beams, and we have studied which asymmetries are most sensitive to which models [37]. The conclusions for the Models A, B, and C introduced above are shown in Table 6, under the simplifying

assumption that the mixing angle 0 is negligibly small and considering measurements at Ecm =1.4 TeV, x = 0.05, and Q2 = 7 x 104 GeV2. The statistical errors which can be expected in asymmetry measurements are ¿5A = 0.03 to 0.05, if one uses data in the range Q2 = 6.3 x 104 GeV2 to 105 GeV2 integrated over x from a run of 1 fb-1 with 100% beam polarization. The systematic errors due to uncertainties in the luminosity and in the beam polarization can be removed by measuring simultaneously the same asymmetry at lower Q2, where it can be compared confidently with the Standard Model prediction. We believe the systematic error can be controlled sufficiently well so that an overall error SA » 0.02 to 0.04 is attainable. Figures 17a to 17c show contours of the shifts OA in favourable asymmetries

over the (mz,,9) plane for Models A, B, and C respectively. Comparing Fig. 17a with the region of the (mz.,9) plane which is theoretically favoured by the squared-mass matrix (2.22) for Model A, as seen in Fig. 14, we conclude that these asymmetry measurements give access to [37]

mz, « 500 GeV . (2.28)

This is comparable to the physics reach [Eqs. (2.23) and (2.24)] of asymmetry measurements on the Z° peak at LEP [32], and could be useful in distinguishing between possible sources of any discrepancy from the Standard Model which might be seen there.

2.2.3 Z' in e+e~ collisions CLIC could be a Z' factory—perhaps after the discovery of the Z' at the LHC—in much the same way as the SLD and LEP follow on after the Z° discovery at the CERN pp Collider. Treating the Z' as a conventional Breit- Wigner resonance, the cross-section on the peak is given by unitarity to be:

Z' - X)/o-p, = (9/a2)B(Z' - e+e")B(Z' ->X). (2.29)

The e+e~ branching ratios in the superstring models described earlier vary between 0.6% (Model A, three complete generations of particles and sparticles in 27 representations of EÔ) and 6% (Model B, three generations of conventional quarks and leptons only). Putting a typical branching ratio B(Z' -> e+e~) = 1% into (2.29), we get [38] a total cross-section on the Z' peak of

a(e+e- -> Z' - all) = 0.13/[m(TeV)]2 nb . (2.30)

Thus for m = 1 (4) TeV and L = 1033 cm-2 s~1, one has 1 event per 8 s (1 event per 2 min). However, before we get carried away, we should compare the centre-of-mass energy spread with the natural width of the Z',

T(Z' - all)/mz, = (0.65 to 3.8)% , (2.31) where the minimum is for Models A and B with only decays into conventional quarks and leptons, and the maximum is for a Z' in any of the models decaying into full 27 representations of particles and sparticles. The natural width (2.31) is smaller than the centre-of-mass energy spread, which is not negligible in linear e+e~ colliders

[11]. Figure 18 shows an example of the centre-of-mass spectrum for a Z' at mz. = 2 TeV, folded with the machine resolution as compared to a pure Breit-Wigner shape. At the peak the rates drop by more than a factor of 3. We have assumed B(Z' -» e+e~) =1% and folded in bremsstrahlung (which causes a non-Gaussian energy spread), a

beam energy spread of 1 % (taken to be Gaussian), and a Breit-Wigner peak with T(Z' -» all)/mz, = 0.02. The rates [39] at the Z' peak for several different CLIC design parameter choices are shown in Table 7. The centre-of-mass energy spread of the machine reduces the rate computed using a naïve Breit-Wigner [Eqs. (2.29) and (2.30)] by a factor of up to 6. Nevertheless, in a typical 'year' of 107 s one should hope to accumulate between - 105 -

400

300

-ïï 200 C >ai LU 100

0 12 3 4 Mass (TeV)

Fig. 18 Centre-of-mass spectrum of a Breit-Wigner resonance at mz. = 2 TeV for CLIC with a disruption parameter of 1.7 folded with a Gaussian beam spread of 1% (full curve). For comparison a pure Breit-Wigner shape is shown (dashed curve).

Table 7

Expected rates [38, 39] at the Z' peak [R(Z' -> all)/mz, = 0.02] for different CLIC design parameter choices in 107 s (= 'year')

c.m.s. energy (TeV): 1 2

Disruption: 0.56 3.0 0.32 0.70 1.5 Luminosity (cm" 2 s~ '): 6 x 1031 1033 6 x 1031 2 x 1032 1033

Number of events with 1) no beam radiation + 7.2 x 104 1.2 x 106 2.0 x 104 6.0 x 104 3.0 x 105 no beam spread

2) beam radiation + 2.2 x 104 3.5 x 105 6.0 x 103 1.8 x 104 8.8 x 104 1 % Gaussian spread

2 x 104 and 2 x 105 Z' decays. Notice that parameter choices with larger disruption D and hence larger bremsstrahlung effects nevertheless give larger event rates on the Z' peak, because their higher luminosities overcome the increase in the centre-of-mass energy spread [39]. With a sample of 104 to 105 Z' events, one can consider searching for rare Z' decays. The following candidates come to mind:

i) Z' -» W+W" : This is not really a rare decay mode, since its decay branching ratio, whilst model-dependent, is typically of the same order as B(Z' -> e+e~) * (1 to 3)% [30].

ii) Z' ->dDiy7 + dDi/2: This is a flavour-changing Z' decay, with d a generic, light charge, -1/3 quark (d, s, or b), and D1/2 a heavy charge, -1/3 colour-triplet fermion with a leptoquark or diquark decay signature (see subsection 2.3): D1/2 -» £q7 or qqy. The branching ratio for this decay would be of order

R(Z' -> dDi/2 + dDi/2) (2.32) T(Z' - d3)

We do not know what this ratio of v.e.v.'s might be, and a search for (or observation of) this rare Z' decay would give us valuable information about it. - 106 - iii) Z' -> H'l+t : This is the analogue of conventional Z° -+ H£+£ decay and the relative rate

T(Z' -» H'f+r ) . r(Z° H£+r ) 4 = O(sin 0w)- (2.33) T(Z' ->£+r) v(z°-*tr)

for similar values of mH/mz. Formula (2.33) assumes that the t+l~ pair emanates from a virtual Z', and gives

very small rates unless mH, < mz. However, in some models the decay Z' -» H' + (Z° -» l*t~) is also possible, and could have a much larger branching ratio [40]. What if the Z' has not previously been detected at the LHC, and one must scan [41] for it at CLIC? Let us

33 2 1 take as an example mz, = 1 TeV and assume a luminosity of 10 cm" s~ at D = 3.0 as in the second column of

Table 7. Then the event rate on the Z' peak would be 0.035 s" If one goes off resonance by AEcm = 50 GeV, the total cross-section is reduced by a factor of » 3, giving an event rate of 0.012 s_ This is to be compared with a background annihilation rate from R ~ 20 of 0.0018 s- '. A run of 10 hours will therefore give a signal of ~ 180 events above a background of = 60 events. Proceeding by two steps per day, one can scan a range of 0.5 TeV in

Ecm within a week. Scanning for a Z' should therefore be easy [41].

What are the prospects of searching for indirect effects [42] of a heavy Z' which weighs more than Ecm? The

+ most sensitive place to look is probably via effects on e e~ -»• /t+/i~. The angular distribution for this reaction

33 -2 1 carries essentially the same information as the total cross-section. Assuming L = 4 x 10 cm s" at Ecm = 2 TeV, which yields 870 ¡i+¡i~ pairs per year, we expect a statistical error ha/a = 3%. We believe it is reasonable to expect the absolute luminosity at CLIC to be known to within a few percent, so that the total error ha/a ~ 4% to 10%. Figure 19 exhibits the (x/v, v/v) parameter space of the MSIM Model A introduced earlier, showing contours

+ _ + of mz, between 2 and 4 TeV. Also shown are the contours of ha/a for e e /i n~oi 4%, 6%, and 10%. We see

that these can explore indirect Z' effects for mz, < 3 to 4 TeV [42]. However, we would not claim these as possible discovery limits for the Z' in e+e~ collisions, since a small discrepancy from the Standard Model prediction for ff(e+e~ ->• fi+ii~) would be ambiguous, with other possible interpretations (see e.g. sub-subsection 2.4.3).

We close this sub-subsection with a few comments on the problem of luminosity measurement at CLIC. The latest design of the e+e~ interaction point presented at this meeting has free space in the angular range 10° < 9 < 170°, and the possibility of ±30 cm free space at angles between (2 and 10)° and (170 and 178)°. If the latter are

4 TeV

X/V

e-e-»-u \i at 20 FS = 2 TeV

_I 1 I I I 1 L__J l_ 0 0,2 0.4 0.6 0.6 1.0

V/V

+ + Fig. 19 Contours [42] for mz- and ha/a for e e -> /¿ /¿ in the (x/v, v/v) plane using the MSIM Model A - 107 - indeed available, we see no reason why the luminosity could not be measured as accurately as at PETRA and PEP, i.e. with an error of 4%, using Bhabha events. However, it still seems possible that beamstrahlung-induced y backgrounds may exclude detectors from the regions 0 < 10° and > 170°, and the existence of a Z' with unknown mass and width could obscure the interpretation of Bhabha cross-section measurements. Nevertheless, we see a possible way [43] of measuring the luminosity with an error of » 10%, based on the following steps, assuming that the Bhabha cross-section can be measured down to 0 = 10°, which yields » 100 events per day. First, assume

approximate values for mz, and Tz. and incorporate them in a calculation of the cross-section for small-angle Bhabha events. Then, use this cross-section and the observed small-angle event rate to compute a first approximation Li to the luminosity. The final step is to check with the event rate of large-angle Bhabha events, comparing the observed number and the expected number, assuming the first approximation Li to the luminosity. This enables a second approximation to be computed:

L2 = Li x a( large angle)obs, assuming L /enlarge angle)expected . (2.34)

We can now go back to recompute the parameters (mz., Tz.) of the possible resonance using as input L2 and the

observed small-angle Bhabha rate. By iteration we can eventually arrive at accurate estimates of mz., Tz,, and L. As an alternative, it has also been suggested [21] that one might monitor L using a large cross-section yy process such

+ + as e e" e e~¿í+/t~. making cuts pf-ä 10 GeV, which leave a very large event rate. However, here there is the possible theoretical problem of computing the cross-section with a very small error.

2.3 Leptoquarks Particles with leptoquark quantum numbers appear in many different theoretical frameworks. They could have spin J = 0 or 1, and electric charge Q = -4/3, -1/3, 2/3, or 5/3. The most common possibility, which we choose to study, is J = 0 and Q = - 1/3. The couplings of J = 0 leptoquarks are often correlated with fermion masses, and they may well be associated with Higgs boson couplings,

SLQff <* SHff <* % • (2.35)

However, this property is not inevitable. Many approaches [9] to the compactification of the superstring predict

that light particles fill out 27 representations of E6, which contain J = 0, Q = -1/3 particles that may have leptoquark couplings whose magnitudes are completely decoupled from those of the Higgs fields. Indeed, the J = 0 particles Do which are candidates for leptoquarks could even have diquark couplings instead: Do -* qq. Thus the possible generic signatures for the Do are given by the following tree:

Do (colour 3, J = 0) I 1 °r 1

D0 -* £q coupling D0 -» qq coupling (2.36) • and/or• ! Do -» £* + jet Do -> v + jet D0 jet + jet

To accompany J = 0 leptoquarks, any supersymmetric theory would predict J = 1/2 supersymmetric partners. In the case of the superstring-inspired model introduced above, where we call these spartners Di/2, they could have

(D1/2 -> £* + q + y and/or v + q + 7) or (Di/2 -* q + q + 7) (2.37)

as generic decay modes. The possible decay chains (2.36) and (2.37) give experimentalists plenty of signatures to explore, namely (£* + jet) or (jet + jet) mass bumps from Do decay, missing-energy events from Do -> v + jet,

D]/2 -» 7 + v + jet or Di^ -» jet + jet + y, and £* + jet + missing energy from Dm decay [44]. In the rest of

this subsection we will mainly emphasize the leptoquark decays D0 -• £* + q, Di/2 -* l* + q + y, but the other signatures will also receive some consideration. - 108 -

Composite models can also yield leptoquark bosons. Of particular interest is the rich spectrum of J = 0, 1 leptoquarks with Q = -1/3, 2/3 expected in the strongly coupled version of the Standard Model. These leptoquarks are distinguished by the following features: i) any possible conflict with bounds on flavour-changing neutral currents is naturally avoided, and ii) each leptoquark has only two decay channels, q£* and qp\ with branching ratios of 50% each. This leads to clear-cut experimental signatures.

2.3.1 Leptoquarks in pp collisions We have mainly considered the pair-production mechanisms gg, qq -» DnDn, D1/2D1/2. As can be seen in Fig. 20, observable rates exist out to masses [45]

mDo, mDl/2 « 2 TeV . (2.38)

It should be noted, moreover, that these rates are independent of the unknown magnitude of the D -» iq or qq

coupling. If there was a D qq coupling, it would enable the D0 to be produced also singly via qq annihilation. The rate for this production mechanism depends on the unknown D -» qq coupling strength, but it is easily estimated that any Do -» jet + jet mass bump is drowned in QCD background for any plausible magnitude of this coupling

strength. Accordingly we concentrate on the observability of the signatures for DoD0 and D1/2D1/2 pair-production.

Fig. 20 Leptoquark production cross-sections [45] in a hadron collider at VT = 17 TeV

m (TeV)

The decays Do -» q + v would give signatures very similar to the q -» q + 7 decays discussed in sub-subsection 2.1.1. We conclude that in this case there would be sensitivity to Do particles with masses up to [16]

mDo « 1 TeV . (2.39)

The pair of Do -* q + I decays would give (l+ + l~ + jet + jet) final states. The dominant backgrounds are expected to come from pair-production of heavy quarks followed by their semileptonic decays. We have looked in detail at the background from tt production decay, and find that simple cuts render it negligible for all values of

niD0 and mo1/2if mt < 110 GeV. If mt > 110 GeV or if there is some other heavier quark Q, the same simple cuts

leave mDl/2 s 1.2 TeV. More work is required to beat down the background if mt (or mq) > 110 GeV and mt>0,

mo1/2a 1.2 TeV, but we believe this can easily be done. Therefore the event-rate limit (2.38) is the discovery limit

for D0 -* q + I decays [45]. By contrast, pairs of D0 -» qq decays also seem always to be lost in the background from QCD four-jet events. - 109 -

One can also look for leptoquarks indirectly in pp collisions, through their possible effects on the Drell-Yan pair-production cross-section [46], analogous to the composite model contact terms discussed in sub-

+ subsection 2.4.1. We find that a 50"% measurement of

mDo » 1.2 TeV VF, (2.40) where F measures the leptoquark coupling strength

gLo/ 4T - FX Ofem • (2.41)

This limit is unlikely to be competitive with the direct limit (2.38), and has the relative disadvantages of being more model-dependent as well as less direct.

2.3.2 Leptoquarks in ep collisions

- + Here there are simple production mechanisms: e + q -» D0 or e + q -» D0 which could give copious single

leptoquark production. The rate clearly depends on F = GLQ_/4iracnl, but is large enough to be observable for mo, s

1.6 TeV at Ecm = 1.8 TeV if F = 1 [47, 48]. If the D0 has the leptoquark coupling necessary for this production mechanism to exist, it cannot simultaneously have a diquark coupling, and its possible signatures are therefore

Do -• q£~ and/or D0 -» qv. There are no significant backgrounds to searches for Do -* q/t~ or Do -* qr~, but these decay modes may be suppressed in view of the fact that production occurs via a Do -» qe" coupling. There is a large background to Do -» qe~ decays from the continuum of neutral-current events. Nevertheless, a Do peak may be visible in the (Í + jet) final-state invariant mass, even before cuts, as seen in the upper part of Fig. 21 [48]. The different kinematics of Do decay (isotropic and hence flat in y) and of neutral-current events (mainly at low Q2 and hence at small y) can be exploited to improve the signal-to-background ratio, as shown in the lower part of Fig. 21. The rates for signal and

backgrounds with y > 0.5 are shown as functions of mD(, in Fig. 22. The dominant background to a search for Do -» q + i» decay comes from conventional charged-current events ep -» v + X. As seen in Figs. 21 and 22, a y-cut

1 1 1 10' —i 1 1 1— - 1 1 1 ~i 1 r D—>qe D—>qv Neutral Current Events Signal/ V background m„ = 500 GeV m0 = 500 GeV (Background) F=0.02 F=0.02

: 103 R y > 0.05 \ y * 0.0S

n

i, y »0.5 J y > 0.5 1 1 10*

% 0.5 1.0 0 0.5 1.0 0 0.5 1.0 LEPTO 0.UARK MASS (TeV) Win. Fig. 22 Expected rates [48] for leptoquarks and for

background neutral-current events as a function of niD0 in ep collisions

1 1 i i r 1 Fig. 21 Number of expected events per year for leptoquark 0.05 0.10 0.15 0.20 0.10 0.15 0.20 decays in ep collisions [47, 48], assuming mo, = 500 GeV and X (from lepton) X (from Jet) F = 0.02 - 110 -

0.4 0.8 1.2 1.6 0 0.4 0.8 1.2 1.6 LEPTO O.UARK MASS (TeV)

Fig. 23 Observability [48] of the D0 in ep collisions at HERA and LHC

similar to that used for reducing the neutral-current background can also be used to reduce the charged-current background and thereby render Do-* q + v decay observable [48]. Figure 23 summarizes our conclusions on the observability of the Do in ep collisions at HERA as well as at the LHC. We see that for F = 1 one can reach almost to the kinematic limit

mDo~ 1.6 TeV (2.42)

at Ecm =1.8 TeV [48]. This is one case where the higher-energy LHC ep option has a clear advantage. We have also studied the indirect effects of virtual Do exchange on the neutral-current cross-section for ep -»

e + X, as a way to probe for mD() ä Ecm. However, we find that such effects are negligible for Do > Ecm if F < 1 [47].

2.3.3 Leptoquarks in e+e collisions Unless there are large Yukawa couplings, the dominant production mechanisms are e+e~ (y,Z,Z')* -»

+ DoD0 , D1/2D1/2. Because the Do has spin 0 and Qem = - 1/3, the rate for e e" -* 7* D0D0 is very small,

+ 3 2 R = a(e e" -> 7* -> D0Do)/crp, = (l/12)/3 , ß = V(l -4mD /s), (2.43) and this is not greatly improved by including the Z* and Z'* exchanges. On the other hand, the rate for e+e~ ->

7* -* D1/2D1/2 is significantly larger, with R -» 1/3 at large Ecm • mDl/2, and a threshold rise oc ß. Figures 24a and 24b show the rates for D0D0 and D1/2D1/2 pair-production in two different cases: a) with no Z' contribution, and

b) on the Z' peak, which is assumed to be at the Ecm of the corresponding CLIC option [49]. In view of the small event rates in Fig. 24a, we consider that

33 2 2 1 L = 10 (Ecm/1 TeV) cm - s - (2.44)

+ is necessary to be able to do D0 physics in e e" collisions. Among the backgrounds considered are: i) e+e~ -> qq; ii) e+e" -> QQ or LL from a fourth generation; iii) e+e~ -> W+W~ and Z°Z°; and iv) two-photon processes. To simulate these and the signal process, we have used the Lund e+e" event generator including initial-state radiation, electroweak corrections to 7*exchange, and second-order QCD for the hadronic final states. As discussed in detail in a contribution [49] to this study, it was found that any Do or D1/2 final state could be picked out with high efficiency by cuts that suppressed the background below the level of the signal. As an example, we quote the case of the Do ue" decay mode. A — 1 •• 1 1

: eV-^0,0,,, D1/2D,/2 n>r = - I ~X ß=1.TeV °V2n i/2 Y ^ —D.D. 1 s 1Ó"2 _£=2TeV ._ vs=1TeV - Vs=1.5TeV\ - v7=2TeV\ ^ \ ~- \ \ 10 i i t 0.2 0.4 0.6 0.8 1.0 m„ (TeV)

Fig. 24 Leptoquark pair-production [49] in e+e_ collisions: a) with no Z' contribution and b) on the Z' peak which is assumed to be at the indicated Ecm of CLIC

selection of events with an e* pair (pc,, pe2 > 10 GeV), at least two hadronic jets, and mo, - 40 GeV < mej,, mej2 < mo, + 60 GeV, had a combined efficiency of 76% and reduced the background by a factor of 103 or more. Similar results [49] were found for other Do decay modes. We conclude that the Do and Dm could be detected if they were produced, but that the high luminosity (2.44) would be necessary in the case of the Do search in the e+e~ continuum with no Z' contribution. In these conditions, the discovery limits would be [49]

mD() « 650 GeV , 950 GeV. (2.45) mD,

If there were a Z' in the CLIC energy range, it would be possible to detect mo,, mDW s l/2mz,. We have also considered an indirect search for Do exchange which could give a contact term analogous to the terms discussed in the context of composite models in sub-subsection 2.4.3. We find that under assumptions similar to those used in 2.3.3 for the attainable statistical and systematic errors, the following limit could be reached:

mD„ = 1.8 TeV (2.46)

at Ecm = 2 TeV if F = 1 [49]. However, we do not quote this as a discovery limit in view of its model-dependence and indirect nature. The leptoquarks of the strongly coupled Standard Model, with J = 0, 1 and Q = -1/3, 1/3, may also be pair-produced in e+e~ collisions. The J = 0, Q = - V3 leptoquarks manifest themselves in the same way as the Do discussed above. The J = 1 leptoquarks do not suffer from the ß3 threshold suppression factor. Therefore they can be detected for all masses essentially up to the beam limit of 1 TeV. An indirect search can also be made, looking for leptoquark exchange contributions to e+e" -» qq, which is sensitive to masses of ~ 10 TeV if the effective coupling is 1.

2.4 Compositeness As was mentioned in Section 1, many theorists propose as a solution to the Flavour Problem the possibility that quarks and leptons are composite, whilst others hope to understand the W and Z° masses by interpreting them

as composite [4]. We denote by A the energy scale at which this compositeness would be manifest. The radius Rc of

composite states would be related to the corresponding Compton wavelength: Rc= 0(1/A). Among the possible manifestations of compositeness could be contact terms scaled by inverse powers of A, form factors F(Q2/A2), and excited states e*, q*, V*, ... with masses O(A). There is a common expectation that A « (1 to 10) TeV, but this - 112 - should be interpreted with caution, since no satisfactory and theoretically consistent composite model yet exists. In what follows, we concentrate on contact interactions and on excited states as possible signatures of compositeness. We discuss four-fermion contact interactions, using an effective Lagrangian in the notation of Eichten, Lane and Peskin [50]

£cff = geff [0?LL/2ALL)(&7^L)($L7/^L) + (l»RR/2AÍu0(Í?RVtfR)(#R7|ift0

+ WZALOtfL/toij^W + (^i-/2AiiMh-fM(hyM] > (2.47)

where the rjy may be ± 1, and seek to bound the parameters Ay using the interferences between £Cff (2.47) and the conventional gauge interactions. Following convention, we do this assuming geff = 4;r in (2.47). When considering the single production of excited quarks or leptons, we use the transition couplings

£eff = (g/2A*)FV(v - a7s)fF", (2.48)

where F*" is some gauge field strength, g is the corresponding gauge coupling [e in the case of U(l)em, g2 in the case of SU(2)L, etc.], and v2 + a2 = 1/2. Rather than explore the full plane of (nif*,A*) values, we will usually quote discovery limits for mf» = A*. Note that, because of the way we define A in (2.47) and A* in (2.48), we expect A* «

Va,0!2,Q!3A.

2.4.1 Compositeness in pp collisions The possibility of a four-quark contact interaction can best be explored by a detailed study [51] of pp -+ jet + jet + X. The QCD 2 -* 2 subprocesses have characteristic angular distributions which are sharply peaked forward and backward, whilst the conjectured contact terms (2.47) would give more isotropic distributions. Thus a study of the angular distribution of two-jet events is more sensitive than a measurement of the total two-jet cross-section, and is also largely independent of theoretical errors induced by uncertainties in the initial parton distributions and higher-order QCD effects. We use the variable [52]

X= (1 - |cos0|)/(l + |cos8|) (2.49) to plot the two-jet angular distributions in Fig. 25. Shown there are the expected distributions of jet-jet pairs with masses between 7 and 8 TeV, including a vector-vector (VV) contact term with different values of the

compositeness scale AQQ = ALL = ALR = ARL = ARR: AQQ = 11 TeV, 12 TeV, and » corresponding to pure QCD.

1 1 1 1 i i i i

0.08 pp at fk = 17 TeV

7TeV

Xp/fl p 0.06

LZJ 1 -

- V 0.05 X . A = - _ O o A =12 TeV 0.02 a A =11 TeV

0 i i i i i i i 3 7 11 15 19 X

Fig. 25 Expected angular distribution of jet pairs [51] for different A values and pure QCD (A = oo) for pp collisions at vT = 17 TeV - 113 -

The range of jet-jet masses considered is the result of a competition between high sensitivity (large ni2j) and large statistics (small nuj): the range shown is not necessarily optimized, but was based on previous experience with UA1 data. The 95% confidence level limit obtainable at the LHC with 10 fb_ 1 was found to be [51]

Aqq « 12 TeV , (2.50)

to be compared with Aqq > 415 GeV from present UA1 data [52]. One can also use pp collider data to probe an eq contact term via the Drell-Yan process pp e+e~ +X. We have not made a detailed estimate of the sensitivity that can be reached with this reaction, but a 50% measurement

of da/dmi+r for mf+r < 1 TeV would be sensitive to

Aeq = 20TeV, (2.51)

where we have again assumed a VV form of interaction with Aeq = ALL = ALR = ARL = ARR [53]. The final probe of compositeness in pp collisions that we consider is q* production via g + q ->q* and/or q + q -»q + q* via contact interactions. The generic form (2.48) of interaction, with g = g3, P" the gluon field strength, and A* = m„», can be used to estimate the q* production cross-section via the first subprocess. One can look for q* -» q + g decay through the same interaction as a jet-jet bump on top of the QCD continuum. A

theoretical estimate [54] indicates that the signal-to-background ratio falls as mq* increases, becoming one-to-one when

mq. ~ 5 TeV , (2.52)

which we take to be the theoretical discovery limit for this type of composite model effect.

2.4.2 Compositeness in ep collisions Here the principal sensitivity is to an eq contact term. As in the case of different Z' models, different asymmetries probe contact terms with different combinations of helicities. A complete survey has been made [55] of

2 4 the relative sensitivities of different asymmetries at x = 0.05 and Q = 7 x 10 GeV in an ep collider with Ecm =

1.4 TeV. (The Ecm = 1.8 TeV ep option would be less sensitive, because of its smaller luminosity.) Four representative asymmetries are shown in Figs. 26a to 26d as functions of Q2. The effects of the different helicity

structures are shown for contact terms with Aeq = 5 TeV and positive signs 17 = + 1 at V7 = 1.4 TeV and x = 0.05. Also shown are the Standard Model predictions for the four asymmetries. Table 8 shows the most sensitive

39 2 asymmetry for each helicity combination, and the value of Aeq to which the asymmetry is sensitive with 10 cm" = 1 fb~1 of integrated luminosity, after integrating over x. The values lie in the range [55]

Aeq « (8 to 13) TeV . (2.53)

It should be noted that although the range of Aeq probed is apparently smaller than that accessible in pp collisions (2.51), there it would not be possible to unravel the helicity structure of any observed contact term with the clarity possible in ep collisions. Another important test of compositeness in ep collisions is the search for an e*, produced by e + 7 collisions

and itself decaying into e + 7. Using the effective interaction (2.48) with g = e and A* = mc», it can be estimated that there is an observable rate up to

mc.«1.5TeV (2.54)

in ep collisions at Ecm = 1.8 TeV [55]. This is one case where the higher centre-of-mass energy overcomes the luminosity advantage of the lower-energy ep option. - 114 -

0.0 1 ' ' 1 ' 1 • ' • • 1 1 -0.6 1 1 1 j

0.0 15.0 30.0 15.0 60.0 75.0 90.0 *10 0.0 15.0 30.0 45.0 60.0 75.0 90.0 *10

2 2 2 2 a (GeV ) a ,GeV )

Fig. 26 Expected asymmetries [55] in ep collisions assuming A = 5 TeV at Vs~ = 1.4 TeV and x = 0.05 for a) A(el - et), b) A(e£ - et), c) A(eE - e£), and d) A(et - efe). The Standard Model predictions are also plotted.

Table 8

Expected values [55] of Aeq for the most sensitive asymmetry for each helicity combination in ep collisions

Helicities Asymmetries Aeq(TeV)

LL eE -- eR 8 LR el -- et 8 RL e£ --el 13 RR et --et 11 VV el --et 13

AA eR -- et 10 - 115 -

2.4.3 Compositeness in e+e" collisions One may look for ee, e/t, er, and eq contact terms in e+e" collisions. We have considered in detail the ee, e/t, and eq cases [56, 57]. The best limits on Aee and Ae,, come from the consideration of the full angular distribution

da/dcosG; the total cross-section alone is considerably less sensitive. In general, the range of Aee or Ae/, accessible is

limited by the statistics available at any given Ecm, resulting in:

AocEc^lj Ldt)1/4, (2.55) sothatL = 4 x 1033 cm-2 s~1 gives access to values of A a factor of 1.4 higher than L = 1033 cm-2 s~ '. With L

+ + + s, Eq. (2.55) gives A

The 95% CL lower bounds on Aee and Ae/l as determined from doVdcosG at Ecm = 2 TeV are plotted as functions of the integrated luminosity in Figs. 28a and 28b, assuming positive interference and no polarization [57]. Table 9a shows the values of Aee and Ae„ accessible with unpolarized beams for the various helicity combinations. It should be noted that there is a particular (implausible) combination of helicities, called the worst-case scenario or WCS, for which interference with the Standard Model amplitude exactly cancels, and thus the accessible range of A is significantly smaller than for the other forms of contact terms. However, the left-right asymmetry for e+e" -» /i+n~ with longitudinal polarization is particularly sensitive to the WCS model, as seen in Fig. 29, and

10° 101 102 cos 0 /Ldt lib"1!

Fig. 27 Expected deviation [57] from the Standard Fig. 28 Lower bounds [57] at the 95% confidence + Model predictions in the angular distribution for e e" level as a function of luminosity for a) Aee and b) Ae(l. 1 collisions at Ecm = 2 TeV and 20 fb" integrated The curves were obtained from the doVdcosG + + luminosity: a) for e e" e e" using Aee = 45 TeV, distributions without polarization, for various helicity + + and b) for e e~ -> n pT using Ae^ = 50 TeV combinations. - 116 -

Fig. 29 Expected deviation [57] from the Standard Model predictions for a measurement of AM., assuming 50%

longitudinal polarization of the e~ beam and AC)L = 45 TeV

Table 9

Expected 95% CL lower limits [57] on Aee and Ae„ (in TeV) accessible with unpolarized and polarized beams

+ _ 1 in e e collisions at Ecm = 2 TeV with 20 fb~

LL RR VV AA WCS

a) Unpolarized beams

Aíe 49 52 104 76 34 A^e 52 54 105 64 36 Aí„ 56 58 95 86 32

A~EIÍ 53 55 93 90 35

b) Longitudinally polarized beams 0.125 Ate 47 48 12 15 67 Aee 47 48 12 10 67 Fig. 30 Expected deviation [57] from the Standard + AT 40 42 -- 60 Model prediction in the angular distribution for e e" -> -1 qq at Ecm = 2 TeV and 20 fb integrated luminosity A~C¡Í 43 44 -- 59

using Aeq = 35 TeV

gives a dramatic improvement in the accessible range of A, as is shown in Table 9b. On the other hand, transverse beam polarization would only give a marginal increase in the sensitivity to any of the forms of contact terms.

1 Summarizing, the 95% confidence level limits attainable with 20 fb" at Ecm = 2 TeV are in the range

Aee, Ae)l * (60 to 100) TeV , (2.56) and we refer the interested reader to Table 9 and Refs. [56] and [57] for more details. These limits were obtained including a realistic systematic error of 3% in the luminosity measurement. In the search for eq contact terms, one can hope to get information from the total cross-section and (if the quark charge cannot be measured) from da/d|cos 9|, which provides a sensitivity identical to that obtainable from the total cross-section. In this case, the sensitivities of the bounds on A are mainly limited by the systematic error in the total cross-section measurement. Figure 30 shows the angular distribution for e+e_ qq in e+e_

collisions at Ecm = 2 TeV for composite models with different forms of contact terms, compared with the Standard

Model predictions. As seen in Table 10, the bound on Aeq again depends on the helicity structure, and lies in the range [57]

A», » (30to80)TeV, (2.57)

which is less than the accessible range of Aee and Ae)l (2.56) but considerably exceeds the range accessible in pp or ep collisions. - 117 -

Table 10

Expected 95% CL lower limits [57] on Aeq (in TeV) accessible in e+e" collisions at

1 Ecm = 2 TeV with 20 fb"

RR LL VV AA WCS

Aeq 29 33 36 55 41

Aëq 51 45 82 55 51

Other possible manifestations of compositeness can also be probed in e+e~ collisions. For example, the internal structure of the W can be studied using the reaction e+e" -* W+W" [58]. In the Standard Model the total cross-section for this process exhibits strong cancellation between crossed-channel v-exchange diagrams and direct-channel (7, Z°) exchange diagrams. Hence it could be a particularly sensitive monitor of any deviation from the Standard Model that upsets the cancellation. (The reaction e+e" -» eWj» could also be a sensitive probe of such effects, but we have not studied it in detail.) The reaction e+e" -» W+W" is sensitive to possible anomalies in the leptonic and bosonic sectors. In the leptonic sector, these could include form factors, an excited v* state, or gauge-invariant e+e"W+W~ contact interactions. In the bosonic sector, these could include modifications to the three-boson vertices (e.g. in the anomalous magnetic dipole moment of the W, or in the electric quadrupole moment), an excited Z°* or W* state, and non-gauge-invariant e+e~W+W~ contact interactions. Since the Standard Model cross-section for e+e" -» W+W~ is sharply peaked in the forward direction at CLIC energies, one generally finds that the most sensitive observable is the angular distribution doYdcosG, particularly in the backward direction. The details can be found in Ref. [58]: here we simply note that the typical sensitivity to compositeness scales entering into the specification of the new couplings is

A* » 2Ecm « 4 TeV (2.58)

+ + for CLIC with Ecm = 2 TeV [58]. As an example, we show in Fig. 31 the angular distributions for e e" -» W W" in the Standard Model, including both 5% systematic and likely statistical errors, and for / exchange with A* = m„* = 2.3 TeV at Vs" = 1 TeV, assuming different helicity structures for the eWc* vertex.

10

-° 1 a. 1 o i/) o U TD \ b -a

lo"1

Fig. 31 Angular distributions [58] for e+e" -> W+W" in the Standard Model and for v* exchange, assuming different helicity structures. The error bars indicate the expected systematic and statistical errors for the measurement of the angular distribution in the Standard Model. Cos 0 - 118 -

e e —> e e -y Vs = 2 TeV 50 fbarn -i 200

175

150 -

0.25 0.5 0.75 1. 1.25 1.5 1.75 2 ey MASS (TeV)

Fig. 32 The e-7 mass distribution [59] for e* decays and QED background in e+e collisions at Ecm = 2 TeV and 50 fb"1 integrated luminosity

One can also search more directly for excited states in e +e" annihilation. For example, using the coupling (2.48), or contact interactions, one finds that the reaction e+e" ee* produces observable numbers of e* essentially up to the kinematic limit

= 2 TeV (2.59) if A < 10 TeV [59]. Figure 32 compares the signal from ee ~* e(e* -* ey) with the QED background for e+e"

+ e e"7, plotted as the number of events in 40 GeV bins corresponding to the expected mass resolution [me» = 1.92 TeV, BR(e* -» 67) = 30%]. On the other hand, the conventional electromagnetic charge coupling of the e*

+ + would give an observable cross-section for e e" -* e *e" * if me* ^ 1 TeV. Finally, the ee7 coupling (2.48) can also

+ be used to probe indirectly for e* weighing more than Ecm via the e* exchange diagram contributing to e e" -* 77.

If A* = me», this process is sensitive to me» < 3 TeV. One can also reach up to 2 TeV masses for excited quarks q* by postulating a qq*7 vertex of the form (2.48)

+ + and searching for e e" -* qq* + qq*, whilst the reaction e e" -» q*q* gives access to mq» s 1 TeV. In models with composite W* and Z° bosons, one generally expects i) excited Z* bosons and ii) composite

isoscalar vector bosons Y (or YL) coupling to the hypercharge current (or its left-handed part). At CLIC they would

+ S manifest [60, 61] themselves as resonance peaks in the e e" cross-sections for mz« YIYL 2Ebeam = 2 TeV. CLIC also is sensitive [60, 61] to indirect manifestations of Z*, Y, YL up to m = 5 TeV, provided their couplings to ff

pairs are not much smaller than gwfy = 0.64. A consistent candidate model for composite W* and Z °, the strongly coupled Standard Model, predicts correlated effects in e+e" -» e+e", pfpT, qq at CLIC energies [60]. i) In the limit of exact U(12) symmetry, the reaction e+e" -> e+ e" would have a large peak due to the formation of a YL -type resonance, whilst the other reactions such as e+e" n +¡i~, qq would show no effect, ii) Exotic t-channel exchanges in all three reactions e+e" e+e", n+n~, qq (due to leptoquarks, etc.) would lead to correlated dramatic deviations from the Standard Model predictions. Based on an exposure of 50 fb~\ CLIC would be sensitive to masses [60]

m £ 10-15 TeV (2.60)

of these exotic composites. Such bounds are well above the naturally expected mass range for partners of composite W * and Z° bosons. - 119 -

50 100 500 1000 5000 /s (GeV)

+ _ + Fig. 33 Possible values [57] of R as a function of ECm for e e -• n ¡i~ assuming Ae(l = 5 TeV

It should be re-emphasized that most composite modellers favour a compositeness scale A «» 1 to 10 TeV. If they are correct, there could be very dramatic effects in all channels at CLIC [57]. Figure 33 shows the cross-sections for e+e" -+ n+p~ given by contact terms of the form (2.47) with A = 5 TeV, including a form factor

to ensure damping when Ecm = 0(A). Needless to say, when Ecm is 0(A) and R becomes large, the form (2.47) of the additional interactions can be at best approximately correct. Nevertheless, there is no reason why R should not be at least as large as on the Z° peak. The only upper bound on R is provided by unitarity (2.29), and R could even be larger than on the Z° peak if there is a resonance with leptonic branching ratios larger than the 3% of the Standard Model Z°. If the composite modellers are correct, there could be plenty of events to study at CLIC!

3. CONCLUSIONS Each of the three accelerators we have studied has unique capabilities. We believe one should emphasize the complementarity of different machines, not a false competition between them. For example, gluinos can be studied at the LHC, but not at CLIC, whilst supersymmetric events are in general much cleaner at CLIC, and one can get more information from them. One can probe for a high-mass Z' at the LHC, but CLIC could be used as a Z' factory to study its properties in great detail. An ep collider produces leptoquarks singly through eq collisions, whilst the LHC and CLIC pair-produce them via conventional gauge interactions. Lepton substructure can best be probed at CLIC, whilst quark substructure can best be probed at the LHC. Moreover, there are many possibilities for physics beyond the Standard Model that we have not studied in this Working Group. Any comparison between different accelerators is necessarily subjective, since it depends on the selection of physics topics studied as well as on the quality of the measurements that can be made. This latter aspect cannot be adequately reflected in any numerical comparison. It is with these caveats in mind that we present Fig. 34, which brings together the discovery limits for our selection of physics topics at the different accelerators. We emphasize again that the numbers presented do not tell the whole story—for example, the superior quality of supersymmetric particle searches and measurements in e+e~ collisions is not apparent from Fig. 34. At this meeting, the possibility has been mentioned of running the LHC at a higher luminosity, L > 1034 cm"2s-\ for some special physics purposes. One clear example of the increased physics reach this would provide is in the search for a Z' through its \L+\T decay. Such an increase in luminosity would enable this search to

be extended to between 5 and 6 TeV. If the cross-sections for producing pairs of large-ET jets could be measured at such a high luminosity, then the sensitivity to quark compositeness would also improve. By combining the /t and jet measurements, the discovery limit for leptoquarks Do -» ft + jet decays could be increased. More detailed work - 120 -

1 TcV pp STRONG 700 GeV ep

850 GeV e*e- SUSY 400 GeV WEAK 350 6eV ' TcV GeV T 500 2 TeV 2 TcV TeV LEPTOaUARK 1.6 850 GeV 12 TeV

oo LU 20 TeV z A 13 TeV JJJ '\eq 30 TeV 00

o 5 TcV 1—1 1.5 TeV m * 2 TeV

Fig. 34 Summary of discovery limits for the selected physics topics at the different accelerators discussed

would be needed to analyse whether the ET signature for supersymmetry could be seen in the multiple event environment at L > 1034 cm" 2 s~ In the case of CLIC, we re-emphasize that some important aspects of the physics would be lost if the

33 2 1 luminosity L were to be < 10 cm" s" at Ecm = 2 TeV. The cross-sections for pair-production of scalar particles are so small that we can no longer be sure that sleptons or leptoquarks could be found. However, only a little of the

sensitivity to the compositeness scale would be lost: Aee, Ae)l would be reduced by < 10%, whilst the limit on Acq would be unchanged as this is dominated by systematic errors. Clearly any Z' peak would be interesting at L «

33 -2 -1 10 cm s , whilst the search for indirect effects of a Z' with mass greater than Ecm would be crippled by a substantial decrease in luminosity. It seems to us inevitable that a pp collider in the LHC/SSC range will be built in Europe and/or the United States. Such a machine certainly has very great physics capabilities, as can be seen from Fig. 34. It then becomes reasonable to ask whether a high-energy e+e~ collider such as CLIC could offer exciting additional physics, which is not available with such a pp collider. It should be clear from the bulk of this report and from Fig. 34 that the answer is a resounding Yes! Therefore we very strongly advocate the commitment of human and financial resources to a research and development programme for CLIC along the lines suggested [11] at this meeting. - 121 -

REFERENCES

[1] Proc. ECFA-CERN Workshop on a Large Hadron Collider in the LEP Tunnel, Lausanne and CERN, 1984, ed. M. Jacob (ECFA 84/85, CERN 84-10, Geneva, 1984). [2] Proc. 1982 Summer Study on Elementary Particle Physics and Future Facilities, Snowmass, Colo., 1982, eds. R. Donaldson, R. Gustafson and F. Paige (AIP, New York, 1983). Proc. 1984 Summer Study on the Design and Utilization of the Superconducting Super Collider, Snowmass, Colo., 1984, eds. R. Donaldson and J. Morfin (AIP, New York, 1985). Proc. 1986 Summer Study on the Physics of the Superconducting Super Collider, Snowmass, Colo., 1986, eds. R. Donaldson and J. Marx, in preparation. Supercollider Physics, Proc. Oregon Workshop on Super High Energy Physics, Eugene, Oregon, 1985, ed. D.E. Soper (World Scientific, Singapore, 1986). [3] P. Langacker, Phys. Rep. 72 (1981) 185. [4] M.E. Peskin, Proc. Int. Symp. on Lepton and Photon Interactions at High Energies, Kyoto, 1985, eds. M. Konuma and K. Takahashi (Kyoto Univ., Kyoto, 1985), p. 714. [5] J. Ellis, New frontiers in particle physics, eds. J.W. Cameron et al. (World Scientific, Singapore, 1986), p. 225. [6] Physics-1 Working Group, conveners G. Altarelli and D. Froidevaux, Vol. I of these Proceedings. [7] J. Ellis, Proc. Int. Symp. on Lepton and Photon Interactions at High Energies, Kyoto, 1985, eds. M. Konuma and K. Takahashi (Kyoto Univ., Kyoto, 1985), p. 850. [8] J.H. Schwarz, ed., 'Superstrings—the first 15 years' (World Scientific, Singapore, 1985). [9] For reviews, see J. Ellis, preprint CERN-TH.4439/86 (1986). H.-P. Nilles, preprint CERN-TH.4444/86 (1986). L.E. Ibáñez, preprint CERN-TH.4459/86 (1986). [10] G. Brianti, Vol. I of these Proceedings. [11] K. Johnsen, Vol. I of these Proceedings. [12] C. Albajar et al. (UA1 Collaboration), Events with large missing transverse energy at the CERN Collider: Mass limits on supersymmetric particles (Paper III), in preparation. [13] M. Davier, Searches for new particles, presented at the 23rd Int. Conf. on High-Energy Physics, Berkeley (1986). [14] A. Savoy-Navarro and N. Zaganidis, Vol. II of these Proceedings. See also S. Dawson and A. Savoy-Navarro, in Proc. Snowmass '84 (Ref. [2]). [15] C. Dionisi, Supersymmetric particles search at LEP 200, presented at the LEP 200 ECFA Workshop, Aachen, 1986. [16] R. Batley, Vol. II of these Proceedings. [17] F. Paige and S. Protopopescu, ISAJET Program Version 5.23, BNL 38034 (1986). [18] B. Mansoulié, Vol. II of these Proceedings. [19] H. Baer, K. Hagiwara and X. Tata, Phys. Rev. Lett. 57 (1986) 294. V.D. Angelopoulos et al., preprint CERN-TH.4578/86 (1986). [20] H. Baer et al., Phys. Lett. 161B (1985) 175. H. Baer, V. Barger, D. Karatas and X. Tata, Univ. Wisconsin, Madison, preprint MAD/PH/316 (1986). [21] Physics-3 Working Group, conveners Z. Kunszt and W. Scott, Vol. I of these Proceedings. [22] G. Arnison et al. (UA1 Collaboration), Phys. Lett. 132B (1983) 214. [23] H. Komatsu and R. Rückl, Vol. II of these Proceedings. [24] S.K. Jones and CH. Llewellyn Smith, Nucl. Phys. B217 (1983) 145. P.R. Harrison, Nucl. Phys. B249 (1985) 704. - 122 -

[25] R. Cashmore et al., Phys. Rep. 122C (1985) 275. [26] M.E. Peskin, Physics of e+e" colliders: Present, future, and far future, Stanford preprint SLAC-PUB-3495 (1984). [27] C. Dionisi and M. Dittmar, Vol. II of these Proceedings. [28] F. Zwirner, Vol. II of these Proceedings. [29] G. Costa et al., preprint CERN-TH.4675/87 (1987), and references therein. [30] F. Del Águila, M. Quirós and F. Zwirner, preprints CERN-TH.4506/86 and CERN-TH.4536/86 (1986). [31] V.D. Angelopoulos, J. Ellis, D.V. Nanopoulos and N.D. Tracas, Phys. Lett. 176B (1986) 203. G. Bélanger and S. Godfrey, Phys. Rev. D34 (1986) 1309 and TRIUMF preprint TRI-PP-86-18 (1986). I. Bigi and M. Cvetic, Phys. Rev. D34 (1986) 1651. M. Cvetic and B.W. Lynn, Stanford preprint SLAC-PUB-3900 (1986). P. Franzini and F. Gilman, Phys. Rev. D32 (1985) 237 and Stanford preprint SLAC-PUB-3932 (1986). A. Blondel, Vol. II of these Proceedings. P. Bagnaia, Vol. II of these Proceedings. P. Chiappetta and J.-Ph. Guillet, Vol. II of these Proceedings. R. Ansari et al. (UA2 Collaboration), preprint CERN-EP/87-04 (1987), submitted to Phys. Letters. D. Froidevaux, private communication (1987); see also Ref. [6]. F. Cornet and R. Rückl, Vol. II of these Proceedings. J. Ellis, Vol. II of these Proceedings. D. Schlatter, Vol. II of these Proceedings. S. Nandi, Phys. Lett. 181B (1986) 375. D. Treille, unpublished (1987). N. Tracas and P. Zerwas, Vol. II of these Proceedings. N. Wermes, unpublished (1987). V.D. Angelopoulos et al., Ref. [19]. J. Ellis and H. Kowalski, Vol. II of these Proceedings. R. Rückl and P. Zerwas, Vol. II of these Proceedings. W. Buchmüller, R. Rückl and D. Wyler, Vol. II of these Proceedings. N. Harnew, Vol. II of these Proceedings. D. Schaile and P. Zerwas, Vol. II of these Proceedings. E. Eichten, K.D. Lane and M.E. Peskin, Phys. Rev. Lett. 50 (1983) 811. A. Nandi, Vol. II of these Proceedings. G. Arnison et al. (UA1 Collaboration), Phys. Lett. B177 (1986) 244. B. Bagnaia, N. Wermes and P. Zerwas, unpublished (1987). R. Kleiss and P. Zerwas, Vol. II of these Proceedings. F. Cornet and R. Rückl, Vol. II of these Proceedings. F. Schrempp, Max-Planck-Inst. Munich preprint MPI-PAE/PTh 69/86 (1986), to appear in Proc. 23nd Int. Conf. on High-Energy Physics, Berkeley, 1986. B. Schrempp, F. Schrempp, N. Wermes and D. Zeppenfeld, preprint CERN-EP/87-34 (1987). N. Wermes, Vol. II of these Proceedings. P. Méry, M. Perrottet and F. Renard, Vol. II of these Proceedings. D. Bloch, Vol. II of these Proceedings and Strasbourg preprint CRN-HE 86-06 (1986). R. Kleiss, D. Bloch and P. Zerwas, Vol. II of these Proceedings. B. Schrempp and F. Schrempp, Vol. II of these Proceedings. U. Baur, M. Lindner and K.H. Schwarzer, Max-Planck-Inst. Munich preprint MPI-PAE/PTh 74/86 (1986), and preprint in preparation. K.H. Schwarzer, Vol. II of these Proceedings. - 123 -

LARGE CROSS SECTION PROCESSES

Z. Kunszta)

This report summarizes the result of the Physics 3 Working Group composed of

P. Aurencheb', F. Boppc', J. Chezed', W. Kittel8', Z. Kunszt, A.K. Nandif', M. Pohl9', W. Scotth), W.J. Stirling1', B.R. Webberj), J. Zsemberyd).

ABSTRACT This article concerns the global aspects of the large cross section processes at LHC and CLIC. We discuss some features of low PT physics, minijets, multijets and gauge boson productions at LHC. In the case of ep reactions low Q2 jet and heavy flavour production is considered. At CLIC we investigate two photon (two gauge boson) physics and study the global properties of the most common hadro• nic final states.

a) Theor. Phys., ETH, Höngg, Zürich, b) Lab. Phys. Part. Annecy (LAPP), c) Phys. Dep. Univ. of Siegen, d) Saclay (CEN), e) Fys. Lab. Kath. Univ., Nijmegen, f) Dept. of Nucl. Phys., Oxford, g) Höchen. Phys. ETH, Zürich, h) Dept. of Phys., Univ. of Liverpool, i) Dept. of Phys., Univ. of Durham, j) The Cavendish Lab., Cambridge. - 124 -

1. INTRODUCTION

This report concerns the so called large cross section processes at 1)2) CLIC and LHC . While the study of large cross section processes does not represent the main physics goal of the experiments at CLIC and LHC, the large cross section phenomena form an integral and unavoidable part of the measurements. For example, large cross section reactions can give problems for detector design, triggering and data aquisition simply due to the high absolute rate of the events. It has been pointed out by the Standard Model3' 4 ) and Beyond the Standard Model study groups, that CLIC and LHC will be powerful enough to probe the full dynamics of the electroweak theory. The ex• periments at CLIC and LHC will allow to study the Higgs mechanism up to Higgs mass values of 0(1TeV), they may show the likely incompleteness of the standard model, they may find evidence for the existence of new particles and compositeness of quarks and leptons. However, most of the reactions which allow to test the full dynamics of the standard model and which may show effects beyond the standard model have tiny cross sections. Therefore large cross section processes, even the tiny tails of some of their distri• butions, give serious background problems. A quantitative study of the various background mechanisms has to be done together with the description of the physics signals. Many interesting 3 ) 4 ) examples have been worked out by the Physics 1 and Physics 2 study groups. Nevertheless it is useful to investigate the global features of the large cross section processes on their own importance. The large cross section reactions can provide us also interesting new insights to our understanding of some basic questions, such as e.g. the description of jet production in the small x region (minijets), the validity 2 of the QCD improved parton model in a much larger Q -region, etc. The physics of large hadron colliders has been described in detail in previous LHC5' and SSC6 7' studies. An extensive compendium of the basic 8 ) super hadron collider physics has been given by Eichten et. al. Therefore in Section 2, which is devoted to LHC physics we shall focus mainly on questions specific to LHC and put emphasize on some new developments. For example since the Lausanne workshop5' important new data have been published by the UA19', UA410' and UA511' experiments on various aspects of low p 9 ) T physics. Also, the UA1 minijet data have triggered interesting theoretical activity concerning minijet physics at LHC and SSC.

In Section 2.1 we shall consider low pT physics in view of these new development. In Section 2.2.we turn to the study of hard processes. Section 2.2.1 gives a discussion of parton luminosities. In Section 2.2.2 jet physics is briefly reviewed. New development in this field is that all the four jet QCD processes have been calculated and the multijet backgrounds - 125 -

can be described more quantitatively. A short discussion of large mass diffractive scattering is taken up in Section 2.2.3. Questions of gauge boson productions are considered in Section 2.2.4. In this regard an important observation is that the associated produc• tion of W bosons with two jets gives severe backgrounds to W-pair produc• tion. In Section 3 we are concerned with ep collisions at LHC. At the 12 ) Lausanne Workshop Altarelli et.al. have given a detailed description of 2 deep inelastic phenomena. Therefore we shall focus on the low Q processes. 2 Low Q jet production and heavy flavour production have been calculated for LHC energies. The rates are rather large, for example they give overwhel• mingly large background to Higgs production. Section 4 is devoted to large cross section at CLIC. First we discuss lepton pair production with the 2y mechanism which has remarkably large rate. Single W and Z productions have also large cross sections. An interesting new feature of e+e physics at CLIC energies that the two photon mechanism has to be generalized to two gauge boson mechanism. The two gauge boson scatte• ring mechanism can give the dominant contribution to Higgs production or to certain new particle production. Furthermore the two gauge boson reactions can give difficult background contributions to physics signals of the e+e~ annihilation.reactions (known examples are given by heavy lepton and wino production). An important characteristic of the phvsics of CLIC is given by the properties of e+e annihilation final states.(Section 4.2). Using QCD Monte Carlo fragmentation model we have considered hadron and jet multi• plicities, the characteristics of light, heavy quark jets and jets given by W-decays. New feature of jet physics in the few TeV region is that "the jets within jets" structure predicted by parton branching fragmentation might become observable. In Section 5 we make few comments on the jet fragemen- tation Monte Carlo programs and new calculational techniques developed recently. Some tentative conclusion is given in Section 6.

2. LARGE CROSS SECTION PROCESSES AT LHC

2.1 Low pT physics

2.1.1 Total, elastic, inelastic and diffractive cross sections

l0 12 Low pT physics has of O(10 -10 ) times larger rate than the rates of the interesting short distance phenomena. Therefore we should have some quanti• tative description of the simplest aspects of low p^ reactions at LHC ex• periments, such as the values of total, elastic and diffractive cross

sections, charged multiplicity distributions, rapidity distribution and pT distributions. Low p^ physics cannot be described with perturbative OCD. The best one can do to fit the parameters of "reasonable models" to the - 126 -

available low data and to try to extrapolate them to LHC energies. New

data have been obtained recently by the UA19^ and UA410' experiments on elastic and inelastic scattering at Vs" = 540 GeV. The UA5 group measured rather accurately the ratio

aTOT(900 GeV) / aTQT(200GeV)

in the pulsed collider run of the CERN-SppS. The extrapolation of the value of the total cross section in the multi TeV energy domain could be done with a new fit to the ISR and SppS data using the form proposed by Amaldi et.al.

v -v -v- logs) °T0T = VV A3E ±A4E (2,1)

Alternatively from analyticity properties of the scattering amplitude we can write an explicit analytic and crossing symmetric asymptotic form of the — 14) even signature elastic pp,pp amplitude *

2 A+B(log s/s -irr/2) F (s) = is -- (2.2)

2 1+C(log S/SQ-ÍIT/2)

In a recent analysis this asymptotic form has been supplemented with terms

given by A2~f (even signature) and -w (odd signature) Regge poles. It has been found that the existing constraints are not strong enough to choose between 2 X,n s or const, asymptotics. Due to these ambiguities the extrapolation to LHC energies is rather uncertain. The extreme values of the fits of Ref [14] give the bounds

80mb < aT0T(v"s = 17TeV) < 155mb (2.3)

2 where the upper bound corresponds to In s and the lower bound to const, asymptotics as s + ». It is emphasized that the lower values are pre- fered.**

*) The total cross section is given by the optical theorem

OyOT^^ ) = ¿(ImF+±ImF~) .

**) A recent compilation of cosmic ray data indicates that it is not possible

to improve the upper limit with the help of cosmic ray data.15^ - 127 -

Slightly more detailed models are the eikonal models, which are able to implement additional constraints given by unitarity. Bourrely et.al.16' have found good fit to the ISR, FNAL and SppS data and they predict for the total cross section* at LHC the value

aT0T(/s=17TeV) s 105mbarn (2.4)

Let us note that a fit using a model based on Gribov calculus with super 18 Ï

critical Pomeron gives similar value ^T0T(Vs=17TeV) s 100mb. The energy dependence of the total cross section predicted by these approaches is summarized in Fig. 1. All the models discussed above are able to predict also the s-dependence of the ratio

TOT elastic (2.5) A TOT 4TTB

200 SSC

180

160

140

120

- 100

80

60

40

20

10 100 1000 10000 100000 /s (GeV) _ Fig.1 Energy dependence of the total pp or pp cross sections above /s" = 100GeV, the solid and dashed lines are the two extreme predictions of Ref [14] assuming asymptotic behaviour A &n2s (solid line) and TOT CTTOT ~ const (dashed line). The dashed dotted line is a prediction by Ref [18] based on a supercritical Pome• ron and Regge calculus. This curve is in good agreement also with the prediction of Ref [17] obtained in a Chou-Yang type approach.1

*) These models can predict also the shape of the |t| distribution of the elastic scattering up to |t| values of few (GeV)2. (See the contribu• tion of Bourely and Martin to the Lausanne Workshop17)). - 128 -

where B is the slope parameter of the peak of the elastic cross section. The predicted central value of the elastic cross section is

CTelastic(v's = 17TeV) s 28mbarn (2.6)

2 For the purpose of distinguishing between the In s and const, asymptotic behaviour the total cross section has to be measured rather accurately at LHC. This is a formidable task. The Cb-interference method, which is capable to eliminate the large uncertainties due to absolute normalization, requires the measurment of elastic scattering at extreme small angles. The Coulomb - region appears at angles

0int < 4 yradian (2.7)

The feasibility to measure elastic scattering at LHC from the Cb-region up 2 to a momentum transfer of several GeV has been discussed by Haguenauer and 2 0 ) Matthiae at the Lausanne Workshop. The inelastic diffractive cross sections are not predicted by the mini• mal analiticity models based on analicity, crosssing and unitarity. In order to estimate e.g. the cross section of diffractive dissociation we should rely on some non-minimal models. The ÜA4 collaboration at the CERN SppS re• cently published new data on single diffractive dissociation.10'* They have measured the cross section of single dissociation in pp collision at cm. energy >J~s = 546GeV, for M/Vs á 0.22 and they found the value

c?sd = 9.4 ± 0.7 mbarn. Comparison with the ISR data shows that the increase of awith energy is slower than the increase of the elastic and total cross sections. A simple linear extrapolation of the ratio A ,/A . to LHC Su Gj.clStXC energies gives the estimate**

asd(17TeV) = 10-11 mbarn (2.8)

*) In the case of single diffractive dissociation one of the proton goes down the beam pipe almost undisturbed, while the other one is excited to a hadronic state of mass M such that M/Vs" < 0.3 or so. 2 1 ) **) In a recent paper a ^ has been determined in Gribov calculus with critical Pomeron. In this formalism the inelastic single diffractive cross section is related to the triple Pomeron diagram, which can be estimated in Gribov calculus. It has been found that A , = 10 mbarn. - 129 -

2.1.2 The underlying events

Notwithstanding the poor theoretical understanding, there is a pres•

sing need to have a detailed description of low pT phenomena. For example the UA5 group developed its own Monte Carlo generators to describe their 2 2) 23) data. General purpose Monte Carlo programs such as ISAJET or 24)

PHYTHIA also have a phenomenological treatment of the low pT phenomena. It is difficult to estimate the region of validity of these models in the lack of solid theoretical basis. In view of this we have found more con• venient to use some simple theoretical framework where we are able to have control of the numerous (sometimes adhoc) assumption required to get a model

which allows to describe the various aspects of existing low pT data and which has smooth transition to the semihard region. How "minijet physics" is emerging from the underlying beam jets is important. It influences the

ET and missing ET resolution. We have considered the multistring model proposed in Refs [25]. Recent developments can be found in Refs [26] . The main results obtained by extrapolating these multistring type models to 2 7) LHC energies can be summarized as follows. The predicted charged and total hadron multiplicity at LHC are

^69 and ^ 113 (2.9)

can The energy dependence of cyi be seen in Fig. 2.

90 LHC / 00 — l / 70 - f /

60 - / <(!<„>= 69.5

SO / <>W= "30 spfs Y 40 I/ 30

i 01 1 10 100 /s (TeV)

Fig.2 Energy dependence of the averaged charged hadron multiplicity in proton- proton collisions in the multistring model. - 130 -

The observed rise of the plateau of the rapidity distribution continues (see Fig. 3). The KNO distribution keeps broadening. Multiplicity fluctuations will be large: events with n_ch around three times the average multiplicity will occur five times more frequently than at energy √s = 630 GeV. There will be substantial transverse energy extending smoothly over the rapidity interval. The total transverse energy of the underlying event depends crucially on the amount of semihard transverse momentum introduced in the model (see Fig. 4). A smooth transition to the perturbative QCD regime can be arranged. The E_T distribution is rather sensitive to how the semihard features are introduced with some (ad hoc) assumptions,* and should be considered only as a tentative estimate.

Fig.3 Rapidity distributions of charged particle multiplicities at various energies from √s = 200 GeV up to √s = 40 TeV.

Fig.4 Transverse energy spectrum of unbiased events at √s = 17 TeV. E_T is the transverse energy measured in the pseudo-rapidity interval [−3,+3] (dashed line). The solid line shows the result when the hard scattering component is artificially turned off, so that the average momentum ⟨p_T⟩ of individual hadrons keeps its low energy value ⟨p_T⟩ = 0.35 GeV/c.

*) A different approach, based on multiple hard scattering, has been proposed by Sjöstrand.28)

Recent data on multiplicity distributions, obtained by the NA22 and UA5 collaborations11) at CERN, are very well described by negative binomial distributions.30) It has been found that the charged multiplicity distributions can be parametrized successfully with the simple analytic expression

P_n = [k(k+1)···(k+n−1)/n!] · [ (n̄/k)/(1+n̄/k) ]ⁿ · 1/(1+n̄/k)ᵏ ,   (2.10)

with two parameters n̄ and k, where n̄ is the average charged multiplicity and 1/k is related to the dispersion D by

D²/n̄² = 1/n̄ + 1/k .   (2.11)

In Fig. 5 we can see a compilation of the 1/k values at various energies. The experimental points have been obtained from fits to all available p̄p and e⁺e⁻ data on multiplicity distributions.

Fig.5 Energy dependence of the 1/k parameter of the negative binomial multiplicity distributions obtained by fits to the available p̄p and e⁺e⁻ charged multiplicity distribution data. The solid line is a linear fit on this 1/k vs ln s plot to the p̄p data points; similarly, the dashed line is a linear fit to the e⁺e⁻ data points. The dash-dotted line is the prediction of the Monte Carlo (Webber) based on angular ordered jet calculus, the dashed line is the prediction of the Lund Monte Carlo model for e⁺e⁻ annihilation.

Fig.6 Charged multiplicity distribution predicted by negative binomials for pp collisions at √s = 17 TeV (dash-dotted curve), charged multiplicity prediction of the coherent branching QCD fragmentation model for e⁺e⁻ annihilation at √s = 2 TeV (+ points) and its fit with negative binomials (dashed line). The convolution of the two distributions is also indicated (solid line).

Assuming the validity of the negative binomial distribution we can predict the charged multiplicity distribution at LHC energies. The dash-dotted curve of Fig. 6 gives the multiplicity distribution at √s = 17 TeV in the rapidity interval |y| < 3, obtained from the "negative binomial law" assuming that

n̄ = 40 and 1/k = 0.70 .   (2.12)

The value of 1/k has been estimated by linear extrapolation of its ln s dependence in the given rapidity interval (see Fig. 5), while the value of n̄ = ⟨n_ch⟩ comes from a second-order extrapolation (see eq. (2.9)).
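As an illustration of how eqs. (2.10)-(2.12) can be used in practice, the short Python sketch below (not part of the original analysis; the numerical checks are purely illustrative) evaluates the negative binomial law for the LHC parameters of eq. (2.12) and verifies the normalization, the mean and the dispersion relation of eq. (2.11).

    import math

    def neg_binomial(n, nbar, k):
        # P_n of eq. (2.10), evaluated in log form to avoid overflow at large n
        log_p = (math.lgamma(n + k) - math.lgamma(k) - math.lgamma(n + 1)
                 + n * math.log(nbar / k) - (n + k) * math.log(1.0 + nbar / k))
        return math.exp(log_p)

    nbar, k = 40.0, 1.0 / 0.70          # LHC extrapolation of eq. (2.12), |y| < 3
    p = [neg_binomial(n, nbar, k) for n in range(400)]
    mean = sum(n * q for n, q in enumerate(p))
    mean2 = sum(n * n * q for n, q in enumerate(p))
    print(sum(p), mean)                               # ~1 and ~40
    print((mean2 - mean**2) / mean**2, 1/nbar + 1/k)  # both sides of eq. (2.11)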

2.1.3 Minijets

As one measures proton-proton collisions at higher and higher energy, the x value for the production of jets having p_T larger than some fixed value decreases. However, the gluon wave function increases as x → 0, therefore the amount of jet production should increase rapidly. Indeed, the UA1 collaboration recently reported data for inclusive production of jets having p_T > 5 GeV and pseudo-rapidity |η| < 1.5 at √s = 900 GeV. They have found the value Δσ = 14 mb,* which says that ≈ 18% of all events contain minijets and that the rise of the minijet cross section is much faster than the rise of the total cross section (see Fig. 7). The emergence of minijets had been predicted.32) One can argue that even if the active partons are much less energetic than the spectator partons, the prediction of the cross section from perturbative QCD remains valid if E_T is large in comparison with the hadronic scale. A simple parton level estimate of the UA1 inclusive minijet cross section gives ≈ 5 mb, a significantly too small value. However, there are large experimental and theoretical uncertainties (jet fragmentation, K-factor, etc.). Also the value of the gluon wave function at small x is rather sensitive to the initial condition used to obtain the solution of the Altarelli-Parisi equation.8)** All this indicates that it is difficult to separate minijet physics from soft hadron physics.33) At LHC the x-region will decrease from 2×10⁻² at the Sp̄pS to x ≈ 7×10⁻⁴, and using QCD we expect that ≈ 40% of all events will contain UA1-type minijets. It has been suggested35) that we can achieve better theoretical control of the ambiguities if we consider the two-jet inclusive cross section, with each

Fig.7 Comparison of the jet cross-section for events having minijets with p_T > 5 GeV/c and |η| < 1.5 with the total cross-section σ_TOT and the non-single-diffractive cross-section σ_NSD, as a function of √s (Fermilab/ISR, UA4, UA5 and UA1 data).

*) This number has a large systematic error, since it is difficult to measure the transverse momentum of a minijet: the p_T distribution is a sharply changing function and there are problems in separating the jet particles from the "background" particles.
**) The small-x behaviour of the gluon wave function is determined by Regge theory,34) where the diffractive exchange gives a behaviour x⁻ᴶ with J > 1.

jet having transverse momentum greater than Q_c and fixed longitudinal momentum fractions x_1 and x_2 of the corresponding beam momenta.* Then the rapidity difference between the two jets is given by

y = ln( x_1 x_2 s / Q_c² ) .   (2.13)

The single "bare Pomeron" limit corresponds to the region where α_s(Q_c) is small but y·α_s(Q_c) ≈ O(1). The exchanged hard gluon can radiate gluon minijets. The resummation of the (α_s(Q_c)·y)ⁿ type leading logarithmic corrections gives the "bare QCD Pomeron" contribution to the y-distribution of the two-jet inclusive cross section.

At LHC, with Q_c ≈ 5–10 GeV and 2Q_c/√s ≈ 10⁻³, the rapidity difference can extend up to y ≈ 6, which would give an opportunity to test the validity of perturbative QCD in this new regime. We have noted before that the minijet cross section as given by the perturbative QCD calculation increases faster than the total cross section. This is not a contradiction. The perturbative QCD calculation is proportional to the average number of hard interactions. At very high energy the minijet cross section calculated from perturbative QCD may become larger than the total hadronic cross section, provided that the number of hard interactions is large. The physical inelastic cross section in this case is obtained by requiring at least one hard scattering.36) The summation over the contributions from independent multiple hard scatterings may result in self-shadowing. A similar effect is encountered in inelastic collisions of high-energy π/K scattering on nuclei. In this picture the size of the inelastic hard scattering cross section is not given any more by the low-p_T cut-off itself. Its size is controlled instead by the typical transverse distance between parton pairs within the hadrons. Asymptotically, multiple hard scattering may saturate the total inelastic cross section, but in a region which is not accessible to LHC. Minijet physics at LHC will give us important new information in our attempt to reach a deeper understanding of the region of validity of perturbative QCD.
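A minimal numerical illustration of eq. (2.13): the fragment below is our own sketch, with the value of Q_c and the momentum fractions chosen purely for illustration; it shows that rapidity separations of order 6 are indeed reached for percent-level momentum fractions at √s = 17 TeV.

    import math

    def rapidity_span(x1, x2, sqrt_s_gev, qc_gev):
        # eq. (2.13): y = ln(x1 * x2 * s / Qc^2)
        return math.log(x1 * x2 * sqrt_s_gev**2 / qc_gev**2)

    print(rapidity_span(0.01, 0.01, 17000.0, 10.0))   # ~5.7, i.e. y of order 6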

*) We thank M. Jacob for a discussion of this point. - 135 -

2.2 Large p_T physics

2.2.1 Parton luminosities

The experience of the UA1 and UA2 collaborations at the CERN Sp̄pS collider31) has shown that the hard scattering events have a simple behaviour at high energies and that most of the important aspects of hard scattering reactions can be described in terms of parton processes. The hadron beams of the hadron colliders can be considered as unseparated broad-band parton beams consisting of a variety of parton species: gluons (g), quarks u, d, s, c, b, t, and electroweak gauge bosons γ, W, Z. It is useful to consider the relative luminosities of the individual parton species. Gluon and quark luminosities have been considered in great detail by EHLQ.8) We recall that the luminosity functions are defined in terms of the parton number density functions f_{a/A}(x,Q²) as follows:

dL_{ab}/dM = (2M/s) ∫_{M²/s}^{1} (dx/x) f_{a/A}(x,Q²) f_{b/B}(M²/(xs),Q²) ,   (2.14)

where L is the parton-parton luminosity per unit proton-proton luminosity and M denotes the parton subenergy (see Fig. 8),

M² = x_a x_b s .   (2.15)

Then the hard scattering cross section can be given as

dσ^{AB}/dM = Σ_{a,b} (dL_{ab}/dM) σ̂_{ab}(M) .   (2.16)

Fig.8 Diagrammatic representation of the QCD improved parton model.
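The sketch below illustrates how the luminosity function of eq. (2.14) can be evaluated in practice. It is only a schematic example of ours: the "gluon density" is an arbitrary toy shape, not the EHLQ parametrization used for the figures, and the integral is done with a simple trapezoidal rule in ln x.

    import math

    def toy_gluon(x, q2):
        # arbitrary illustrative shape for the gluon number density; NOT a fitted set
        return 3.0 * (1.0 - x) ** 5 / x

    def dlum_dM(M, s, f_a, f_b, n=2000):
        # dL_ab/dM of eq. (2.14): (2M/s) * integral over dx/x of f_a(x) f_b(M^2/(x s))
        tau = M * M / s
        t_lo, dt = math.log(tau), -math.log(tau) / n
        total = 0.0
        for i in range(n + 1):
            x = math.exp(t_lo + i * dt)
            w = 0.5 if i in (0, n) else 1.0
            total += w * f_a(x, M * M) * f_b(tau / x, M * M)
        return (2.0 * M / s) * total * dt

    s = (17.0e3) ** 2                        # GeV^2, LHC
    for M in (100.0, 1000.0, 5000.0):        # parton subenergy in GeV
        print(M, dlum_dM(M, s, toy_gluon, toy_gluon))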

Although in principle the quark and gluon wave functions can be calculated in QCD with non-perturbative methods such as lattice QCD, practical calcu• lations are not yet available. Therefore the quark and gluon wave functions

have to be extracted at some initial Q_0² value from the existing hard scattering data such as deep inelastic lepton-hadron scattering, charm production, jet production, etc.* However, the wave functions of the electroweak gauge bosons can be determined by perturbation theory in the Weizsäcker-Williams approximation. The longitudinal and transverse wave functions are obtained in the form

f_{γ/f}(x,Q²) = (α/2π) e_f² [1+(1−x)²]/x · ln(Q²/m_f²) ,   (2.17)

f_{G_L/f}(x) = (α_w/4π) η_G (1−x)/x ,   (2.18)

f_{G_T/f}(x,Q²) = (α_w/8π) η_G [1+(1−x)²]/x · ln(Q²/m_G²) ,   (2.19)

where α_w denotes α/sin²θ_W and G = W or Z; η_G = 1 for G = W, while for G = Z it is expressed in terms of the weak isospin T_3 and the electric charge e_f of the fermion f. θ_W is the Weinberg angle.37) In Fig. 9 various parton luminosities are plotted as a function of √ŝ at LHC and CLIC energies. For pp collisions at √s = 17 TeV

the gluon-gluon, quark-quark (summed over all quark species) and W_T W_T luminosities are given, while for e⁺e⁻ collisions at √s = 2 TeV we can see the e⁺e⁻, γγ and W_L W_L luminosities. It is instructive to compare the relative luminosities of LHC and CLIC. For example, both at CLIC and at LHC the dominant mechanism of heavy Higgs production is W_L W_L fusion.

*) This initial procedure has to be updated regularly, as more and more data become available. Evolution to Q² > Q_0² values is predicted by QCD with the help of the Altarelli-Parisi equations.


We can learn that a somewhat smaller mass region is available at CLIC (with the same integrated luminosity) in comparison with LHC; however, LHC has large qq̄ luminosities which give severe background problems. In Fig. 10 we plotted the ratio of the parton luminosities of p̄p and pp collisions at √s = 17 TeV as a function of √ŝ. This figure confirms nicely the earlier conclusion5,8,38) that p̄p colliders are not competitive with pp colliders, since the increase in the qq̄ parton luminosity cannot compensate the approximately two orders of magnitude smaller luminosity of the p̄p collider. Finally, in Fig. 11 we plotted the ratio of the SSC and LHC parton luminosity functions. Assuming that LHC may remain competitive up to a factor of 4 in this ratio, we see that the energy is crucial above √ŝ ≈ 300–1000 GeV. It appears that LHC is less competitive in gg and W_L W_L initiated processes, e.g. in heavy gluino (m_g̃ > 300 GeV) and in heavy Higgs (M_H > 200 GeV) searches. This conclusion has been confirmed with the direct evaluation of the cross sections as well.

Fig.9 Parton luminosities as a function of √ŝ = M at LHC and CLIC (see eqs. (2.14)–(2.16)).

Fig.10 Ratio of the qq̄ parton luminosities of p̄p vs pp collisions at √s = 17 TeV as a function of √ŝ.

Fig.11 Ratio of the gg, W_L W_L, gq, qq and qq̄ luminosities of pp collisions at 40 TeV vs 17 TeV as a function of √ŝ.
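For orientation, the effective boson distributions of eqs. (2.17)-(2.19), in the form reconstructed above, can be coded directly. The snippet below is only a sketch of ours: the couplings are set to illustrative values, η is taken equal to 1 (the W case), and no claim is made about the precise normalization used to produce Fig. 9.

    import math

    ALPHA, SIN2_W, M_W = 1.0 / 128.0, 0.23, 80.0     # illustrative input values
    ALPHA_W = ALPHA / SIN2_W

    def f_gamma(x, q2, ef=1.0, mf=0.000511):
        # eq. (2.17): Weizsaecker-Williams photon distribution in a fermion of charge ef
        return ALPHA / (2 * math.pi) * ef**2 * (1 + (1 - x)**2) / x * math.log(q2 / mf**2)

    def f_WL(x):
        # eq. (2.18) with eta = 1: longitudinal W distribution
        return ALPHA_W / (4 * math.pi) * (1 - x) / x

    def f_WT(x, q2):
        # eq. (2.19) with eta = 1: transverse W distribution
        return ALPHA_W / (8 * math.pi) * (1 + (1 - x)**2) / x * math.log(q2 / M_W**2)

    for x in (0.01, 0.1, 0.5):
        print(x, f_gamma(x, 1.0e6), f_WL(x), f_WT(x, 1.0e6))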

2.2.2 Jets, QCD multijet background

Jets at supercollider energies will constitute a formidable conventional background to new discoveries, so it is important that they are rather well understood. The jet rate is very large since jets can be produced by gluon-gluon scattering: gluons have the largest parton luminosity (see Fig. 9) and the point-like cross section is largest for gluon-gluon scattering.* However, jets are not understood in all details. For example, we cannot distinguish gluon jets from quark jets. The leading order calculations of the cross section of well separated hard jets have ambiguities due to the unknown "correct" scale of the Q²-evolution. We can expect realistically that the scale problem can be solved for 2-jet production.39) However, the calculation of the next-to-leading corrections to 3 jet, 4 jet, etc. production is prohibitively difficult.

In Fig. 12 the p_T distributions for two-jet production are plotted for gluon, light quark, charm and top quark jets. The rates are large, giving Δσ ≈ 1 nb at p_T > 0.5 TeV, Δσ ≈ 50 pb at p_T > 1 TeV and Δσ ≈ 0.7 pb at p_T > 2 TeV. The contribution of charm or top production at large p_T values is about three orders of magnitude smaller than the total jet rate.

*) A more detailed study of two and three jet production can be found in EHLQ.8)


Fig.12 Transverse momentum distribution of one of the jets for two-jet production in pp collisions at √s = 20 TeV in the pseudorapidity interval |η| < 1.5. A sum over the light quark flavours is done. Large p_T production of heavy flavours is also indicated; m_t = 35 GeV has been used for the top mass.40)

Since the two-jet rate is very high, the multijet rates are also significant. In principle multijet spectroscopy can be a useful method in the search for new particles. For example Z′, Higgs boson and heavy quark pair production all give multijet final states (Fig. 13):

p + p → Z′ + X → q + q̄ + X → 2 jets + X ,   (2.21)

p + p → H + X → W⁺ + W⁻ + X → 4 jets + X ,   (2.22)

p + p → Q + Q̄ + X → 6 jets + X .   (2.23)

The question is whether the 2, 4, 6 jet final state contributions of these reactions can be observed above the background of direct QCD multijet production.41) The answer is no. Although, in principle, multijet production can be calculated in perturbative QCD, with the increase of the number of jets the calculation becomes more and more complicated. The cross sections of the 2→2, 2→3 and 2→4 subprocesses are known exactly.

Fig.13 Examples of new particle productions giving multi jet final states

The recent calculations of the 2 → 4 subprocesses presumably represent the limit of feasibility of exact calculations.42) They have required the evaluation of more than 500 Feynman diagrams, and 29 different parton subprocesses can contribute to the four-jet production cross section. Exact calculations for five, six, ... jets are very tedious. Fortunately, due to these recent developments the 2 → n cross section can be approximated with some simple analytic expressions.44,41) There are two important observations. First, it is enough to calculate the n-gluon amplitude, since the physical cross section can be acceptably approximated with the formula

σ^{pp→n jets} ≈ ∫ F(x_1) F(x_2) σ̂^{gg→ng}(x_1x_2s) dx_1 dx_2 ,   (2.24)

where F(x) is given in terms of the gluon and quark wave functions43)

F(x) = G(x) + (4/9) Q(x) .   (2.25)

The next point is a suggestion by Parke and Taylor44) that the dominant helicity amplitude of the n-gluon process can be approximated by the expression

|M_n|² ≈ g^{2n−4} 2^{4−n} [N^{n−2}/(N²−1)] (12)⁴ Σ_{perms} 1/[(12)(23)···(n1)] ,   (2.26)

where (ij) denotes the dot product (k_i·k_j) and "perms" denotes a sum over the (n−1)!/2 distinct terms obtained by permuting 1, 2, 3, ..., n. This expression

for n = 4 (2 → 2) and n = 5 (2 → 3) reproduces the known exact results.45) For n = 6 (2 → 4) it typically agrees within 10% with the known exact result.46) We have used this formula to estimate the multijet background up to 6-jet production; a minimal numerical sketch of eq. (2.26) is given below. A possible consistency check is given by the comparison of the theoretical calculation at √s = 630 GeV with the UA1 data on multijets.9,31) The result is given in Fig. 14 for jets having p_Ti > 15 GeV, lego-plot distance ΔR_ij > 1, pseudo-rapidity |η_i| < 2.5 and p_T1 > 40 GeV. Given the uncertainties of such a calculation the agreement is acceptable. In Fig. 15 we plotted the invariant mass distributions for 2 jet, 3 jet, ..., 6 jet production. It may provide some reference values when an estimate is needed for some multijet background. Fig. 16 shows the relative rate of


Fig.14 UA1 data on the jet multiplicity distribution for integrated jet cross sections and its comparison with perturbative QCD estimates. Final state parton cuts as shown.

Fig.15 Invariant mass distributions for n-jet final states at √s = 17 TeV, calculated using QCD subprocesses ab → c_1...c_n, a, b, c_i = q, g, with an approximate form for the matrix element for n > 4. Final state parton cuts as shown.

2 jet, 3 jet, ..., 6 jet production at LHC energies with jets having hard transverse momentum and separation as given in this Figure. We conclude that the search for new physics with the method of multijet spectroscopy has a severe background from large QCD multijet production. We should remark that Monte Carlo programs based on the QCD parton branching mechanism are also capable of giving estimates of multijet production. However, the calculation of the tail of a multijet distribution in some small part of the phase space may require a non-trivial effort. Obviously, it is much simpler to use the analytic expression of eq. (2.26) in a first estimate. In the discussion of minijet physics it has been noted that multiple hard scattering may occur at supercolliders in the smaller-x region. For example, double parton scattering can also give four-jet final states.36,47) In Fig. 17 we plotted total cross section values of QCD 2, 4, 6 jet production and of double parton scattering with jets having transverse momentum larger than some p_T cut, as a function of this cut, and having jet-jet angles θ_ij > 50°.
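For completeness, the approximate n-gluon matrix element of eq. (2.26), in the form reconstructed above, can be evaluated numerically as in the sketch below. This is our own illustration: the overall normalization should be checked against Parke and Taylor44) before any quantitative use, and the four-momenta are assumed massless, with legs 1 and 2 taken as the incoming gluons.

    import itertools

    def dot(p, q):
        # Minkowski product with metric (+,-,-,-); p = (E, px, py, pz)
        return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

    def parke_taylor_sq(momenta, g=1.0, N=3):
        # approximate |M|^2 of eq. (2.26) for n massless gluons
        n = len(momenta)
        d = [[dot(momenta[i], momenta[j]) for j in range(n)] for i in range(n)]
        perm_sum = 0.0
        for perm in itertools.permutations(range(1, n)):   # fix leg 0
            order = (0,) + perm
            denom = 1.0
            for a in range(n):
                denom *= d[order[a]][order[(a + 1) % n]]
            perm_sum += 1.0 / denom
        perm_sum *= 0.5                                    # remove reflection double counting
        pref = g**(2*n - 4) * 2.0**(4 - n) * N**(n - 2) / (N**2 - 1)
        return pref * d[0][1]**4 * perm_sum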

2.2.3 Large mass diffractive production

Hard scattering might also be studied in large mass diffractive production processes (M²/s < 0.1).48) It may be useful to emphasize that if there is a significant diffractive component in the hard scattering process then it is included in the standard perturbative QCD cross section: large mass diffractive production is part of the perturbative production. In a recent paper, Berger et al.49) have emphasized that the crucial question is how big the diffractive part is. Using the Pomeron model of diffractive scattering they estimate the diffractive component to be a few percent. Production cross sections of systems which are not coupled directly to gluons (e.g. W and Z) are expected to have a relatively smaller diffractive component. In view of this we can expect a decrease of the signal/background ratio in the diffractive component in many cases, since the most severe background contributions are usually given by gluon initiated reactions. We conclude that the large mass diffractive mechanism is not competitive with the standard perturbative production of large mass systems. Its study, however, could help us to understand the structure of the Pomeron. Heavy flavour and jet diffractive production at LHC energies has been studied in our study group (see Ref. 50).

Fig.16 Jet multiplicity distributions for the integrated jet cross sections of Fig. 15 with the same cuts. The open circles correspond to an additional cut of M > 1 TeV, i.e. large mass final states.

Fig.17 p_T distributions of 2 jet, 4 jet, 6 jet production at √s = 17 TeV given by perturbative QCD with the cuts shown on the Figure. 4 jet production by the double parton scattering mechanism is also indicated (dashed line).

2.2.4 Gauge boson production

In view of the difficulties concerning the usefulness of multijet spectroscopy, we should also ask what is the promise of the spectroscopy given by gauge boson and gauge boson plus jets final states. The number of W and Z bosons produced at LHC is very high (for integrated cross sections see Table 2.1). At √s = 17 TeV and an integrated luminosity of 10⁴⁰ cm⁻², approximately 3 × 10⁸ Z⁰ bosons and 7 × 10⁸ W bosons are produced.

Table 2.1
Cross section values of W, Z and gauge boson pair production at √s = 17 TeV. y is the rapidity. For more details see EHLQ.8)

                 W⁺       W⁻       Z⁰       W⁺W⁻      W±Z      ZZ
  total          62 nb    43 nb    35 nb    80 pb     32 pb    11 pb
  |y| < 1.5      22 nb    18 nb    12 nb    22 pb      8 pb     4 pb
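The event numbers quoted above follow from Table 2.1 by a simple unit conversion (1 nb = 10⁻³³ cm², 1 pb = 10⁻³⁶ cm²), e.g. as in the following fragment (our own illustration):

    INT_LUMI = 1.0e40                                # integrated luminosity in cm^-2

    sigma_cm2 = {                                    # total cross sections of Table 2.1
        "W+": 62e-33, "W-": 43e-33, "Z0": 35e-33,    # nb converted to cm^2
        "W+W-": 80e-36, "WZ": 32e-36, "ZZ": 11e-36,  # pb converted to cm^2
    }

    for channel, sigma in sigma_cm2.items():
        print(channel, sigma * INT_LUMI, "events")   # e.g. Z0 -> 3.5e8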

Although these numbers compare favourably with the number of Z⁰'s produced at LEP in a one year run (~10⁷), W and Z spectroscopy is not an obvious bonus at LHC. A factor of ≈ 3–4 would be lost in the rate if central production (|y| < 1.5, see Table 2.1) is required, but more important is that the signature of W and Z carries a large penalty. Because of the much larger QCD background (see Figs. 12, 15-17) it may be prohibitively difficult to reconstruct the W's and Z's from their decays into jets. The leptonic decays W → eν, μν can be reconstructed only if there are no other important sources of missing p_T than the neutrino. Consequently, the only clear signal is the decay Z → e⁺e⁻, μ⁺μ⁻, each with ~3% branching ratio.* It has to be emphasized, however, that the leptonic decays of the Z produce a very clean signal with small background from detector problems such as e/π separation. In view of this we may conclude that the study of the production of gauge bosons cannot be a major goal at LHC,** and, similarly to jets, gauge boson production will constitute a severe background to new discoveries. Since gauge boson production rates have been described in detail by EHLQ,8) we shall restrict ourselves to a brief discussion of the production of gauge bosons in association with jets. Indeed, it has been found51,4) that the production of a Z-boson with hard jet(s) and its subsequent decay into neutrinos,

p + p → Z⁰ + jet + X ,   (2.27)

*) Rare decays of W's also have a bad signal/background ratio.
**) Needless to say, gauge boson production will give new opportunities for improved tests of QCD.


Fig.18 Transverse momentum distribution of the Z-boson with subsequent decay into a neutrino pair at √s = 17 TeV, with cuts as shown in the Figure.

gives the most important background* in the search for new physics signals through the study of the large missing energy spectrum. The missing p_T spectrum in this case is given by the p_T spectrum of the Z-boson (see Fig. 18). Fortunately, in the most important cases this background can be suppressed by making use of the detailed properties of the decay and production mechanisms. However, it gives important limitations in the study of missing p_T events.4) Another important example of the background problems given by gauge boson production is W boson production with two hard jets,52) which gives

an overwhelming background to heavy Higgs boson production when m_H > 2m_W.** The rate of Higgs production with subsequent decay into W-pairs,

*) W production with leptonic decays gives a softer missing p_T spectrum.
**) When m_W < m_H < 2m_W the jet background will suppress the Higgs signal.

p + p → H + X , H → W⁺ + W⁻ → 4 jets ,   (2.28)

is several hundred times smaller than the rate of direct W + 2 jet production

p + p → W± + 2 jets + X , W± → ℓ± ν .   (2.29)

Since this difficulty is discussed in great detail in the report of the Standard Model group,3,53) we do not consider it any further. This latter background problem makes the study of continuum W-pair production also very difficult, although the production cross section is rather large, of the order of tens of picobarns (see Table 2.1). The continuum production of Z-pairs gives an irreducible background to Z-pair spectroscopy.

It is of some interest to study the p_T tail of gauge boson pair production as well as the production of three or more gauge bosons. Further work is needed to study multi-gauge-boson final states. Summarizing, we can say that W-boson spectroscopy is rather difficult because of the bad physics signature of the W's and because of the large background coming from the associated production of W's with QCD jets. Very good detector resolution and large luminosity are crucial when the new physics signal is expected in final states with W± bosons. However, Z-boson spectroscopy is very promising: the leptonic decays produce a clear signal. Since the leptonic branching ratio is small, high luminosity is vital in this case.

3. LARGE CROSS SECTION PROCESSES IN EP COLLISIONS

The main characteristics of the physics of electron-proton collisions in the range of centre-of-mass energies √s = 0.3 TeV (HERA) to √s = 2 TeV have been described by Altarelli et al.12) at the Lausanne Workshop. It has been emphasized that pp, e⁺e⁻ and ep facilities have complementary virtues. Although an ep collider has a smaller physics potential in comparison with CLIC and LHC, there are still problems where ep processes are competitive. Altarelli et al.12) have studied the relevant domains of ep physics:

a) the measurement of the proton structure functions and tests of QCD, b) search for new gauge bosons or new effective four fermion interactions involving quarks and electrons, c) search for new particles with electron quantum numbers like leptoquarks, scalar selectrons, sneutrinos, excited electrons etc.

However, low p_T physics has not been discussed. Similarly to the case of hadron colliders, the large cross section reactions are given by low p_T physics. When both the electron and the final hadrons have low transverse momentum, the cross section is dominated by ρ-meson proton inelastic scattering, due to ρ-meson dominance of the photon-hadron coupling at small photon virtuality. Although it may have some interest, we shall not discuss the features of ρp scattering here.

Its global characteristics are similar to the aspects of low p_T phenomena of proton-proton collisions discussed in Section 2.1. However, due to the electromagnetic coupling, the rate of events given by ρp scattering is not expected to cause any major problem for detector design and data acquisition. More relevant large cross sections are obtained by increasing the transverse momenta in the final state. If it is required that the transverse momentum of the electron is larger than a few GeV we get into the regime of deep inelastic lepton-nucleon scattering, while with the increase of the transverse momentum of the final state hadrons we obtain jet phenomena, similar to the jet physics of hadron colliders. We shall discuss briefly some aspects of these two regimes in Sections 3.1 and 3.2.

3.1 Deep inelastic scattering at intermediate Q² values

The average momentum transfer in ep scattering at LEP-LHC energies is about 90 GeV, much less than the collision energy,

⟨Q⟩ ≈ 90 GeV << √s ≈ 1.8 TeV .   (3.1)

Therefore an important part of the deep inelastic data will have events with momentum transfer of O(10–100 GeV). In Fig. 19 we can see the Q² distributions of the neutral and charged current deep inelastic reactions

e + p → e + X ,   (3.2)

e + p → ν + X ,   (3.3)

Fig.19 Momentum transfer distributions for charged and neutral-current processes at √s = 0.3 TeV (HERA) and √s = 1.41 and 2 TeV (LEP-LHC). Also given are the neutral-current event rates per day corresponding to the integrated cross section σ(Q² > Q₀²), where Q₀ can be read off from the dashed curves (from Altarelli et al., Ref. 12).

at three different energies √s = 0.3, 1.4 and 2 TeV. The event rates assuming a luminosity

L = 10³² cm⁻² s⁻¹   (3.4)

are also indicated. At intermediate values of the momentum transfer Q the event rates are still rather large: e.g. with Q ≈ 50 GeV we obtain ~2000 events/day from the neutral current reaction at √s ≈ 2 TeV (and ~300 events/day at HERA energies). This large rate makes it necessary to study jet phenomena typically up to 30–100 GeV momentum transfer. Such a study has to be done also for HERA. However, the most important global aspects of jet physics can be seen if we restrict ourselves to the Q ≈ 0 region.

3.2 Low-Q jets in ep collisions

In the equivalent photon approximation the dominant mechanism for jet production in ep collisions is given by the subprocesses (see Fig. 20)

γ + g → q + q̄ ,   (3.5)

γ + q → g + q .   (3.6)

The jet cross section can be calculated with the help of the standard QCD improved parton model formula

dσ = Σ_{a=g,q} ∫ f_{γ/e}(x_1,Q²) f_{a/p}(x_2,Q²) dσ̂_{γa}(x_1x_2s) dx_1 dx_2 .   (3.7)

In Fig. 21 we plotted the total jet rate as a function of the centre-of-mass energy for jets having x_T = 2p_T/√s > 0.06. The figure shows the typical 1/s fall of the point-like cross sections. At √s = 1.74 TeV, for jets having p_T > 50 GeV and with the luminosity of eq. (3.4), the event rate is ≈ 9000 events/day. In Fig. 22 we plotted the p_T distribution of jet production at three different energies, √s = 0.314, 1 and 1.74 TeV. We can observe the typical p_T spectrum of jet production known from hadron colliders and the characteristic increase of the jet rate with energy at fixed transverse momentum.
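The way eq. (3.7) is assembled numerically is illustrated by the Monte Carlo sketch below. Every ingredient is a deliberately crude placeholder of our own (an equivalent-photon flux as in eq. (2.17) with e_f = 1, a toy parton density, and a pointlike σ̂ ∝ 1/ŝ), so the output is only an order-of-magnitude illustration of the convolution, not the curves of Figs. 21-22.

    import math, random

    ALPHA, ALPHA_S = 1.0 / 137.0, 0.12

    def photon_flux(x, q2_max=1.0, me=0.000511):
        # equivalent photon flux in the electron (cf. eq. (2.17) with e_f = 1)
        return ALPHA / (2*math.pi) * (1 + (1 - x)**2) / x * math.log(q2_max / me**2)

    def toy_parton(x):
        # toy parton number density of the proton (illustrative shape only)
        return 3.0 * (1 - x)**5 / x

    def sigma_hat(s_hat, pt_min):
        # schematic pointlike gamma-parton cross section above pt_min, in GeV^-2
        return ALPHA * ALPHA_S / s_hat if s_hat > 4 * pt_min**2 else 0.0

    def jet_cross_section(sqrt_s, pt_min, n=100000):
        # Monte Carlo estimate of eq. (3.7), sampling x1 and x2 flat in ln x
        s, total = sqrt_s**2, 0.0
        x_min = 4 * pt_min**2 / s
        L = -math.log(x_min)
        for _ in range(n):
            x1 = math.exp(random.uniform(math.log(x_min), 0.0))
            x2 = math.exp(random.uniform(math.log(x_min), 0.0))
            total += L*L * x1 * x2 * photon_flux(x1) * toy_parton(x2) * sigma_hat(x1*x2*s, pt_min)
        return total / n                       # in GeV^-2 (1 GeV^-2 = 0.389 mb)

    print(jet_cross_section(1740.0, 50.0))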


Fig.20 Feynman diagrams for low-Q², large-p_T jet production and heavy flavour production.

Fig.21 Scaling behaviour of the energy dependence of jet production in ep scattering with jets having x_T = 2p_T/√s > 0.06.

Fig.22 Differential p_T distribution of the cross section for jet production at three different energies, √s = 0.314 TeV (HERA), 1 TeV and 1.74 TeV (LEP-LHC), where p_T denotes the transverse momentum of one of the jets.

In Fig. 23 we plotted the rapidity distributions of jets having x_T = 2p_T/√s > 0.06 at HERA and LEP-LHC energies. The rapidity distribution is asymmetric, and the relevant rapidity region for jets having p_T > 50 GeV at √s = 1.74 TeV is (−2, 5). The production rate of bottom and top quarks is also rather large. It is an important background when the effects of new physics signals are searched for in lepton-hadron final states.3,4) In Fig. 24 the invariant mass distributions of bottom and top production are plotted at three different energies (√s = 1, 1.5 and 2 TeV). We note here that for Higgs production with a Higgs particle having a mass value in the region 10 GeV < m_H < 150 GeV, the background given by bottom and top production is overwhelming.3,54) Jet, bottom and top production will unquestionably give one of the major sources of conventional background in the search for possible new effects. It is crucial that the discussion of the new physics signals is supplemented with an estimate of the contribution of QCD jets, charm, bottom and possibly top production.


Fig.24 Differential cross section of bottom and top production, with top mass m_t = 40 GeV, in ep collisions at three different energies √s = 1, 1.5, 2 TeV, where M denotes the invariant mass of the bb̄ or tt̄ pairs in the final state.


4. LARGE CROSS SECTION PROCESSES AT CLIC

In comparison with pp colliders, the main advantage of e⁺e⁻ colliders is that the background in e⁺e⁻ annihilation processes is not really a problem. e⁺e⁻ annihilation at CLIC provides us with a clear method to investigate the short distance properties of the theory up to √s = 2 TeV. The electron and positron beams are monochromatic, with little energy spread due to beamstrahlung (see Fig. 17). Nevertheless it is important that the nearly monochromatic electron beam has a component of wide-band parton beams composed of photons, gauge bosons, quarks and gluons. The two most important combinations, the γγ and W_L W_L luminosities, are depicted in Fig. 17. This component of the e⁺e⁻ beams yields a behaviour resembling hadron collisions. The cross sections of the 2γ mechanism are dominated by low p_T final states and their values are large (see Fig. 27a for example). The e⁺e⁻ annihilation cross section decreases like 1/s and the cross section values at √s = 2 TeV are of the order of a few pb (see Fig. 25). At CLIC, jet and W-spectroscopy do not suffer from the severe background problems seen at hadron colliders: W's can be reconstructed from their hadronic decays. For the purpose of getting a quantitative understanding of jet properties we shall investigate e⁺e⁻ annihilation hadronic final states.

Fig.25 Energy dependence of the cross sections of e⁺e⁻ annihilation into a quark-antiquark pair and a W-pair.

4.1 Two gauge boson physics

Similarly to pp and ep scattering, also at CLIC the largest cross sections are given by reactions with low p_T final states. In the case of e⁺e⁻ scattering, low p_T particles are produced by the two-photon mechanism55) (Fig. 26). An important feature of e⁺e⁻ collisions in the TeV region is that diagrams where one or both photons are replaced by heavier gauge bosons are also relevant.56,57) For example, the annihilation of longitudinal W's into a Higgs boson is the dominant mechanism for heavy Higgs production.57) Low mass muon pair production,58)

e⁺e⁻ → e⁺e⁻ μ⁺μ⁻ ,   (4.1)

can have a visible cross section as large as 1 nb (Figs. 27a-b). The cross

section with a scale-invariant angular and p_T cut depends very smoothly on the energy, but it has a steep angular distribution. This latter property may prevent us from exploiting the large rate for luminosity monitoring.* Another type of large cross section is given by the single gauge boson reactions

e⁺e⁻ → e W ν ,   (4.2)

e⁺e⁻ → e⁺e⁻ Z⁰ .   (4.3)

The cross sections have been calculated by Altarelli and Gabrielli59) (see Figs. 28-29). At √s = 2 TeV their values are ≈ 20 pb and 6 pb, respectively. The W cross section is sensitive to a change of the anomalous magnetic moment of the W: the dotted lines in Fig. 28 give the effect of changing the W magnetic moment by ±50%. Another important feature of the 2γ mecha-

Fig.26 Feynman diagram of the two gauge boson mechanism. The virtualities of the gauge bosons are close to their on-shell values.

*) We thank D. Saxon for a discussion on this point. - 154 -


Fig.27a Cross section values as a function of the incoming energy for the reaction ee → eeμμ with the acceptance cut as given in the figure.

Fig.27b Integrated cross section values as a function of θ_min for the process ee → eeμμ with the acceptance cut as given in the figure. θ_min is the minimal angle of the muons with respect to the beam.

nism is that at √s = 2 TeV the numbers of W-pairs, heavy quark pairs of mass less than ~100 GeV, and charged Higgs pairs with Higgs mass less than ~100 GeV are larger than the numbers of pairs produced by 1γ, 1Z annihilation.* The case of W-pairs61,62) is illustrated in Fig. 30.64) We can

*) See also the contribution of G. Burgers - 155 -


Fig.28 Cross section values of the process ee → eWν as a function of the c.m. energy.59)

Fig.29 Cross section values of the process ee → eeZ as a function of the c.m. energy.59)

see that the cross section of the 2γ mechanism is larger than the 1γ, 1Z cross section at energies ≥ 1.5 TeV. However, the invariant mass distribution of the W-pair produced by the ee → eeWW process is rather steep (see Fig. 31). One should be aware of the importance of the background given by gauge boson pair production via the two gauge boson mechanism. For example the process (see also Fig. 32)

e⁺ + e⁻ → ν + ν̄ + W⁺ + W⁻ , e± + ν + W∓ + Z   (4.4)

gives four-jet final states with invariant mass significantly less than the

collision energy and with missing p_T of O(M_W).63,64) The process (4.4) is also the most important background to the production of heavy lepton pairs, heavy squark or wino pairs in the annihilation channel. However, this background is not overwhelming; if we make use of the details of the production mechanism it can be suppressed.* Another global feature worth mentioning is that, contrary to hadron colliders, gauge boson pairs can be observed in their four-jet final states, and so this decay mode gives the highest rate. Therefore the direct four-jet production given by the processes

*) For a more detailed discussion see the reports by Altarelli, Igo-Kemenes, Ellis et al.4) and Dittmar.


Fig.31 Invariant mass distribution of the WW pair produced in the reaction ee → eeWW at √s = 2 TeV c.m. energy.

Fig.32 Four different mechanisms leading to 4-jet final states in e⁺e⁻ collisions.

γ + γ → q + q̄ + g + g ,   (4.5)

γ + γ → q + q̄ + q + q̄ ,   (4.6)

have to be estimated. We have found that this QED-QCD mechanism of four-jet production is not large.64) It does not constitute an important background (see Fig. 33). In view of the promising developments concerning the feasibility of a large e⁺e⁻ linear collider in the TeV region, it would be worthwhile to work out the most obvious possibilities more systematically and in enough detail to provide a reference point for future designs of CLIC experiments.


Fig.33 Total cross section values for the reaction ee → ee + 4 jets. The solid curve is the contribution of WW production (first diagram in Fig. 32), the dash-dotted line is the contribution of WZ pairs (second diagram) and the dashed line denotes the direct QED-QCD mechanism of four-jet production.

4.2 Electron-positron annihilation final states

We have made a study of the global features of hadronic final states in e⁺e⁻ annihilation at c.m. energies up to 2 TeV, using the e⁺e⁻ Monte Carlo program of Ref. [66], modified67) to take account of the two dominant annihilation processes (Fig. 34):

Fig.34 Feynman diagrams describing the processes (4.7) and (4.8). - 159 -

e⁺ + e⁻ → q + q̄ → hadrons   (σ ≈ 0.2 pb at 2 TeV) ,   (4.7)

e⁺ + e⁻ → W⁺ + W⁻ → hadrons   (σ ≈ 0.5 pb at 2 TeV) .   (4.8)

A study of process (4.7) was also made using a version of the Lund Monte Carlo program68) that includes coherent parton branching, which gave similar results to those presented here for that process. The important features of the Monte Carlo programs for our purposes are illustrated in Fig. 35. The hard process of qq̄ production [either from a virtual γ or Z⁰ as in process (4.7), or from W± decay in process (4.8)] is followed by multiple parton branching, which is treated in an approximation that includes the leading collinear and infrared singularities. When the typical parton virtuality Q² has fallen to the order of an infrared cutoff Q₀² >> Λ²_QCD, the branching is terminated and a phenomenological model of hadron production is used. For the studies we report here, the cutoff was taken to be Q₀ = 0.7 GeV and the hadronization model involved colourless cluster formation and decay.
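The logic "branch until the virtuality reaches the infrared cutoff Q₀, then hand the partons to a hadronization model" can be caricatured in a few lines. The following toy cascade is ours and is not the Marchesini-Webber or Lund algorithm: it has no Sudakov form factors, no DGLAP splitting kernels and no kinematics, and serves only to illustrate where the cutoff Q₀ enters.

    import math, random

    Q0, ALPHA_S = 0.7, 0.2      # infrared cutoff (GeV) as in the text; fixed toy coupling

    def branch(q):
        # recursively split a parton of scale q; returns the number of partons at the cutoff
        if q <= Q0:
            return 1                                  # handed over to hadronization
        if random.random() > ALPHA_S * math.log(q / Q0):
            return 1                                  # no emission before reaching Q0
        z = random.uniform(0.1, 0.9)                  # toy sharing of the scale
        return branch(z * q) + branch((1.0 - z) * q)

    print(sum(branch(1000.0) for _ in range(2000)) / 2000.0)   # mean parton multiplicity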

Fig.35 Schematic description of the main features of the parton branching Monte Carlo model. - 160 -

However, of the results obtained, only the charged hadron multiplicity depends significantly on those details. For the parton branching, a leading-order QCD scale of 0.2 GeV and a top quark mass of 40 GeV were used; the results are not sensitive to the top mass. In Fig. 36 we show the predicted mean charged multiplicities, which at 2 TeV are

⟨n_ch⟩_qq̄ = 79 ,   (4.9)

⟨n_ch⟩_WW = 42 .   (4.10)

The energy dependence of the qq̄ multiplicity follows the analytic QCD formula69) shown by the solid curve. For the WW process, on the other hand, the hadron multiplicity is energy independent, being simply twice the predicted mean charged multiplicity in W decay, which is 21. Thus above 300 GeV the qq̄ final state has a higher mean multiplicity than WW, essentially because the reaction virtual γ, Z → qq̄ at M = 2 TeV virtual mass is a harder process than W → qq̄′, and this implies more gluon emission from the produced quarks. It has been noted before that the charged hadron multiplicity distribution follows the negative binomial distribution to a good approximation. This is illustrated in Fig. 37, where the result of a Monte Carlo run and its fits by negative binomials are shown at √s = 2 TeV for qq̄ final states.

e*e"—xl" qq —.-hadrons

QCD

- • Monte Carlo

» Data

WW

10 100 1000 10000 E„ ICeVI

Fig.36 Mean charged hadron multiplicity as a function of the c.m. energy for the reactions e⁺e⁻ → qq̄ and e⁺e⁻ → W⁺W⁻.

e+e" 2TeV

250 300 350 Fig.37 Total and charged hadron multi• TOTAL MULTIPLICITY n plicity distribution given by the Monte Carlo based on coherent branch• ing and its fit with the help of ne• gative binomial distribution. I- coh- bronching

,inmV neg- binomial

25 100 125 150 175 CHARGE MULTIPLICITY n„

Another global measure of the qq̄ process is provided by the invariant mass per hemisphere, shown in Fig. 38. The hemispheres are defined relative to the sphericity axis of the final state, which provides an operational definition of the quark jet or W direction. At LEP 200 (Fig. 38) the typical "quark jet mass" in the qq̄ process is 30-40 GeV, while the WW process naturally gives an average mass around m_W ≈ 80 GeV. At this energy the WW mass distribution per hemisphere has a large width, because the W's are moving slowly in the c.m. frame and decay products from one often leak into the other hemisphere. At 2 TeV the situation is completely reversed compared with 200 GeV. The qq̄ mass per hemisphere (the "effective quark jet mass") is typically larger than m_W, with large fluctuations due to increased parton branching. For WW final states, on the other hand, the high W velocity means that practically all decay products of a given W stay in one hemisphere, and so the W-mass peak is very narrow (Fig. 39). Even for a very poor W mass reso-

lution (σ_M = 50 GeV, corresponding to a calorimeter cell size Δθ, Δφ ≈ 5°), the qq̄ 'background' under the W mass peak is still small (Fig. 40). The large effective jet masses in the qq̄ process are associated with the "jets-within-jets" structure of the final state. By this we mean that an improved angular or mass resolution reveals multiple narrow jets within the broad jets in each hemisphere.


Fig.38 Invariant mass per hemisphere distributions for the reactions e⁺e⁻ → qq̄ and e⁺e⁻ → W⁺W⁻ at LEP 200 energies, √s = 200 GeV.

Fig.39 Invariant mass per hemisphere distributions for the reactions e⁺e⁻ → qq̄ and e⁺e⁻ → W⁺W⁻ at CLIC energies, √s = 2 TeV.

Fig.40 The same as Fig.39 but including smearing with a mass resolution σ_M = 50 GeV.

In order to have a realistic visual picture of the jet structure, typical e⁺e⁻ → qq̄ events have been generated and the energy of the event as deposited in calorimeter cells has been plotted in a (cos θ, φ) lego plot. The lego plot of a typical tt̄g event can be seen in Fig. 41a. With 9° cells in azimuthal angle we see a clear three-jet signal. Magnifying the structure of one of the top jets with 1° cell resolution (Fig. 41b) we can distinguish the b, d, u substructure due to weak decays. In Fig. 42 a typical event for e⁺e⁻ → WW → hadrons is shown. Good

Fig.41a Lego plot for a tt̄g event obtained in e⁺e⁻ annihilation at √s = 2 TeV.

Fig.41b Lego plot picture of one of the t-jets depicted in Fig. 41a after magnification from the 9° cell size to 1° cell size. A top mass value m_t = 40 GeV is assumed.

calorimeter resolution is rather important. We investigated the jet multiplicity behaviour using the JADE jet algorithm,70) which combines hadrons into clusters and clusters into larger clusters until further combination would exceed a prescribed invariant mass resolution Δm (Fig. 43); a minimal sketch of this clustering procedure is given below. For a poor mass resolution, Δm = 100 GeV (Fig. 43a), qq̄ final states typically consist of 3 or 4 clusters, while the WW final states of course consist of 2 W jets with no resolvable internal structure. At Δm = 20 GeV, however, qq̄ final states often contain 6 or more clusters and the WW final states usually have 4 or 5 clusters (Fig. 43b) (qq̄ qq̄ with possibly one hard gluon emission). It will be interesting to see whether the cluster structure of final states can be exploited to improve signal-to-background ratios for new processes such as Higgs boson production.
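A minimal sketch of the clustering logic described above (successive pairwise merging until every pair exceeds the resolution Δm) is given here. It is written from the verbal description in the text, not from the published JADE code of Ref. 70, and uses plain invariant masses rather than scaled ones.

    import itertools

    def pair_mass(p, q):
        # invariant mass of the summed four-momenta p, q = (E, px, py, pz)
        e, px, py, pz = (p[i] + q[i] for i in range(4))
        return max(e*e - px*px - py*py - pz*pz, 0.0) ** 0.5

    def jade_clusters(particles, delta_m):
        # merge the pair of lowest invariant mass until every pair exceeds delta_m
        clusters = [list(p) for p in particles]
        while len(clusters) > 1:
            (i, j), m = min((((a, b), pair_mass(clusters[a], clusters[b]))
                             for a, b in itertools.combinations(range(len(clusters)), 2)),
                            key=lambda item: item[1])
            if m > delta_m:
                break
            merged = [clusters[i][k] + clusters[j][k] for k in range(4)]
            clusters = [c for idx, c in enumerate(clusters) if idx not in (i, j)] + [merged]
        return clusters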


Fig.43a Hadronic jets obtained with the JADE algorithm for e⁺e⁻ → qq̄ at CLIC, √s = 2 TeV, with invariant mass resolution Δm = 100 GeV.

Fig.43b The same as Fig.43a but with Δm = 20 GeV.

Similar results were obtained in Ref. [65] using a variable angular resolution ΔR = (Δη² + Δφ²)^{1/2}.


Fig.44 Average charged hadron multiplicities and average jet multiplicities at various jet invariant mass resolutions Δm = 6.8, 10, 20, 50, 100 GeV as a function of the c.m. energy.

The mean multiplicity of clusters for a given fixed value of Δm (Fig. 44) is perturbatively calculable for sufficiently large values of Δm and of the c.m. energy √s.69) Asymptotically it has the factorizable form

⟨n_clus⟩ ≈ F(√s)/F(Δm) ,   (4.11)

where

F(Q) ∝ exp[ c/√(α_s(Q²)) ]

and the coefficient c depends on the number of quark flavours n_f. The expression (4.11) has the same asymptotic energy dependence as the predicted hadron multiplicity (the curve in Fig. 36 is reproduced on a logarithmic scale in Fig. 44), corresponding to a fixed asymptotic number of hadrons per cluster at fixed Δm.

Our conclusions from the study of the global properties of the most common hadronic final states in e⁺e⁻ annihilation are that at 2 TeV:
i) the qq̄ process leads to high-multiplicity final states consisting of two broad, high-mass jets that have a complicated internal structure;
ii) the more common WW process gives two narrow jets that should be easily distinguishable from the qq̄ final state.

5. TECHNICAL COMMENTS

LEP, LHC and CLIC physics covers a vast range of phenomena. Since the proton beam, and in the TeV region even the electron (positron) beam, can be considered as an unseparated broad-band parton beam, a large variety of reactions can contribute at the parton level. If new physics occurs, this abundance of reactions will be further enriched. Indeed, in the course of the studies of the physics of new accelerators hundreds of reactions have been considered. The calculations can be performed at two levels: a) parton level cross section calculations; b) fully developed Monte Carlo calculations. Further improvements and developments are desirable at both levels.

Some progress has been made recently at the parton model level by the introduction of new calculational techniques such as the helicity method71) and tricks based on supersymmetry.46) With the help of these new tools we are able to evaluate the contributions of hundreds of Feynman diagrams in an acceptably short time. Another interesting point is the suggestion to create an Electronic Yellow Book where individuals can place the computer programs they have written to perform the cross section calculations of certain parton level reactions. We believe this suggestion should be further advertised and supported. Public access to these computer programs would provide a certain control. It might also help to eliminate bugs. When a new calculation is needed, the existing set of programs may give reasonable support to the development of the new "urgently needed" calculation.

The Monte Carlo programs are also important and necessary.* They provide input to detector simulations and efficiency corrections. They are needed to simulate realistic events for detector design. Finally, they can also be used to calculate theoretical predictions. In this latter case, however, one should use them with some care: Monte Carlo programs do not represent the theory with sufficient accuracy. For example, in QCD Monte Carlos higher order corrections and azimuthal correlations are not included with the required accuracy. They are also rather inflexible when new reactions have to be included. Monte Carlo models usually have several adjustable parameters, which limits the region of their applicability. It has to be emphasized that the Monte Carlo programs give the overall event structure about right. However, the precise details, such as the tails of distributions and many-body correlations, can be in error. Consequently, detector designs, efficiency corrections and measurements of physical cross sections have to be done with the minimal possible reliance on Monte Carlos. It is desirable that regular and systematic comparisons are made between the MC outputs and the parton level predictions, since the latter are closer to the theory: they can be understood in terms of the basic aspects of the theory and can give an estimate of the importance of the higher order corrections. However, one has to bring the MC programs as close to the theory as possible. We may illustrate this somewhat "ex cathedra" discussion with two examples.
a) W. Kittel in our working group has calculated the charged particle multiplicity distribution in e⁺e⁻ annihilation at CLIC energies using the Lund MC, the version which includes the exact 2nd order calculation. The result is given in Fig. 45. We can see large unphysical fluctuations. They are the consequences of cuts chosen to join smoothly the leading contribution with the higher order corrections at √s = 40 GeV. This adjustment did not work at √s = 2 TeV. We do not want to imply with this that this version of the Lund MC is not useful.** We wanted to use this observation only to illustrate the difficulty which may occur when a Monte Carlo program is used beyond the region of its tested validity.

*) Monte Carlo programs widely used in the analysis of collider data are ISAJET,23) PYTHIA,24) COJETS,73) EUROJET74) and FIELDAJET75) for p̄p colliders, and the Ali MC,76) Webber MC66) and Lund MC77) for e⁺e⁻ collisions. See also the review by Collins and Gottschalk.78)
**) No problem was found with Webber's Monte Carlo.

b) R. Batley has calculated the missing p_T distribution given by the reaction p + p → Z⁰ + jets, with subsequent decay of the Z⁰ into neutrinos, using ISAJET. W.J. Stirling performed a parton level calculation of the same quantity (see eq. (2.27) and Fig. 18). After carefully checking that the cuts of the two calculations cover the same physical situation, they found satisfactory agreement (see Fig. 46). Therefore we have confidence in the normalization obtained with the ISAJET calculation.

Fig.45 a) Charged multiplicity distribution for e⁺e⁻ → hadrons at √s = 2 TeV with rapidities |y| < 2.5, as obtained with a standard version of the Lund Monte Carlo. b) The same as in a) but for coherent branching. Fits with negative binomials are also given.

Fig.46 Transverse momentum distribution of Z-bosons produced in pp collisions at √s = 17 TeV. The dots are the prediction of ISAJET obtained by R. Batley; the solid line is the result of the parton level calculation by W.J. Stirling.

CONCLUSION

Large cross section reactions are more important for LHC than for CLIC. The high gluon luminosity at LHC gives a high jet rate and an overwhelming background for multijet spectroscopy. W-boson spectroscopy appears to be difficult as well. However, the signals of Z production at LHC are clear. Minijet production is a large cross section phenomenon which will give interesting physics in its own right: we can test new aspects of perturbative QCD. At CLIC both multijet and W-spectroscopy are useful in the search for new physics. In the TeV region the jet events given by W production have rather different characteristics from the QCD jets. In some cases the two photon/gauge boson mechanism gives non-trivial background problems; however, they are not overwhelming. It would be worthwhile to give a more systematic description of the most obvious e⁺e⁻ phenomena in order to have a good reference for designing experiments at e⁺e⁻ colliders in the TeV region.

ACKNOWLEDGEMENT

We have benefited from discussions with many colleagues. We would like to thank especially G. Altarelli, U. Amaldi, R. Batley, P. Burrows, J. Ellis, H. Hansen, M. Jacob, R. Kleiss, J. Lindfors, C. Maxwell, K.H. Meier, J. Renner Hanson, F. Pauss, D. Saxon, C. Schmid, D. Soper and D. Treleani.

REFERENCES

1) K. Johnsen, this report.

2) G. Brianti, this report.

3) G. Altarelli et al., this report.

4) J. Ellis et al., this report.

5) Large Hadron Collider in the LEP Tunnel, Vol. I-II. Proceedings of the ECFA-CERN Workshop held at Lausanne and Geneva, 21-27 March 1984, editor M. Jacob, CERN Report ECFA 84/85 CERN 84-10.

6) Proceedings of the 1984 Summer Study on the Design and Utilization of the Superconducting Super Collider Snowmass Colorado, DPF APS, eds. R. Donaldson and J.G. Morfin (1984); Proceedings of the Oregon Workshop on Super High Energy Physics, edi• tor D.E. Soper, World Scientific (1985); Proceedings of the Summer Study on SSC, Snowmass Colorado, June 24 - July 15 (1986) eds. G. Kane and L. Pondrom, to be published.

7) M. Gilchriese, talk given at this Workshop.

8) E. Eichten, I. Hinchliffe, K. Lane and C. Quigg, Rev. Mod. Phys. 56, 579 (1984).

9) UA1 Collaboration: paper submitted to the XXIIIrd International Conference on High Energy Physics, Berkeley, California (1986), presented by F. Ceradini.

10) UA4 Collaboration: D. Bernhard et al., Phys. Lett. 166B 459 (1986) and preprint CERN-EP-86-205.

11) UA5 Collaboration: G.J. Alner et al., Phys. Lett. 167B, 476 (1981).

12) G. Altarelli, B. Mele and R. Rückl, ibid. Ref. 5 (1986), Vol. II, p. 549.

13) U. Amaldi et al., Phys. Lett. 66B, 890 (1977).

14) See the recent review by M.M. Block and R.N. Cahn, Rev. Mod. Phys. 57, 563 (1985) and references therein.

15) P. Carlson, talk given at the Moriond Workshop on Collider Physics (1986) to be published. For an experimental review see J.G. Rushbrooke, preprint CERN-EP/85-178 (1985).

16) C. Bourrely and A. Martin, ibid. Ref. 5.

17) C. Bourrely, J. Soffer and T.T. Wu, Nucl. Phys. B247, 15 (1984).

18) K.A. Ter-Martirosyan, preprint ITEF-86-121 (1986).

19) T.T. Chou and C.N. Yang, Phys. Lett. 128B, 457 (1983).

20) M. Haguenauer and G. Matthiae, ibid. Ref. 5, p. 303.

21) A. Capella, V. Innocente and J. Tran Thanh Van, Orsay preprint (1987).

22) G.J. Alner et al., Nucl. Inst. Meth., to be published. - 171 -

23) F.E. Paige and S.D. Protopopescu, Proc. of Oregon Workshop, ibid. Ref. 6 (1985) p. 3.

24) T. Sjöstrand, Lund preprint LV-TP 85-10 (1985).

25) A. Capella, U. Sukhatme and J. Tran Thanh Van, Z. Phys. C3, 329 (1980); A.B. Kaidalov, Phys. Lett. 116B, 459 (1982); P. Aurenche and F.W. Bopp, Phys. Lett. 114B, 363 (1982).

26) P. Aurenche, F.W. Bopp and J. Ranft, Z. Phys. C26, 279 (1984), Phys. Rev. D23, 1976 (1986); A. Capella, J. Kwiecinski and J. Tran Thanh Van, Orsay preprint LPTHE 86/51; A.I. Keselov, O.T. Piskunova and K.A. Ter-Martirosyan, Phys. Lett. 158B, 279 (1985).

27) P. Aurenche and F.W. Bopp, this report.

28) T. Sjöstrand and M. Van Zijl, Lund preprint, LU TP 86-25.

29) NA22 Collaboration, Phys. Lett. 177B, 239 (1986).

30) A. Giovannini, Nuovo Cimento 15A, 543 (1973); W. Knox, Phys. Rev. D10, 65 (1974); P. Carruthers and C.C. Shih, Phys. Lett. B127, 242 (1983); A. Giovannini and L. Van Hove, Z. Phys. C30, 391 (1986).

31) W. Scott, Report to XII Internat. Conference on High-Energy Physics, Berkeley (1986).

32) R.R. Horgan and M. Jacob, CERN Report 81-04 (1981); L.V. Gribov, E.M. Levin and M.G. Ryskin, Phys. Rep. 100, 1 (1983).

33) M. Jacob and P.V. Landshoff, CERN Report, CERN-TH.4562/86 (1986).

34) For a recent discussion see J.C. Collins, Talk presented at the SSC Workshop at UCLA (1986).

35) A. Muller and H. Navelet, Saclay preprint SPHT/84-094 (1986).

36) M. Jacob, CERN preprint TH-3693 (1983); J. Kwiencinski, Krakow preprint (1986); D. Treleani, this report and further references therein.

37) A.K. Nandi, this report; J. Lindfors, Univ. Helsinki preprint (1986).

38) CH. Llewellyn Smith, ibid. Ref. 5.

39) R.K. Ellis and J. Sexton, Nucl. Phys. B269, 445 (1986).

40) Z. Kunszt, Nucl. Phys. B247, 339 (1984).

41) Z. Kunszt and J.W. Stirling, this report.

42) For a review see Z. Kunszt, Talk given at the XVII. International Symposium on Multiparticle dynamics, Seewinkel, 19 86, Theor. Phys. ETH preprint (19 86).

43) F. Halzen and P. Hoyer, Phys. Lett. 130B (1983) 326; B.L. Combridge and C.J. Maxwell, Nucl. Phys. B239 429 (1984). - 172 -

44) S.J. Parke and T.R. Taylor, Phys. Rev. Lett. 5_6 2459 (1986).

45) B.L. Combridge, J. Kripfganz and J. Ranft, Phys. Lett. 70B (1977); F.A. Berends et al., Phys. Lett. 103B 124 (1981).

46) Z. Kunszt, Nucl. Phys. B271 373 (1986); S.J. Parke and T.R. Taylor, Nucl. Phys. B269 410 (1986).

47) P.V. Landshoff and J. Polkinghorn, Phys. Rev. D18 3344 (1978); B. Humpert and R. Odorico, Phys. Lett. 154B 211 (1985); N. Paver and D. Treleani, Nuovo Cimento, 70A (1982), Phys. Lett. 169B 289 (1986); B. Humpert, Phys. Lett. 131B 461 (1983).

48) For a review see P. Schlein, invited talk at the 23rd Int. Conf. on High Energy Physics, Berkeley, Calif., 1986, CERN-EP/87-18 (1987) and references therein.

49) E.L. Berger, J.C. Collins and D.E. Soper, Argonne preprint ANL-HEP- PQ-86-14 (1986).

50) J. Cheze and J. Zsembery, this report.

51) S.D. Ellis, R. Kleiss and W.J. Stirling, Phys. Lett. 167B 464 (1985).

52) S.D. Ellis, R. Kleiss and W.J. Stirling, Phys. Lett. 163B 261 (1985); J.F. Gunion, Z. Kunszt and M. Soldate, Phys. Lett. 163B, 389 (1985).

53) D. Froidevaux, this report; see also J.F. Gunion et al., Snowmass 1986 to be published.

54) D. Dicus and S.S.D. Willenbrock, Phys. Rev. D32 1642 (1985).

55) See e.g. the review by H. Kolanoski, Springer Tracts in Modern Physics, Volume 105 (1984).

56) D.R.T. Jones and S.T. Petcov, Phys. Lett. 84B, 440 (1979); M.S. Chanowitz and M.K. Gaillard, Phys. Lett. 142B, 85 (1984); R.N. Cahn and S. Dawson, Phys. Lett. 136B 196 (1984); G.L. Kane, W.W. Repko and W.B. Rolnick, Phys. Lett. 148B, 369 (1984); S. Dawson, Nucl. Phys. B249, 42 (1984); R. Cahn, Nucl. Phys. B225, 341 (1985); S. Dawson and J. Rosner, Phys. Lett. 148B, 497 (1984).

57) G. Altarelli, B. Meie, F. Pitolli, Univ. of Rome, preprint 531 (1986). See also a recent discussion on the validity of the effective W-approximation by Z. Kunszt and D. Soper, Univ. of Oregon preprint (1987) .

58) F.A. Berends, P.H. Daverveldt and R. Kleiss, Nucl. Phys. B252, 561 (1985).

59) M. Gabrielli, this report.

60) G. Burges, this report.

61) M. Katuya, Phys. Lett. 124B 421 (1983); K. Hikasa, Phys. Lett. 164B, 385 (1985).

62) G.L. Kane and J.J.G. Scanio, CERN preprint TH. 4532/86 (1986).

6 3) B. Meie, this volume.

64) Z. Kunszt and W.J. Stirling, this volume. - 173 -

65) P. Igo-Kemenes, M. Dittmar, this report.

66) G. Marchesini and B.R. Webber, Nucl. Phys. B238, 1 (1984); B.R. Webber, Nucl. Phys. B238, 492 (1984).

67) G. Marchesini and B.R. Webber, this report.

68) P.N. Burrows and G. Ingelman, Oxford Univ. report, ref. 87/1.

69) B.R. Webber, Phys. Lett. 149B, 501 (1984).

70) JADE Collaboration, W. Bartel et al., Z. Phys. C33, 23 (1986).

71) CALKUL Collaboration, F.A. Berends et al., Nucl. Phys. B206, 53 (1982) and B239, 382 (1982); Z. Xu, D.H. Zhang and L. Chang, Tsinghua University preprint, TUTP-84/3-6; R. Kleiss and W.J. Stirling, Nucl. Phys. B262, 235 (1985); J.F. Gunion and Z. Kunszt, Phys. Lett. 16IB, 333 (1985).

72) H. Baer et al., in Physics at LEP, Vol. 1, p. 297 (1986), CERN Report CERN 86-0 2, edited by J. Ellis and R. Peccei.

73) G.C. Fox and S. Wolfram, Nucl. Phys. B168, 285 (1980); K. Kajantie and E. Pietarinen, Phys. Lett. 9 3B, 269 (1980); R. Odorico, Nucl. Phys., B172, 157 (1980), Z. Phys. C30, 257 (1986); T.D. Gottschalk, Proc. of Oregon Workshop, ibid. Ref. 6 (1985), p. 94.

74) B. van Eik, in Proceedings of the 5th Topical Workshop on pp Collider Physics, p. 165 (1985), editors V. Barger and F. Halzen.

75) R.D. Field, Proceedings of the 1984 Snowmass DPF Summer Study, 713 (1984) .

76) A. Ali, E. Pietarinen, G. Kramer and J. Willrodt, Phys. Lett. 93B, 155 (1980) ; P. Hoyer, P. Osland, H.G. Sander, F.F. Walsh and P. Zerwas, Nucl. Phys. B161, 349 (1979); R.D. Field and R.P. Feynman, Nucl. Phys. B136, 1 (1978).

77) B. Andersson, G. Gustafson, G. Ingelman and T. Sjöstrand, Phys. Rep. 97, 33 (1983).

78) J.C. Collins and T.D. Gottschalk, Illinois Inst. Techn. preprint, IIT-TH-86-40, to appear in Proc. of 1986 Snowmass Workshop. - 174 -

DETECTION OF JETS WITH CALORIMETERS AT FUTURE ACCELERATORS

T. Åkesson, M. Albrow, P.N. Burrows, T. Cox, J.P. de Brion, C. Fabjan, M. Holder, G. Ingelman, G. Jarlskog, K. Meier, P.G. Rancoita, J. Russ, J. Schukraft, G. Stevenson, A. Weidberg and R. Wigmans

Presented by T. Åkesson
CERN, Geneva, Switzerland

ABSTRACT

The results presented here are from the Jet-Calorimeter Working Group. A small-radius silicon calorimeter is suggested for a high-luminosity pp machine. Calorimetric limitations for measuring the kinematic variables in ep collisions are examined. The effects of beamstrahlung at a 2 TeV e+e− machine are estimated, and a detector for this machine is outlined.

1. INTRODUCTION
This working group was given the directive to study jet measurements with calorimetric techniques at three different future machines: i) a 20 TeV pp collider (the Large Hadron Collider, LHC); ii) a 1.1 to 1.3 TeV ep machine; and iii) a 2 TeV e+e− machine (the CERN Linear Collider, CLIC). The starting points are different for each of these three machines:
i) The LHC (and the SSC) detector has been studied on several occasions [1, 2], and many important questions regarding both a non-defined calorimeter [1] and one having a rather conventional design [2] have been addressed and solved by the working groups concerned. The detector environment due to the machine is already known.
ii) The detector at the ep machine had not yet been studied, and some of the main questions concerning the calorimeter limitations when measuring the scaling variables have had to be examined. The environment is also believed to be known.
iii) The CLIC detector had not been previously studied, and the detector environment is not yet known.
The following approach was adopted:
• Investigate a new, technologically advanced detector for the pp case. Concentrate on the design.
• Using Monte Carlo and analytical calculations, examine the limitations resulting from the detector geometry and calorimeter resolution at the ep machine, when measuring scaling variables for charged-current interactions.
• Examine the detector environment at CLIC, and sketch a spectrometer.

This paper contains the following sections:
2. Jet characteristics. The properties of jets at these energies are discussed.
3. The calorimeter Monte Carlo. The Monte Carlo used for examining the detector performance is described.
4. The small-radius Si calorimeter. A new detector design for the LHC (SSC) is suggested and discussed.
5. Dose to the U/Si calorimeter. A calculation of the estimated radiation hazard at the LHC is presented.
6. Performance of the small-radius calorimeter. The performance of the suggested detector is demonstrated for a test case.
7. Forward calorimeter and total energy measurement. The total energy measurement and calorimetry in the high-radiation level of the forward region are discussed.
8. Requirements for ep detectors.
9. The problem of beamstrahlung and c.m. energy smearing in e+e− collisions. Problems induced by beam-beam interaction at CLIC are discussed.
10. The e+e− detector. A detector for CLIC is outlined.
11. Conclusions.

2. JET CHARACTERISTICS
Many of the phenomena that will be of primary interest at future accelerators will be observable only in terms of final-state jets; e.g. new massive states are often expected to decay into high-energy quarks and gluons, giving rise to jets of particles. One would therefore like to measure the four-vectors of jets and, for example, to search for such new states through resonances in the invariant masses of combinations of jets. In order to assess how well this can be done using calorimetric techniques, it is important to have a reasonable estimate of the basic jet properties, since the calorimeter response will depend, for instance, on whether the jet energy is carried by a few hard particles or by many soft particles. The angular width of a jet is also of interest in relation to the granularity of the calorimeter. We have thus made a detailed study of such jet properties, reported separately in Ref. [3], and summarize a few of the main results here.
Models for the perturbative evolution of the jet have evolved considerably in the last few years. The treatment in terms of a dynamical simulation, using the QCD parton-cascade approach with multiple gluon emission, has been found to produce sizeable effects relative to the available exact matrix-element calculations at low order. Combined with phenomenological models for the non-perturbative hadronization process, such as the string or cluster decay models, the perturbative cascade approach has been shown (see references in [3]) to be in good agreement with observations at the highest e+e− energies and at the CERN pp Collider. This provides a good starting point for realistic extrapolations to future energies, where gluon radiation from highly virtual hard-scattered partons will be abundant. This is further supported by the generally good agreement between different such models, as well as by some comparisons with analytical QCD calculations.
In order to produce useful jet properties, the concept of a jet has to be defined in an experimentally realistic way; theoretical parton-level results are only of very limited value for our needs. We therefore base our results on a currently used jet algorithm in terms of the energy flow in an idealized, but realistic, calorimeter. The calorimeter granularity and the basic jet-width parameter do, in fact, influence the number of reconstructed jets as well as their detailed properties. However, at the TeV energy scale we find little sensitivity to many other parameters, such as whether jets are reconstructed by summing cell energies or cell 'momentum vectors'. The hardness of a jet is illustrated by the fact that 1 TeV quark jets (at CLIC) have 50% (10%) of their energy carried by particles with E > 80 (300) GeV. The width of such jets is such that 50% of the particles and 50% of the energy lie within 3° and 2°, respectively, of the reconstructed jet axis. The charged-particle multiplicity is expected to be around 25 for such a jet and 78 in the whole event. In terms of longitudinal fragmentation, gluon jets are considerably softer than quark jets, though not necessarily much wider, at very large energies.
The measured mass of a jet depends both on the virtual mass of the parton emerging from the hard process, which then gives rise to the jet evolution, and on how much of the resulting jet cascade is included in the reconstructed jet, which depends on the size of the jet cone used in the algorithm. It is not possible to identify heavy-quark jets, e.g. top, just from the reconstructed jet mass, since off-shell light quarks will in fact give rise to a similar jet-mass distribution. Because of this, and of the resulting gluon bremsstrahlung, the general properties of t-quark jets will also not differ much from those of light-quark jets at this energy scale. A distinctive top signature is provided by leptons from semileptonic decay; however, this requires that a sufficiently large pT be demanded in order to eliminate leptons from decays of lighter flavours.

3. THE CALORIMETER MONTE CARLO
A simple parametrization of the hadronic showers was used to simulate the calorimeter response. The simulation used an empirical parametrization [4] for the longitudinal shower development and a Gaussian distribution for the transverse energy spread. The width σ of the transverse profile of a hadronic shower grows linearly for the first 1.5 λ and then stays constant. The σ of the transverse profile of an electromagnetic shower grows linearly for the duration of the shower. The starting point of the shower was generated with an exponential distribution with the absorption length λ for hadronic showers. The energy loss of charged particles between the front of the calorimeter and the starting point of the shower was taken into account. The calorimeter response to this dE/dx loss is amplified by a μ/e ratio of 1.3. The response to the energy deposition of the hadronic shower is suppressed by the measured asymptotic e/h ratio (1.0). The geometry of the detector was coded using the GEANT3 package [5].
The parameters were tuned to agree with UA2 test-beam data [6], using scans across the face of a calorimeter. The calorimeter for the test-beam set-up consists of three slices of the UA2 central calorimeter [6], corresponding to an array of ten cells in θ by three cells in φ. The cell size at the front face of the calorimeter is ≈ 10 cm x 15 cm. The calorimeter is divided into three longitudinal sections of 0.5, 2, and 2 nuclear absorption lengths, respectively. The energy flow in the θ direction was used to tune the transverse width in the shower parametrization. The results are shown in Fig. 1 for different beam impact points θ_beam. In order to check that the fluctuations in the transverse shower development were correctly simulated, the energy-weighted shower centre was calculated: θ̄ = Σ E_i θ_i / Σ E_i. The r.m.s. of θ̄ was used as a measure of the fluctuations. The results for different values of θ_beam are given in Table 1. The fraction of energy deposited in the different longitudinal compartments was used to tune the (energy-independent) longitudinal shower parametrization to agree with 10 GeV/c pion data.

Table 1

The r.m.s. of energy-weighted centres

θ_beam (°)    r.m.s. of θ̄
              Data    Monte Carlo
85            1.33    1.31
87            1.43    1.47
89            1.77    1.65


Fig. 1 Comparison of the lateral hadronic shower size given by the UA2 test-beam data [6] with that of the simulation program

However, this did not give a good fit to higher-energy data, and more work would be needed to produce an energy-dependent parametrization. In conclusion, the Monte Carlo should give a reasonable simulation of the transverse shower spread, but it contains only a very crude treatment of the longitudinal shower development. For the test case addressed in Section 6, a realistic simulation of the transverse shower spread is, of course, more essential.
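To make the simplified shower model above concrete, the following sketch (our own illustration, not the program used for this study) generates toy showers with an exponential starting point and a transverse width growing linearly over the first 1.5 λ, and then computes the r.m.s. of the energy-weighted centre θ̄ = Σ E_i θ_i / Σ E_i for several impact angles, in the spirit of Table 1. The cell layout, the asymptotic width, and all other numerical values are assumptions chosen only for illustration.

# Minimal sketch (not the Workshop's actual code) of the simplified shower model of
# Section 3: an exponential shower starting point, a transverse Gaussian energy
# profile whose width grows linearly up to 1.5 absorption lengths, and the
# energy-weighted centre theta_bar = sum(E_i * theta_i) / sum(E_i).
# All parameter values below are illustrative assumptions, not tuned numbers.
import numpy as np

rng = np.random.default_rng(1)

LAMBDA = 1.0          # nuclear absorption length (arbitrary units)
SIGMA_MAX = 0.9       # assumed asymptotic transverse width, in degrees
CELL_EDGES = np.arange(80.0, 100.1, 2.0)   # assumed theta cell boundaries in degrees
CELL_CENTRES = 0.5 * (CELL_EDGES[:-1] + CELL_EDGES[1:])

def one_shower(theta_beam, depth=4.5, n_samples=200):
    """Deposit unit energy along the shower and return the cell energies."""
    start = rng.exponential(LAMBDA)                 # shower starting point
    depths = np.linspace(start, depth, n_samples)   # sampling points in depth
    # transverse width grows linearly for the first 1.5*lambda, then saturates
    sigma = SIGMA_MAX * np.minimum((depths - start) / (1.5 * LAMBDA), 1.0)
    sigma = np.maximum(sigma, 1e-3)
    cells = np.zeros(len(CELL_CENTRES))
    for s in sigma:                                 # equal energy per depth sample
        theta = rng.normal(theta_beam, s)
        i = np.searchsorted(CELL_EDGES, theta) - 1
        if 0 <= i < len(cells):
            cells[i] += 1.0 / n_samples
    return cells

def weighted_centre(cells):
    return float(np.sum(cells * CELL_CENTRES) / np.sum(cells))

for theta_beam in (85.0, 87.0, 89.0):
    centres = [weighted_centre(one_shower(theta_beam)) for _ in range(500)]
    print(theta_beam, round(float(np.std(centres)), 2))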

4. THE SMALL-RADIUS Si CALORIMETER
4.1 Introduction
Previous hadron-collider studies (Lausanne and SSC Workshops) [1, 2] have examined hadron calorimeters based on scintillator or liquid-argon readout. The UA1 upgrading program is pushing warm-liquid readout from the status of an idea to an actual demonstration of technical feasibility [7]. This study has chosen to use silicon as the readout medium in a uranium-plate sampling calorimeter, based on the potential of this material for stable, electronics-limited gain calibration, simple mechanical assembly with few dead zones, and an almost unlimited segmentation potential. The pros and cons of silicon are listed below:

Advantages:
- Absolute gain for the sampling medium
- Gain adjustment and monitoring by radioactive sources
- Non-saturating readout, so sources give an absolute energy scale
- Fast charge collection
- Fine lateral and longitudinal segmentation, easy and feasible

Disadvantages:
- Small sampling fraction
- Expensive sampling material
- Susceptible to radiation damage

4.2 Calorimeter performance requirements
For hadron calorimetry at the LHC or at other ultra-high-energy colliders, the exact matching of the calorimeter response to the electromagnetic and hadronic shower components is crucial if we are to avoid a non-linear variation of the energy resolution with the number of sampling charges. The constant term in the energy-resolution function arises (among other effects) from departures from the ratio e/h = 1.0, where e/h is the ratio of the response to electrons and to hadrons of the same energy incident on the calorimeter [8]. Any non-zero constant term will limit the energy resolution to a few per cent at energies of several hundred GeV, rather than give the expected 1/√E improvement. A recent study [8] (exploiting the original ideas of Ref. [9]) of the role of fission neutrons in compensating for fluctuations in nuclear binding-energy losses during hadron cascades has demonstrated the fundamental need to detect low-energy fission and/or spallation neutrons in order to achieve the 'compensation' condition, e/h = 1.0. This requires the presence of an optimal density of protons in the system to transform neutron kinetic energy into a signal which can be detected in the sampling medium. In order to achieve this for low-energy fission neutrons, the protons must be in intimate contact with the sampling medium because of the short proton range. In scintillator this occurs automatically. For liquid-argon readout there is no compensation because there is no hydrogen; it is probable that not enough methane can be added as a dopant to achieve this compensation [10].
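The practical consequence of a non-zero constant term can be seen with a line of arithmetic. The sketch below uses our own numbers: the 0.46/√E sampling term quoted later for this design, and an assumed 2% constant term added in quadrature; it simply shows how the constant term takes over at a few hundred GeV.

# Illustrative arithmetic (our own numbers, not from the text): how a constant
# term b added in quadrature to a sampling term a/sqrt(E) limits the resolution
# at high energy, which is why e/h = 1.0 ('compensation') matters.
import math

def resolution(E, a=0.46, b=0.02):   # a in GeV^0.5; b = assumed 2% constant term
    return math.hypot(a / math.sqrt(E), b)

for E in (50, 200, 800):             # jet energies in GeV
    print(E, f"{resolution(E, b=0.0):.3f}", f"{resolution(E, b=0.02):.3f}")
# At 800 GeV the sampling term alone would give about 1.6%, but a 2% constant term
# already dominates, so the resolution no longer improves as 1/sqrt(E).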

For silicon sampling the question is how to introduce hydrogenous material. Bare U/Si calorimeters would have e/h > 1.1 according to Ref. [8]. However, the range of recoil protons is long enough that covering the silicon detectors with a thin polyethylene film (100 μm) should give enough neutron conversions to produce compensation. A fact that is crucial to this assumption is that silicon responds linearly to the slow protons produced, whilst most other media saturate and see only about 20% of the actual recoil-proton energy. An initial examination of this possibility for the model described in Ref. [8] indicated that one can indeed achieve e/h = 1.0 in a uranium/silicon calorimeter by this means.

In order to evaluate the calorimeter physics potential, a simulation has been made for the process W → jet + jet. The calorimeter simulation program used for this purpose is described in Section 3, and the result of the simulation is given in Section 6.

4.3 Silicon calorimetry
There is already a great deal of information available on silicon readout of electromagnetic sampling calorimeters. In this case, the neutron response is irrelevant. The arguments given for using silicon rather than other sampling media are: i) high effective density, leading to good shower confinement; ii) good long-term gain stability, calibrated absolutely by radioactive sources (in nuclear-physics applications, γ-spectroscopy detectors routinely maintain a gain stability of better than 0.3%, limited not by the silicon itself but by electronic gain stability and calibration); iii) simple mechanical assembly. The data from these tests strongly support the view that silicon detectors are an excellent choice for stable calorimetry. Very little (< 1%) change was observed over months of operation of a 24 X0 W/Si electromagnetic calorimeter using undepleted, low-resistivity silicon detectors. The energy resolution was found to be σ(E)/E = 17.6√(τ/E), where τ is the sampling frequency [11]. Based on these studies, the SICAPO Collaboration is constructing a prototype of a U/Si hadronic calorimeter with which to study the conditions required for a compensating silicon hadron calorimeter [12]. The active silicon area of the calorimeter will be 6.5 m².
Our working group has concentrated on several basic questions regarding the design and performance of a Si sampling calorimeter:
i) What is the minimum inner radius for the calorimetry, compatible with the aim of reconstructing high-energy massive states decaying into quark jets?
ii) What is the minimal segmentation, lateral and longitudinal, which will give good jet-reconstruction potential with minimal overlap confusion?
iii) What electromagnetic segmentation can be achieved?
iv) Can one make a hermetically sealed calorimeter?
v) What is the radiation environment? Can silicon survive?
vi) What would be the cost of a Si calorimeter?

4.4 CÓSICA, the COmpact Silicon CAlorimeter
The idea behind the small inner radius is that a calorimetric jet detector does not need an inner tracking chamber in order to do TeV-mass-scale jet physics. If the major job of the detector is to measure jet energies and angles in an optimal fashion and to provide strong lepton identification, then it seemed to us that a compact, high-density calorimeter starting close to the beam would have numerous advantages. We expect that it will be absolutely necessary to incorporate a microvertex detector to isolate the interaction point, and we suggest that a compact transition radiation detector (TRD) [13] should follow immediately to give a quick electron tag for triggers. It is assumed that these two devices will occupy the first 20 cm of the radius from the beam line. Thus the electromagnetic section of CÓSICA begins at a 20 cm inner radius. The basic design is a cylinder with end caps, since hermeticity is needed for all θ > 10°, according to the Working Groups on Standard Theory Physics and on New Physics. This simple design idea is modified slightly in the actual calorimeter geometry shown in Fig. 2. This calorimeter layout has the barrel region covering angles down to 30°. The effective variation in plate thickness is then a factor of 2, going from 90° to 30°. This worsens the expected energy resolution (because of sampling fluctuations) from 0.46/√E at 90° to 0.53/√E at 30° according to Ref. [8], but since the energy will increase, this seems to be acceptable. The two additional barrel segments at smaller angles are chosen so that the maximum variation of effective plate thickness is less than a factor of 2.


Fig. 2 CÓSICA, the COmpact Silicon CAlorimeter. The figure shows a complete jet-calorimeter apparatus for the LHC: CÓSICA (shaded area) plus a central vertex detector and TRD, with external magnet coil and muon steel for 4π coverage.

The limit to the angular coverage using the high-quality U/Si calorimetry is at 7°, owing to the radiation level near the beam pipe; this will be discussed later. A calorimeter for an LHC detector must meet two fundamental conditions: the construction cost must be acceptable, and its operating lifetime in the LHC (SSC) radiation environment must be long compared with one operating year at full luminosity. The fulfilment of these requirements is discussed in later subsections.

4.5 Electromagnetic section
The proposed parameters are given in Table 2. The lateral segmentation in the electromagnetic section is 2 cm x 2 cm cells, and the total depth is 20 X0.

Table 2

Parameters for the Si calorimeter

                                Electromagnetic section     Hadronic section

Absorber                        2 mm U                      4 mm U
Readout                         0.4 mm Si + 0.1 mm CH2      0.4 mm Si + 0.1 mm CH2
Cell thickness                  2.5 mm                      4.5 mm
Longitudinal segment            8 cells / 5 X0              13 cells / 0.5 λ
Number of segments              4                           12
Transverse cell size            2 cm x 2 cm                 2 cm x 2 cm
Silicon strip position sample:
  depth                         (5, 10, and 20) X0          (0.5, 1.0, and 1.5) λ
  pitch                         0.5 mm                      0.5 mm
Energy resolution               15%/√E                      46%/√E
e/h ratio                       1.0                         1.0
Spatial resolution              0.2 mm                      2 mm


Fig. 3 Electron position accuracy at 5 X0 sampling depth using 500 μm pitch silicon detectors. Comparison of the charge-weighted centroid with the actual input position for 100 showers from 50 GeV electrons, using EGS [14].

Because high-energy electromagnetic showers are much more sharply defined than a 2 cm cell, especially in the first 10 X0, the spatial resolution needs to be improved. We propose to include silicon strip-detector layers at (5, 10, and 20) X0 to give submillimetre spatial resolution on, for example, the electron position. This, together with a fine-grained TRD [13], should allow the observation of electrons within a dense particle environment. The electron angular resolution in this device will be 1 mrad, whilst a typical jet will have 50% of the energy flux in a cone angle of 50 mrad, carried by 5-10 leading particles [3]. Figure 3 shows an EGS [14] simulation of the position error on 50 GeV electrons in one projection at the 5 X0 depth sampling plane. The FWHM is 0.4 mm, so this method of isolating electrons seems to be quite promising. The SICAPO Collaboration is carrying out some experimental studies along these lines for a HERA detector. Note that these sampling planes considerably reduce the effective cell size of the electromagnetic section. Rather than calculating δη × δφ from the 2 cm x 2 cm segmentation, the two-shower separation area should be used. For electrons of energies above 50 GeV, 90% of the shower energy is contained within ± 2 mm, according to the EGS. Thus, taking 5 mm as a conservative two-shower separation distance, the electromagnetic section has an effective segmentation of 0.02 x 1.5°, rather than 0.08 x 6° for the full 2 cm x 2 cm area.
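A minimal sketch of the charge-weighted centroid estimate on a 0.5 mm pitch strip layer follows. The narrow Gaussian charge profile, the noise level, and the number of strips are assumptions made for illustration only; they are not the EGS shower profile used for Fig. 3.

# Sketch of the charge-weighted centroid position estimate on a 0.5 mm pitch
# silicon strip layer, as used above for the electron position.
# The transverse charge profile (a narrow Gaussian core) is an assumption made
# for illustration; it is not the EGS shower profile quoted in the text.
from math import erf
import numpy as np

rng = np.random.default_rng(0)
PITCH = 0.5                                  # strip pitch in mm

def strip_charges(x_true, core_sigma=0.8, n_strips=21, noise=0.01):
    """Integrate an assumed Gaussian charge profile over each strip, plus noise."""
    edges = (np.arange(n_strips + 1) - n_strips / 2) * PITCH
    cdf = np.array([0.5 * (1 + erf((e - x_true) / (core_sigma * np.sqrt(2)))) for e in edges])
    q = np.diff(cdf) + rng.normal(0, noise, n_strips)   # strip charges + noise
    return np.clip(q, 0, None), 0.5 * (edges[:-1] + edges[1:])

def centroid(q, centres):
    return float(np.sum(q * centres) / np.sum(q))       # charge-weighted centroid

residuals = []
for _ in range(1000):
    x_true = rng.uniform(-0.25, 0.25)                   # impact point within one pitch
    q, centres = strip_charges(x_true)
    residuals.append(centroid(q, centres) - x_true)
print("r.m.s. residual (mm):", round(float(np.std(residuals)), 3))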

4.6 Hadronic section
The hadron segmentation is 2 cm x 2 cm x 0.5 λ for the first six absorption lengths, then 10 cm x 10 cm x 1 λ for three additional absorption lengths. The silicon section is followed by an Fe/scintillator tail-catcher of magnetized iron that also serves as the first part of the muon detector. This fine segmentation is unusual but very advantageous for a small-radius device, and circumvents the complication of building a projective device. With fine-grained longitudinal and lateral sampling, the energy-weighted centroid of the shower at each sampling is readily calculated and the shower direction defined. Results of a simulation will be presented later.

In a U/Si calorimeter the additional electronics costs of making fine segmentation are small compared with the cost of the silicon detectors themselves; but the pattern-recognition benefits are enormous, as demonstrated in Section 6. For this detector the segmentation in Table 2 gives δη × δφ = 0.05 × 3° after the first hadronic sampling section (r = 41 cm, depth = 1.12 λ).
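The following sketch (again our own toy, not the simulation of Section 3) illustrates how such a non-projective but finely segmented device can recover the shower direction: the energy-weighted centroid is computed in each longitudinal sampling and a straight line is fitted through the centroids.

# Sketch of shower-direction reconstruction from layer-by-layer energy-weighted
# centroids in a finely segmented, non-projective calorimeter.
# The shower profile below is a toy assumption, not the parametrization of Section 3.
import numpy as np

rng = np.random.default_rng(2)

def toy_shower(x0=0.0, slope=0.10, n_layers=12, cells=np.arange(-10, 10.5, 2.0)):
    """Return (depths, centroids): one transverse centroid per longitudinal layer."""
    depths = np.arange(n_layers) + 0.5            # layer centres, in units of 0.5 lambda
    centroids = []
    for d in depths:
        x_true = x0 + slope * d                   # true shower axis at this depth
        sigma = 1.0 + 0.2 * d                     # widening transverse spread (assumed)
        deposits = rng.normal(x_true, sigma, 300) # energy deposits in this layer
        e, edges = np.histogram(deposits, bins=np.append(cells, cells[-1] + 2.0))
        centres = 0.5 * (edges[:-1] + edges[1:])
        centroids.append(np.sum(e * centres) / np.sum(e))
    return depths, np.array(centroids)

depths, cent = toy_shower()
fit_slope, fit_x0 = np.polyfit(depths, cent, 1)   # straight-line fit through centroids
print("true slope 0.10, fitted slope", round(float(fit_slope), 3))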

4.7 Costs
Although it is premature to estimate the overall cost with any degree of reliability, we want to ensure that the proposed device is not 'obviously prohibitively expensive'. To estimate this cost, we summarize the components in Table 3. The amounts given in Table 4 include those for the silicon detectors, cables, and readout, as far as the anticipated analog pipeline out to the trigger/readout electronics. They do not include any digitization cost, nor the cost of the uranium plates.

Table 3

Components in the barrel region of the Si calorimeter. (The end caps increase these requirements by 50%.)

                        Electromagnetic section    Hadronic section

Inner radius            20 cm                      34 cm
Outer radius            34 cm                      129 cm
Silicon area:           82 m²                      1320 m²
  readout channels      26,000                     158,000
  microplex chips       204                        1250
SSD area:               10 m²                      11.5 m²
  readout channels      15,000                     18,500
  microplex chips       120                        150

Table 4

Cost of the Si calorimeter

Estimated cost parameters:
  Silicon detectors: SF 2 per cm²
  Silicon strip detectors: SF 8 per cm²
  Microplex chips, preamps., and cables: SF 20 per channel
  Uranium plates are not included.

                   Barrel (in MSF)    End caps (in MSF)
Detector cost      28                 13
Readout            4                  2
Silicon strips     2                  1
Strip readout      1                  0.5
Total              35                 16.5

The assumption is that the cost of the Si detectors for CÓSICA will decrease from the SF 6 per cm² at present being paid for the SICAPO devices to SF 2 per cm². This reduction assumes that the cost of the bulk silicon is reduced as well. For the readout we use microplex chips [15, 16] at SF 20 per channel, including cabling and on-board radiation-hardened preamplifiers to drive 2 m of cable out to the microplex chips. As cited in Table 3, this detector requires 2100 m² of silicon detectors. This is a factor of 40-60 more than the overall needs of current projects. Clearly, good industrial engineering techniques will be needed, but large-scale production of semiconductors is quite usual nowadays. The investment in silicon is MSF 41 for pad detectors, and a further MSF 3 for silicon strip detectors. The 326,000 readout channels take 2580 microplex chips. Costs are estimated at MSF 7.
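The quoted totals can be cross-checked with the simple arithmetic below (our own bookkeeping: barrel quantities from Table 3 scaled by 1.5 for the end caps, unit costs as listed above); small rounding differences with respect to the tables are to be expected.

# Rough consistency check of Tables 3 and 4: scale the barrel quantities by 1.5
# (end caps add 50%) and apply the quoted unit costs. Small rounding differences
# with respect to the tables are expected.
barrel = {
    "pad_area_cm2": (82 + 1320) * 1e4,        # EM + hadronic silicon area
    "pad_channels": 26_000 + 158_000,
    "ssd_area_cm2": (10 + 11.5) * 1e4,
    "ssd_channels": 15_000 + 18_500,
    "microplex":    204 + 1250 + 120 + 150,
}
total = {k: 1.5 * v for k, v in barrel.items()}   # barrel + end caps

SF_PER_CM2_PAD, SF_PER_CM2_SSD, SF_PER_CHANNEL = 2, 8, 20

print("silicon area (m2):", total["pad_area_cm2"] / 1e4)                   # ~2100 m2
print("readout channels:", total["pad_channels"] + total["ssd_channels"])  # ~326 000
print("microplex chips:", total["microplex"])                              # ~2590
print("pad detectors (MSF):", total["pad_area_cm2"] * SF_PER_CM2_PAD / 1e6)
print("strip detectors (MSF):", total["ssd_area_cm2"] * SF_PER_CM2_SSD / 1e6)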

4.8 Radiation damage
Can the silicon survive just outside this enormous radiation source? Whilst there is much experience of radiation damage to silicon, very little of it is relevant to calorimetry. Most studies have concentrated on the increase in leakage current under radiation dose. The majority of experiments have used fully depleted detectors made with high-resistivity silicon; the damage is worse the higher the resistivity. The only studies that measured the energy resolution of a detector before and after radiation exposure were done at CERN. Two detectors, one of 3.7 kΩ·cm and the other of 9 kΩ·cm resistivity, were exposed. The first survived a dose of 0.7 × 10³ Gy with little effect; the other became unusable for calorimetry at a dose of 10⁴ Gy [17]. A similar, earlier exposure of a low-resistivity device indicated a safe operating dose for calorimetry of at least 2.5 × 10³ Gy [18].
The radiation dose for CÓSICA has been simulated using the CERN radiation-safety program FLUKA [19]. The procedure is described in Section 5, and gives the dose corresponding to 10⁷ s of operation at a luminosity of 10³³ cm⁻² s⁻¹, or one machine-year of good operation. As one would expect, the end caps receive the highest dose. The limiting dose of 2.5 × 10³ Gy in the end caps was reached at θ = 7°. Translating this into the apparatus shown in Fig. 2, the highest radiation dose is given to the corners of the two tilted segments closest to the beam. For this design, the dose at these corners is less than the fatal limit. Most of the detector receives a much lower dose. Therefore any radiation damage will be highly local, and good monitoring, using radioactive sources placed on the detector and monitored during 'dump and fill' periods, will give a continual record of the resolution of the various detector segments. In this way any damage can be caught at an early stage and a replacement scheme can be implemented for a few detectors.
There is still one unknown factor. In a U/Si calorimeter, each high-energy (> 100 GeV) particle generates several thousand low-energy neutrons that form a 'gas' percolating through the detector over distances of the order of several interaction lengths. One of these neutrons may traverse many silicon layers. Neutrons of energy as low as 1 keV can produce dislocations in the silicon lattice. Therefore there may be enhanced radiation damage, which is not included in the FLUKA model. It is vital that this effect be looked into experimentally as part of an overall study of the use of silicon detectors for hadron calorimetry.

4.9 Calorimeter performance
The calorimeter Monte Carlo used here was described in Section 3. This program was used to simulate the calorimeter's response to jets from the decay sequence H → W⁺W⁻ → 4 jets, where the Higgs is produced at rest. Higgs masses of 0.6-1.0 TeV were used, and W angles of 45° and 90° from the beams were compared. The simulation, described in Section 6, gives a resolution of 8.5% in the W mass for 90° incidence, so the compact nature of this calorimeter does not cause any problems when making a very good reconstruction of high-energy W decays. As the polar angle of the W is decreased, the resolution worsens somewhat because of the non-projective geometry. However, the effects are not large and can be reduced by a more sophisticated jet algorithm. The simulation shows that this device will do an excellent job of jet separation and reconstruction.

This detector has other useful features for LHC physics. We propose to insert three additional silicon strip layers in the hadronic calorimeter. One use for these layers is to look for non-showering tracks that stay, for two interaction lengths, within the trajectory limited by multiple Coulomb scattering. This would give an early indication of a muon and may be useful for trigger schemes. Another, more speculative, application of these layers could be that of giving greatly enhanced position information about jets. In the experimental study of hadron shower development in uranium [20], it was noticed that the total ionization peak, monitored by dosimeters, was very narrow, only a few millimetres FWHM. The fission-product distribution, on the other hand, had a width characteristic of an interaction length. This suggests that the electromagnetic component of high-energy showers, carrying typically 50% of the incident energy for high-energy particles, remains tightly collimated around the incident particle direction. Therefore, using the charge-weighted centroid in silicon strip layers may give a very good measure of the leading particles in a quark or gluon jet. This idea must be explored experimentally to see if it is a useful property, or if fluctuations will obliterate any improvement over the trajectory information from the full pad structure. However, if this possibility is borne out, then the jet resolution may be significantly improved. A complete calorimeter layout, including non-silicon-readout end-cap coverage at θ < 7°, a magnet coil, and muon coverage (99% of 4π), is shown in Fig. 2. The end-cap coverage down to 10 mrad will pick up quark jets close to the beam as an additional tag on the production of, for example, a Higgs through WW fusion: the quarks which radiate the W's forming the Higgs receive a transverse-momentum kick. The radiation level prohibits the use of silicon as a readout medium in this region. However, other media could be considered, e.g. circulating liquid scintillator, or simple scintillator modules that could easily be changed. Thus the prospects should be good for making a 'missing total energy' trigger for some new physics studies, and the calorimetric coverage could be completed by extending the calorimetry all along the beam pipe. This is discussed in Section 7.

4.10 Conclusion
A set-up using a compact calorimeter with microvertex and lepton-tagging detectors seems to be capable of doing much of the new physics to be explored in the TeV mass region. The calorimetry is of very high quality. The idea of subdividing the cells with additional silicon devices may give even better angle and position information. There remain some important questions about radiation tolerance which should be explored with a well-planned and well-funded experimental program aimed at making a prototype U/Si hadronic calorimeter within the next few years.

5. DOSE TO THE U/Si CALORIMETER
5.1 The calculation
The event generator used to simulate the pp interactions was PYTHIA of the Lund model [21]. A file of 44,000 secondaries from 250 events at 10 + 10 TeV was produced. The file was randomized before being used in the subsequent analysis. Particles from this file were used as input to the Monte Carlo cascade program FLUKA [19]. The PYTHIA input file was not subdivided into batches, i.e. no statistical errors were obtained. Leading-particle biasing was used in FLUKA and EGS4. The geometries used to simulate the calorimeter structures are illustrated in Figs. 4 and 5. For the lateral calculation, the calorimeter (Fig. 4) was represented by eight pairs of concentric cylinders of uranium (3 mm thick) and aluminium (1 mm thick). Aluminium was used instead of silicon, since a PEGS data file for use by EGS at collider energies was not available at the time of the calculation. An inner radius of 20 cm was assumed, and the half-length of the calorimeter was taken to be 120 cm, divided into 10 cm long bins. The forward calorimeter, illustrated in Fig. 5, was simulated by 11 pairs of discs composed of 3 mm thick uranium and 1 mm aluminium (outer radius 25 cm), placed perpendicularly to the beam direction, starting at 115 cm from the interaction point. The vacuum chamber was represented by an empty cylinder of 2 cm radius passing through the calorimeter structure.

Fig. 4 Geometry for the lateral calorimeter simulation. All dimensions are given in centimetres.


Fig. 5 Geometry for the forward calorimeter simulation. All dimensions are given in centimetres.

The value of the energy deposited in different parts of the aluminium structure was stored in the FLUKA calculation.

5.2 Results and conclusions
The estimated dose per year in the aluminium plates, based on an integrated luminosity of 10⁴⁰ cm⁻², is given in Tables 5a and 5b. These tables give both the total dose and the part deposited by electrons in the electromagnetic showers coming from incident photons and from the decay of produced π⁰ mesons. In the barrel the maximum dose occurs in the second layer of aluminium in the 40-50 cm bin downstream of the interaction point: the value of the yearly integrated dose is 8 × 10³ Gy. Note that in the calculations described here, neither the extra energy due to photons from fission in the uranium nor the energy deposition by fission-produced neutrons is considered. To account for this, calculations using the HETC code [22] coupled to the neutron-transport code MORSE [23] must eventually be made.

Table 5a

Annual dose from the lateral calculation (Gy/y)

z (cm)      Radius (cm):
            20.3-20.4   20.7-20.8   21.1-21.2   21.5-21.6   21.9-22.0   22.3-22.4   22.7-22.8

Total energy density (dose)

0-10        1.7 × 10³   3.2 × 10³   2.7 × 10³   2.8 × 10³   2.9 × 10³   1.7 × 10³   1.4 × 10³
10-20       2.6 × 10³   2.9 × 10³   2.9 × 10³   1.9 × 10³   1.6 × 10³   2.9 × 10³   9.9 × 10²
20-30       3.6 × 10³   2.7 × 10³   2.3 × 10³   2.1 × 10³   1.4 × 10³   1.4 × 10³   9.6 × 10²
30-40       2.6 × 10³   4.7 × 10³   2.2 × 10³   1.2 × 10³   1.5 × 10³   1.1 × 10³   8.9 × 10²
40-50       6.9 × 10³   7.9 × 10³   2.8 × 10³   2.1 × 10³   1.1 × 10³   1.6 × 10³   1.8 × 10³
50-60       5.6 × 10³   6.8 × 10³   2.0 × 10³   1.7 × 10³   8.7 × 10²   1.1 × 10³   1.3 × 10³
60-70       5.4 × 10³   5.1 × 10³   2.9 × 10³   1.1 × 10³   1.3 × 10³   4.1 × 10³   1.1 × 10³
70-80       5.2 × 10³   3.9 × 10³   2.4 × 10³   1.4 × 10³   2.9 × 10³   9.5 × 10²   1.3 × 10³
80-90       4.8 × 10³   3.2 × 10³   2.6 × 10³   1.3 × 10³   1.0 × 10³   7.8 × 10²   9.2 × 10²
90-100      7.0 × 10³   3.8 × 10³   2.3 × 10³   1.6 × 10³   1.3 × 10³   3.2 × 10³   1.0 × 10³
100-110     6.5 × 10³   1.7 × 10³   3.0 × 10³   2.1 × 10³   9.6 × 10²   6.1 × 10²   1.1 × 10³
110-120     2.0 × 10⁴   1.0 × 10⁴   2.2 × 10³   1.5 × 10³   1.0 × 10³   3.3 × 10³   1.0 × 10³

e.m. energy density (dose)

0-10        9.8 × 10²   2.4 × 10³   2.0 × 10³   1.6 × 10³   2.0 × 10³   9.7 × 10²   6.1 × 10²
10-20       1.5 × 10³   1.9 × 10³   2.0 × 10³   1.3 × 10³   9.2 × 10²   1.5 × 10³   4.6 × 10²
20-30       2.2 × 10³   1.5 × 10³   1.2 × 10³   1.2 × 10³   7.6 × 10²   7.3 × 10²   3.8 × 10²
30-40       1.5 × 10³   2.6 × 10³   1.3 × 10³   5.8 × 10²   4.7 × 10²   2.9 × 10²   1.0 × 10²
40-50       5.6 × 10³   6.4 × 10³   1.6 × 10³   1.0 × 10³   2.1 × 10²   7.4 × 10²   9.3 × 10¹
50-60       3.9 × 10³   5.4 × 10³   1.2 × 10³   6.2 × 10²   2.7 × 10²   2.4 × 10²   4.5 × 10²
60-70       3.5 × 10³   3.7 × 10³   1.6 × 10³   2.8 × 10²   3.7 × 10²   3.3 × 10²   1.2 × 10²
70-80       3.6 × 10³   1.7 × 10³   1.0 × 10³   4.6 × 10²   1.3 × 10²   8.8 × 10¹   7.2 × 10¹
80-90       3.8 × 10³   1.5 × 10³   1.3 × 10³   1.3 × 10²   2.3 × 10²   1.0 × 10²   6.2 × 10¹
90-100      3.3 × 10³   1.1 × 10³   1.9 × 10²   1.6 × 10²   1.3 × 10²   1.4 × 10³   1.6 × 10²
100-110     5.0 × 10³   7.5 × 10²   8.3 × 10²   6.3 × 10²   1.7 × 10²   8.3 × 10¹   2.2 × 10²
110-120     1.8 × 10⁴   7.6 × 10³   3.8 × 10²   6.0 × 10²   1.3 × 10²   7.2 × 10²   2.7 × 10²

It should also be realized that dose levels in the range 10⁴ to 10⁵ Gy are sufficient to render useless most active solid-state electronic circuits [24, 25], to darken optical fibres [26] and plastic scintillators [27], and to cause damage to silicon radiation detectors [12].

In the forward case, an annual dose of 10⁴ Gy would be exceeded at radii of less than 20 cm; 10⁵ Gy would be exceeded at radii of less than 8 cm. Again, caution with regard to the neglect of fissioning in these calculations needs to be expressed: it is estimated that this could double the dose levels described here, but the damage level in a silicon detector could be increased by an additional factor owing to the increased sensitivity of the detector to fission neutrons.

Table 5b

Annual dose from the forward calculation (Gy/y)

z (cm)          Radius (cm):
                2-4      4-6      6-8      8-10     10-12    12-16    16-20

Total energy density (dose)

115.3- 115.4 1.9 X 10s 7.6 x 104 2.9 x 10" 2.8 x 104 8.1 X 103 7.7 x 103 3.9 x 103 115.7 - 115.8 4.9 X 105 2.2 x 105 4.9 x 104 2.8 x 10 4 1.8 x 104 1.2 x 104 6.4 x 103 116.1 116.2 8.4 x 105 2.2 x 105 1.3 x 105 5.3 x 10" 4.3 x 104 1.9 x 104 1.3 x 103 116.5 116.6 1.1 x 106 2.9 x 105 1.1 x 105 5.7 x 104 3.9 x 104 3.4 x 104 9.1 x 103 116.9- 117.0 1.7 x 106 2.7 x 105 4.1 x 105 6.7 x 104 5.1 x 104 3.7 x 104 3.2 x 103 117.3 117.4 1.7 x 10s 2.9 x 105 1.3 x 105 6.4 x 10 4 3.7 x 104 2.4 x 10 4 1.6 x 103 117.7 - 117.8 1.6 x 106 5.8 x 105 1.1 x 105 5.1 x 104 1.1 x 10s 3.8 x 104 9.5 x 103 118.1 ••118. 2 1.5 x 106 4.2 x 105 1.9 x 105 7.4 x 10 4 4.1 x 104 3.0 x 104 1.5 x 103 118.5- 118.6 1.6 x 106 3.3 x 105 1.2 x 105 7.9 x 104 1.3 x 105 2.4 x 104 9.4 x 103 118.9- 119.0 1.9 x 106 5.0 x 105 1.6 x 105 5.1 X 104 3.8 x 10" 1.1 x 104 1.7 x 103 119.3- 119.4 1.9 x 106 8.0 x 10 s 9.7 x 104 4.0 X 104 1.6 x 104 8.2 x 103 x 103

e.m. energy density (dose)

115.3- 115.4 8.8 x 104 2.2 x 104 1.4 x 104 7.3 x 103 3.6 x 103 3.1 x 103 1.6 x 102 115.7- 115.8 4.2 x 10s 1.7 x 105 3.1 x 104 2.1 x 104 1.3 x 104 8.8 x 103 2.7 X 102 116.1- 116.2 6.9 x 105 1.8 x 10s 9.9 x 104 3.5 X 10" 3.8 x 10" 1.4 x 104 7.7 x 103 116.5- 116.6 1.0 x 106 2.2 x 105 9.7 x 104 3.3 x 104 3.3 x 104 3.0 x 104 7.0 x 102 116.9- 117.0 1.4 x 106 2.1 x 105 3.9 x 105 5.3 x 104 4.7 x 104 3.0 x 10" 2.9 x 103 117.3- 117.4 1.5 x 106 2.4 x 105 1.2 x 105 5.5 x 104 3.1 x 104 1.6 x 104 1.5 x 103 117.7- 117.8 1.4 x 106 5.1 x 105 9.6 x 104 3.7 X 104 1.0 x 105 2.0 x 104 6.2 x 103 118.1- 118.2 1.4 x 106 3.8 x 10s 1.4 x 105 5.8 x 104 3.6 x 104 2.7 x 104 6.5 x 102 118.5- 118.6 1.4 x 106 3.0 x 105 8.9 x 104 6.2 x 104 1.2 x 10s 1.2 x 104 7.6 x 102 118.9- 119.0 1.7 x 106 4.5 x 105 1.2 x 105 4.2 x 104 2.8 x 10" 6.9 x 103 1.3 x 102 119.3- 119.4 1.7 x 106 7.5 x 105 7.1 x 104 3.4 x 104 9.5 x 103 4.7 x 103 1.7 x 102

6. PERFORMANCE OF THE SMALL-RADIUS CALORIMETER
6.1 The test case
The hadronic decay of W bosons originating from a heavy Higgs at rest has been used as a test case to study the behaviour of various calorimeter set-ups. The calorimeter has to perform the following tasks:
- recognize and resolve two-jet structures from W decays;
- identify the W by measuring its mass as precisely as possible;
- measure the energy and angle of the W.
W decays coming from a Higgs boson of 600 GeV or 1000 GeV mass produce jet pairs with a separation of typically 32° or 17°, respectively (a simple kinematic estimate of these angles is sketched below). The task of resolving these jets is impeded by the following effects:
- gluon radiation (jet broadening);
- fragmentation;
- spread of showers in the calorimeter.
All these effects have been taken into account in the following studies. Technically, a W boson (m = 83 GeV) is first produced at rest and then boosted according to the assumed Higgs mass. The W then decays isotropically into a quark-antiquark pair, which is transformed into the jet final state by gluon radiation and fragmentation according to the Lund model [28]. Finally, the response of the calorimeter to the fragmentation products is simulated. The calorimeter performance is measured using the following criteria:
- two-jet reconstruction efficiency;
- two-jet mass resolution;
- angular resolution (quality of the opening-angle measurement).
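The quoted opening angles can be checked with elementary kinematics; the sketch below is our own back-of-the-envelope estimate (opening angle of roughly 2/γ_W for a W from a Higgs at rest), not part of the simulation chain.

# Back-of-the-envelope check (our own estimate) of the quoted two-jet opening
# angles: for H -> W W with the Higgs at rest, each W has energy m_H/2, and the
# typical opening angle between the two decay jets is roughly 2/gamma_W.
import math

M_W = 83.0                      # GeV, value used in the text
for m_H in (600.0, 1000.0):
    gamma_W = (m_H / 2.0) / M_W
    angle = math.degrees(2.0 / gamma_W)
    print(f"m_H = {m_H:.0f} GeV: gamma_W = {gamma_W:.1f}, typical opening angle ~ {angle:.0f} deg")
# -> roughly 32 deg and 19 deg, consistent with the 32 deg / 17 deg quoted above.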

6.2 The calorimeter simulation
6.2.1 The calorimeter response
The calorimeter response, simulated according to Section 3, affects both the angular resolution and the energy resolution of the generated jets. A correct simulation is therefore essential for realistic studies. The following list summarizes the parameters used:
- optimized longitudinal and transverse shower parametrization, as described in Section 3;
- relative response to electrons and pions, e/π = 1.0;
- relative response to electrons and muons, μ/e = 1.4;
- relative electromagnetic resolution, σ(E)/E = 15%/√E;
- relative hadronic resolution

6.2.2 Calorimeter geometry
The basic set-up for this study is a barrel with two sections of different longitudinal and transverse segmentation (summarized schematically after Fig. 6):
i) A fine-grained FRONT part
- to have a very good sampling of the complete electromagnetic fraction of the jet as well as of the main hadronic part;
- to provide the information for the jet pattern recognition and the two-jet separation power.
The total depth of the front part is 6 hadronic absorption lengths λ in 12 consecutive samplings. The transverse sampling T was varied as described in the results.
ii) A coarser BACK part
- to obtain a complete measurement of the hadronic-jet energy.
The depth of the back part is again 6 hadronic absorption lengths, but in only 3 samplings, with a relatively coarse transverse segmentation of 10 cm x 10 cm.
As already mentioned, the main jet-separation power is expected to come from the front part. The back part will mainly provide the full measurement of the hadronic energy. Figure 6 illustrates the jet separation for a single event (W boson from a 600 GeV Higgs) generated in the standard compact calorimeter with 20 cm inner radius and 2 cm x 2 cm transverse sampling in the front part. Each Lego plot represents a 100° x 100° window in the 15 longitudinal layers of the calorimeter. The thick line divides the figure into the front part and the back part. The energy scale is different and arbitrary for each plot. The two-jet topology is clearly visible in the first layers and is then washed out by the spread of the hadronic shower component. It is evident that the early, and clearly separated, development of the two electromagnetic components is of great help for the pattern recognition. This plot motivates the longitudinal structure chosen and described in this subsection.

Fig. 6 W boson originating from a 600 GeV Higgs producing a jet pair in the 15 longitudinal layers of the standard calorimeter
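For reference, the geometry and response parameters used in this test case can be collected as follows (values taken from Table 2 and Section 6.2; the layout of the summary itself is ours):

# Compact summary (our own layout; values from Table 2 and Section 6.2) of the
# simulated two-section barrel geometry used in the test case.
CALORIMETER_GEOMETRY = {
    "inner_radius_cm": 20.0,
    "front": {                      # fine-grained part: pattern recognition, jet separation
        "depth_lambda": 6.0,
        "longitudinal_samplings": 12,            # 0.5 lambda each
        "transverse_cell_cm": (2.0, 2.0),        # varied in the study
    },
    "back": {                       # coarse part: completes the hadronic energy measurement
        "depth_lambda": 6.0,
        "longitudinal_samplings": 3,
        "transverse_cell_cm": (10.0, 10.0),
    },
    "response": {
        "e_over_pi": 1.0,
        "mu_over_e": 1.4,
        "em_resolution": "15%/sqrt(E)",
    },
}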

6.3 Reconstruction techniques
Fine-grained calorimeters can, in principle, reconstruct the W mass using the measured cell energies directly. Each cell is assumed to represent a particle with the energy given by the response of the cell and zero mass. However, this method encounters problems in the environment of hadron machines, where the energy flow of the underlying event is added to the signal from the W decay products. A more practical approach is the following two-step procedure:
i) reconstruction of energy clusters (and possible energy-correction techniques);
ii) calculation of the mass of the cluster system.

Fig. 7 W mass reconstruction (m_jj, in GeV/c²) using single cells (dashed line) and reconstructed clusters (full line)

Fig. 8 Ratio of the generated two-parton opening angle to the reconstructed two-jet opening angle

For this study the LUCLUS algorithm from the Lund JETSET package was used [29]. Figure 7 shows a comparison between the two approaches for the W mass reconstruction. The curves have been obtained for the compact calorimeter described in the results (20 cm inner radius, 2 cm x 2 cm transverse sampling in the front part). The W boson comes from a Higgs with a mass of 600 GeV. The dashed line shows the reconstructed mass using all cells in the calorimeter, including the energies of non-interacting particles (neutrinos and muons). The full curve is the result of the cluster-reconstruction algorithm, as in a real experimental environment. The advantage of the cell method is evident (3.3 GeV r.m.s. compared with 7.1 GeV r.m.s. from the cluster method). The quality of the cluster reconstruction is demonstrated in Fig. 8, where the ratio between the generated two-parton opening angle and the observed two-jet opening angle is shown. The mean of the distribution is 0.96, with an r.m.s. spread of 17%.
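The 'single cell' approach described above amounts to treating every cell as a massless particle pointing from the vertex to the cell centre and computing the invariant mass of the system; a minimal sketch follows (the three-cell 'event' at the end is a made-up illustration, not simulation output).

# Sketch of the 'single cell' mass reconstruction described above: each calorimeter
# cell is treated as a massless particle of energy E pointing from the vertex to the
# cell centre (theta, phi), and the invariant mass of the selected cells is computed.
import math

def cell_four_vector(E, theta_deg, phi_deg):
    th, ph = math.radians(theta_deg), math.radians(phi_deg)
    return (E,
            E * math.sin(th) * math.cos(ph),
            E * math.sin(th) * math.sin(ph),
            E * math.cos(th))

def invariant_mass(cells):
    """cells: iterable of (E, theta_deg, phi_deg); returns the invariant mass."""
    tot = [0.0, 0.0, 0.0, 0.0]
    for c in cells:
        p = cell_four_vector(*c)
        tot = [a + b for a, b in zip(tot, p)]
    E, px, py, pz = tot
    return math.sqrt(max(E * E - px * px - py * py - pz * pz, 0.0))

# toy three-cell 'event' (illustration only, not simulation output)
toy_cells = [(60.0, 90.0, 0.0), (55.0, 90.0, 70.0), (10.0, 80.0, 35.0)]
print(round(invariant_mass(toy_cells), 1), "GeV")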

6.4 Results
This subsection summarizes the results on the two-jet reconstruction efficiency ε and the W mass resolution obtained for a compact calorimeter set-up with an inner radius of 20 cm.

6.4.1 The effect of the transverse sampling in the front part
The following defines the standard test case:
- transverse sampling T in the front part: 2 cm x 2 cm;
- 90° incidence of the generated W with respect to the calorimeter surface;
- Higgs mass 600 GeV.
The performance obtained with this set-up (see the previous figures) is
- two-jet reconstruction efficiency ε = 87%;
- W mass resolution 7.1 GeV r.m.s.
Figure 9 shows the dependence of ε on the transverse sampling T in the front part of the calorimeter in the extreme case of a 1000 GeV Higgs. A rapid degradation of ε is observed when the transverse cell size increases beyond 5 cm.


Fig. 9 Two-jet reconstruction efficiency versus transverse cell size in the front part of the calorimeter (Higgs mass 1000 GeV)

This clearly demonstrates the need for fine transverse sampling (e.g. 2 cm) in the case of such a compact small-radius calorimeter.

6.4.2 The effect of the boost
The effect of the boost can be seen by comparing the case of a 600 GeV Higgs decay with that of a 1000 GeV Higgs decay:
- ε decreases from 87% in the 600 GeV case to 77% in the 1000 GeV case;
- the mass resolution stays unchanged (7.1 GeV r.m.s. at 600 GeV compared with 6.8 GeV r.m.s. at 1000 GeV) for those cases where the cluster recognition was successful in finding two jets.
The degraded angular resolution is compensated by the better energy resolution due to the higher boost of the fragmentation products.

6.4.3 The effect of inclination
Changing the W angle of incidence with respect to the calorimeter surface from 90° to 45° affects the calorimeter performance as follows:
- the two-jet separation probability ε is reduced from 87% to 77%;
- the two-jet mass resolution is degraded from 7.1 GeV r.m.s. to 8.4 GeV r.m.s.
This can be explained by the non-projective barrel geometry of this calorimeter set-up. Inclined showers are smeared transversely and the effective longitudinal sampling is reduced.

7. FORWARD CALORIMETER AND TOTAL ENERGY MEASUREMENT
7.1 Why forward calorimetry?
Although these days one frequently speaks of 'hermetic' or 4π calorimetry, the calorimetry does, in fact, have holes, at least for the beam pipes. The beam fragment jets are centred on these holes, with the result that generally a large fraction of the collision energy is undetected. For example, in the UA1 calorimeters, which extend down to θ = 0.2°, the total energy distribution peaks around 500 GeV even for highly inelastic events (with E_T = 150 GeV) at √s = 630 GeV. This loss is not surprising, as the probability of having a very energetic small-angle particle is high (all particles with p_T < 300 MeV/c and p_L > 85 GeV/c will have θ < 0.2°). To collect a similar fraction of the total energy at the much higher energy of the LHC, one would need to scale down the minimum angle by approximately the ratio of the beam energies, i.e. by ≈ 0.3 TeV/8 TeV ≈ 0.04, to ≈ 0.14 mrad. This is clearly not possible using normal techniques, with the superconducting low-beta quadrupoles and other machine elements starting at distances of ≈ 15 to 20 m from the intersection (0.14 mrad at 20 m = 2.8 mm). Nevertheless, all secondary particles from an inelastic collision will eventually leave the pipe, which accepts only x_F = 1, p_T = 0 positive particles, i.e. the non-interacting protons. In principle one could collect and measure all this energy by putting calorimeters in front of and in between the machine elements and even, where necessary, calorimetrizing the machine elements themselves. We shall see later how this might be possible.
What would be the advantages of truly hermetic energy measurement? ('Truly hermetic' meaning that only non-interacting or small-t elastically scattered protons escape geometrically, and only muons, neutrinos, and other weakly interacting particles escape through the calorimeter.) The four examples given below come readily to mind.

7.1.1 Counting the number of collisions or selecting single collisions
For the higher-energy colliders we are forced, by the falling cross-sections (σ ∝ 1/m_x), to go to very high luminosities. This means that we will usually have to work in conditions where 'pile-up' is common, i.e. more than one independent collision in the sensitive time of the detectors. Figure 10 is instructive; it shows the number of collisions per second which occur alone in a bunch crossing, as a function of luminosity. The conditions (solid line) are t = 25 ns between bunch crossings and σ_inel = 100 mb. The linear rise at low L turns into an exponential decrease above the 'optimum singles rate', which occurs when n̄ = Lσt = 1.


Because of this exponential decrease at very high luminosity (≈ 10³⁴ cm⁻² s⁻¹), essentially every event will have one or more additional interactions in the same bunch crossing. Reducing the time between bunch crossings appears to help (see the dashed line in Fig. 10 for t = 5 ns), but of course this only holds true if the detectors are very fast, and calorimeters usually are not (their response times are not a few nanoseconds). Working above L = 4 × 10³² cm⁻² s⁻¹ is therefore only efficient when searching for physics signatures which cannot be faked by pile-up (e.g. two balancing, very high-p_T W's is fairly safe, but four jets which balance pairwise in p_T is certainly not). For that class of physics which can be faked by double interactions, it is best to run close to or below n̄ = 1 and to have a means of ensuring pile-up rejection. Total energy measurement and multiple-vertex detection are two ways of doing this, and ideally one would have both. The 'beam jet' calorimetry can be rather coarse at these energies, because even a resolution of δE/E = 100%/√E is just over 1% at 8 TeV. Thus distinguishing one, two, or three collisions should be very easy, provided one can calorimetrize, even crudely, the elements around the downstream beam pipes (over a few hundred metres, preferably beyond the bend of the machine). We should repeat that we consider this to be of great importance for modest luminosities of ≈ 10³² cm⁻² s⁻¹, for physics which can be affected by pile-up, and where it is important to know that a single collision occurred. Multiple-vertex recognition also helps, but probably only off-line; it requires very good tracking along the beam direction, and it too fails at very high luminosity. For L = 10³⁴ cm⁻² s⁻¹, essentially every collision has another interaction within about 1 mm (assuming Gaussian bunches of length σ ≈ 8 cm).

Fig. 10 Number of collisions per second which occur alone in a bunch crossing, as a function of luminosity. The curve is proportional to n̄ exp(−n̄), with n̄ = Lσt; the solid line assumes t = 25 ns and σ = 100 mb, giving n̄ = 1 at L = 4 × 10³² cm⁻² s⁻¹, and the dashed line corresponds to t = 5 ns.
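The numbers behind this discussion follow from simple Poisson statistics; the sketch below works through n̄ = Lσt for a few luminosities, using the same σ = 100 mb and t = 25 ns as the solid curve of Fig. 10 (the choice of luminosity values is ours).

# The Poisson arithmetic behind the pile-up discussion above (our own worked
# numbers): with n_bar = L * sigma * t interactions per crossing on average, the
# rate of crossings containing exactly one collision is (n_bar * exp(-n_bar)) / t,
# which is maximal at n_bar = 1.
import math

SIGMA = 100e-27          # inelastic cross-section, 100 mb in cm^2
T = 25e-9                # time between bunch crossings, 25 ns

for L in (1e32, 4e32, 1e33, 1e34):          # luminosity in cm^-2 s^-1
    n_bar = L * SIGMA * T
    single_rate = n_bar * math.exp(-n_bar) / T
    frac_clean = math.exp(-n_bar)           # fraction of collisions with no extra interaction
    print(f"L = {L:.0e}: n_bar = {n_bar:.2f}, single-collision rate = {single_rate:.2e}/s, "
          f"clean fraction = {frac_clean:.2f}")
# At L = 4e32 (n_bar = 1) the single-collision rate peaks; at 1e34 (n_bar = 25)
# an isolated collision is essentially never seen.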

7.1.2 Studies of the full rapidity structure of events
Suppose we use the rule of thumb that within 3 units of rapidity of the beam particle there is the 'beam fragmentation region', and that closer to θ = 90° there is the 'central region'. The boundary thus defined, which was at θ = 33° at the CERN Intersecting Storage Rings (ISR) (√s = 63 GeV) and at 3.4° at the pp Collider, has shrunk to θ ≈ 0.14° at the LHC (5 cm at 20 m). It will be very difficult to study physics in the beam fragmentation region at the LHC (this will be more easily done at the SSC, provided long straight sections are built in). However, we may wish to know the full rapidity structure of an event; for example, a rapidity gap of 3 units is a rather good signature for diffraction. Diffractive processes may be very interesting at LHC energies. At the ISR, single diffractive excitation of protons extended up to about 10 GeV, and at the CERN pp Collider up to about 100 GeV. Using the relation m²/s = 1 − x and the rule of thumb x_min = 0.975, which gives the above two limits, we find m_max ≈ 2.5 TeV at the LHC. It may be interesting to study these very massive 'diffractively excited' protons, and to do so one would like not only to detect the quasi-elastically scattered proton (p_L ≈ 0.975 × 8000 GeV/c, p_T ≲ 1 GeV/c), e.g. in silicon microstrip detectors inside the beam pipe, but also to be sure that it was isolated in rapidity, with no other particles below, say, θ ≈ 0.14°.
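The mass reach quoted above can be checked directly from the rule of thumb; the short calculation below assumes √s = 16 TeV for the LHC, consistent with the 8 TeV beams used elsewhere in this section.

# Numerical check (our own arithmetic) of the diffractive-mass rule of thumb quoted
# above, m^2/s = 1 - x with x_min = 0.975, for the three colliders mentioned.
import math

X_MIN = 0.975
for name, sqrt_s_gev in (("ISR", 63.0), ("CERN pp Collider", 630.0), ("LHC", 16000.0)):
    m_max = math.sqrt((1.0 - X_MIN) * sqrt_s_gev**2)
    print(f"{name}: sqrt(s) = {sqrt_s_gev:.0f} GeV -> m_max ~ {m_max:.0f} GeV")
# -> about 10 GeV, 100 GeV and 2.5 TeV, reproducing the limits quoted in the text.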

7.1.3 Missing-mass techniques
The importance of sufficiently hermetic calorimetry to measure a 'missing transverse energy' vector E_T^miss is now well established for W finding, SUSY searching, and so forth. However, owing to the beam-pipe holes and the lack of very forward calorimetry, it was considered that trying to measure the 'missing longitudinal energy' E_L^miss would be hopeless. However, this could be so interesting that it is worth re-investigating. If our calorimetry is really 4π apart from two small holes (say p_L ≳ 0.95 p_beam, p_T ≲ 1 GeV/c, positives) and has good resolution and granularity etc., we can measure the missing mass: m_miss = √[(√s − ΣE)² − (ΣE_x)² − (ΣE_y)² − (ΣE_z)²], where the sums run over all measured energy deposits E_i and their components. Note that two conventional sources of m_miss and E_T^miss at the pp Collider are: i) W → τν, τ → hadron jet; ii) Z + jet, Z → νν̄. In both cases a hadron jet recoils against nothing visible, but in case (i) m_miss = 0 (a single neutrino) and in case (ii) m_miss ≈ m_Z. So even a missing-mass resolution of σ ≈ 25 GeV would be very valuable for classifying these events and for searching for new phenomena, e.g. supersymmetry. We do not suppose that missing-mass resolutions of this order will be achievable at the LHC, but even σ(m_miss) values of 200-400 GeV are interesting in the case of 1 TeV phenomena (e.g. Z').
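As an illustration of the formula above, the sketch below computes the missing mass from a list of visible energy deposits; the toy 'Z + jet' event used at the end is our own construction, and simply checks that a perfect detector would return m_miss ≈ m_Z.

# Sketch of the missing-mass formula quoted above: with all visible energy deposits
# summed, m_miss^2 = (sqrt(s) - sum E)^2 - |sum p|^2.
# The toy event below (a Z escaping invisibly, everything else seen) is our own
# illustration; a perfect detector would then return m_miss ~ m_Z.
import math

SQRT_S = 16000.0   # GeV, assuming 8 TeV beams as in the text

def missing_mass(deposits):
    """deposits: list of (E, px, py, pz) for everything seen in the calorimeter."""
    E  = sum(d[0] for d in deposits)
    px = sum(d[1] for d in deposits)
    py = sum(d[2] for d in deposits)
    pz = sum(d[3] for d in deposits)
    m2 = (SQRT_S - E)**2 - px**2 - py**2 - pz**2
    return math.sqrt(max(m2, 0.0))

# toy event: a Z (m_Z = 91.2 GeV) escapes invisibly with momentum (50, 30, 400) GeV;
# everything else ('visible') carries the remaining energy and balancing momentum.
m_Z, p_Z = 91.2, (50.0, 30.0, 400.0)
E_Z = math.sqrt(m_Z**2 + sum(p * p for p in p_Z))
visible = [(SQRT_S - E_Z, -p_Z[0], -p_Z[1], -p_Z[2])]
print(round(missing_mass(visible), 1))   # -> 91.2, i.e. m_Z for a perfect detector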

7.1.4 Tagged parton collisions This possibility follows as a consequence of achievable missing-mass techniques, but here we consider the energy lost from the beam fragmentation regions whether or not it is visible in the central region. We add all the

detected energy below 0cut(say 0.2°) on each side (EL, ER) and trigger on events with, for example, EL,R < (Ebcam - 1 TeV)—this is effectively tagging interacting partons above 1 TeV, then studying the central region 'inclusively'. The physics made possible by nearing the hermeticity required to realize the latter two techniques could result in a big payoff. If one could really trigger on events where a (say) 1.00 ±0.1 TeV parton-parton collision has occurred, without any requirements on the final state of that collision, it would in some sense be analogous to the e+e" case. However, it is clear that this could only be achieved i) for modest luminosities of ~ 1032 cm~2 s~1 such that ñs 1; ii) by having sufficient forward calorimeter coverage so that it is extremely unlikely that a small-angle energetic hadron escapes detection (except 'elastic protons'). This probably means calorimetrizing beyond the machine bend, to catch leading neutrons.

7.2 How to calorimetrize the fragmentation region A proper assessment of what might be achieved requires a 'realistic' model of the machine from the intersection region down to and round the bend, together with Monte Carlo generated 8 TeV beam fragmentation jets. Then, taking into account both the materials and the magnetic fields, one could see just where the bulk of the energy is deposited. This has not yet been done. We understand that after 20 m of free space, a 40 m long quadrupole triplet is likely to be the first machine element. The front face of this element would obviously be about the hottest region of radiation (perhaps = 104Gy/y close to the beam pipe), and this limits the choice of readout medium; liquid scintillator would be a possibility. [Drifting liquid calorimeters (liquid argon, TMP, etc.) are unlikely to be suitable, unless a very fast liquid can be found (< 25 ns drift time), because the occupancy will be high—essentially every event will deposit energy here.] This quadrupole triplet can serve as an example of what may be done also with the other machine elements to minimize the energy deposited in inert material. It may be split into shorter, separated units (e.g. three separate quads) with the bore/vacuum pipe increased in diameter so that particles do not hit the pipe. Between these units, in gaps ~ 1 m long, one inserts dense smaller-bore calorimetry—uranium or tungsten absorbers allow 6X-7X within 1 m, including readout. Figure 11 sketches the

NORMAL VERSION 1m 110cm ItO m

i i —• s 1 ' L BEAM • 1 • r (S.C.I TRIPLE aUAORUPOLE •*—(TO EXPERIMENT)

VERSION WITH "ACTIVE SHIELDING" 1m

OlUAD 3

FRONT FACE INTERLEAVED CALORIMETER CALORIMETERS (-6X LONG)

POSSIBLE WAY OF DETECTING "ALL" FORWARD ENERGY

Fig. 11 Active shielding of machine elements - 194 -

«680

^Vacuum vessel enclosure (between dipole units)

Vacuum vessel

Radiation shield

_ Shrinking cylinder

.Iron yoke

Coil.l.D.= 50. QO.= 121

LHen vessel

_Non magnetic collar

.Beam pipe VI x 38 with inner radiation shield

.SC bus-bars

560 L H C , 10 T, TWIN - APERTURE DIPOLE ( 2°K Version ) CrOSS-Section January 1986

Fig. 12 Sketch of a typical superconducting machine twin-bore dipole

concept. In this way one attempts to preface all inert machine elements with 'active shielding'. Of course this is not ideal from the machine point of view: the magnets have somewhat larger bore and are shorter and more numerous—we will really be interleaving the experiment and the machine for the first 250 m or so. If this causes difficulties, another route may be possible, involving calorimetrizing the machine magnets themselves (an even more intimate interleaving). Figure 12 shows a sketch of a typical superconducting machine, twin-bore dipole. The only parts that seem really difficult to calorimetrize are the coils themselves, but at worst these are not more than 3-4 cm thick. The 'non-magnetic collar' surrounding these coils consists of a stack of laminated, (probably) aluminium plates, each 2 mm thick. This in turn is surrounded by iron laminations, and the whole is placed in a liquid-helium bath. It should be possible to design the magnet in such a way that every few centimetres (the radiation length of Al is 9 cm, that of Fe 1.76 cm) one plate is replaced by a detector layer. The requirements of the detector are that - it operates at liquid-helium temperatures, - has fast signals (25 ns), - is thin and compact, - has the ability to extract signals with ease (1 thin wire or optical fibre?), - is sufficiently radiation-hard so that it does not need to be replaced, - it can take the strains when the magnetic field is switched on. - 195 -

These requirements are probably best met (except perhaps for the last point) by silicon wafers. We hope that experimental tests along these Unes can be pursued, leading towards a prototype 'calorimetrized superconducting magnet'. Clearly this is simply a presentation of some ideas, and much work is needed to assess their feasibility. However, we would urge the designers of the machine to maintain flexibility, and avoid finalizing the design before these studies have been made.

8. REQUIREMENTS FOR ep DETECTORS In ep physics the calorimetry is essential for two reasons: i) the structure of the proton, which is probed by the electroweak current, is revealed by the properties of the scattered lepton, and ii) this lepton, either an electron or a neutrino (if nothing new happens), is best measured by a calorimeter. In the case of the electron, the energy is high enough for a calorimeter to be superior to magnetic tracking; in the neutrino case the kinematic variables are reconstructed from the observed hadrons or hadron jets, best measured as energy-flow vectors by calorimeters. This situation is well known from fixed-target neutrino or muon physics. In order to evaluate the effect of the angular and energy resolutions on the physics capabilities of a detector, it is best to consider charged-current reactions. In these reactions the electron in the initial state is transferred into a neutrino. The kinematics has to be reconstructed from the hadrons. The calorimeter properties are therefore crucial. For many physics questions it is important to have a long lever-arm in Q2, at fixed values of the scaling variable x. Therefore the detector limitation towards low Q2 and towards high Q2 is especially interesting. In the low-Q2(medium- and high-x) region which allows the connection to be made also to lower-energy machines, i.e. HERA, the quark jets are concentrated at small angles. The beam hole therefore represents a natural limitation to the accessible Q2 x-range. This situation in HERA is very similar. The remedy, if one wants to measure this region, is to run at low proton energies so that the kinematics is less symmetric and the interesting final-state particles go to large angles. However, the kinematics is even more asymmetric in the LHC/LEP than in HERA. The question is, Are there any scaling laws for detectors when transferring the limitations in the scaling variables x and y from one machine to another? If we take simple parton kinematics, the scattering angle of the struck quark 0¡ in the lab. system is given by

(1 + cos0j)/(l - cos0j) = (Ep/Ee)(l/y - 1). (1)

For the same x,y, the angles 0j in machines, with different ratios of electron energy Ee and proton energy Ep, therefore scale as

HERA LHC 0.LHC/0.HERA = V[(Ep/Ee) /(Ep/Ee) ] = l/Vl. (2)

The same beam hole therefore produces the same kinematic limitation if the detector is at a distance from the intersection which is V5 times larger in the LHC machine than in HERA. Roughly this indicates that the free space for experiments should be V5 times as large, or about ± 10 m. In the high-Q2 region the measurable x-y range is limited by the luminosity, not by the detector. However, measurement errors will severely affect the visibility of new effects. The resolution at large Q2 is approximately given by

ÔQ2/Q2 = (áEj/Ej)(2 - y)/(l - y), (3)

where Ej is the hadron jet energy. The more the kinematic limit y = 1 is approached, the higher are the errors. Compared with HERA, the jet energies are typically an order of magnitude larger:

Ej = x(l-y)Ep + yEe. (4) - 196 -

In calorimeters, the resolution SEj/Ej is therefore better, and so is the relative resolution in Q2. It is, however, important to have small systematic errors in the jet energy measurements. A constant error of 2% would dominate over a resolution of 5Ej/Ej = 0.50/VEJ for almost the full kinematic range. This consideration favours a detector with e/h ~ 1, and with good calibration of individual channels. For detectors in the neighbourhood of the beam pipe, another systematic effect appears which does not seem to have been explicitly mentioned so far. The scaling variable y is reconstructed from the sum over all particles:

i i 2 y = l/(2Ee)S(E -pí) = l/(2Ee)2(E 0 /2) . (5)

The Q2 is given by

Q2 = l/(l-y)[(Epi)2 + (Ep'y)2]. (6)

Equation (5) is non-linear in 0¡, i.e. small-angle particles enter with much less weight than particles with large angles. However, shower development in the calorimeter causes the energy to be distributed symmetrically around the axis of the incoming particle. Even in a fine-grained calorimeter there is, therefore, a distortion: the square of the average is not the same as the average of the squared angle, {d2} & (6)1. This distortion was studied for a particular model of quark and gluon fragmentation [30], for fixed values of Björken-x (x = 0.2) and varying jet angle 0¡, according to Eq. (1). The calorimeter response was simulated as described in Section 3, with the following parameters: - end cap at 5 m from interaction point, - beam hole with 15 cm diameter, - x = 0.2, - granularity: e.m. cal. = 0.3 x 0.3 x 0.5 cm3, hadronic cal. = 3 x 3 x 5 cm3, - resolution: e.m. cal. = 15%/VE, hadronic cal. = 35%/VË.

The resolution in scaling variables and Q2, calculated by the simulation, is shown in Table 6. At a distance of 5 cm in a beam tube of 15 cm diameter, there is a considerable relative shift of the average reconstructed variables from

Table 6

Simulated performance for an ep detector. Simulation is made at x = 0.2.

Q2 = 1920 3840 7680 11520 15360 19200 38400 57600 76780 fiy/y (%) > 22 23 12 8.3 8.2 5.0 3.4 2.2 2.3 ôx/x (%) 12 10.3 7.3 5.9 4.5 3.9 3.1 3.4 1.8

ÔQ2/Q2(%) 9.3 5.6 3.6 4.2 2.9 2.4 3.0 2.7 2.3

their true values, even for jet angles as large as 50 mrad. As indicated in Fig. 13, this distortion may limit the Q2 range at the lower limit even more than would the beam hole itself. A possible remedy (in addition to the obvious one of lowering the proton energy) is to have a calorimeter in which the centre of the shower, containing the bulk of the electromagnetic energy deposition, is measured separately, as outlined for example in subsection 4.9. The best possible resolution is obtained in this region by means of a large lever-arm, and by the use of calorimeters with the smallest possible X. - 197 -

50 I

• 0

Fig. 13 Shift of the scaling variables x and y versus 0¡ at fixed x = 0.2 -50 0 20 40 60 80 Qj [mrad I

In conclusion, calorimetry at an LHC/LEP ep detector is more demanding than at HERA, in the sense that a constant term in the resolution of 2% would be the dominating contribution for almost the full kinematic range, even if the calorimeter resolution is as modest as 0.5/VE. Therefore e/h = 1 and excellent calibration feasibility are necessary. In the same x-y range, the loss of low-Q2 events due to the beam hole is similar to that in HERA, provided the forward detector starts at an angle which scales as the square root of the ratio of proton to electron energies.

9. THE PROBLEM OF BEAMSTRAHLUNG AND cm. ENERGY SMEARING In order to reach the high luminosity necessary for physics at TeV e+e~ linear colliders, a small spot size is essential. Over these small regions the magnetic field experienced by one beam, and which is due to the other beam, will be several hundred teslas; thus the beams will be subject to drastic deformation during each crossing. It is expected that this 'disruption' of the beams will have two important consequences: i) there will be a mutual self-focusing, called the 'pinch', leading to a luminosity enhancement (typically by a factor of 2 to 5); and ii) a large fraction (typically about 10%) of the initial beam energy will be emitted as high-energy (typical average energy 40 GeV) synchrotron radiation (s.r.) photons (about two per electron), called beamstrahlung. This beam energy-loss will cause a smearing of the centre-of-mass (cm.) energy distribution. As far as the detector design is concerned, it is important to understand how this beamstrahlung might affect a detector surrounding the interaction region. In particular, the questions to be addressed are: i) What is the energy spectrum of the beamstrahlung photons? ii) What is their angular distribution? iii) What is the magnitude of event-to-event fluctuations in these quantities? For the physics purposes it is of course essential to know how much the cm. energy resolution will be degraded because of this loss of energy from the colliding electrons. Definitive answers to these questions are as yet unknown, although some rule-of-thumb extrapolations from current electron machines of lower energy can be made, and presumably the imminent operation of the SLAC Linear Collider (SLC) will cast some light on the problems, albeit at energies of only 50 GeV per beam rather than the TeV per beam considered at CLIC. The only other approach is through Monte Carlo simulation of the beam-beam interaction itself. Several people have written such programs [31, 32], and we have used the program of Yokoya [32]—kindly supplied to us by the author—to study these questions. The Yokoya simulation is probably the most sophisticated of the simulations currently available, and includes i) complete tracking of a two-bunch collision;

ii) allowance for arbitrary

iii) an algorithm to reproduce the quantum theoretical s.r. spectrum; and iv) enforcement of the stochastic nature of the s.r. emission process. The simulation is performed using a cloud-in-cells model that is typical of plasma physics. The electrons in each beam are distributed as macroparticles throughout a space-time lattice, and these superparticles can then be tracked according to the classical relativistic equation of motion for charged particles experiencing Coulomb forces. An immediate consequence of this tracking is the appearance of the disruption leading to a pinch effect. The output of the program includes the final electron and photon energy and angular distributions, as well the luminosity (including the pinch), and 'luminosity-weighted' values for the average beam energy-loss, r.m.s. energy-spread, and mean and r.m.s. centre-of-mass energy squared. The program does not attempt to estimate the 'photon-electron' or 'photon-photon' luminosities, which may also be expected to be significant. The version of the simulation used here was modified to allow repeated collisions, and to plot the distribution of the cm. energy distribution of the colliding superelectrons. We used several sets of collider parameters, within a reasonable range about the favoured CLIC 'Standard' set, but concentrated on these latter values: Beam energies: 1 TeV x 1 TeV

Transverse bunch dimensions a%, ay: 65 nm

Longitudinal bunch size az: 0.5 mm Repetition frequency: 5.8 kHz No. of electrons per bunch: 5.35 x 109. We used 5000 macroparticles per beam, 16 cells each in x and y (of dimension 16 nm), 18 cells in z (0.15 mm), and 40 time steps (0.15 mm). The following results are obtained: i) luminosity: 7.2 x 1032 cm-2 s~1 (including a pinch enhancement of 2.3); ii) average beam energy-loss: 0.093 TeV; iii) r.m.s. beam energy-spread: 0.127 TeV; iv) mean s/so = 0.864, where s = 4E2; so = 4Eo; Eo = 1 TeV;

v) r.m.s. s/s0: 0.170; vi) mean number of s.r. photons per electron: 2; vii) mean s.r. photon energy: 46 GeV.

The distribution of cm. energy [calculated as w = 2V(EiE2)] of the colliding superelectrons is shown in Fig. 14. This has a mean of 1.91 TeV, an r.m.s. of 0.13, and a FWHM < 20 GeV. Thus although the average beam energy-loss is about 10%, the cm. energy is not smeared as badly as might at first have been feared. The final angular distribution of the emitted photons is found, in fact, to be still quite sharply collimated, with an r.m.s. of about 60 /trad and tails stretching to 300 ;trad (< 0.02°) where the statistics are low. In all cases considered, the photon distribution is not much broader than the disrupted electron distribution. The photon power spectrum obtained using the CLIC standard parameters is shown in Fig. 15. For convenience it is plotted in the unconventional units of kilowatts per microsteradian. It is clear that the total energy emitted in photons per beam crossing is enormous: about 10 12 GeV, which can be seen by estimating the total energy lost from the beams, or by calculating the total energy in s.r. photons either by integrating the power spectrum or just by multiplying the average photon energy by the average number emitted. Since the tails of the angular distribution are the source of possible background in a detector, and the distribution is quite exponential over much of the angular range, we made a crude extrapolation in order to estimate the energy flux at angles corresponding to various radial distances from the beam axis, at 1 m and 7 m along the axis from the interaction point. The power spectrum is represented quite well by the equation:

P(0) = 10s x exp (-0.0370) [kW//tsr], - 199 -

~i 1 1 1 1 1 r ~i 1 1 1 1 1 r

Mean w = 1.91 TeV r.m.s. = 0.127

103

a. s 5

0l_i 1 1 1_ -I i i i i i f_L 0.6 0.8 1.0 1.2 1.4 1.6 1.8 2.0 0 40 80 120 160 200 240 280 320 w(TeV) 0 lurad)

Fig. 14 Distribution of centre-of-mass energy of Fig. 15 Beamstrahlung-photon power spectrum colliding superelectrons from Yokoya simulation, from Yokoya simulation, using CLIC standard using CLIC standard parameters parameters

which can be rewritten to give the energy flux in GeV/sr per crossing at any 0 expressed in degrees:

E(0) = 1020 x exp(-65O0).

Some values extrapolated from this formula are given in Table 7. For example, at 1 cm from the beam, and at 7 m from the crossing point, the energy is a few MeV/sr per crossing. This would seem a negligible background for a detector. We conclude that the beamstrahlung is expected to have an angular dispersion comparable to that of the disrupted electron beams themselves, and thus should be contained within the beam pipe and not cause additional background in a detector. We must note, however, that this conclusion is based on extrapolation over many orders

Table 7

Energy in GeV/sr per crossing at a given angle 0 from beams

0 E r (at 1 m) r (at 7 m) (°) (GeV/sr per crossing) (mm) (mm)

0.01 1.5 x 1017 0.18 1.2 0.02 2.3 x 1014 0.35 2.5 0.03 3.4 x 1011 0.52 3.6 0.05 7.7 x 105 0.87 6.1 0.07 1.7 1.2 8.5 0.08 2.6 x 10"3 1.4 10 - 200 - of magnitude, and that the problem has not been thoroughly studied as a function of all possible parameters even within the limits of the present simulation program. Further refinement must wait for experience with real colliders and more sophisticated simulations.

10. THE e+e~ DETECTOR At CLIC, with only 60,000 bunch crossings per second and a small probability for overlapping events, the restrictions on the design of a general-purpose detector are fewer than for the LHC. One can consider scaled-up solutions already found for the LEP detectors. The main restriction at CLIC is the accommodation of final focusing elements, which has been foreseen at 30 cm distance from the beam crossing point. Including support structure, the first quadrupole is expected to subtend an angle of ± 10°. It is necessary to introduce detectors within this angular region both for the purpose of completing the angular coverage and to provide the necessary relative and absolute monitoring of the luminosity. In the previous section it was shown that the beamstrahlung, created at beam crossings, is essentially confined to the beam pipe. The remaining tail of this background should be in the keV region and tolerable. One should, however, be cautious in predicting the background since the beam pipe would have a diameter of only about 0.5 mm and there might be unforeseen background, for example due to wall effects or bad vacuum. In Fig. 16 a proposed solution for a detector in the forward regions is given. A conical electromagnetic calorimeter made of tungsten with silicon-strip readout can be accommodated in front of the first beam element. The depth could be as much as 20Xo to 25Xo. The beam element itself is presumably quite narrow—most of the space reserved for this component is support structure which may be calorimetrized. The rate of Bhabha events in the angular range of 2°-5° has been estimated to one event per second. The proposed main detector is shown in Fig. 17. The layout chosen gives good coverage in tracking and calorimetry. The emphasis is placed on fine granularity and good energy resolution, especially for the hadronic calorimetry, good electron and muon detection, as well as flavour tagging by means of the detection of secondary vertices. The tracking detectors and calorimeters for CLIC can be slow in contrast to the detectors for the LHC, i.e.

Bhabha / Luminosity detector Calorimetrized W/Si calorimeter support Iron tail catcher/p.-detector/'return-yoke

crossing Pb-TMP 5X magnet 30 cm -

Fig. 16 Forward region in CLIC, showing Bhabha/luminosity detector and final focus• ing elements

'0 3 lm] Si vertex W/Si Final focusing detector Calorimeter

Fig. 17 Schema of detector for CLIC - 201 - one can use three-dimensional imaging given by a time projection chamber (TPC), and warm liquid (tetramethyl pentane, TMP) in the hadron calorimeter. The time available between events also allows a more conventional approach to the electronics and to data recording. We will not describe the tracking and electron identification (this is done elsewhere) but give a few comments about calorimetry and muon detection. The main feature of our calorimeter is good granularity, both in the transverse and longitudinal directions, and shower compensation. The electromagnetic calorimeter, placed inside a superconducting coil, is made of uranium with silicon readout as described in Section 4. This is a very compact device with excellent granularity, hence good discrimination between electrons/photons/hadrons. The silicon is covered by a hydrogen-rich layer in order to ensure enhanced response to hadrons through the interaction of fission neutrons with free protons (e/7r « 1). The structure is 2 mm uranium plates and 2 mm silicon-plus-hydrogen arranged in cells of 10 x 10 cm2,

five layers deep. Such a tower is 5Xo. At depths of 5X0 and lOXo, silicon strip detectors are introduced to give accurate position information. The total depth of the electromagnetic calorimeter is 25Xo. The expected resolution is 0.15/VË for electrons and 0.46/VË for hadrons. The hadronic calorimeter is placed ouside the coil. Recent tests made by the UA1 group [7], have demonstrated the feasibility of using TMP in large-scale calorimetry. Calculations [8] indicate that with 4 mm lead plates and 2.5 mm TMP, full compensation is obtained (e/ir = 1) with a hadronic energy resolution of 0.3/VE. A cell structure of 10 x 10 cm2, about 30 layers deep, is assumed for the first 3X, and 20 x 20 cm2, with the same depth, for the remainder. In the simulation of the detection of heavy Higgs production followed by the decay of W+W~ it has been shown that the relatively fine longitudinal segmentation is essential for the event reconstruction. At large angles the hadronic energy resolution is affected by the presence of the coil. The additional error to the energy resolution versus coil thickness, in X, is shown in Fig. 18.

10 i—i—i—i—i—J 1—i—i—i—i—i—i—i—i—

_L 0.5 1.0 Coil thickness [X]

Fig. 18 Total energy resolution for light-quark events, versus coil thickness, calculated for the detector in Fig. 17

Outside the hadronic calorimeter, additional absorber (iron) is required for muon detection. This also serves as the return yoke for the magnet. Embedded in the iron are 10 layers of proportional tubes for tracking purposes. Our CLIC detector has an outer radius of 6.85 m and a length of ±6.6 m. The electromagnetic calorimeter would use 3200 m2 of silicon (estimated cost MSF 63) and 40,000 electronics channels. The hadronic calorimeter would use 1700 t of lead and 96 m3 of TMP, and contain 61,300 channels. The steel for the muon detector would amount to 94001, and the number of electronics channels is 62,000.

11. CONCLUSIONS We have proposed a novel detector design for the LHC (SSC) machine—that of a compact silicon calorimeter. It has been shown that the small radius of this detector can be compensated by fine-grained lateral and longitudinal segmentation, and it is therefore realistic to consider a 4x silicon calorimeter. The calorimeter would - 202 - have uranium as absorber, and the 100 pra polyethylene foils covering the silicon in the sandwich should bring e/h close to 1 and therefore eliminate a constant term (except for calibration errors) in the resolution. Silicon has many advantages for calibration. The proposed detector would be superior to some other approaches in calorimetry and electron and muon measurement, but it is unlikely to be compatible with large magnetic tracking. We have pointed out the considerable value, for hadron-hadron colliders, of making the calorimeter truly hermetic, and have suggested ways of 'calorimetrizing the machine' to achieve this. The same approach could also be used for the ep detector, except for the end cap at the incoming electron side, where a large lever-arm is needed. Small systematic errors, no constant term in the resolution, and a small shower size are required. The same silicon/uranium/polyethylene sampling as that proposed for the LHC case seems, therefore, to be a good candidate. We have shown that the direct beamstrahlung from a 2 TeV e+e" machine does not pose any problems for the detector. The smearing of the cm. energy manifests itself essentially in the long tail.

Acknowledgements We thank J. Ellis, D. Froidevaux, G. Gilchriese, P. Jenni, B. Montague, F. Richard and W. Willis for helpful discussions. We also thank the organizers, and especially J. Mulvey, for the arrangement of a very pleasant workshop in the Italian Alps. We are grateful to K. Yokoya (KEK) for providing us with the means of calculating the effects from beam-beam interactions at the e+e~ machine. - 203 -

REFERENCES

[1] P. Jenni et al. (LHC Jet Study Group), Proc. ECFA-CERN Workshop on a Large Hadron Collider in the LEP Tunnel, Lausanne and CERN, 1984 (ECFA 84/85, CERN 84-10, Geneva, 1984), p. 165. [2] Proc. 1984 Summer Study on the Design and Utilization of the Superconducting Supercollider, Snowmass, Colo., 1984, eds. R. Donaldson and J. Morfin (AIP, New York, 1985). Proc. 1986 Summer Study on the Physics of the Superconducting Supercollider, Snowmass, Colo., 1986, eds. R. Donaldson and J. Marx, in preparation. [3] P.N. Burrows and G. Ingelman, Univ. Oxford preprint 1/87 (1987), and these Proceeedings. [4] R.K. Böck et al., Nucl. Instrum. Methods 186 (1983) 533. [5] R. Brun et al., Group report CERN-DD/EE/84-1 (1984). [6] A. Beer et al., Nucl. Instrum. Methods 224 (1984) 360. [7] M. Albrow et al., preprint CERN-EP/87-55 (1987). [8] R. Wigmans, preprint CERN/EF/86-18 (1986). [9] C.W. Fabjan et al., Nucl. Instrum. Methods 141 (1977) 61. [10] J. Brau and A. Gabriel, Nucl. Instrum. Methods A238 (1985) 489. [11] G. Barbiellini et al., Nucl. Instrum. Methods A23S (1985) 55. G. Barbiellini et al., Nucl. Instrum. Methods A236 (1985) 316. A. Nakamato et al., Nucl. Instrum. Methods A238 (1985) 53. A. Nakamato et al., Nucl. Instrum. Methods A251 (1986) 275. [12] P.G. Rancoita and A. Seidman, preprint CERN-EP/86-113 (1986). [13] V. Cherniatin et al., report CERN-HELIOS-171 (1986). [14] W.R. Nelson, H. Hirayama and D. W.O. Royers, SLAC Report 165 (1985). [15] G. Lutz et al., to appear in Proc. 1st Int. Conf. on Fast Analog Electronics for High-Energy Physics, Philadelphia, 1987. H. Spieler et al., ibid. J. Melean et al., ibid. [16] J. Walker and S. Parker, private communication, March 1987. [17] M. Campanella et al., Nucl. Instrum. Methods A243 (1986) 93. [18] P. Borgeaud et al., Nucl. Instrum. Methods 211 (1983) 363. [19] P.A. Aarnio, A. Fassó, H.J. Möhring, J. Ranft and G.R. Stevenson, FLUKA86 User's Guide, report CERN TIS-RP/168 (1986). [20] C. Leroy et al., preprint CERN-EP/86-66 (1986). [21] H.U. Bengtsson, G. Ingelman and T. Sjöstrand, PYTHIA version 4.1, The Lund Monte Carlo for QCD high-pr scattering, in CERN Long Write-Up of Pool Programs W5035 and W5045 to W5047 (1986), p. 104. [22] T.A. Gabriel, Oak Ridge Nat. Lab. report ORNL-TM-9727, presented at the CERN Workshop on Shower Simulation for LEP Experiments, Geneva, 1985. [23] M.B. Emmett, Oak Ridge Nat. Lab. report ORNL-4972 (1975). [24] S. Battisti, R. Bossart, H. Schönbacher and M. Van de Voorde, CERN 75-18 (1975). [25] F. Wulf, D. Brauing and W. Gaebler, Data compilation of irradiation tested electronic components, Hahn-Meitner Institute for Nuclear Research (Berlin) report TN 53/08 (1st edition 1981), Vols. 1 - 3 (2nd editions: 1981,1983, and 1984, respectively). [26] H. Beger, report CERN-SPS/ARF/77-21 (1977). [27] P. Beynel, P. Maier and H. Schönbacher, CERN 82-10 (1982). - 204 -

[28] T. Sjöstrand and M. Bengtsson, Lund report LU TP 86-22 (1986). [29] T. Sjöstrand, Comput. Phys. Commun. 28 (1983) 229. [30] G. Ingelman, The Lund Monte Carlo for deep-inelastic lepton-nucleon scattering, version 4.3, in CERN Long Write-Up of Pool Programs W5035 and W5045 to W5047 (1986). [31] R. Hollebeek, Nucl. Instrum. Methods 184 (1981) 335. R. Noble, SLAC-PUB-3871 (1986), submitted to Phys. Rev. Lett. [32] K. Yokoya, KEK preprint 85-09 (1985). - 205 -

VERTEX DETECTION AND TRACKING D H Saxon Rutherford Appleton Laboratory, Chilton, Didcot, Qxon 0X11 OQX, UK

G Bellettini (INFN, Pisa) G Ingelman (DESY) H-J Besch (Siegen) D P Kelsey (CERN) P N Burrows (Oxford) W Kittel (Nijmegen) G Chiarelli (Rockefeller) M Pepè (RAL) J B Dainton (Liverpool) G Smadja (Saclay) E Elsen (Heidelberg) M Tonutti (Aachen) J-M Gaillard (Orsay) V Vuillemin (CERN) R Horisberger (CERN) A Wagner (Heidelberg) D C Imrie (Brunei) A H Walenta (Siegen)*

ABSTRACT Track and vertex reconstruction requirements at LHC and CLIC are considered. A more sophisticated tracking system expands both the range of physics to be studied and the confidence in event signatures. Event characteristics and accelerator environment dictate the techniques, with severe demands on an LHC detector at small radii. Solutions for central and forward tracking are developed, and multivertex reconstruction studied. Research and development needs are identified and limitations on luminosity defined.

1. INTRODUCTION When considering event detection and reconstruction at TeV energies, we are entering a new regime. Challenging accuracies are required to reconstruct leptons, to find such quantities of fundamental importance in testing for unexpected physics as an electron's sign, whilst hadronic events are characterised by a multijet structure with multiplicities of about a hundred charged particles, mainly low momentum but with a substantial hard component in each event. In hadronic colliders of the required luminosities, detector-filling events occur at 100 MHz or more and radiation damage is a major problem. In this report we consider the questions of designing detectors for such energies for the study of pp, ep and e*e~ collisions. The tracking task is in a sense common to these, and determined by the outgoing particles, and by the problem, in any tracking device, that the accumulation of low-momentum particles confuses the detection of high momentum ones. The range of approaches that one is able to adopt depends crucially on the environment provided by the accelerator. In particular the beam crossing rates at LHC and CLIC are dramatically different, and this crucially affects the experimenters' options. Excellent studies have been made in the last two or three years of the issues involved in track and vertex detection at Large Hadron Colliders, and it is not our purpose to re-summarise this work or to take issue with it. The reader is referred in particular to the reviews in the Snowmass 1984 and 1986 studies, and to the report

*We have been stimulated and assisted in this work by M Albrow, R J Cashmore, M Gilchriese, J Kirkby, A K Nandi and M G Pia. - 206 -

of the Lausaune ECFA-CERN Workshop, and contributed papers therein1"*. Here we look briefly at the environment and consider tracking options, and develop possible solutions for central and forward tracking. We are particularly encouraged by the emergence of ideas for high rate detectors of good two-track resolution, and by the schemes for handling the immense data rates. We then remind ourselves of the power of vertex detectors currently under development, before drawing conclusions on research and development needs, on the maximum luminosity one can stand, and comparing the tools available to the experimenter at the LHC and CLIC.

2. THE ENVIRONMENT There are two key issues - crossing rates and radiation levels. The crossing rates of the various detector options are summarised in Table 1, and compared to "current" machines. In the LHC pp case we must consider luminosities of up to 1033 cm-2 s"1 and above, which, for a total cross-section assumed to be 100 mb, gives 1 to 3.5 events/crossing or more. (The pp option would have a similar number of events per crossing.) We develop below solutions which we believe are robust to 1 or 2 events per crossing, but it becomes rapidly more difficult as the rates increase. The LHC presents a major challenge to detectors at 25 ns/crossing and when virtually every crossing contains an event, and special techniques are required. It is possible to increase the luminosity further by reducing the inter-crossing time (in steps of 5 ns) which aggravates the reconstruction problem. The sensitive time of the detector becomes a limiting factor. (At a drift velocity v^ = 50 um/ns, a 25 ns maximum drift time would imply a maximum cell size of 1.25 mm). We have therefore looked for solutions capable of labelling out-of-time events, and continuously live to new events whilst one is considering whether to trigger on a particular event. This leads to a strobed fast-logic type of structure, as indicated in Figure 1.

TABLE 1 INTER-CROSSING INTERVALS AT VARIOUS COLLIDERS

LHC pp 25 ns (or less) ep 165 ns (pp 700/3600 ns) CLIC 170 ps

HERA 96 ns Petra, SppS, TeVatron 3-4 lis LEP 25 us SLC 6-8 ms

We note that the same problem of being continuously live occurs at HERA and we can learn from the experiments being constructed there. For the ZEUS drift chamber readout, a structure such as that in Figure 1 leads to a deadtime of 0.6% for a first - 207 -

1. Schematic of digital pipeline scheme envisaged for ZEUS drift-chamber readout.

2. (a) Radiation levels as a function of perpendicular distance to the beam line (from SSC study'), (b) Radiation levels at 2 m, as a function of angle (from G Stevenson). - 208 -

level trigger rate of 10 kHz. In this experiment a 100 MHz digital pipeline of 512 steps is envisaged to allow time for first level decision making5. At CLIC the opportunities are quite different. There is 170 us between beam crossings, and an event rate of order 1 Hz. This long inter-crossing time allows the use of slow devices as, for example, in the SLD experiment6'7, to achieve precision, but drifting in cool gases for example, and elegant vertex detectors could be constructed using dimethyl ether, for example8. One can make use easily of CCDs as pixel devices. One wishes to clear a device between crossings, but the long readout time is no penalty as one can simply turn off the accelerator during readout and avoid any pile up (due to the detector being live to new events whilst being read out) . As regards radiation levels the LHC also presents a considerable challenge, as illustrated in Figure 2. It is clearly hard to get close, and advances are needed, particularly in readout. For a "conventional" detector, this radiation level tends to inflate the device, apart from any issues of particle density in jets. For CLIC the required calculations still need to be made. Beamstrahlung gives approximately two photons of 10-30 GeV accompanying each electron (101°/bunch). However, these are very strongly collimated along the beam direction10. We estimated that the effect of rescatters of these from materials into our detector is not serious, neither as a source of background hits, nor for its effect on chamber lifetime. However, a drift chamber is vulnerable to keV X-rays as a source of background, and one needs to evaluate processes that lead to a more isotropic and lower energy background. Compton back scattering of beamstrahlung off the oncoming beam bunch is one example of a possible mechanism. Fortunately we shall soon be able to learn from SLC experience. For the present exercise we have assumed that radiation levels at CLIC give no problems, but this must be checked.

3. TRACKIMG TASKS In a new environment, one must reconsider the motivation and purpose of a technique carefully, and balance the gain in physics against the cost, both in money and in reduced performance of other devices. Consider the case of no tracking at all. Compact U/Si calorimeters have most attractive performance11 and can do a great deal of jet physics. However, lepton information is degraded if one knows neither the sign of an electron nor whether it came from a primary vertex, and heavy flavours, which can be either a tag or a background to new physics (depending on the process) are lost. Furthermore the credibility of any new outstanding result is hard to establish. Both for multijet signatures and for missing E^ signatures one has to show that a signal of 10 events per year (1 per 1013 interactions) cannot be faked by a chance overlap of two processes each at the 10"6 level. Protection against overlap of minimum bias events is insufficient. A powerful tool is the measurement of the z of the vertex. For a beam spot 10 cm long, one can identify overlapped events with > 99% efficiency. For operation at moderate luminosities it is sufficient to reject these. However, if one wishes to work at or above a luminosity of 103 3 cm"2 s"1, one has to use these events for physics, and that is a much harder question. - 209 -

Two experiments have developed the concept of a non-magnetic tracker, UA2 and D0. In this case one can identify overlapped events, and enhance considerably hadron rejection in inclusive electron identification (no sign of course). The UA2 upgrade has an elegant detector confined within 40 cm radius12, illustrated in Figure 3(a). The UA2 upgrade is designed to play to the strengths of the detector, and to give very high hadron rejection for good electron efficiency. The electron signature is electromagnetic energy in the calorimeter, plus a track, plus a signal in a preshower detector. The main backgrounds and their remedies are three: (1) high p^ y conversion in the beam pipe and Dalitz pairs. One looks for double-ionising "tracks" from the collinear e*e" pair in a silicon detector and jet chamber vertex detector. A gain of 15:1 is anticipated. (2) hadron-photon overlap. A 50 cm2 confusion area is reduced to 0.2 cm2 with a

scintillating plastic fibre preshower detector after 1.5 X0. (3) single hadrons interacting in the electromagnetic calorimeter. A transition radiation detector should gain a factor about 30. This technique is less effective above 100 GeV/c. The D0 detector follows similar principles. It may be significant that given a clean start, they have chosen to use a larger outer radius (Figure 3(b)). There is certainly a tendency for detectors to grow for improved performance. Magnetic tracking expands the power of the detector. One can make a (subjective) list of gains: (1) Sign of e*. (2) Improved hadron rejection for electrons and muons. Possibility of t identification by looking for low invariant mass jets13. (3) Extend u* momentum range. (4) Separate soft and hard tracks, help in determining jet directions. (5) Lepton identification close to jets. (6) Secondary vertices can be identified with a vertex detector. Some momentum information is essential to avoid drowning in a false signal from multiple scattering of soft tracks. (7) Calibrate calorimeter (CLIC). Not so clear if this is any use at LHC. (8) Most importantly, redundancy in signatures for new physics. The extra cross-checks that tracking provides reduce backgrounds and add credibility to discoveries. We now consider magnetic tracking in more detail. First of all, we look briefly at the properties of the events.

4. JET CHARACTERISTICS An extensive study has been presented to this Workshop by Burrows and Ingelman1''. We highlight here one or two points of relevance to tracking. At the level with which we are concerned here, CLIC (e*e_ -» qq) jets and 1 TeV pp jets in the central region are found to be similar. Minimum bias pp events yield 6 charged particles per unit of rapidity. As a representative or preferred model they consider coherent QCD cascade15, which they implement16, followed by non-perturbative string hadronisation. The - 210 -

h. (a) High energy jet fragmentation. Fraction of energy carried by particles of energy E > z E. . as a function of z. Solid line - coherent cascade; dotted - incoherent cascade; dash-àot - coherent cascade with tighter jet definition; long dashes - coherent cascade (Webber model); short dashes - LEP 1. (b) Probability that a nearest neighbour separation (as seen in a pixel device) is less than x, plotted against x, in a 1.5 T solenoid at radii of 10, 50 and 200 cm (solid, dotted, dashed). - 211 -

average charged multiplicity is 80, with a very large spread, and the jets contain a substantial component of hard particles. Selecting jets with at least 90% of the beam energy, Figure A(a) shows the fraction of the jet energy carried by particles with energy E/E. > z, as a function of z. For such a jet at CLIC 40% of the energy J ®t is carried by particles of 90 GeV or more. Figure 4(b) shows the fanning out of particles in a 1.5 T solenoid. The fraction of particles with a neighbour within 2 mm falls from 40% at 10 cm to 4% at 50 cm and 0.4% at 200 cm. We find that a cylindrical chamber with 2 mm track resolution in projection is acceptable beyond about 50 cm radius, where 20% of hits in the core of a jet are lost. Thus the track density, like the radiation background, forces a conventional drift chamber out to large radii. Lastly, we show in Figure 5 distributions for inclusive muons. We see that a lepton from t-decay is typically at 50 mrad from the nearest-neighbour charged particle, and that a p^ (with respect to a reconstructed jet axis) above 25 GeV can be used as a top tag.

5. MAGNETIC TRACKIMG 5.1 Momentum Resolution A desirable aim for momentum precision is to know the sign of a high p^ lepton, say 3o at 1 TeV. This implies Ap^/p^ = 0.3 p^ (TeV). Table 2 compares two possible detectors, where the systematic contribution error is to be kept below about 10%.

TABLE 2 PRECISION NEEDED IN A HIGH p. DETECTOR

Pj^ = 1 TeV B = 1.0 T B = 2.0 T L = 1 m L = 2 m Apl/Pl = °*3

Track sagitta 37 um 300 um Required error 12 um 100 um

Limit on systematic error 6 um 50 um

Attaining 6um systematic error on transverse dimensions in a 1 m radius detector is quite a challenge, and may well demand active calibration/alignment systems. If one is prepared to use a larger volume, and a larger magnetic field, the random and systematic errors needed become less demanding. Momentum resolution is related to position measurement accuracy by:

o A., a sin9 p _ TJ x p* 0.3 B L'/IJ

for a particle of p GeV/c travelling at angle 9 to a magnetic field of B

Tesla, with N measurement of accuracy ox over a length projected L. With equally

spaced points1' - 212 -

1 1 1 i • ' 1 i -t le)

5. Inclusive muon distributions from _L M. e*e" annihilation at 2 TeV. Muon angle

Ne» d9p (a) and pj^ (b) with respect to the jet axis, and angle to nearest neighbour charged particle (c). Solid curve - all muons, dotted - from t decay, dashed - b decay, dash-dotted - from c decay. In (b) an additional curve for all charged particles in jets - long dashes. 0.01 1 i • i I i ' i os aa 1

eP

STEREO AMOLE 0* *' / / / /. - .. J SUPERIWED

Ï- / ///..•

6. (a) Sector of the ZEUS CTD layout5. There are 9 superlayers each with eight sense layers. Five layers are axial with a design resolution ox = 100 pm and four are at small stereo angles, giving effective oz = ox/tan a. The chamber is designed to run with argon-ethane-ethanol (50 um/ns) in a field of 1.8 T. (b) Electron drift lines and isochrones. Superimposed on this is an indication of a stiff track, together with its associated hits, and the apparent track sectors for a similar track from the previous, or three crossings later, LHC bunch. Left-right ambiguities omitted for clarity. - 213 -

Ajj = (720/(1 + 4/N)r

(An improvement factor of 1.68 arises if the points are optimally located (half at centre and a quarter at each end). This is acceptable provided the track-finding is adequate2.) The required accuracy is achieved in a 2 m, 2 T chamber, by 150 measurements of 150 urn accuracy. A detector with 16 points of 10 urn accuracy would have resolution four times better. However, precision without track linkage is useless in dense jets, as one of us (DHS) found out many years ago when putting 1000 particles into a helium bubble chamber, each with one bubble/10 cm.

5.2 LHC Central Tracking For the LHC we need a chamber design that works in a high magnetic field and a high crossing rate. Studies have been made of short drift detectors and yield solutions with over 100,000 channels in the rapidity range -1.5 < y < 1.5 (30° to 150°)2. We decided to investigate the use of devices with fewer channels and longer drift times, but with out-of-time rejection built in to the geometry18. To reduce the number of channels we consider the use of a "fast" gas, such as

CF4 which has a maximum drift velocity of 120 um/ns. Encouraging studies have been made of this gas, but much remains to be done to establish its properties in a high rate and radiation environment18. A number of groups have or are engaged in building "vector" drift chambers with a superlayer structure5'6'20'21. As an example, a sector of the ZEUS Central Tracking Detector (CTD) is shown in Figure 6(a). There are many cells in each superlayer and the line of sense wires is rotated with respect to the radius vector by 45° in each superlayer. With appropriate choice of electric fields, this leads to an electron drift and isochrone structure, as shown in Figure 6(b). Superimposed on this picture is a track and its hits, together with the hits that would be seen if the track had passed through the chamber at the same location, but were out of time by +1 or -3 crossings at the LHC. The loss of points, and in particular, the discontinuity in crossing boundaries (sense wire or field wire planes) is evident. (At the Lausaune meeting, a staggered and tilted cell solution was presented as a possible remedy for out of time tracks3.) A chamber schematic is illustrated in Figure 7. It consists of six superlayers, laid out between 50 and 160 cm radius, with a maximum wire length of 4 m. Some properties are listed in Table 3. This chamber satisfies a number of important constraints. Starting at 50 cm keeps the loss of hits in dense jets to a tolerable level, and provides a reasonable radiation environment for on-chamber electronics (£ 104 rad/year). The instantaneous hit rates/cm are not such that chamber gain will .sag22, and the accumulated radiation doses are well below 1 C/cm after several years, so that there is some optimism for chamber survival23. To achieve this rate capability and life, the gas gain is kept low (2.10*). One must therefore use stereo as a means of z-measurement. This will probably necessitate extra superlayers. The total number of wires is similar to that of existing devices. - 214 -

Riem]

200 H

6

___. 5 ______1) = 1-5 3^ 3 2 _m SL1

100 200 Zlcm]

8. Weak CC interaction simulation (x = 0.1, QJ = 100000 GeV) for 60 GeV e" incident on 8600 GeV protons. - 215 -

TABLE 3

POSSIBLE LHC SUPERLAYER CHAMBER Time between crossings 25 ns Interactions/crossing 1.6 V(drift) 120 um/ns FADC rate 2: 200 MHz

Superlayer Number 1 6

Number of Cells 314 314 Maximum Drift (cm) 0.5 1.5 Maximum Drift Time (ns) 42 125 Total Occupancy (%) 15 46 Hits/Wire/Sec (MHz) 3.7 3.7 Accumulated Charge (Gain = 2.10*)(C/cm/Year) 0.04 0.02 Number of Sense Wires 2512 2512 Number of Field Wires 11300 11300

TOTAL NUMBER OF WIRES: Sense 15000

Field 68000

The construction is naturally modular, and this helps in reducing the thickness of endplates to support the wires. It allows, with a little re-arrangement, for the insertion of TRD layers, but exacerbates the question of survey and alignment. Apart from initial mechanical accuracies, one may wish to consider active systems as in the L3 muon detector2*. We have been interested in the possibility of a collimated X-ray beam for alignment purposes, exciting Pb present in the gas by doping with 1% Pb (CHj)^. Pd or Th emitters can be tuned to the Pb L or K-edges. The all-important question of out-of-time rejection has been studied in a simplified but slightly pessimistic model18 , superimposing a typical jet on minimum bias LHC backgrounds. The loss of tracks is dominated by.the double hit resolution in the event. Out-of-time events cause almost no loss of tracks, and give spurious tracks (in one superlayer) at the 10% level. However, one should emphasise that the correct identification of in-time tracks requires that the trigger uniquely identify the bunch-crossing responsible for the event. We feel a certain vulnerability on the question of z-measurement. Stereo layers give impressive accuracy (oz = c^/tan a). The dominant problem is one of ambiguities in dense jets. 3d seed points are helpful. Charge division at a gas gain 2.10* may give 4% z-resolution in jets (o2 = 16 mm). For tracks with Ax > 5 mm, both z and will be good. For 2 < Ax < 5 mm, the on the second hit is degraded. For the most accurate measurement of the z of a vertex the ideal detector is a pixel telescope, sitting at radii of (say) 3 to 10 cm, in which the track finding in the large drift chamber is an essential back-up to the telescope resolution. Such a device can be conceived now for CLIC. It is not clear that one could make one for the LHC. Radiation damage appears prohibitive at 1033 luminosity for any on-board electronics. - 216 -

Dedicated z-strip silicon detectors are an approach to consider, but we have not had the time to consider any detailed scheme.

5.3 CLIC Central Tracking The solutions considered for the LHC are, of course, still viable but the environment is now much more benign, both because a long quiet time (170 us) follows each crossing, and overlap of events is no longer a problem. As a result, one is able to consider using long drift distances, or gaining precision by the use of a slower gas. Thus the SLD Group use a superlayer chamber with a velocity of 9 pm/ns, and a point resolution of 55 um in tests6. As an alternative, we can consider a jet chamber à la OPAL18 A particular feature of this open-cell design is the use of a laser to calibrate the cell systematics during running. If one wants to take advantage of the long time available between crossings to improve the momentum resolution to Ap^/p^2 =0.1 (TeV) (which will certainly be of use in lepton physics studies) then systematics are dominant. For a chamber with B=2T, L=2m the sagitta is 300 um. It is a challenge to reduce systematic errors below 30 um, but the laser calibration technique of OPAL offers a good approach. If the total error is to be kept down, the random error must be lower. 15 um random error on sagitta is achieved by 160 points of 50 um error.

5.4 Forward Tracking Strong forward tracking is essential for ep physics where the Lorentz boost reverses the roles of the central and forward regions. Collisions at low x (where one looks for preons or observes yg "* tt) fill the central chamber, but with multiplicities and momenta only a little above those at HERA (n^ ^ 20). The time structure is a little more generous than at HERA (165 ns as against 96 ns), so that the detectors proposed for HI and ZEUS provide good examples to follow5>35. High x and high Q2 events are, however, boosted strongly forward, and it is here that one searches for the effects of W^, and for leptoquarks, for example. The physics interest has been described previously26- 2 8 , and we show just one event by way of illustration in Figure 8, overlaid in this case on the geometry of the HI detector. Forward tracking is also beneficial in pp and pp collisions. Even for the exotic high mass processes at the forefront of interest it is of great importance to extend the acceptance, and for more traditional physics (such as W production) reaching lyl < 3 (6 > 5°) is essential. In e*e~ collisions at 2 TeV, t-channel weak exchanges compete with annihilation diagrams and give forward-peaked events, as do the dominant VY reactions. We have considered tracking down to 7° and looked at both dipole and solenoid solutions. Dipoles have a number of advantages - easy access to the vertex region, ease of construction of a modular tracking system, excellent performance at 0°, only limited conflict between the coil and the calorimeter. However we prefer a solenoid, even for forward tracking, for the following reasons: azimuthal symmetry, good forward tracking is possible, and most important the severe limitation on material close to the beam in a dipole, because low momentum secondaries are swept into the - 217 - tracking volume. By contrast in a solenoid they are piped harmlessly away along the magnetic flux tubes. For a machine with an electron beam transverse magnetic fields must be severely limited to avoid catastrophic X-ray fluxes. Thus a dipole detector would have to be a septum magnet (with unpleasant inhomogeneities) , or have a beam pipe shielded by a superconducting tube. It is not clear that any such solution is practical, either on material grounds (several radiation lengths) or for X-ray heat loads at 4 K. For the LHC it is proposed to provide the necessary electron bend by a field of 0.1 T for 20 m through the interaction region29. This gives a comparatively benign X-ray flux. We surmise that this could be provided in a solenoidal detector by additional dipole saddle coils. Note that these form part of the electron beam transport, and so must vary in strength during ramping. The solution we found is illustrated in Figure 9(a). It has eight transverse wire modules at the same radii as the CTD, each module having 12 layers. The azimuthal structure (^ 300 sectors) also follows the central detector layout, not least for ease of track linking between the two systems. The sense wires are radial, giving accurate ^-measurements and the radial co-ordinate is measured by charge division. All the readout is located at the outer radius. The design is based on the HI forward tracker25 and our eight-years-hence resolutions are 50 urn in $ and 1 cm in radius. The total system has 32000 sense and 38000 fieldwires (for one end cap). 
The occupancy of a radial cell is 0.1, the current/wire 0.2 uA and the charge deposition everywhere below 0.05 C cm"1 year"1 for the LHC at 1.6 interactions/crossing30. The momentum resolution on a 1 TeV track is shown in Figure 9(b), compared to a notional dipole solution. Down to the limits of acceptance, this design gives the acceptable performance.

5.5 Detector Size One imagines detector sizes based around such a central/forward detector as shown in Figure 10, which indicates a quadrant of the detector. Below the axis we outline the geometry for ep collisions. The tracking volume is longer than indicated in the preceding section for two reasons: to allow room for TRD electron identification and because the calorimeter gains in precision by starting later11. This layout is to be understood as a stimulus and not as a real design. Allowances for access and supports are notional and it contains unresolved details, which it is left to the reader as an exercise to list. One finishes with an outer radius of 8 to 9 m, rather as expected. Reducing the tracking volume to zero would reduce the overall radius to 6 to 7 m, with a consequent saving in expensive materials. We return to this question in Section 7 below.

5.6 Track Processing in Vector Drift Chambers
The superlayer structure advocated here is particularly suited to high-rate environments, as event reconstruction and data reduction proceed in a modular manner, very well adapted to parallel processing by many identical special-purpose processors. By this means event sizes of 1 Mbyte (tracking) may be digested at an input rate of 40 MHz, giving information to refine the trigger en route. A number of the comments below also apply to a chamber with many radial layers divided logically, but not physically, into superlayers.

9. (a) Schematic of a forward tracking detector of 8 × 12 layers. (b) Momentum resolution (B = 2 T) plotted against polar angle, compared to a notional dipole (B = 0.7 T) design.

10. Notional layout of detector quadrant for LHC/CLIC; below the axis: enhanced forward tracking (following ep needs).

A simulated event in the ZEUS Central Tracking chamber is shown in Figure 11. Only the zero-degree layers are illustrated here, but the left-right ambiguities are shown. One sees the scheme for data reduction allowed by the vector-chamber structure.
Step 1: FADC output converted to hit positions31.
Step 2: Processing of an individual superlayer cell (plus neighbours for overlap): (a) left-right ambiguity solved; (b) out-of-time rejection; (c) (optional) low-pT rejection. Low-pT rejection is possible because within a superlayer we find information on track vectors, considered as straight lines. If a track is supposed to come from the origin, the angle (with respect to the local radius vector) immediately labels the pT of the track. A cut on |dφ/dr| is a good low-pT cut. Thus background and low-pT tracks are safely removed before they are "found".
Step 3: (Optional) High-pT track information to the high-level trigger processor.
Step 4: (Online/offline) Link superlayer vectors into tracks. This is possible pairwise, as two vectors can be members of the same track if they are tangent to the same circle. Using the properties as defined in Figure 12, the quantity η, defined by

η = t₁ × s₁₂ + t₂ × s₁₂

should be zero. This test is unbiased as regards the production point and momentum of tracks and extends to stereo layers. It is robust against confusion by neighbouring tracks at the minimum resolved distance. Track-finding strategies based on this are being developed for the SLD32.
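A minimal numerical sketch of this pairwise linking test follows (Python, with invented geometry). Each superlayer vector is taken as a point and a unit direction in the transverse plane (presumably the elements defined in Figure 12), and × denotes the two-dimensional cross product; η vanishes exactly when both segments are tangent to one circle.

import math

def cross2d(a, b):
    """z-component of the 2D cross product a x b."""
    return a[0] * b[1] - a[1] * b[0]

def eta(p1, t1, p2, t2):
    """Linking test for two superlayer vectors (point p_i, unit direction t_i).
    Returns t1 x s12 + t2 x s12, which is zero if both segments are tangent
    to the same circle, i.e. belong to one helix projected on the xy plane."""
    s12 = (p2[0] - p1[0], p2[1] - p1[1])
    return cross2d(t1, s12) + cross2d(t2, s12)

# Toy check: sample two points on a circle of radius R and use the true tangents.
R = 5.0
for phi1, phi2 in [(0.1, 0.8), (0.3, 1.4)]:
    p1 = (R * math.cos(phi1), R * math.sin(phi1))
    p2 = (R * math.cos(phi2), R * math.sin(phi2))
    t1 = (-math.sin(phi1), math.cos(phi1))   # tangent at p1
    t2 = (-math.sin(phi2), math.cos(phi2))   # tangent at p2
    print(f"same circle : eta = {eta(p1, t1, p2, t2):+.2e}")

# A pair that does not come from one circle gives a clearly non-zero eta.
print(f"mismatched  : eta = {eta((5, 0), (0, 1), (4, 3), (1, 0)):+.2e}")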

The Mark 2 Collaboration associates pairs of vectors from different superlayers, looking for tracks from the origin. A superlayer vector is then labelled by V = (φ₀, k), where φ₀ is the implied azimuth of the track at the origin and k the implied curvature. They use a χ² test based on

χ² = Σᵢ (Vᵢ − V₀)ᵀ Mᵢ⁻¹ (Vᵢ − V₀) ,

where Mᵢ is the error matrix on Vᵢ. V₀ is varied to minimise the χ² (ref. 33). Thus the choice of a suitable chamber layout enables one to take advantage of the power of parallel processing in data reduction.
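A compact illustration of this linking step is sketched below (Python, toy numbers): for Gaussian errors, minimising the χ² above analytically gives the weighted mean of the superlayer vectors, and the residual χ² measures whether they are compatible with one track. The covariances and the true parameters in the example are assumptions.

import numpy as np

def link_vectors(vectors, covariances):
    """Combine superlayer vectors V_i = (phi0, k) with covariance matrices M_i.
    Minimising chi2 = sum_i (V_i - V0)^T M_i^-1 (V_i - V0) gives
    V0 = (sum_i W_i)^-1 sum_i W_i V_i with W_i = M_i^-1."""
    weights = [np.linalg.inv(M) for M in covariances]
    w_sum = sum(weights)
    v0 = np.linalg.solve(w_sum, sum(W @ V for W, V in zip(weights, vectors)))
    chi2 = sum(float((V - v0) @ W @ (V - v0)) for V, W in zip(vectors, weights))
    return v0, chi2

# Toy example: three superlayer measurements of the same track.
true = np.array([0.30, 0.002])                      # (phi0 [rad], curvature [1/cm]) assumed
rng = np.random.default_rng(1)
covs = [np.diag([2e-6, 1e-9]) for _ in range(3)]    # assumed measurement covariances
meas = [rng.multivariate_normal(true, M) for M in covs]

v0, chi2 = link_vectors(meas, covs)
print("fitted (phi0, k):", v0, " chi2 =", round(chi2, 2))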

11. Simulated event in the ZEUS CTD (30 GeV e⁻ on 820 GeV p), NC event with Q² = 1000. Zero-degree layers only shown, with left-right ambiguities.


12. The elements of the η test.

13. (a) Cross-section of the Wellblech chamber structure, giving square cells at 4 mm centres. (b) Layout for CLIC.

6. TRACKING TRIGGERS
How can such chambers contribute to the event trigger? The answers are very different at LHC and at CLIC. At CLIC the events/crossing rate is low. Thus it is beneficial to have a good vacuum in the e⁺e⁻ intersection region, to keep down trigger rates. This should be borne in mind when looking at very narrow beam pipes or plasma lenses. Conventional LEP/PETRA-like trigger schemes seem quite possible, with tracks contributing to topological event signatures based on multiplicity, topology and pT. Sophisticated processing may be possible in 170 μs. For the LHC the situation is much more difficult. The chamber is essentially always busy. Finding a vertex at trigger time is a desirable objective. For example, one would like to link calorimeter clusters to vertices, to see whether all clusters in a certain topological signature come from a common vertex, so as to suppress overlaps of common events. But it looks very hard. In essence a vertex trigger has to resolve congested events. Track-by-track methods seem very vulnerable to false combinations. Centre-of-gravity methods are vulnerable to pile-up, to dominance by slow particles and to limits of device acceptance. New ideas are needed if this is to progress.

High-pT triggers, on the other hand, look possible. High-pT tracks can be found in the outer superlayer (as outlined above), and if desired linked through the superlayers to the vertex. One can trigger on algorithms based on these. Most usefully they could form part of high-pT electron and muon triggers. (At a more trivial level, an isolated photon trigger can be a key to new physics, for example in looking for excited leptons.)
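The connection between the local vector angle and pT, used both for the low-pT rejection in Section 5.6 and for such a high-pT trigger, follows from sin α = 0.3·B·r/(2·pT) for a track from the origin (pT in GeV/c, B in T, r in m), where α is the angle between the track direction and the local radius vector. A small sketch in Python; B = 2 T is taken from Figure 9, while the 1.5 m outer-superlayer radius is an assumption.

import math

def pt_threshold_angle(pt_gev, b_tesla=2.0, r_m=1.5):
    """Angle (rad) between a track from the origin and the local radius vector
    at radius r, for transverse momentum pt: sin(alpha) = 0.3*B*r / (2*pt)."""
    s = 0.3 * b_tesla * r_m / (2.0 * pt_gev)
    return math.asin(min(s, 1.0))

# Angle cut in the outer superlayer corresponding to various pT thresholds.
for pt in (1.0, 5.0, 20.0, 100.0):
    alpha = math.degrees(pt_threshold_angle(pt))
    print(f"pT > {pt:6.1f} GeV/c  ->  |alpha| < {alpha:6.2f} deg")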

7. COMPACT TRACKING DETECTORS
The large-radius, large-field solutions considered above are viable but costly, not least in the effect they have on the sizes of other components. What can be achieved if we are prepared to scale down the tracking volume? As one sees from Table 2 and Figure 4(b), the demands on detector accuracy, precision of systematic error and two-track resolution increase markedly. One is led naturally to investigate solid-state detectors, which offer excellent measuring errors and two-track resolution, but for economic reasons are limited to modest sizes. The detectors and their associated electronics must function in a more hostile radiation environment. To proceed further we take the optimistic view that this can be solved, and look at both gaseous and solid-state detectors. Certainly the momentum resolution suffers. We reject solutions with very high fields. The coils would have to be within the calorimeter, as large high-field volumes are not yet practical, and the coil thicknesses would be quite unacceptable in this location. We have looked at a possible 'super silicon' detector in the CDF detector, replacing the existing detector by 50 Si layers at 1 cm radial spacing, with strip sizes of 50 μm to 200 μm. The momentum resolution is compared to that of the existing detector (large gaseous drift chamber + 4-layer Si detector) in Table 4. The momentum resolution is everywhere worse, and dominated by multiple scattering up to substantial momenta, even on optimistic assumptions about material thickness.

TABLE 4
MOMENTUM RESOLUTION OF CDF AT 90° (B = 1.5 T)

                                     σ(pT)/pT² (TeV⁻¹)
Track momentum (GeV)                 100     10      1
Existing detector (1.3 m radius)     0.4     0.4     1
Super Si detector (0.5 m radius)     1.5     3       22
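The qualitative behaviour in Table 4, a measurement-limited term that is flat in σ(pT)/pT² plus a multiple-scattering term that grows towards low momentum, can be reproduced with the usual Gluckstern-type estimate (ref. 17). The sketch below is indicative only: the detector parameters (point resolution, number of points, material) are assumptions and will not reproduce the exact numbers in the table.

import math

def sigma_pt_over_pt2(pt, b, length, sigma_x, n_points, x_over_x0, beta=1.0):
    """Approximate sigma(pT)/pT^2 in 1/GeV for a solenoidal tracker.
    Measurement term (Gluckstern): sigma_x/(0.3*B*L^2) * sqrt(720/(N+4)).
    Multiple-scattering term: ~0.0136*sqrt(x/X0)/(0.3*B*L*beta*pT)."""
    meas = sigma_x / (0.3 * b * length**2) * math.sqrt(720.0 / (n_points + 4))
    ms = 0.0136 * math.sqrt(x_over_x0) / (0.3 * b * length * beta * pt)
    return math.hypot(meas, ms)

# Illustrative comparison of a large gaseous tracker with a compact Si tracker
# (all detector parameters below are assumed for the sketch).
for name, (L, sx, N, x0frac) in {
    "gaseous, 1.3 m": (1.3, 200e-6, 60, 0.03),
    "silicon, 0.5 m": (0.5, 10e-6, 50, 0.15),
}.items():
    vals = [sigma_pt_over_pt2(pt, 1.5, L, sx, N, x0frac) for pt in (100.0, 10.0, 1.0)]
    print(name, " sigma(pT)/pT^2 [1/TeV] at 100/10/1 GeV:",
          ["%.1f" % (v * 1e3) for v in vals])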

An all-silicon detector seems prohibitive in cost beyond 15 cm radius. We therefore look for techniques which can provide track linkage between sparse Si layers, or for techniques that can fill the volume up to 50 cm with devices with good position and two-track resolution, and which tolerate the rate and accumulated radiation there at the LHC. A good candidate for linking is the modular multidrift vertex detector, which offers excellent rate capability, two-track resolution of 300 μm, several hits/cm on each track and a position resolution of order 100 μm34. We heard at this meeting of several devices of interest. The first is the "Wellblech" structure chamber, constructed with corrugations, currently of 4 mm cell size (see Figure 13). Planar prototypes have been built 1 m long and tested satisfactorily. Mechanical tests for rolling a multilayer structure have been encouraging35. This is logically similar to the straw chamber considered for the SSC2, but mechanically a different solution. A layout for CLIC would have 11000 Wellblech channels and 200000 silicon strips, between 1 and 50 cm radius. Another interesting device is the induction chamber, as indicated in Figure 14 (refs 5, 36). Particles pass normally through the detector layer and their ionisation causes avalanches on the anode wire. The position measurement comes from the left-right asymmetry of pick-up on the potential wires. With a 600 μm cell (giving the same two-track resolution after some logic) position resolutions of 14 μm have been measured. Cathode strips give an orthogonal co-ordinate within a cell. The chamber is self-calibrating and has enormous rate tolerance. Up to 10⁷ hits/cm/s have been seen with success. Smaller cells can be constructed to improve the two-track resolution further.

Such a chamber can work down to less than 10 cm from the beam and can resolve jet cores to 30 cm. A large track chamber could have 190 K wires and 60 K strips, cost perhaps 16 MSFr, and give outstanding momentum resolution. Alternatively one could use these properties in the inner part of a large detector to reconstruct jets at radii below 50 cm. Cool-gas vertex detectors are very interesting for CLIC, and excellent precisions have been achieved with dimethylether. The scattering limitations in Table 4 could therefore be overcome5,8. Time-expansion techniques may be very useful here37.

14. Principle of operation of the induction chamber. Position measurement from (L-R)/A. Sizes as proposed for the atmospheric-pressure chamber for ZEUS5.
15. Decay schematic: (a) possible sequence of decays in a tt̄ event; (b) impact parameter definition (called d in this figure).

8. VERTEX RECONSTRUCTION

8.1 Event Characteristics
We speculate briefly in Section 8.3 below on the use of the z-co-ordinate of the vertex to disentangle overlapped events. We concentrate here on the elucidation of individual event topology. One expects to use multivertex signatures as a tag (or veto) for flavours, and as a search for new physics. Thus seeing a muon or electron coming from the primary vertex is a clear sign of t, W or heavier objects being produced. Secondary vertices are signatures for c, b, τ production, possibly by cascade from heavier objects. We remind ourselves (Figure 15) of some principles. Particles decay after travelling an average distance βγcτ in the laboratory (τ = lifetime). The decay products have opening angles proportional to γ⁻¹, so that when extrapolated back, they miss the production point by an impact parameter, d, which is independent of γ. The impact parameter is then a useful quantity, as it can be studied inclusively without reconstructing the decay particles or knowing their momenta. Most b-lifetime measurements have relied on this principle. Table 5 shows a model calculation for particles resulting from the decay of particles in e⁺e⁻ → qq̄ events at CLIC (2 TeV). The impact parameter quoted is in three dimensions, with respect to all confusing vertices in the same event37.

TABLE 5
PROPERTIES OF DECAY PARTICLES IN e⁺e⁻ → qq̄ AT CLIC

qq̄                          cc̄      bb̄      tt̄
(GeV)                        150     95      38
Impact parameter (μm)        115     310     430

The particles are very fast, so that multiple scattering is not a problem. The 100-400 μm spatial impact parameter sets the scale for the problem. In any colliding-beam vertex detector, the impact parameter error depends on the position measuring error of the detector, and on the lever arm of the extrapolation to the vertex. For a simple two-layer detector with resolution σₓ and layers at distances r₁, r₂ from the axis, we find for the error on the impact parameter projected on to the xy plane

σ²(d₀) = σₓ² (r₁² + r₂²)/(r₂ − r₁)² + [0.015 √t / (pβ)]² r₁² ,

where p is the particle momentum in GeV/c, β is its velocity (c = 1), and the beam pipe and first plane have a thickness of t radiation lengths. (We have assumed that r₁ is sufficiently small that the error on the track curvature is unimportant. The track is taken at normal incidence.)
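A direct transcription of this two-layer estimate follows (Python); the layer radii, point resolution and material budget in the example are assumptions, while the 0.015 GeV multiple-scattering constant follows the formula above.

import math

def sigma_d0(r1, r2, sigma_x, t_rad_len, p_gev, beta=1.0):
    """Impact-parameter error (projected on xy) for a two-layer vertex detector.
    r1, r2: layer radii [m]; sigma_x: point resolution [m];
    t_rad_len: beam pipe + first layer thickness in radiation lengths;
    p_gev: momentum in GeV/c. Straight-line extrapolation, normal incidence."""
    extrap = sigma_x * math.sqrt(r1**2 + r2**2) / (r2 - r1)       # geometric term
    mscat = (0.015 / (p_gev * beta)) * math.sqrt(t_rad_len) * r1  # scattering at r1
    return math.hypot(extrap, mscat)

# Example (assumed): 5 um points at 3 and 6 cm, 0.5% X0 of material.
for p in (2.0, 50.0):
    d = sigma_d0(0.03, 0.06, 5e-6, 0.005, p)
    print(f"p = {p:5.1f} GeV/c  ->  sigma(d0) ~ {d * 1e6:5.1f} um")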

Interesting detectors need excellent two-track resolution and σ(d₀) below d₀.

We therefore require r₁/r₂ small, and σₓ small. The radius r₁ and the thickness t should also be kept down, though, as seen in the table, at CLIC energies this problem does not dominate as it does, for example, at LEP. At the LHC, the radiation background may force the detector out to larger radii than at CLIC, resulting in degraded performance. One can distinguish three regimes:

(1) σ(d₀) << d₀. Detailed event-by-event reconstruction and assignment of individual particles to vertices is possible.
(2) σ(d₀) ≈ d₀. Event tagging and flavour-enriched samples are possible.
(3) σ(d₀) > d₀. Statistical estimates of average properties are possible. (For example, b-lifetime measurements have been done this way.)

8.2 Detector Techniques
We have considered gaseous techniques already in Section 7. There are other possibilities which come within a few cm of the vertex and offer perhaps even better precision. One possibility is to use scintillating fibre detectors. These are still at an early stage of development but already show some promising properties39. Glass fibre sizes of 20-30 μm have given position resolutions of 21 μm in tests, with

characteristic scintillation times of 48 ns (Ce₂O₃ doped). In a fibre bundle one sees 2.1 hits/mm through a readout system. The radiation length is 9.8 cm. The device is radiation hard to > 10⁷ rad and restores (partially) by annealing. However, there are problems to solve before it can be useful in a colliding-beam environment. There is a very short attenuation length (2.5 cm), believed to be due to unwanted Ce** ions, and for reasonable-size devices, without many radiation lengths of matter, light yields and the coherence of bundles need to be improved. Imperfections in cladding may limit attenuation lengths to 10-20 cm for fibres of 20-30 μm, independent of problems of absorption. It is not clear whether these will find application at the LHC. Silicon strip detectors are being actively developed now and the performance is exciting. Figure 16 indicates position resolutions achieved with 25 μm strips. The standard deviation of the distribution is 2.6 μm. Collection times of 4 ns for electrons and 10 ns for holes are possible, and there are interesting developments in VLSI readout40. There are, however, developments necessary before they can be useful at the LHC.

One can anticipate reductions in material from 0.3% to 0.15% of a radiation length per layer, and major cost reductions. Resolution may be somewhat degraded in a high magnetic field, and for z-strips with steeply dipping tracks, because of the thickness of the active layer. Cross-survey of mosaic devices is not a trivial task. The major limitations are in the readout. Power consumption is high (at LEP it is planned to ramp the supplies for crossings). Radiation resistance of the electronics is bad: 10⁴ or 10⁵ rad (10⁶ rad for the detector). CMOS-SOS promises high speed and much improved radiation resistance. Costs are higher at present. Research and development is clearly indicated here.

For the CDF experiment at the Tevatron collider a four-layer Si strip detector is being constructed (see Figure 17)41. The impact parameter resolution is shown as a function of momentum for CDF in Figure 18, and for possible upgrades. A resolution of 10 μm is achieved at high momentum, but it is very much worse below 2 GeV/c. One sees immediately that in a non-magnetic detector, low-momentum particles would completely obfuscate the vertex, but that even modest momentum resolution is sufficient to sort this out, so that one can, for example, try to assign leptons to vertices. CCD detectors offer similar precision, but in two dimensions instead of one. Figure 19 gives a layout for the SLD vertex detector42. The pixels are 22 μm × 22 μm and the position resolution is 6 μm × 6 μm. Bulk radiation damage is acceptable to 10⁶ rad, though surface damage may occur at 10⁴ to 10⁵ rad. This may be partly cured by annealing. The depletion layer is very thin (16 μm), giving a 1000-electron signal. The device is cooled to 200 K by helium gas to give a S/N of 20/1. The power of the pixel method is illustrated in Figure 20, which shows a head-on

view of a Λc production event in a fixed-target experiment43. The three particles

which compose the Λc have measured impact parameters with respect to its decay vertex of 6, 3 and 8 μm, and with respect to the production vertex of 33, 47 and 29 μm (all ± 6 μm). Thus they are unambiguously assigned to the decay vertex, even at a lifetime of 1.4 × 10⁻¹³ s. Examining this picture, one sees that working in projection (on the x or y axis) alone would not have disentangled this decay. In a Monte Carlo study of e⁺e⁻ → qq̄ at CLIC we obtain high efficiencies for assigning particles to their correct vertices, 90% or more for the primary vertex38. One is encouraged to persist.
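The assignment logic used for this Λc event can be written down in a few lines (Python). The impact parameters and the common 6 μm error are the values quoted above; each track is simply attached to the vertex with the smallest significance |d|/σ.

# Impact parameters (um) of the three Lambda_c daughter tracks with respect to
# the decay vertex and to the production vertex, all with a 6 um error
# (numbers quoted in the text).
tracks = {"track 1": (6.0, 33.0), "track 2": (3.0, 47.0), "track 3": (8.0, 29.0)}
sigma = 6.0

for name, (d_decay, d_prod) in tracks.items():
    s_decay, s_prod = d_decay / sigma, d_prod / sigma
    vertex = "decay" if s_decay < s_prod else "production"
    print(f"{name}: |d|/sigma = {s_decay:.1f} (decay) vs {s_prod:.1f} (production)"
          f" -> assign to {vertex} vertex")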

The rate capability is not adequate for the LHC. The present readout time (16 ms) can probably be speeded up by a factor of 20. The devices are continuously live during readout and there is no fast clear. However, all of these features are acceptable at CLIC, where in addition there is the possibility of turning off the accelerator during readout.

8.3 Maximum Viable Luminosity at LHC
The drift chamber design outlined in Section 5.2 fails at mean rates much above 2/crossing. Can we go higher with a more powerful design? Let us look at the limit set by z(vertex) accuracy. The z-co-ordinate of the vertex (along the beam) is a reliable way to unravel overlaid events, and a vital signature in the case that we wish to assert that a one-in-10¹³ multiparton event topology is not a fake caused by the overlap of two one-in-10⁶ events. Let us suppose that we can manage all track reconstruction successfully and reconstruct vertices to better than 100 μm in z. We assert that in this case the two-vertex resolution cannot be much better than 1 mm, because of the ambiguity of how many vertices to make. (For jets at high y, the narrow opening angles mean one must use soft tracks or other jets as well to get good vertex accuracy.) Figure 21 shows, as a function of luminosity, the fraction of all events having another vertex within 1 mm. (This is for a 25 ns detector resolving time. Shorter crossing intervals are useful only if matched by improved detector time resolution.) These events will

18. Impact parameter resolution plotted as a function of momentum for the CDF detector and possible upgrades.

19. Layout of the CCD pixel vertex detector being constructed for SLD. Dimensions in mm. 228 -

21. Fraction of all events at the LHC having another production vertex within 1 mm, as a function of luminosity, for a 25 ns resolving time44.

be irresolvable and hence, if a rare signature is needed, must be kept to a low level to avoid chance backgrounds. Clearly there is a transition at luminosities between 10³³ and 10³⁴ cm⁻² s⁻¹. Full-quality event information is only available at or below 10³³. To be discoverable at 10³⁴ luminosity, an event must have a truly outstanding signature. We should learn a lesson from the Υ. A broad-band hadronic search discovered it, because of its signature. Orders of magnitude more information came from e⁺e⁻ colliders. But would they have known where to look? Eventually, yes. What is the maximum viable luminosity for detectors at CLIC? This can be answered for tracking when we know the radiation backgrounds (Section 2). The limit from accelerator techniques will probably be below that set by the event rate. Since the interesting cross-sections are "equal" at LHC and CLIC, that deserves some thought.
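The shape of Figure 21 can be reproduced with a simple Poisson estimate: the number of extra interactions in a 25 ns resolving time is n = σ_inel · L · Δt, and if the vertices are spread along the beam with an r.m.s. σ_z, the chance that at least one of them falls within ±1 mm of a given vertex is roughly 1 − exp(−n·f), with f the fraction of the vertex-difference distribution inside the window. The 5 cm σ_z below is an assumption; the 100 mb cross-section is the value used in Table 6.

import math

SIGMA_INEL_CM2 = 1.0e-25   # ~100 mb total pp cross-section (Table 6)
RESOLVE_S      = 25e-9     # detector resolving time = one bunch crossing
SIGMA_Z_CM     = 5.0       # r.m.s. length of the luminous region (assumption)
WINDOW_CM      = 0.1       # +/- 1 mm two-vertex resolution

def frac_overlapping(lumi_cm2_s):
    """Fraction of events with at least one other vertex within +/- 1 mm."""
    n_extra = SIGMA_INEL_CM2 * lumi_cm2_s * RESOLVE_S
    # z difference of two Gaussian vertices has r.m.s. sigma_z*sqrt(2)
    f_window = 2 * WINDOW_CM / (math.sqrt(2 * math.pi) * SIGMA_Z_CM * math.sqrt(2))
    return 1.0 - math.exp(-n_extra * f_window)

for lumi in (1e32, 1e33, 1e34, 1e35):
    print(f"L = {lumi:.0e} cm^-2 s^-1 : overlap fraction ~ {frac_overlapping(lumi):.3f}")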

9. RESEARCH AND DEVELOPMENT NEEDS
The technical advance in detectors needed from LEP to the LHC is at least as great as that which occurred during the lifetime of the ISR. HERA provides the first step, as detectors have to be designed for continuously live operation with out-of-time rejection and online data rejection. The crossing rate (96 ns) approaches that of the LHC. One can easily list a series of topics where work is needed, some of it quite basic: for example on fast gases, scintillating fibres and radiation resistance, both of solid-state detectors and, even more, of electronics. For wire chambers we need to develop:
- high-rate, good-resolution, good two-track resolution detectors;
- fast gases for use in high magnetic fields;
- choice of materials and gases for high-radiation environments;
- compact mountings;
- alignment systems at the 10 μm level (active?);
- on-chamber electronics: cost, size, power, radiation hardness;
- special processors for track finding;
- z-readout and assignment in congested events;
- triggering processors to work at ≳ 40 MHz input rate.
For scintillating fibres we need more light, less attenuation, less crosstalk and geometric coherence. For silicon microstrips we need:
- radiation-resistant readout (SOS?);
- radiation tolerance of the detector;
- strips on both sides;
- cross-survey;
- electronics heat/size reductions.
For pixel devices, virtual-phase CCDs are MRad-hard, but have never been tried for HEP use. There are promising developments using pn techniques which promise S/N > 100, no cooling, fast clear, 100 MHz clock rate and no surface radiation effects. The overall scale of the work is modest and an input of resources is very desirable. Can we build a pixel device that can work at LHC rates? Even with modest resolution, such a device could transform track finding in dense jets, and ease dramatically the nervousness we expressed about z-measurement in Section 5.2.

We are much encouraged by the interest and dedication of groups working on new devices, and by the supply of new ideas to tackle this formidable regime. Now is the time for basic work on techniques, data handling and trigger processing.

Funding for R&D is a problem in some countries, and there is surely a role for European co-ordination here, if only to provide overview and integration, as an impetus towards larger-scale efforts. Specialised meetings and workshops for the exchange of views on technical matters will be most helpful, and we admire the American initiative in this direction45.

By 1990 or so, considerable technical expertise will be available, and if necessary one could move fast towards building detectors, provided the foundations are laid now.

CONCLUSIONS

There are options on the scale of tracking in a detector, but the physics reach of the detector increases as tracking is strengthened (jets, leptons, high luminosity, heavy flavours). One should not underestimate the vital ingredient of credibility. The more checks one has the easier it is to satisfy oneself of the validity of a new discovery.

Forward tracking enhances the physics. The increased acceptance is important for Higgs and W physics in pp, for t-channel physics at CLIC, and is vital for ep studies. A possible scheme has been sketched.

We have not had the time to explore dedicated heavy flavour detectors which could look very different from the "universal" detectors considered here and would expand the breadth of physics studied at the LHC. New discoveries as basic as CP violation may await us in the B-decay sector. For a clear discussion of forward spectrometers see the work of Cox et al46.

We have attempted to compare LHC and CLIC as places to work in Table 6. It is evident that today (1987) we find CLIC a more comfortable place to work. A sustained research and development effort is needed for the LHC. It is gratifying to see the number of approaches and the enthusiasm. Let us begin.

ACKNOWLEDGEMENTS

We wish to thank J H Mulvey and the members of the committee for organising this Workshop, Ch Petit-Jean-Genaz and M Mazerand for their cheerful and sustaining support, and myriad funding agencies for getting us there. There was so much physics to do there wasn't enough time in the day.

TABLE 6
COMPARISON OF LHC AND CLIC

                                LHC                     CLIC
Cross-section                   100 mb                  1 nb
Tracks/sec at 10³³              2 × 10⁹                 80
Bunch spacing                   25 ns                   170 μs
Central detector
  - Large, gaseous              Not too formidable      OK
  - ΔpT/pT at 1 TeV             0.3                     0.1
  - Tracking trigger            High pT, e, μ           OK
  - Smaller detector            Lots of approaches
Si detectors
  - Radiation damage            Electronics             Calculate/check SLC
  - Pixel                       Rate/pick-up?           OK
High luminosity limit?          ~ 10³³                  OK
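The first two rows of Table 6 follow directly from σ·L and a rough charged multiplicity per event; a two-line check in Python, with the multiplicities assumed:

LUMI = 1e33                       # cm^-2 s^-1, the reference luminosity of Table 6
cases = {
    # name: (cross-section [cm^2], assumed charged tracks per event)
    "LHC (100 mb)": (100e-27, 20),
    "CLIC (1 nb)":  (1e-33,   80),
}
for name, (sigma, n_trk) in cases.items():
    rate = sigma * LUMI            # events per second
    print(f"{name}: {rate:.2e} events/s, ~{rate * n_trk:.1e} tracks/s")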

REFERENCES

1. G Hanson and D Meyer, in Proc. 1984 Summer Study on the Design and Utilisation of the Superconducting Super Collider, Snowmass, Colorado, pp 585-592; also published as SLAC-PUB-3428 (1984).
2. D G Cassel and G G Hanson, SLAC-PUB-4130, to appear in Proc. 1986 Summer Study on the Physics of the Superconducting Super Collider, Snowmass, Colorado.
3. A Wagner, in Large Hadron Collider in the LEP Tunnel, ECFA 84/85, CERN 84-10, Vol. I, pp 267-281.
4. G Bellini and P G Rancoita, in CERN 84-10, Vol. I, pp 282-291.
5. ZEUS Collaboration, Technical Proposal (1986).
6. W B Atwood et al., Nucl. Instr. Meth. A252 (1986) 285.
7. J Va'vra, Nucl. Instr. Meth. A244 (1986) 391.
8. M Basile et al., Nucl. Instr. Meth. A239 (1985) 497.
9. Report of the SSC Task Force on Detector R&D for the SSC, SSC-SR-1021 (1986).
10. P Chen, these proceedings.
11. T Akesson, these proceedings.
12. C N Booth, Proc. 6th Int. Conf. on pp Physics (Aachen, 1986), to be published; and CERN/SPSC/84-30 and 84-95.

13. W Kittel, these proceedings.
14. P N Burrows and G Ingelman, these proceedings.
15. G Marchesini and B R Webber, Nucl. Phys. B238 (1984) 1; B R Webber, Nucl. Phys. B238 (1984) 492.
16. T Sjöstrand, Comp. Phys. Comm. 39 (1986) 347.
17. R L Gluckstern, Nucl. Instr. Meth. 24 (1963) 381.
18. E Elsen and A Wagner, these proceedings.
19. J Fischer, A Hrisoho, V Radeka and P Rehak, Nucl. Instr. Meth. A238 (1985) 249.
20. G G Hanson, Nucl. Instr. Meth. A252 (1986) 343.
21. M Atac et al., Nucl. Instr. Meth. A249 (1986) 265.
22. A H Walenta, Nucl. Instr. Meth. 217 (1983) 65.
23. J Va'vra, in Proc. Workshop on Radiation Damage to Wire Chambers (ed. J Kadyk), LBL-21170 (1986), p 263.
24. V Becker, Nucl. Instr. Meth. 225 (1984) 456; D Antreasyan et al., Nucl. Instr. Meth. A252 (1986) 304.
25. H1 Collaboration, Technical Proposal (1986).
26. G Altarelli et al., in CERN 84-10, Vol. II, pp 549-570.
27. J A Bagger and M E Peskin, Phys. Rev. D31 (1985) 2211.
28. R J Cashmore et al., Phys. Rep. 122 (1985) 275.
29. W Bartel et al., these proceedings.
30. J B Dainton, these proceedings.
31. G Ekerlin et al., to be published in IEEE Trans. Nucl. Sci., February 1987.
32. W B Atwood, SLD Note 135, SLD Drift Chamber Note 53 (1985).
33. J Perl et al., Nucl. Instr. Meth. A252 (1986) 616.
34. R Bouclier et al., CERN-EP/85-169 (1985).
35. M Tonutti, these proceedings.
36. H-J Besch and A H Walenta, these proceedings.
37. A H Walenta, IEEE Trans. Nucl. Sci. NS-26, No. 1 (1979) 73.
38. D P Kelsey and M Pepè, these proceedings.
39. M Atkinson et al., CERN-EP/86-110 (1986), submitted to Nucl. Instr. Meth.
40. DELPHI Microvertex Group, DELPHI 86-86 GEN-52 (1986).
41. F Bedeschi et al., INFN PI/AE 86-4 (Pisa, 1986).
42. D C Imrie, these proceedings.
43. S Barlag et al., CERN-EP/86-173, to be published in Phys. Lett.
44. M Albrow, these proceedings.
45. M Gilchriese, these proceedings.
46. B Cox, F J Gillman and T D Gottschalk, CALT-68-1411, FERMILAB-CONF-86/166, to appear in Proc. Snowmass 1986.

PARTICLE IDENTIFICATION AT THE TEV SCALE IN pp, ep AND ee COLLIDERS

Report of Group G6-Particle Identification

M.Basile, A.C.Benvenuti, V.Cavasinni, M.Della Negra, L.Di Lella,

Y.Ducros, K.Eggert, M.A.Giorgi, K.Heinloth, E.W.Kittel, B.Merkel,

F.L.Navarria, F.Palmonari, K.Wacker and C.Zupancic.

Presented by F.Palmonari

Dipartimento di Fisica and INFN, Bologna, Italy

ABSTRACT

At the TeV scale the relevant particles are quarks, fragmenting into jets, and leptons.

We studied the main patterns for the identification of jets, e, μ, τ and ν. Hadron colliders were taken as the reference to calculate the rejection needed against hadrons in lepton identification, because in ep and especially in ee colliders the background levels are lower.

Muons play a special role because they seem to be the only lepton which can be identified inside jets. It is also possible to tag heavy quarks with the help of precise vertex detectors.

The identification of π/K/p is feasible for special-purpose detectors.

INTRODUCTION: PARTICLES AT 1 TEV.

Experimentation at future colliders is meant to open up a new domain of phenomena at the energy scale of 1 TeV, i.e. much higher than the hadron mass scale of 1 GeV.

The particle identification power of an experimental apparatus must be evaluated by its capability of detecting new particles or new interactions.

The experiments at the CERN Collider have shown that already at a ten times smaller scale (100 GeV), the observed objects are on one side the standard electroweak elementary fermions (charged leptons and neutrinos) and bosons (photons, Z⁰ and W±), and on the other side quarks and gluons, detected as jets of ordinary hadrons. At the TeV scale the relevant "elementary" particles to be identified will be leptons and quarks, and the relevant interactions between them those mediated by the electroweak gauge bosons and by gluons.

The expected situation is neatly shown by, for example, simulating "event displays" produced at a future 10 TeV (pp) collider. A good way of representing those events is a 'Lego plot' in which the transverse energy flow of all produced particles is represented as a function of their pseudorapidity η and azimuthal angle φ. The chosen bin width is 6° in Δφ. The simulated events are 'hard scattering' central events with pT(parton) > 300 GeV/c.

The Lego plots show clearly the main event characteristics, represented by a small number of spikes emerging from a background of many bins with low-energy deposition. Each spike is a 'jet' of particles, charged and neutral, every event comprising one hundred or more particles of any type. Electrons, γ and π⁰ deposit their energy in the first part of the calorimeter, while hadrons require the full depth of the calorimeter to be totally absorbed.

In contrast, muons lose only a small fraction of their energy by ionisation and neutrinos escape undetected. For this reason muons and neutrinos are labelled in the Lego plot to signal their odd behaviour in any calorimeter.

In Fig. 1a) a typical gluon hard-scattering event is shown, giving 5 jets; in b) only four jets giving more than 20 GeV in a single cell emerge (the spike in the right-hand corner is a measure of the vertical energy scale at 100 GeV). The event in c) shows the production of a (b b̄) pair almost opposite in azimuth. Here the b decayed semi-leptonically, producing a 120 GeV muon inside the jet and a ~100 GeV neutrino nearby. The two-jet character of the event is apparent, but in addition four 'gluon jets', due to initial- and final-state radiation of the interacting constituents, show up as additional spikes of more than 20 GeV. These 'background jets' are clearly dangerous because they can mask the main features of an interesting hard-scattering event, as in d) where another (b b̄) event is shown, giving a much more complicated pattern. Here the b quark cascaded into four different jets which now look similar to the background gluon jets.

From the above description it is clear that the relevant patterns to be studied at the TeV scale are:

1) jets of hadrons originated by quarks and gluons;

2) penetrating tracks originated by muons;

3) electromagnetic showers originated by electrons (and γ);

4) missing energy-momentum originated by neutrinos.

New particles generally show up as invariant masses of leptons and quarks, so one is faced with the old problem of studying invariant mass spectra of well-identified "particles": any ambiguity results in a considerable increase in the combinatorial background and a corresponding weakening of the discovery power of any detector.

The problem of particle identification is a problem of pattern separation. The main question we have addressed is whether the overlapping between the four patterns above and the background level is sufficiently low to give clean signals in the search for new particles.

The experimental conditions are clearly very different in (pp), (ep) and (e⁺e⁻) machines, due both to different dominant interaction mechanisms and to different background sources.

Although some consideration will be given in the following to particle identification in (ep) and (e⁺e⁻) machines, we have chosen, as "case studies" for testing the particle identification power needed in experiments at the TeV scale, to study mainly (pp) or (pp̄) experiments. The worst experimental conditions are indeed encountered in hadron colliders, where a sizeable background to "hard" processes is represented by the radiated gluon jets. We will examine the possibility of identifying the elementary particles taking into account the level of background rejection needed in some characteristic processes of "new physics" produced in hadron colliders. The corresponding background rejection needed in (e⁺e⁻) is much lower, while for (ep) colliders the situation is somewhat intermediate, but in this case the centre-of-mass motion of the ep collision needs a very detailed study in different kinematic regions.

1. - IDENTIFICATION OF QUARKS AND GLUONS.

The pattern of quarks and gluons at high energy is a jet of 'ordinary particles'. Jets are a recent discovery in particle physics, and their experimental characteristics have been studied only in a limited range of energies: up to ~20 GeV in (e⁺e⁻) collisions and up to ~100 GeV in (pp) interactions. The emerging picture presents in both cases three main features:

1) Jets are quite collimated in space if one measures the 'energy flow'; 85% of the jet energy is contained in a cone of 10° half-aperture (see Fig. 2, taken from Lausanne 84 [1]).

The best way of detecting jets experimentally has been proven to be a hadron calorimeter of adequate granularity (typically ~5°) together with 'energy cluster' algorithms, much in the same way as is normally done in finely segmented e.m. calorimeters for γ-ray and π⁰ reconstruction.

2) As the energy grows, quarks and gluons shower into an increasing number of jets. This has been shown clearly in (e⁺e⁻), where the initial number of partons is two, but already at √s = 40 GeV the mean number of jets is ~2.5, depending on the algorithm.

3) Quark and gluon jets look similar as far as energy flow and particle multiplicity are concerned. On this point further experimental studies are needed before making more quantitative statements.

In parallel with the experimental study of jets there has been great progress in the theoretical understanding of the jet mechanism. Starting from different assumptions, and ending eventually with different mechanisms for the materialisation of partons into ordinary hadrons, the jet simulation programs available today reproduce the experimental data reasonably well up to the highest energies. All models require a number of adjustable parameters, making it difficult to establish their relative merits and theoretical relevance. Discussions on this subject can be found in the literature [2]; here we are interested only in the use of simulations to study the jet pattern at 1 TeV. The gross features of all models are first the generation of a parton cascade, described almost perturbatively, followed by a hadronisation mechanism for each parton or parton cluster. The fact that the 1 TeV jet simulations from different models give quite similar results suggests the following observations: i) jet simulations give a fairly good description of the average jet properties; ii) when those simulations are used to reproduce detailed jet characteristics at the level of 1% or better, as required e.g. by the study of selection criteria, large uncertainties can be expected on such a large extrapolation in energy.

The jet pattern identification is today a well-understood problem from the experimental point of view; on the contrary, the identification of the nature of the original "elementary" quark (or gluon) is a much more difficult task. Let us examine below three aspects of this problem.

1.1 - Quark-gluon separation.

It is now clear, both from experimental data and from the theoretical understanding of the jet mechanism, that the separation of quark from gluon jets is very difficult on an event-by-event basis. The particle composition of quark and gluon jets is very similar, and this is the main reason why at 1 TeV the π/K/p identification does not seem a relevant characteristic for a general-purpose detector. The nature of the "leading particle" in the jet can give some information on the original parton, but this is true only on a statistical basis.

Let us consider the simulation of 1 TeV quark jets by Burrows and Ingelman [3], made at √s = 2 TeV in (e⁺e⁻). In Fig. 3 the energy distribution of the jets detected by a standard calorimeter (Δη × Δφ = 0.1 × 5°) is shown, displaying the large number of secondary jets radiated by the primary parton. Fig. 4 shows a comparison of the fragmentation function for quark and gluon jets: their difference is too small for any practical purpose. Only at very high values of z (the fractional energy carried by jet particles) do the distribution functions differ in a significant way: quark jets have more hard particles. Suppose now we want to separate quark jets from gluon jets by applying a lower cut on the fractional energy of the jet particles. A cut at z > 0.4, for example, reduces by a factor of one hundred the contamination of gluon jets in a quark jet sample, but it introduces a comparable reduction in the detection efficiency for the quark jets we want to select.

This illustrates the difficulty of identifying the origin of jets from their particle nature or energy distribution. Only heavy quarks can be identified to some extent, but in a different way, as we will see below.

1.2 - Quark flavour identification.

Heavy quarks play a fundamental role at the TeV scale for particle identification, for at least two reasons. First, any new high-mass object will decay mainly into the highest-mass quarks available; secondly, all high-mass quarks have a sizeable semileptonic decay rate.

When the decay leptons are a (μν) pair, this produces (~10% of all decays) a characteristic jet pattern, i.e. a prompt muon inside a jet. The other types of semileptonic decays, namely the electron and the τ lepton decays, are difficult to distinguish inside a jet, giving energy deposition in the calorimeter similar to that of the other particles present in the jet. The muon signal can also be faked, by π/K decays or by hadron cascades producing punch-through particles at the back of the calorimeter. The amount of background on prompt muons inside jets depends on the relative inclusive rate of prompt muons over pions. This rate differs by orders of magnitude in (e⁺e⁻) and in (pp) colliders. Fig. 5 shows the inclusive prompt muon rate inside jets (from heavy-flavour sources) expected at a √s = 2 TeV (e⁺e⁻) collider, compared with the inclusive charged-particle rate. From this figure one can see that the rejection factor needed to select prompt muons is ~100. In a hadron collider the rejection factor needed is at least a factor of ten higher. This illustrates how severe the particle identification problems are in a hadron machine compared to an (e⁺e⁻) collider.

There is another characteristic of heavy quarks which is useful for their identification: heavy quarks undergo a characteristic chain decay into lower-mass quarks. Any quark heavier than the b quark will produce a b quark in its decay, thus giving observable secondary decay vertices. Provided one has a powerful vertex detector (discussed in Section 6), the observation of secondary vertices in the core of jets is the best tool to identify heavy quarks: in this case efficiencies higher than 30% can be reached in an almost unambiguous way.

1.3 - Jet invariant mass.

Characteristic of jets at high transverse energy is the parton cascade process, which gives the original parton a high "effective" mass, irrespective of its gluon or quark nature and of its quark flavour. The effective mass can be as high as 30% of the parton energy, so that at the TeV scale masses of the order of a hundred GeV are abundantly produced also in light-quark and gluon-initiated processes. This represents a serious source of background to any search for high-mass particles by means of their two-jet mass spectrum.

Another consequence is the width of reconstructed two-jet masses, which is no longer determined by the detector energy and angle resolution, but mainly by the intrinsic resolution of the two-jet system. As an illustration of this, in a simulation [4] of the reconstructed W mass in the process W → jet + jet, using standard jet-finding algorithms and taking into account the detector resolution, it was found that the reconstructed "intrinsic" width is ~10% of the W mass.

1.4 - Jets as a source of background.

Jets are the main source of background for the identification of genuine muons, electrons, neutrinos and τ leptons. The main jet detector being the total-absorption e.m. and hadronic calorimeter, one can in general observe that better calorimetry gives better particle identification power. The relevant calorimeter characteristics in this respect are:
1) adequate longitudinal and transverse segmentation: Δφ, Δη ≈ 0.05 to 0.1 for the hadron calorimeter, and Δφ, Δη ≈ 0.02 to 0.04 for the e.m. calorimeter;
2) hermeticity, both as total solid-angle coverage and as absence of cracks, which is essential to identify genuine neutrinos through the measurement of missing energy-momentum vectors;
3) good e.m. shower space localisation and shower-track matching in front of the calorimeter, which are important to minimise the number of γ/π overlaps simulating electrons;
4) instrumentation of the rear part of the calorimeter, together with a total thickness of several interaction lengths, which are essential features for good muon pattern recognition. To detect muons inside jets, an iron spectrometer must be added to the calorimeter itself.

2. - MUON IDENTIFICATION.

The muon pattern is well defined: any track able to penetrate the thickest calorimeter. This makes the muon signature unique, in the sense that muons can be detected easily even inside very dense jets of particles, provided one has equipped the calorimeter with sufficient absorber and tracking detectors.

The problem of muon detection at a hadron collider at TeV energies has already been extensively studied at the Lausanne 84 Workshop [5], where the physics potential of a 4π muon detector, consisting of 5 m of magnetized iron instrumented with muon drift chambers, surrounding a calorimeter and central tracking chamber, was considered. The main features of such a detector are:
i) good muon filtering power, ensuring a negligible contamination of muons from jet punch-through;

ii) adequate muon momentum measurement up to well above 1 TeV;

iii) the possibility of triggering on single muons, with a pT cut built into the trigger to control the trigger rate up to the highest luminosities conceivable for hadron colliders.

For the present workshop two studies have been performed, the first consisting of an update and refinement of the Lausanne 84 studies for the muon spectrometer. Below we summarize the main results, referring for a complete description to the paper by A. Benvenuti and C. Zupancic [6].

2.1 - Muon measurement update.

The muon detector used in the new simulations is sketched in Fig. 6: muons enter an 80 X₀ thick uranium calorimeter at an angle θ₀, followed by five magnetized (2 T) iron absorbers of thickness L_Fe, interleaved with muon drift chambers of typically 300 μm space resolution. Two chambers separated by a lever arm measure the outgoing muon angle.

First, the effect on the momentum resolution of varying the iron thickness L_Fe, the outgoing angle resolution (change in lever arm) and the initial muon direction resolution δθ₀ has been studied. Some typical results are summarized in Table 1. Fig. 7 shows the importance of the measurement of the muon angle at the vertex for the spectrometer momentum resolution: for example, the momentum resolution for a 1.5 TeV muon can be as good as 13.8% if the angle θ₀ is known with a precision of 10⁻⁴; it becomes 17.1% with a δθ₀ of 2 × 10⁻⁴ and goes to 27% if this angle is not known. Fig. 8 then compares the performance, as far as momentum measurement is concerned, of a 5 m and a 4 m magnetized iron spectrometer. The latter would represent a 30% reduction in muon chambers and about a 50% reduction in iron volume. The corresponding deterioration in momentum resolution is less than 19% up to 1.5 TeV.
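The scale of these numbers can be understood from the usual magnetized-iron estimate: the bend angle is 0.3·B·L/p while the multiple-scattering angle is (0.0136/p)·√(L/X₀), so the scattering contribution to Δp/p is roughly momentum-independent, whereas the chamber contribution grows linearly with p. The sketch below (Python) uses this simplified picture; the 2 T field, 5 m of iron, 300 μm chamber resolution and 10⁻⁴ vertex-angle precision are from the text, while the 2 m lever arm for the outgoing angle is an assumption.

import math

X0_FE = 0.0176   # radiation length of iron [m]

def muon_dp_over_p(p_gev, b=2.0, l_fe=5.0, sigma_theta_in=1e-4,
                   sigma_xy=300e-6, lever_arm=2.0):
    """Very simplified magnetized-iron spectrometer momentum resolution.
    Bend angle 0.3*B*L/p; multiple scattering (0.0136/p)*sqrt(L/X0);
    outgoing angle from two chambers of resolution sigma_xy over lever_arm."""
    theta_bend = 0.3 * b * l_fe / p_gev
    theta_ms = (0.0136 / p_gev) * math.sqrt(l_fe / X0_FE)
    theta_meas = math.hypot(sigma_theta_in, math.sqrt(2) * sigma_xy / lever_arm)
    return math.hypot(theta_ms, theta_meas) / theta_bend

for p in (100.0, 500.0, 1500.0):
    print(f"p = {p:6.0f} GeV/c : dp/p ~ {100 * muon_dp_over_p(p):.1f} %")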

Another important refinement of the Lausanne study has been the full simulation of 'hard' energy losses of muons, which above some hundred GeV become bigger than ionisation losses. The main conclusions of this study were the following.

The energy loss of muons does not affect the resolution substantially (a 3.5% effect at 1.5 TeV), and catastrophic energy losses are also limited: only 5% of muons at 1.5 TeV lose more than 10% of their energy.

The loss of good measurements of the muon track, due to e.m. showers along the track itself, seems not to be a major problem. In Fig. 9 the probability of extra hits is shown as a function of the muon energy. Above 1 TeV about 40% of muons give extra hits in one chamber: this could seriously affect the momentum resolution only in the case that the chamber concerned is the first one. Finally, the vertex reconstruction resolution of the muon iron spectrometer has been studied: once a muon track has been fitted through the whole detector and the best-fit momentum found, one can fix this parameter and trace the muon back to the vertex. Fig. 11 shows precisely this study, and one can see that even for (relatively) low-energy muons of 100 GeV the muon track can be extrapolated to the vertex with a 1.5 mrad accuracy, i.e. ~2 mm at a distance of 1 m. Multiple vertices can be resolved to some degree, a very important characteristic for high-luminosity operation at hadron colliders.

2.2 - Lepton signature of a Super-heavy quark.

M. Della Negra, K. Eggert and K. Wacker focussed on the study of a particular process in which muons play an important role, especially in hadron machines, being the only way of tagging the production of heavy flavours at the trigger level. The process studied was the production of a heavy bottom-type quark at a √s = 10 TeV c.m. energy (pp) collider, a good 'case study' of the potential background sources to prompt muons.

The reaction studied was

p + p → Y(→ W + t) + Ȳ(→ W + t̄) + X ,  with m(Y) = 500 GeV, m(t) = 40 GeV,

where Y is a Super-heavy b-type quark of the 4th family decaying into a t quark plus a real W. The above process therefore gives 6 jets in the final state, each W boson decaying mainly into two quarks. A full simulation study has been done using the Monte Carlo generator ISAJET [7], in which the QCD matrix elements for processes of 2nd order in αs are implemented. Fig. 11 shows the Lego plots of two typical events: in a) the signature is a quite spectacular 2μ event, because one of the two Ws decays into (μν) and the other gives two clear high-pT jets; whereas in b) one can clearly appreciate the complexity of the event pattern, which comprises six jets with energies that are not very high (the highest-energy bin is ~70 GeV), so that the right combinations of jets giving the two W masses are not obvious. In this event a possible signature is given by the decay of one W into μν, appearing in the Lego plot as a ~50 GeV muon with missing energy nearby.

To study how good a signature prompt muons (and the related neutrinos) are for tagging this process, the inclusive muon spectrum has been calculated from 25,000 Monte Carlo events in which the production of a (YȲ) pair was followed through all the proper decay chains. The weighted total cross-section for the process given by ISAJET, σ(YȲ) ≈ 2.5 pb, is in reasonable agreement with other calculations (see EHLQ [8]). Table 2 gives a summary of the percentage of events with a given number of prompt muons above a given cut in pT. As one can see from the table, this process is certainly a nice source of multi-muon events: e.g. 12% of all events give at least two muons, and 1% of them give two muons with pT > 100 GeV.
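With σ(YȲ) ≈ 2.5 pb these fractions translate into usable yearly yields at a luminosity of 10³³ cm⁻² s⁻¹, assuming a 10⁷ s operational year (Python):

SIGMA_YY = 2.5e-36      # cm^2 (2.5 pb, the ISAJET estimate above)
LUMI     = 1e33         # cm^-2 s^-1
YEAR     = 1e7          # seconds of running assumed

n_total = SIGMA_YY * LUMI * YEAR
print(f"Y Ybar pairs/year          : {n_total:,.0f}")
print(f"events with >= 2 muons     : {0.12 * n_total:,.0f}")      # 12% (Table 2)
print(f"  of which pT(mu) > 100 GeV: {0.01 * n_total:,.0f}")      # 1%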

Three main sources of background muons can be anticipated: i) those simulated by jet particles punching through the calorimeter; ii) those simulated by π/K decays in flight before the calorimeter; iii) genuine prompt muons from other physical processes.

The punch-through inclusive muon spectrum was evaluated starting from the single-jet inclusive cross-section and the hadron punch-through probability as a function of energy and iron thickness (5 m), as given by Bodek [9]. The decay-in-flight inclusive muon spectrum was evaluated from the same single-jet cross-section, weighting the decay probability with the hadron momentum distribution for a decay path of 1 m. Finally, the prompt muon spectrum from known physical processes was calculated using both the ISAJET and EUROJET [10] Monte Carlos. All 2 → 2 and 2 → 3 hard-scattering processes with u, d, s, c, b, t quarks, as well as single and pair production of bosons, were taken into account to produce the inclusive spectrum of prompt muons, representing in this case the 'physics background' to the process we are looking for.

The results are presented in Fig. 12, where the single-jet cross-section is also reported as a reference (broken line at top). As one can see from the figure, the background from fake muons is at a tolerable level: the rejection factors against hadron jets simulating prompt muons are ~10⁻⁷ to 10⁻⁸ (the two curves at the bottom). The main background comes from physics: in fact the rate of 'isolated muons' from W production, and the rate of 'muons in jets' from heavy-quark decays, is a factor of a hundred higher than the inclusive rate from the Super-heavy quark decay.

The conclusion is that the muon trigger can produce an enriched sample of events, but other handles should be used to extract the Super-heavy quark signal. One is the detection of one W by looking at isolated lepton-neutrino pairs correlated in space and mass. In principle this channel has a good efficiency: 2 × 30% ≈ 60% for W → l + ν, where l = e, μ, τ. As an example, the calculated efficiency for detecting one W, after applying suitable cuts to the simulated events, is reported in Table 3.

The last result of the simulation which can be mentioned concerns the angular distribution of prompt muons from this process: 94.4% of prompt muons with pT > 50 GeV are produced with |η| < 2. This means that a general-purpose detector must incorporate good muon detection and measurement in the central region.

3. - ELECTRON IDENTIFICATION

The electron pattern is very well defined: one track ending in an electromagnetic shower, i.e. in the first part of any total-absorption calorimeter (30 X₀ correspond to about 1 λ₀ in uranium).

To define the electron pattern many criteria are useful and can be exploited to various degrees according to the detector characteristics.

Measurements on the electron track

1) The momentum p measured with magnetic field and tracking is compared with the energy E measured in the calorimeter. The ratio E/p must be 1 for electrons.

2) The measurement of γ = E/m of a track with TRD (Transition Radiation Detectors) is a powerful tool to flag the small electron mass.

Measurement of the e.m. shower

3) The dE/dx measurement as a function of the depth x₀ in the first few radiation lengths of the calorimeter can distinguish the early development of an e.m. shower from more penetrating hadrons.

4) The measurements of the longitudinal and transverse energy deposition associated with an e.m. shower depend on the calorimeter segmentation and can define the nature of the shower itself with great accuracy.

Spatial matching between track and e.m. shower

5) The shower localisation power of the detector is important.

6) The isolation of the track from other jet tracks determines the possibility of making the association between a track and an e.m. shower.

Jets contain π⁰'s overlapping charged π± tracks: for this reason electron identification is always difficult inside jets. Electrons can be identified only as isolated electrons. An isolated electron is either the direct decay product of a high-mass object, like a W → e + ν, or it has a high transverse momentum with respect to the original jet axis, as in the semileptonic decay of a very-high-mass quark.

We studied the rejection power against pions which can be reached for isolated electrons with the various techniques listed above. Below we present some results on TRD detectors, a review of preshower counters and the related track-shower matching, and finally a 'case study' of a typical possible source of isolated electrons, the production of a very-high-mass right-handed boson W_R, to test the background rejection needed in a hadron collider to detect isolated electrons.

3.1 - Transition radiation detectors (TRD)

The TRD technique is very useful at TeV energies, even if the separation of electrons from higher-mass particles above ~200 GeV becomes practically impossible owing to the saturation of TR emission. In fact, besides electrons from π⁰ Dalitz decays and γ conversions in the beam pipe wall, the background to prompt electrons is given by low-energy charged pions (from jets or the underlying event) overlapping high-energy e.m. showers from π⁰.

Y. Ducros and K. Heinloth studied a very simple TRD detector, based on modular, compact x-ray chambers of the type proposed by the Charpak group [11] and a CH₂ radiator whose thickness is tuned to give a characteristic 'hard' x-ray spectrum peaked at 8 keV. A suitable threshold on the x-ray clusters detected by the chamber can then optimize the 'cluster counting' technique for maximum electron efficiency and pion rejection.

Fig. 13 shows a schematic picture of one module of the TRD detector: a 10 cm radiator stack (100 foils of CH₂, 400 μm thick) is attached to a 4 cm wire chamber of hexagonal cellular structure (there are 10 layers of sense wires) filled with an isobutane-xenon (50%-50%) mixture. The performance of a detector made of three such modules (about 45 cm of detector) has been studied by means of a full simulation of the transition radiation produced in the radiator and detected in the chamber. This simulation gives the total energy released in the chamber and the number of wire hits, taking into account the ionisation of the charged particle in the gas mixture. The response of the detector to electrons and pions of 50 GeV, applying a cut on the x-ray cluster energy, is shown in Fig. 14. As one can see, pion rejections as high as 10³ can be obtained using an 8 keV cut, with electron efficiencies of the order of 80%. On the same figure one can compare the power of the cluster-counting technique (with energy cut) with that of the total energy measurement: a 10² rejection is achieved with 90% efficiency on electrons, against a factor 20 with the latter method.

The results of the above simulation compare well with the experimental results on TRD as compiled by B. Dolgoshein [12] and reported in Fig. 15. It is interesting to read from this figure the space needed in a detector to obtain a given rejection factor: 20 cm of detector give a factor 10⁻¹; to get 10⁻³ about 1 m should be dedicated to TRD. The compactness of this type of detector is very appealing; furthermore it can be, at least partly, integrated in the central tracking device.

3.2 - Pre-shower counters

Normally the e.m. calorimeter transverse segmentation is dimensioned to contain the e.m. showers totally. In the case of too coarse a segmentation it is useful to add to the calorimeter, at a few radiation lengths of depth, one plane of active detector giving both a dE/dx measurement and a good space localisation of the ionisation. The purpose of such a 'pre-shower' detector is twofold: on one side it gives a rejection against pions, which normally interact deep inside the calorimeter; on the other side it allows a better localisation in space of the shower centre, improving the spatial matching with the electron track and thus adding rejection of π/γ overlaps.

Fig. 16a) shows a compilation (for more details see F. L. Navarria [13]) of experimental results on the shower localisation in space with different methods. The space resolution depends on the shower energy, but above 10 GeV it improves very slowly. For comparison, the triangles and the dashed curve represent the best results obtained with a fine-grained e.m. calorimeter made of 2 × 2 cm² BGO crystals, 22 X₀ thick, so that the centre of gravity can be determined over the full longitudinal development of the shower. This represents almost the optimum one can do in any detector: ~0.5 mm at 50 GeV. One does not expect much improvement from decreasing the transverse sampling below the Molière radius; in fact cracks and discontinuities will worsen the resolution.

The other data on the same figure show what has been obtained with two slightly different techniques: i) a position-detector plane located after 2 X₀ in the detector; ii) an active sampling every 0.5 X₀ with good lateral granularity, as obtained in a time projection calorimeter. In the first case the spatial resolution depends on the detector type and is somewhat better with scintillator than with MWPC. For a coarser lateral segmentation, similar results can be obtained in time projection calorimeters after integrating longitudinally over several radiation lengths. A position resolution of about 5 mm, corresponding to ~ 3 mrad in space, is a typical figure for LEP detectors.

Fig.16b) shows the rejection factor against pions that can be achieved with a pre-sampler detector at 2 X₀, applying a cut on the charged-particle dE/dx. The full line shows the π rejection as a function of the electron efficiency determined by the charge cut. A factor 10 rejection is obtained with practically no loss of electrons. A typical figure is a factor 16 with 95% electron efficiency.

3.3 - Global rejection against jets for isolated electrons

In any detector the rejection power for isolated electrons against jet-induced background is a mixture of many ingredients, not easily separable. The main contribution comes from the calorimeter, which provides information on the shower shape. Above ~ 20 GeV the shape alone easily gives a rejection factor against hadrons of the order of 10³, improving slightly with energy [14]. TRD can provide factors ranging from 10 to 10³, depending on the radiator thickness, but rapidly vanishing above 200 GeV; its role is essential in detectors with no magnetic tracking before the calorimeter, to reject low-energy charged hadrons which overlap high-energy e.m. showers. Momentum-energy matching requires, in general, a large magnetic-field volume between the machine vacuum chamber and the calorimeter, and the precision of the momentum measurement becomes poor at momenta of the order of ~ 100 GeV/c. Finally, pre-shower counters give an additional rejection factor from 10 to 40, with no significant loss of available space.

To summarize, with today's techniques one can expect to obtain global rejection factors for isolated electrons in the TeV region against jets in the range 10⁵ to 10⁷.

3.4 - Detection of new high mass bosons

As an example, we have studied the possibility of identifying isolated electrons coming from the decay of a new W_R of mass 2 TeV in a (pp) collider at √s = 20 TeV.

The leading-order cross-section for the process

pp → W_R → eν + anything

was calculated assuming a coupling (for low Q²)

G_R = G_F [m(W_L)/m(W_R)]² ≈ 1.6 × 10⁻³ G_F

and using the structure functions of Eichten et al. [8] for Q² = 4 TeV². The result is:

σ(W_R⁺) = 2.96 pb ;  σ(W_R⁻) = 1.22 pb

Assuming a branching ratio B(W_R → eν) = 1/12 for 3 families, and the W transverse-momentum distribution expected in the framework of QCD (Altarelli et al. [15]), one gets the p_T spectrum for electrons shown in Fig.17 as a continuous curve. The dashed line is the inclusive spectrum of isolated electrons from ordinary W decays, which represents (at the Jacobian peak) a 10% background to this process. In the same figure the single-jet p_T spectrum is plotted (upper curve), with a shift in the logarithmic scale of three orders of magnitude. The ratio of the two cross-sections is 5,000 at the Jacobian peak from W_R → eν decay (p_T ~ 1 TeV/c). At the CERN SPS Collider the same ratio for the W_L is about 1,000 (at p_T ~ 40 GeV/c), and the hadron rejection obtained by the UA detectors is largely sufficient to reduce the background to a negligible level. For UA2 the rejection factor was ≈ 5 × 10⁴, based on a calorimeter cell size of 10° × 10°, limited energy leakage to the hadronic calorimeter, and a track-preshower matching of about 7 × 7 mm².
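As a rough numerical sketch of what the quoted W_R cross-section implies for event rates, the snippet below folds σ·B with a luminosity of 10³³ cm⁻² s⁻¹ and 10⁷ s of running; these two machine numbers are assumptions made here for illustration, not parameters fixed by the study above.

# Rough event-rate estimate for pp -> W_R -> e nu at sqrt(s) = 20 TeV,
# using the leading-order cross-section quoted above.
PB_TO_CM2 = 1.0e-36                 # 1 pb in cm^2

sigma_wr_plus = 2.96 * PB_TO_CM2    # sigma(W_R+), from the text
branching_enu = 1.0 / 12.0          # B(W_R -> e nu), 3 families
luminosity = 1.0e33                 # cm^-2 s^-1 (assumed)
live_time = 1.0e7                   # seconds of running (assumed)

rate_hz = sigma_wr_plus * branching_enu * luminosity
events = rate_hz * live_time

print(f"W_R+ -> e nu rate: {rate_hz:.2e} Hz")
print(f"events per 10^7 s: {events:.0f}")

With these assumptions one gets of the order of a few thousand signal events per year, which is why the jet rejection, rather than the raw rate, is the limiting factor.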

With some additional rejection, as given by a compact TRD detector and a better space resolution of the preshower, one can anticipate a global rejection against jets of ~ 10⁶. We are convinced that such rejection factors can be achieved at the TeV scale because the cut on the longitudinal shower development (energy leakage to the hadronic part) becomes more effective. In addition, calorimeter cell sizes can be made smaller, thus giving better rejection against jets, and, finally, jet fragmentation becomes softer at higher jet energy.

The conclusion from this study was that the hadronic background faking isolated electrons can be beaten and the W_R, if it exists, is easy to see.

4. - IDENTIFICATION OF THE τ LEPTON

The τ-lepton pattern is a low-multiplicity narrow jet. In Table 4 the main decay channels of the τ are reported. Apart from the 35% branching ratio into e(μ) + 2ν, the main decay channels consist of 1 to 4 pions. Let us consider, as an example adequate to the TeV scale, a 300 GeV τ-lepton decaying into three pions: it will appear as a three-particle jet contained in a narrow cone of less than 0.5° half-aperture. This illustrates the principle of τ-lepton identification.

Two studies have been performed on τ identification. Della Negra discussed the selection criteria developed by the UA1 collaboration for τ-lepton identification in the analysis of W → τν events; Kittel studied the τ identification power at the TeV scale with a full simulation Monte Carlo. It is clear that the purpose of the analysis was to identify only isolated τ-leptons, such as those produced in W decay; tau-lepton identification inside jets is much more difficult and was not considered.

4.1 - Criteria for W → τν selection

The τ pattern identification used in the UA1 experiment was based on a likelihood analysis of three quantities, defined for each jet:

1) The isolation parameter F,

F = Σ E_T(ΔR < 0.4) / Σ E_T(ΔR < 1.0)

The sums extend over all calorimeter cells around the jet axis defined by the highest-E_T cell; ΔR = [(Δη)² + (Δφ)²]^½ defines the aperture of the cone around it. F gives a measure of the narrowness of a jet.

2) The track matching parameter R

R = [(Δη)² + (Δφ)²]^½

measures the distance between the calorimeter jet axis and the highest-p_T track vector.

3) The multiplicity parameter n_CD is defined as the multiplicity of tracks with p_T > 1 GeV in a cone centred on the calorimeter jet axis with aperture ΔR < 0.4. One expects for τ → h + ν about 72% 1-prong and 28% 3-prong events.

The above parameters were calculated for each event, and the distributions of F, R and n_CD from a τ → hadrons Monte Carlo were used to calculate the relative probabilities (P_F, P_R, P_n) for an event to fit the τ hypothesis. For this purpose a τ log-likelihood was defined:

L_τ = ln(P_F P_R P_n)

In Fig.18 the distribution of the τ likelihood function for Monte Carlo τ → h + ν decays is compared with the distribution obtained from a sample of experimental jet data. The chosen cut has a 78% efficiency for the hadronic τ decay channels, and includes a contamination from jets of 11%.

These figures apply to τ-leptons at E_T of typically 30 GeV; at the 1 TeV scale a conservative guess is that the jet rejection factor improves by a factor of 10. In the next section we will see that this guess seems to be quite correct.
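A minimal sketch of how such a log-likelihood cut can be evaluated is given below. The binned probability densities for F, R and n_CD are placeholders (in practice they would come from the Monte Carlo and from jet data), and the record layout and cut value are illustrative assumptions, not the UA1 implementation.

import math

def log_likelihood(prob_f, prob_r, prob_n):
    """tau log-likelihood L_tau = ln(P_F * P_R * P_n)."""
    return math.log(prob_f * prob_r * prob_n)

def lookup(hist, value):
    """Probability of 'value' from a binned density given as
    a list of (low_edge, high_edge, probability) tuples."""
    for low, high, prob in hist:
        if low <= value < high:
            return prob
    return 1e-6   # small floor to avoid log(0)

# Placeholder densities for the tau hypothesis (assumed numbers).
PDF_F = [(0.0, 0.8, 0.05), (0.8, 1.01, 0.95)]   # tau jets are narrow
PDF_R = [(0.0, 0.1, 0.90), (0.1, 1.0, 0.10)]    # track close to jet axis
PDF_N = [(0.5, 1.5, 0.72), (2.5, 3.5, 0.28)]    # 1- and 3-prong

def tau_likelihood(jet):
    return log_likelihood(lookup(PDF_F, jet["F"]),
                          lookup(PDF_R, jet["R"]),
                          lookup(PDF_N, jet["n_cd"]))

jet = {"F": 0.93, "R": 0.04, "n_cd": 1}
print(f"L_tau = {tau_likelihood(jet):.2f}")   # accept if above a chosen cut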

4.2 - Identification of the τ at 1 TeV

We will use the results of the study made by Kittel at this Workshop [16] to check the expected selection power for τ-leptons at the TeV scale. The study is based on a full simulation Monte Carlo of (e⁺e⁻) events at √s = 2 TeV. The event generator was the code LUEEVT, a Lund Monte Carlo [17] including u, d, s, c, b and t quarks. It was used to produce 10,000 e⁺e⁻ hadronic events. The charged multiplicity distribution is shown in Fig.19 a); its mean value is ⟨n_ch⟩ = 77.1 ± 0.3, whereas the total multiplicity (including γ's) is ⟨n⟩ = 163.7 ± 0.6. The events were analysed with the cluster-finding algorithm LUCLUS. The reconstructed number of clusters is shown in Fig.19 b): a mean number of 8.4 clusters was found, using only charged particles.

The τ identification was then based on two selection criteria (a minimal sketch of the two cuts is given after the list):

1) Multiplicity cut: only clusters with n_ch = 3 were selected; Fig.19 c) shows the charged multiplicity distribution of the reconstructed clusters, with the effect of this cut.

2) Cluster mass cut: in Fig.19 d) the mass distribution of n_ch = 3 clusters is compared with the mass spectrum of τ's. A cut at m < 1.8 GeV/c² is 100% efficient for τ's, reducing the jet background by a factor of 3. Adding neutrals, the rejection improves by another factor of three.
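The sketch below applies the two cuts to a list of reconstructed clusters; the cluster record layout and the example values are assumptions for illustration, not the LUCLUS output format.

# tau-candidate selection: exactly 3 charged tracks and cluster mass
# below 1.8 GeV/c^2, as described in the two criteria above.
MASS_CUT_GEV = 1.8

def is_tau_candidate(cluster):
    """cluster: dict with 'n_charged' and 'mass' in GeV/c^2 (assumed layout)."""
    return cluster["n_charged"] == 3 and cluster["mass"] < MASS_CUT_GEV

clusters = [
    {"n_charged": 3, "mass": 1.1},   # tau-like
    {"n_charged": 3, "mass": 6.5},   # ordinary 3-prong jet, fails mass cut
    {"n_charged": 9, "mass": 24.0},  # typical quark/gluon jet
]

candidates = [c for c in clusters if is_tau_candidate(c)]
print(f"{len(candidates)} tau candidate(s) out of {len(clusters)} clusters")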

The efficiency of this τ selection corresponds to the branching ratio of the chosen signature (τ → 3 charged tracks): the expected number is 108 over the 10,000 events.

The results of the analysis of the above simulation are summarized in Table 5. They show how the rejection of background jets varies as the jet energy increases. The two samples of background jets and τ jets are divided into three energy ranges: 0-40, 40-200 and 200-500 GeV, corresponding to mean energies of 20, 100 and 300 GeV, and the effect of the multiplicity and mass cuts is shown for each category separately. The results are presented in Fig.20, where the lower open points (right scale) show the constant efficiency for τ jets (ε_τ ≈ 13%), and the full points (left scale) show the increasing rejection of background jets: their efficiency is 5.2% at 20 GeV and goes down to less than 1% at 300 GeV. It is reasonable to assume that the same behaviour as a function of the jet energy is common to all τ-jet selection criteria based on the narrowness of the τ jet, such as the UA1 method, which has an efficiency for τ of 78% (pattern selection) × 70% (hadronic branching ratio).

As a conclusion concerning τ-lepton identification, one can put together the experimental results of the UA1 analysis (section 4.1) with the simulation results of this section to get the following extrapolation: isolated τ-leptons can be detected at the TeV scale with efficiencies around 50%, almost energy independent; the corresponding jet efficiency will be at the level of 1%. This is not sufficient in hadron colliders to give a background-free trigger selection, but it is certainly useful at the analysis stage.

5. - NEUTRINO IDENTIFICATION

The pattern of all neutrinos is very simple: missing energy-momentum in a 4π calorimeter. The relevant quantity is the missing transverse energy E_T^miss. The accuracy of this measurement depends entirely on the calorimeter characteristics of linearity, hermeticity and granularity. There are three sources of fake E_T^miss in any detector:

1) The beam hole contributes a non-zero component to the missing transverse energy, whose r.m.s. value depends on the angular aperture of the beam hole.

2) The calorimeter resolution itself produces an imbalance in the transverse-energy measurement proportional to the total transverse energy of the event: it scales as √(ΣE_T).

3) Missing transverse energy can be simulated by event pile-up, which occurs with non-negligible probability in hadron colliders at high luminosity.

Further studies are needed to understand the effect of event pile-up in hadron colliders when luminosities higher than 10³² cm⁻² s⁻¹ are planned. Already at the LHC, with a design luminosity of 10³³ cm⁻² s⁻¹, this effect could represent a severe limitation to genuine prompt-neutrino selection, which is one of the most powerful tools for new physics, as proven by the SPS Collider.

The effect of the beam hole has been studied by the calorimeter group: a forward-backward aperture of 0.5°-1.0° in the calorimeter, which is certainly possible to realize, reduces the maximum E_T^miss produced by transverse energy escaping detection in the beam hole to a small term compared to the total E_T^miss built up by the calorimeter resolution, at least for high total E_T events (obviously the most interesting ones).

Extrapolation of the typical experimental figures obtained at SPS Collider energy gives, at LHC energies, an r.m.s. value σ(E_T^miss) ≈ 20-30 GeV.

To keep a reasonably low background contamination a cut of 3-4σ is needed, and then neutrinos can be identified above 100 GeV. Although the above evaluation may look pessimistic, no account has been taken of possible non-gaussian tails in the background missing-E_T distribution.
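As a minimal numerical sketch of this estimate, the snippet below evaluates the resolution-induced fake E_T^miss using the √(ΣE_T) scaling from point 2 above; the stochastic coefficient and the ΣE_T value are assumptions chosen only to land in the 20-30 GeV ballpark quoted in the text.

import math

def fake_met_rms(sum_et_gev, stochastic_coeff=0.5):
    """r.m.s. of resolution-induced missing E_T, scaling as sqrt(sum E_T)."""
    return stochastic_coeff * math.sqrt(sum_et_gev)

sum_et = 2500.0                      # GeV, assumed high-E_T LHC event
rms = fake_met_rms(sum_et)           # ~25 GeV with a 0.5*sqrt(E) term
threshold = 4.0 * rms                # the 3-4 sigma cut discussed above

print(f"fake E_T^miss r.m.s. ~ {rms:.0f} GeV, 4-sigma cut ~ {threshold:.0f} GeV")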

6. - IDENTIFICATION OF SECONDARY VERTICES

In section 1.2 it was anticipated that the detection of secondary vertices is potentially the most powerful tool to detect heavy-quark jets. V.Cavasinni and M.A.Giorgi studied the performance of a silicon microstrip vertex detector for a future collider having a 2 cm diameter beam pipe. In Fig.21 the layout of the detector is shown: it consists of 10 planes of silicon microstrips arranged on cylindrical surfaces around the beam pipe, spaced 2 cm from one another. The outer diameter is 42 cm and the total length is 200 cm.

To minimize the material, each microstrip counter is double-layered, with φ and θ strips on the same wafer. The strip pitch varies from the innermost to the outermost layer: for the φ strips from 5 to 150 μm (there are 155,000 readout channels); for the θ strips from 0.5 to 20 mm (56,000 readout channels).

The performance was tested by generating single-W production events at √s = 20 TeV. This type of event produces two or three jets, with beauty particles inside two of them.

The events were tracked through the detector and analysed with a standard secondary-vertex finding algorithm. The space resolution at the vertex was found to be σ ≈ 70 μm and is dominated by multiple scattering. This resolution is sufficient in most cases to associate individual tracks to different vertices. The total efficiency for b-quark vertex finding, tested with the Monte Carlo, was very high, ~ 50%, almost the best one can hope to achieve for heavy-quark jet identification.

As additional information, Fig.22 [18] shows the impact-parameter resolution for pions in the CDF detector as a function of p_T. Here resolutions in the 40 μm range are achieved (the impact parameter for b decays is ~ 200 μm), enabling the separation of individual tracks in secondary vertices. From the same figure one can see the importance of multiple scattering: while the intrinsic resolution of the detector is of the order of 10 μm, multiple scattering dominates the resolution for pions below 5 GeV. Hence it is important to use a magnetic field in conjunction with microvertex detectors, to exclude from the impact-parameter analysis the low-momentum tracks which are abundantly produced at TeV energies, and so exploit fully the potential resolution of any vertex detector.

To conclude this section, we think that for jet identification a vertex detector with an intrinsic resolution of better than 30 μm in the impact parameter for charged tracks is extremely important, and it must incorporate a magnetic field to separate out low-momentum tracks. Furthermore, the collider beam pipe must have the smallest possible diameter and wall thickness.

7. - π / K / p IDENTIFICATION

At TeV energies the main technique envisageable to separate hadrons is the detection of the Cherenkov light ring, i.e. the use of ring imaging Cherenkov counters (RICH). This technique has made considerable progress in the last few years, so that one is now reasonably confident about how to build such detectors. The main limitation is the space needed to accommodate enough radiating medium to get the desired resolution. In Fig.23, taken from [19], the relation between the space needed to obtain a 4.2σ separation of particles and their energy is shown for different particle pairs: e.g. to separate pions from K mesons up to 300 GeV, about 10 m of RICH counters are needed. This is a strong reason, apart from the physics relevance, discouraging any attempt to identify charged hadrons in 4π general-purpose detectors.
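The steep growth of the required length with energy can be made plausible with a rough scaling argument, sketched here under the assumptions of a gas radiator used well above threshold and a ring-angle resolution dominated by photoelectron statistics (the exact curves of Fig.23 depend on the radiator and photodetector details). The Cherenkov angle obeys cos θ_c = 1/(βn), so θ_c² ≈ θ_max² − m²/p² with θ_max² ≈ 2(n−1); the angular separation of two masses at the same momentum is then Δθ ≈ (m_K² − m_π²)/(2 p² θ_c), falling as 1/p². The number of detected photons grows as N_pe ∝ L θ_c², so the error on the mean ring angle falls only as 1/√L. The separation significance therefore scales roughly as n_σ ∝ √L / p², i.e. the length needed for a fixed separation grows like L ∝ n_σ² p⁴, which is why a fixed 4.2σ π/K separation rapidly demands many metres of radiator.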

However, the RICH technique could be extremely useful in specific experiments using detectors of limited solid angle. The problem of studying jet characteristics in detail is one of them. The inclusive study of jets at 90° in a very high luminosity hadron collider is extremely important for a better understanding of the jet identification problem. Such an experiment could, for example, be realized in conjunction with a total-absorption multimuon experiment. A 15° half-aperture hole at 90° in the interaction-region dump absorber, equipped with RICH counters, e.m. and hadronic calorimeters, a muon filter and a spectrometer, would have enough acceptance to study with good statistics the particle composition of high-p_T jets up to many hundreds of GeV.

CONCLUSIONS

The next generation of high-energy particle accelerators is meant to explore elementary particle interactions and the behaviour of matter at the energy scale of 1 TeV. Only colliding beams can supply such c.m. energies, and the options (e⁺e⁻), (ep) and (pp) have been compared at this Workshop. From the point of view of discovery power, (e⁺e⁻) exhibits cleaner signals than hadron machines, but the latter win as far as the energy range explored at once is concerned. Interesting signals of new physics will show up, if they exist, as invariant masses of jets and of leptons. The experimental problems will be, as usual, the correct identification of leptons and jets, the invariant-mass resolution of the detectors, and combinatorial backgrounds.

We have examined the identification problem for jets and leptons, focussing the attention on hadron colliders, because there the backgrounds induced on lepton identification by beam hadron constituents demand rejection powers at least one order of magnitude better than in (e⁺e⁻) colliders. Muons play a special role in hadron colliders because they are identified outside the detector core; they can therefore be used as a powerful triggering signal, especially at the almost prohibitive interaction rates envisaged at hadron colliders. The emphasis on muon detection imposes a quite compact central tracking device and calorimeter, with at least 4 m of magnetized iron outside. On the contrary, in an (e⁺e⁻) collider one can think of a larger central detector and calorimeter, to have maximum particle separation and angular resolution: here one can look in some detail inside jets, give more emphasis to electron identification and, in general, to a complete reconstruction of the event pattern. From the background point of view the muon detector can be much thinner, ~ 2 m of iron, essentially the return yoke for the magnetic field of any large central spectrometer. For (ep) colliders the main emphasis should be on electron identification, and here the main problem is the rejection needed against hadrons, especially in the forward kinematic region where the c.m. motion produces a very high density of particles.

The choice of experimental techniques for general-purpose detectors must be based on criteria of simplicity, compactness, maximum homogeneity of the whole detector, and the capability of producing clean patterns in the reconstruction of jets and leptons. High-resolution tracking devices were considered of great importance for the detection of heavy-quark decay vertices. A magnetic field, even of modest bending power, seems to be useful to separate the bulk of low-momentum tracks from the relevant high-p_T ones. For electrons, TRD is important and can be conveniently incorporated in the tracking devices. Finally, a muon detector outside the calorimeter is always necessary.

The identification of jets has made a lot of progress. The presently used calorimeter cluster algorithms for jet finding are starting to work properly, and jet Monte Carlo programs reproduce the main jet properties well. Tagging of heavy-quark jets can be pursued with muon detection and micro-vertex detectors.

The identification of leptons will play a fundamental role at TeV energies. The relative merits of the e, μ, ν and τ leptons at the different colliders can be summarized by assigning stars to each of them, as in the Table below:

LEPTON       (pp)    (ep)    (e⁺e⁻)
muon         ***     *       ***
electron     **      ***     ***
neutrinos    **      ***     **
τ-lepton     *       **

A final remark: nature has been fair with us, because to understand its behaviour at TeV energies we do not need to identify 100 particles at once in the same event, but only 2 to 6 relevant jets and leptons.

REFERENCES AND NOTES

1) JETS AT THE LARGE HADRON COLLIDER, The LHC Jet Study Group, Proceedings of the ECFA-CERN Workshop 'Large Hadron Collider in the LEP tunnel', Lausanne and Geneva, 21-27 March 1984, CERN 84-10 (5 September 1984), p. 167

2) See e.g. COMPARISON OF JET FRAGMENTATION IN VARIOUS PROCESSES, A.Seiden, invited paper at the 6th International Conference on Proton-Antiproton Physics, Aachen, Germany, July 1986; preprint Santa Cruz SCIPP 86/71 (October 1986)

3) JET CHARACTERISTICS AT TEV ENERGIES, P.N.Burrows and G.Ingelmann, paper presented at this Workshop

4) See the report of the Jets and Calorimetry Study Groups; also M.Gilchriese, Report on the SSC Project presented at this Workshop

5) See Chapter V, MUON DETECTION, pp. 209-242, of Ref. 1

6) MUON MOMENTUM MEASUREMENT, A.C.Benvenuti and C.Zupancic, paper presented at this Workshop

7) ISAJET 5.30: A MONTE CARLO EVENT GENERATOR FOR pp AND p̄p INTERACTIONS, F.E.Paige and S.D.Protopopescu, Proceedings of the 1986 Summer Study on the Physics of the Superconducting Super Collider, Snowmass, Colorado, 23 June - 11 July 1986

8) SUPERCOLLIDER PHYSICS, E.Eichten, I.Hinchliffe, K.Lane and C.Quigg, Rev. Mod. Phys. 56, 579 (1984)

9) PUNCHTHROUGH IN HADRONIC SHOWER CASCADES, MUON IDENTIFICATION, AND SCALING LAWS FOR DIFFERENT ABSORBERS, A.Bodek, Workshop on muon identification and momentum measurement at SSC and LHC energies, University of Wisconsin, April 1985; preprint Rochester UR911, ER13065-412 (April 1985)

10) INCLUSIVE MUON YIELD AT 10 TeV PREDICTED BY THE EUROJET MONTE CARLO FROM c, b, t QUARK DECAYS, I.Ten Have, paper submitted to this Workshop. Also EUROJET MONTE CARLO, B.Van Eijk, Proceedings of the 5th Topical Workshop on Proton-Antiproton Collider Physics, Saint Vincent, 1985 (World Scientific, Singapore, 1985), p. 165

11) A MODULAR MULTIDRIFT VERTEX DETECTOR, R.Bouclier et al., Nucl. Instr. & Meth. A252, 373 (1986)

12) TRANSITION RADIATION DETECTORS AND PARTICLE IDENTIFICATION, B.Dolgoshein, paper presented at the Wire Chamber Conference, Vienna, 1986; preprint CERN EP/bd/mm-015lP (HELIOS note 148, 10 March 1986)

13) IMPACT POSITION MEASUREMENT IN E.M. CALORIMETERS, F.L.Navarria, paper presented at this Workshop

14) See for example the compilation in: REPORT OF THE WORKING GROUP ON MUON, ELECTRON AND HADRON IDENTIFICATION, Report of the Task Force on Detector R&D for the SSC, SSC-SR-1021 (June 1986), p. 159

15) G.Altarelli et al., Nucl. Phys. B208, 365 (1982)

16) BACKGROUND FOR THE DECAY τ → 3h± + neutrals IN e⁺e⁻ AT 1 TeV, E.W.Kittel, paper presented at this Workshop

17) T.Sjostrand, Comp. Phys. Comm. 39, 347 (1986); for more details, see [16] and references therein

18) See D.Saxon, Report of the Tracking Group, and the paper by G.Bellettini and G.Chiarelli presented at this Workshop

19) PARTICLE IDENTIFICATION AT THE SSC, T.Ypsilantis, paper presented at this Workshop

Table 1 - Muon Momentum Resolution

Dependence on Spectrometer Parameters

Parameter                                                    Resolution Δ(1/p)/(1/p)
δθ₀ (rad)        iron slab thickness (m)   accuracy (mm)     1.5 TeV (%)   3.0 TeV (%)
10⁻⁴                      1                    0.6               13.8          21.1
10⁻⁴                      1                    0.3               16.1          21.9
θ₀ not measured           1                    0.3               23.0          43.2
10⁻⁴                      0.8                  0.6               17.4          27.7
10⁻⁴                      0.8                  0.3               18.1          29.1
θ₀ not measured           0.8                  0.3               34.2          65.9

Table 2 - Multi-Muon Signature

Efficiency (in %) for a Super-Heavy Quark Pair YY

n_μ    p_T > 10 GeV/c   p_T > 50 GeV/c   p_T > 100 GeV/c   p_T > 200 GeV/c
0          46.0             69.8              82.2              93.6
1          38.0             26.3              16.7               6.2
2          12.2              3.5               1.0               0.2
3           2.5              0.2
4           0.6

Table 3 - Efficiency of the W Signature in Y Decay

Sequential cut                     Efficiency (%)
p_T(lepton) > 100 GeV/c                 37
p_T(neutrino) > 100 GeV/c               23
Δ(lν) < 45°                             17
m_T(lν) < 80 GeV/c²                     15

Table 4 - τ-lepton Decays

Decay channel        Branching ratio (%)
μ⁻ν̄_μν_τ                 17.6 ± 0.6
e⁻ν̄_eν_τ                 17.4 ± 0.5
h⁻ + nπ⁰                 51.6 ± 0.7
3h⁻ + nπ⁰                13.4 ± 0.3

Table 5 - τ-lepton Selection at 1 TeV

(number of clusters)     Background: all    Background (with neut.), cut on M < 1.8 GeV    τ leptons
Total number                 104,841                    2,387
N(ch) = 3                      8,069                      251                                 108
45° - 135°                     5,354                      225                                  66
p_T bins:
  0-40 GeV                     2,977                      159                                  17
  40-200 GeV                   1,515                       59                                  33
  200-500 GeV                    862                        7                                  16

Fig. 1 'Lego plots' of typical high-p_T events in a √s = 10 TeV pp collider. The binning is Δφ = 6° and Δη = 0.1. The right-hand corner vertical bar sets the 100 GeV level for the energy flow in each cell. Events a) and b) represent gluon-gluon scattering subprocesses, whereas c) and d) show bb̄ quark pair production, see text.

Fig. 2 Energy fraction outside a cone with half-opening angle θ, around the overall jet axis (dashed curve) or around the reconstructed cluster axis (full curve). Gluon-gluon system at Q = 2 TeV, shower plus string fragmentation. The dotted line is without parton shower and with θ relative to the overall jet axis.

Fig. 3 Energy distribution of reconstructed jets in e⁺e⁻ at 2 TeV (c.m.s.) (left-hand scale): coherent QCD cascade (full curve), 2nd-order matrix elements (dotted), coherent cascade with ΔR = 0.2 (dashed). The dash-dotted curve gives the fraction of quark jets (right-hand scale). From Ref. 2.


Fig. 4 Fragmentation function for particles assigned to reconstructed high-energy jets in pp collisions. Two sets of curves are shown, the energy assignment being the same for each set: cms energies 630 GeV (dashed), 2 TeV (dotted), 18 TeV (full). The harder set is for quark-assigned jets and the softer set for gluon-assigned jets. From Ref. 2.


Fig. 5 Inclusive muon p_T distribution, with respect to its assigned jet axis, from e⁺e⁻ annihilation at 2 TeV. Curves are for all muons in the event (solid), muons from top quark decay (dotted), bottom quark decay (dashed), charm quark decay (dash-dotted); the additional curve (long dashes) shows the same distribution for all charged particles in jets.

Fig. 6 Schematic view of the apparatus used in the Monte Carlo simulation to study the muon momentum measurement at TeV energies.

Fig. 7 Relative momentum resolution as a function of muon energy for different values of δθ_μ, the error on the entrance muon angle: a) 0.1 mrad, b) 0.2 mrad, and c) θ_μ not measured; 300 μm measurement accuracy.

Fig. 8 Relative momentum resolution as a function of the muon energy with 300 μm measurement accuracy, for a) 1 m thick iron slabs and b) 0.8 m thick slabs, for a total of 5 and 4 m respectively.

Fig. 9 Probability, as a function of the muon energy, that a muon traversing the spectrometer generates: a) no extra hits in any of the five detectors inside the iron; b) extra hits in any one of them; c) extra hits in two of them; d) probability that a muon traversing the uranium calorimeter generates no extra hits in the chamber in front of the iron spectrometer.

Fig. 10 Error in the entrance muon angle determined from the momentum fit to the muon trajectory using 5 m of iron (curve a) or 4 m (b).

Fig. 11 Lego plots representing two typical super-heavy quark pair production events. The process is p + p → Y + Ȳ + ..., with Y → W + t, at √s = 10 TeV, m(Y) = 500 GeV, m(t) = 40 GeV. In a) two opposite W's are clearly seen, one decaying into μ + ν, the other into two jets. The event in b), giving six jets, is less easy to recognize.

Fig. 12 The expected inclusive muon spectra from different sources in a 10 TeV (pp) collider (m_Y = 500 GeV, √s = 10 TeV), compared to the spectrum from the decay of a super-heavy quark of mass 500 GeV. The two lower curves are the maximum level of background muons expected from π/K decay and punchthrough of jets (their inclusive cross-section is shown for reference as the upper curve; the expected rejection factors with a 4 m magnetized iron muon spectrometer are 10⁻⁷-10⁻⁸). The full curve is the spectrum of muons from the decay Y → W + ... → μ + ..., giving the highest p_T muons. Their inclusive cross-section above 250 GeV/c is still a factor of 10 lower than the inclusive cross-section from single W (dash-dotted line) and from c, b, t semileptonic decay (dotted line).

Fig. 13 Schematic drawing of a typical TRD detector element, keeping about 15 cm of space in a tracking device.

Fig. 14 Typical rejection factors for electrons against pions versus the electron efficiency, calculated with a Monte Carlo simulation for three TRD elements like those of Fig. 13. The figure compares the performance of the cluster counting technique and of the total energy cut.

Fig. 15 Summary of TRD detector performance as a function of the detector dimensions (rejection versus total detector length L_DET in cm, for various radiators and analysis methods). From Ref. 12.

Fig. 16 a) Compilation of space resolutions for e.m. showers obtained with various pre-shower counters (data points from DELPHI (HPC), OPAL, UA2, E705, L3 (BGO) and GAMS), compared with the best obtainable localization by total-absorption active shower counters. b) Typical π rejection versus electron efficiency curve given by a pre-sampler counter with a cut on the ionization energy of charged particles.

Fig. 17 Inclusive spectrum of isolated electrons produced by the decay into e + ν of a 2 TeV mass W_R weak boson, in a (pp) collider of 20 TeV. For comparison the inclusive e spectrum from single-W production is shown (dash-dotted line), together with the source of background electrons, i.e. single jets. As one can see, the required rejection against the jet background is about 10⁵, to get a significant signal in the electron p_T spectrum at the two-body decay Jacobian peak around 1 TeV.

Fig. 18 Typical separation of τ jets from hadron jets obtained with a likelihood analysis at about 30 GeV (UA1 τ analysis, see text): τ → hadrons Monte Carlo compared with UA1 jet data (arbitrary normalisation).

Fig. 19 Study of τ separation from jets in (e⁺e⁻) at √s = 2 TeV. In a) the total charged multiplicity and in b) the number of jets per event reconstructed with a cluster algorithm are shown. The τ-jet selection was based on a cluster charged-track multiplicity cut, shown in c), and on a cluster mass cut, as in d).

Fig. 20 Efficiency for jets (left scale, full points) and for τ's (right scale, open points) as a function of energy, for clusters of charged multiplicity n_c = 3 and a cluster mass cut m_c < 1.8 GeV/c².

Fig. 21 Silicon μ-strip vertex detector set-up used in the study of heavy flavour detection.

Fig. 22 Typical performance of a silicon μ-strip vertex detector in the measurement of the impact parameter D of charged tracks: impact-parameter resolution σ_D as a function of the transverse momentum for the CDF vertex detector, alone and with an additional small chamber or a 5th layer (Ref. 18).

Fig. 23 Maximum energy at which a 4.2 standard deviation separation can be obtained between a pair of hadrons with a RICH counter of total length L (from Ref. 19).

REPORT FROM THE WORKING GROUP ON TRIGGERING AND DATA ACQUISITION.

D. Delicaris, College de France, Paris
J. Dorenbosch, NIKHEF, Amsterdam
B. Foster, University of Bristol
J.R. Hansen, Niels Bohr Institute, Copenhagen
J. Harvey, Rutherford Appleton Laboratory
G. Heath, University of Oxford
D. Notz, DESY, Hamburg
A. Putzer, University of Heidelberg
S. Stapnes, CERN, Geneva

Presented by J. Renner Hansen

ABSTRACT

This study group has investigated problems related to triggering and data acquisition at future CERN colliders. Event rates from physics and from beam background are estimated for each of the three collider options, and a more detailed discussion of triggering at the pp collider is presented.

1. INTRODUCTION.

From a triggering and data acquisition point of view, the three machine options being considered at this workshop present an enormous range of input data rates, from about 1 Hz at the e⁺e⁻ machine to about 10⁸ Hz at the proton-proton collider. Whereas rates of about 1 Hz are easily dealt with by present-day techniques, 10⁸ Hz is at least three orders of magnitude higher than the rate from the CERN p̄p collider and will require many developments in hardware before satisfactory techniques can be found for this type of machine.

Rather than attempt a design for the trigger and data acquisition system for a general-purpose experiment at each of the possible machines, a task which is either trivial or very difficult for the e⁺e⁻ or pp machines respectively, we have attempted to uncover potential problems and to suggest possible solutions, given reasonable extrapolations from present techniques to what may be available within the next few years. We then proceed to consider in somewhat more detail a trigger scheme for the LHC, the most difficult of the machine options to trigger effectively. Finally we discuss some general considerations in software development. The appendix contains a series of short descriptions of recent developments in hardware which we believe may be important in the solution of some of the problems referred to below.

2. PHYSICS AND EVENT RATES.

2.1 The proton-proton collider.

The total cross-section for proton-proton interactions, σ_tot, is shown in Figure 1 as a function of the pp center-of-mass energy √s [1]. The predicted total cross-section is very model dependent: at √s = 17 TeV the predictions range between 90 mb and 135 mb. In the following we use 100 mb as our reference cross-section. This choice introduces a 20-30% uncertainty on the calculated rates, but they are easily rescaled if a more accurate estimate of σ_tot should become available in the future.

The total cross-section consists of three parts: the elastic, the diffractive and the inelastic cross-sections. Only the inelastic events contribute particles in the trigger-sensitive region of a general-purpose detector; the elastic and diffractive events send all the outgoing particles into very narrow cones in the forward and backward directions. The elastic cross-section is estimated to be about 25% of σ_tot [1] and the diffractive cross-section to be 15% of σ_tot [2].

With a luminosity L = 10³³ cm⁻² s⁻¹ and an inelastic cross-section equal to 60 mb, the event rate becomes 6 × 10⁷ Hz. With 25 ns between bunch crossings, the average number of inelastic interactions observed in each crossing is 1.5. The final event sample will have a higher mean event multiplicity, since the trigger never selects bunches without at least one inelastic event. After renormalization of the Poisson distribution one gets on average 2.5 events for each accepted trigger. A veto on the total energy observed in the calorimeters down to very small angles could be used to reduce the number of triggers with more than one event. However, most inelastic events are accompanied by one or more elastic or diffractive events, which do not disturb the interesting high-p_T structure of the inelastic event. They do, however, deposit all their energy in the forward direction, the same direction into which the majority of the energy from inelastic events goes. Hence a total energy veto would reduce the number of remaining events to almost zero. We therefore do not believe that it is either possible or even desirable to try to reject multiple events in the trigger, particularly as experience from the CERN p̄p collider suggests that one or two extra unbiased events are unlikely to disturb later analysis.
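A minimal sketch of this rate arithmetic is given below; it assumes that the trigger fires on one specific hard interaction, so that the pile-up adds a Poisson-distributed number of extra inelastic events on top of it (the luminosity, cross-section and bunch spacing are the values quoted above).

luminosity = 1.0e33          # cm^-2 s^-1
sigma_inel = 60e-27          # 60 mb in cm^2
bunch_spacing = 25e-9        # s between crossings

rate = luminosity * sigma_inel            # inelastic interaction rate (Hz)
mu = rate * bunch_spacing                 # mean inelastic events per crossing

# Crossing triggered on one specific hard interaction: pile-up is Poissonian,
# so the mean number of events in an accepted crossing is 1 + mu.
mean_per_trigger = 1.0 + mu

print(f"inelastic rate       : {rate:.1e} Hz")
print(f"mean events/crossing : {mu:.2f}")
print(f"mean per trigger     : {mean_per_trigger:.2f}")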

Studies made at the Lausanne workshop [3], with a calorimeter of cell size not dissimilar to the design presently under consideration, indicated that only about 1% of the calorimeter cells will be hit by particles from a normal inelastic event. The chances of overlap from multiple events are therefore slight, and signals from events produced in earlier or later bunch crossings can be resolved, and in principle subtracted, by looking at the time development of the calorimeter signals, provided these are sampled rapidly enough. Depending on the details of the beam-crossing rate, sampling with an 80 MHz FADC (Flash Analogue-to-Digital Converter) should be adequate for this purpose.

A significant fraction of the inelastic cross-section comes from parton-parton elastic scattering, observed as jets in the final state. Figure 2 shows the integrated cross-section for jet-pair production at √s = 17 TeV. If it is required to obtain an event rate of about 1 Hz, it is necessary to demand that each jet has in excess of 500 GeV transverse energy. This has the unfortunate consequence of biasing the invariant 2-jet mass up to 2 TeV.

The detection of missing transverse momentum is going to be the only tool to select events with neutrinos, photinos or other neutral particles which do not interact with the calorimeter. Figure 3 shows the expected cross-section for missing transverse momentum from QCD events, integrated above threshold. The calculation assumes detection of energy above 10 mrad in polar angle and over 2π in azimuth. Only the missing transverse momentum from particles escaping below 10 mrad, from neutrinos and muons, and from the energy resolution is counted. For the resolution we have used σ = 0.5√(ΣE_T). Cracks in the calorimeter are not included. Hence the result shown in Figure 3 underestimates the real cross-section, but by not more than 50%. For the UA1 experiment the resolution is 0.8√(ΣE_T) when all the cracks are taken into consideration. The calculation is done for events with total transverse energy in excess of 100 GeV. A threshold at 3σ = 15 GeV results in an integrated cross-section of 10³ nb, or 10³ events per second. Combined with a lepton trigger it provides a very powerful handle on the trigger rate, although it must be noted that the calculation depends critically on the details of the simulation of the underlying event. Our limited knowledge of this introduces uncertainties of about a factor of 2. For small missing transverse momentum the cross-section is dominated by the momentum escaping through the beam holes, while at high missing transverse momentum the triggers originate from c-, b- and t-quark semileptonic decays.

The high-ΣE_T events expected from non-QCD sources all have small production cross-sections compared to the jet cross-sections and require a combination of two or more basic triggers, such as an electron trigger or a missing transverse momentum trigger, to be picked up. Decay signatures and event rates for these events are described in the contributions from the physics study groups at this workshop. The beam-gas interaction rate is estimated to be about 1 MHz, which is negligible in comparison to the 100 MHz rate of beam-beam events.

2.2 The electron-proton collider.

The large cross-section events at an electron-proton collider come predominantly from weak neutral- and charged-current processes. Figure 4 [4] compares the differential cross-section at the value of √s used at HERA (314 GeV) to the differential cross-section at two possible values of √s (1.41 TeV and 2.0 TeV) for a combined LEP-LHC collider. It is clear that the rates, even for relatively low Q², are comfortable. Together with the cross-sections, the figure gives a table of the integrated number of events with Q² > Q₀², per day of running with a luminosity of 10³² cm⁻² s⁻¹.

With this low event rate from beam-beam processes, beam-gas interactions become the major source of triggers. We have simulated the event rate observed in the detector from a 200 m straight section. Our simulation assumes that √s for a proton-gas-nucleon interaction at the LHC is 130 GeV, compared to 40 GeV at HERA. The particle distribution from such an interaction is taken to be flat in rapidity, with 2-3 particles per unit and with a mean transverse momentum of 400 MeV perpendicular to the beam direction. Given the machine parameters for the LHC, we arrive at a total proton beam-gas interaction rate from the 200 m long straight section of 1 MHz, compared to a few events per second from neutral- and charged-current processes.

The result of further calculations, shown in Figure 5, predicts that a reduction of 10⁻¹ can be achieved by requiring more than 25 GeV of total transverse energy measured in the calorimeter. This rather modest trigger rate can be further reduced by online timing cuts.

We expect a much smaller beam-gas background trigger rate from the electron beam. However, synchrotron radiation could be a source of extra triggers, particularly in the tracking chambers. The severity of this problem will depend on the exact details of the magnet configuration and the masking. It is clear, however, that a great deal of detailed work will be necessary to design masks to prevent the enormous number of photons present in such a collider from entering the detectors.

2.3 The electron-positron collider.

The trigger rate from e⁺e⁻ collisions at 2 TeV is completely dominated by two-photon interactions. The two-photon cross-section becomes very large for small two-photon invariant masses, but a cut on the two-photon invariant mass at 15 GeV limits the cross-section to 1 nb. Given a luminosity of 10³³ cm⁻² s⁻¹, the trigger rate becomes 1 Hz. Trigger rates from all other processes are significantly smaller. The background from collective beam-beam interactions is a potential problem, even though present calculations indicate that no significant rate will be observed above 3.5 mrad in polar angle. Figure 6 shows the rate of photons from beam-beam interactions as a function of polar angle, measured in μrad. It demonstrates that the photon rate is zero for polar angles bigger than 200 μrad. The angular distribution of electrons from beam-beam interactions is even narrower. Hence we conclude that the beam-beam interactions will most likely create no problem for the triggering and data acquisition.

As for the electron-proton collider, we have no general means to estimate the rate introduced by synchrotron radiation and backscattering. A more detailed study of these problems can only be done in conjunction with the design of the machine and of the experimental halls.

A summary of the machine parameters and the expected rates is given in Table 1.

3. TRIGGERING AT LHC.

It is clear that the proton-proton machine represents the biggest challenge for triggering and data acquisition. The trigger and bunch-repetition rates at the ep and e⁺e⁻ machines are similar to what we expect to get at HERA and at the SLC in the near future. The bunch repetition rate of 4 × 10⁷ s⁻¹ and the multiple events at the LHC pose a real problem. We will therefore concentrate on the proton-proton option, since it represents the most difficult problem of the three machines under consideration.

3.1 The calorimeter.

We consider a calorimeter similar to that proposed at the Lausanne workshop [5], covering 6 units in pseudorapidity (-3 to 3) and 2π in azimuthal angle (φ). It is longitudinally segmented into an electromagnetic compartment and a hadronic compartment. We assume an energy resolution of 15%/√E for the electromagnetic showers and 35%/√E for the hadrons. The electromagnetic compartment is laterally segmented into cells covering Δφ Δη = 0.5° × 0.01. The 432,000 cells are reduced to 4,320 first-level trigger cells by adding the signals from a 10 × 10 cell matrix. Each trigger cell is big enough to contain the full shower from an electron. Overlapping trigger cells must be foreseen at the first-level trigger in order to remove the trigger inefficiency from electrons and photons sharing energy among cells. The hadron calorimeter has a cell size of Δφ Δη ...

... μs have been proposed. Making the pipeline longer or shorter adds very little to the expense of the system and can therefore be treated as more or less a free parameter. For the higher trigger levels the fully digitized calorimeter information is available, and at the third level the tracking and vertex information can also be used. Figure 7 shows a schema of the flow of the calorimeter data through the system. In order not to lose energy resolution, we propose to use the measurements of the calorimeter signal taken in the last 25 ns before the bunch crossing for the measurement of the pedestals and of residual signals from events in preceding bunch crossings. The measurements from the 125 ns after the crossing are used to integrate the signal. The time characteristics of standard calorimeter signals are given in Table 2. Most calorimeter types will have a response fast enough to produce most of the signal within 125 ns and to show a double-hit structure if a cell is hit during a preceding or following bunch crossing. If necessary, the falling edge of the calorimeter pulse can be shaped using pole-zero filters, though at some cost in resolution. The sensitive time of 150 ns is covered by 12 FADC samplings, implying that the total amount of data from the calorimeter will exceed 2.5 million 32-bit words per event. It is both impossible and unnecessary to try to extract all of this information from the FADCs and dispatch it down buses to the central data acquisition device. We recommend strongly that the maximum amount of processing power be placed at the front end of the electronics, where it can be used to suppress to a minimum the amount of data which needs to be transferred. The use of Digital Signal Processors (DSPs), as is being considered for the HERA experiments, offers one way of achieving this suppression. Since on average less than 5% of the cells are hit in high-ΣE_T events, zero suppression already brings down the amount of information greatly. These devices could also reduce the 12 16-bit words to two 32-bit words, the first word containing an address and a quality flag, and the second an integrated pulse height with appropriate pedestal subtraction. With present 'state of the art' DSPs this could be achieved within about 5 μs, and one might well expect significant increases in DSP speed within the next five years. FADCs and DSPs are described in more detail in the appendix.
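As an illustration of the kind of front-end reduction described above, the sketch below condenses 12 FADC samples from one cell into an (address, integrated pulse height) pair, using the pre-crossing samples for the pedestal. The sample counts, threshold and packing scheme are assumptions for illustration, not a proposed design.

def reduce_cell(address, samples, pre_samples=2, threshold=5):
    """Condense one cell's FADC samples into at most one packed word pair.

    samples      : 12 FADC values (assumed 80 MHz sampling over 150 ns)
    pre_samples  : samples taken before the crossing, used as pedestal
    threshold    : zero-suppression cut on the pedestal-subtracted sum
    Returns None if the cell is zero-suppressed, otherwise (word1, word2).
    """
    pedestal = sum(samples[:pre_samples]) / pre_samples
    pulse = sum(s - pedestal for s in samples[pre_samples:])
    if pulse < threshold:
        return None                        # zero suppression
    quality = 1 if max(samples) < 255 else 0   # flag saturated samples
    word1 = (address << 1) | quality       # address plus quality flag
    word2 = int(round(pulse))              # integrated pulse height
    return word1, word2

# Example: a quiet cell and a hit cell.
quiet = [20, 21, 20, 20, 21, 20, 20, 21, 20, 20, 21, 20]
hit = [20, 21, 90, 160, 140, 110, 80, 60, 45, 35, 28, 24]
for addr, s in ((101, quiet), (102, hit)):
    print(addr, reduce_cell(addr, s))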

3.2 The trigger

The reduction in trigger rate needed for the data acquisition is 107. This can be achieved in a three stage trigger with a reduction factor of roughly 500 at each stage. The time available for the decision in the first level trigger is given by the length of the pipeline, or in other words, the pipeline has to be deep enough to allow for

sufficiently precise cuts. With 105 events left after the first level trigger the second level trigger can use up to 10 us for its decision of which some is used for the data reduction. The third level trigger should be made out of many parallel working processors allowing it to handle relatively large event rates without adding extra dead- time. Each level operates with a set of basic triggers and combinations of them. The basic triggers would be

i. Electron/photon trigger
ii. Jet trigger
iii. Missing transverse momentum (p_T^miss)
iv. Total transverse energy (ΣE_T)
v. Muon

3.2.1 First level trigger.

A schematic layout of the first-level trigger is shown in Figure 8. The basic time constant is 25 ns, matching the time interval between bunch crossings. For the calorimeter signals we assume that it is possible to clip the pulse after 25 ns without losing too much resolution.

In the 25 ns time interval following a bunch crossing, the analog signals from the electromagnetic calorimeter cells are added to form the e/γ trigger cells. Next the e/γ threshold is applied and, at the same time, the analog signals from the hadron calorimeter cells and the relevant electromagnetic trigger cells are added to create the hadron trigger cells. The third time interval is used to apply the jet threshold to the hadron trigger cell signals and to add all hadron trigger cell signals to get the total transverse energy. Also a signal proportional to E_x + E_y, used as a rough estimate of the missing transverse momentum, is created in the third time interval, by summing the E_T from each cell weighted by cos φ and sin φ. In the fourth time interval, thresholds on the ΣE_T and E_x + E_y signals are applied. The muon trigger is made in parallel to the triggers described above. More than one threshold can be applied to each trigger signal, giving the flexibility to make less strict triggers for use in combination with other triggers.
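The sketch below mimics this first-level logic in software: e/γ trigger cells are formed by summing 10 × 10 e.m. cells, and simple thresholds then produce the e/γ, jet, ΣE_T and E_x + E_y trigger bits. The array shapes, the toy event and the use of NumPy are illustrative assumptions; the real system would be hard-wired logic working in 25 ns steps.

import numpy as np

def make_trigger_cells(cell_et, block=10):
    """Sum a (phi, eta) grid of cell E_T into block x block trigger cells."""
    n_phi, n_eta = cell_et.shape
    return cell_et.reshape(n_phi // block, block,
                           n_eta // block, block).sum(axis=(1, 3))

def first_level_decision(em_et, had_et, phi_centres,
                         eg_thr=16.0, jet_thr=50.0,
                         sum_et_thr=100.0, met_thr=15.0):
    """Return the basic first-level trigger bits for one bunch crossing."""
    eg_cells = make_trigger_cells(em_et)
    had_cells = make_trigger_cells(had_et) + eg_cells   # e.m. added to hadronic

    e_gamma = bool((eg_cells > eg_thr).any())            # e/gamma threshold
    jet = bool((had_cells > jet_thr).any())              # jet threshold
    sum_et = float(had_cells.sum())                      # total transverse energy
    ex = float((had_cells * np.cos(phi_centres)[:, None]).sum())
    ey = float((had_cells * np.sin(phi_centres)[:, None]).sum())
    met_estimate = abs(ex) + abs(ey)                     # crude missing-pT estimate

    return {"e_gamma": e_gamma, "jet": jet,
            "sum_et_ok": sum_et > sum_et_thr,
            "met_ok": met_estimate > met_thr}

# Toy event: quiet calorimeter (432,000 e.m. cells) plus one hard e.m. deposit.
rng = np.random.default_rng(1)
em = rng.exponential(0.02, size=(720, 600))
had = rng.exponential(0.02, size=(720, 600))
em[100:103, 200:203] += 20.0                  # ~180 GeV e.m. cluster
phi_centres = np.linspace(0, 2 * np.pi, 72, endpoint=False)
print(first_level_decision(em, had, phi_centres))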

The total time used to form the basic triggers is 100 ns. If 75 ns is used to bring the signals together and 25 ns to make a decision in the combination logic, the digital pipeline must be more than 200 ns deep. If an event is rejected it is simply overwritten by the following event; otherwise it is shifted into a memory buffer accessible by the second-level trigger. The transfer of data from the pipeline into the buffer can hopefully be made dead-time free.

e/γ trigger

The cross-sections for electron and photon production are expected to be much smaller than the jet cross-section. The first-level e/γ trigger is going to be dominated by jets with one or a few highly energetic neutral pions. To bring the trigger rate down to 10⁴ Hz we estimate the threshold needed on the e/γ trigger cell to be 16 GeV. This rate is uncertain by a factor of two because of the limited knowledge of the fragmentation functions. Also the treatment of the raw calorimeter signals will influence the rate, which is true in general for all trigger rates [5].

Jet trigger

The jet trigger rate is read from Figure 2. A jet threshold of 500 GeV will give a trigger rate near 1 Hz, whereas a threshold set at 50 GeV will result in a rate in excess of 10⁵ Hz.

The missing transverse momentum trigger

The rate for this trigger is deduced from Figure 3. Together with a 100 GeV threshold on the total transverse energy, a 15 GeV cut on the missing transverse energy will contribute 10⁴ events per second to the input of the second-level trigger.

Total energy trigger

The total energy trigger cannot be used as a trigger by itself: used stand-alone, it will preferentially select multi-event bunch crossings.

We believe that a first-level trigger scheme like the one proposed above, or one with a similar structure, has a fair chance of being constructed and made to work early enough for data taking at the LHC, but only with a concentrated effort from a large team of specialists and sufficient resources for R&D. The planning should start as soon as possible.

3.2.2 Second and third level trigger.

With an input rate from the first to the second level trigger of 100 kHz, the time available for processing at the second level is around 10 μs. While it may be possible to extend this somewhat by designing a multi-processor architecture which allows the processing of several events in parallel, the processing time will remain short on the scale of conventional microprocessors. Multiple buffering of events between the first and second levels will be necessary to allow processing to take up to 90% of the total 10 μs without introducing large dead-time losses. At this level the trigger has available the full digitised calorimeter information as output by the DSPs on individual channels. A grid of transputers or similar processors, as illustrated in Figure 9, each working on a restricted part of the calorimeter, could possibly find clusters in a few μs: after a first search for local clusters, the information is exchanged between neighbours and global clusters are created. With the cluster information available, the basic triggers are easily constructed.
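A minimal sketch of such cluster finding on a calorimeter grid is given below; the grid size, threshold and the simple seeded flood-fill are assumptions for illustration and say nothing about how a transputer grid would actually partition and exchange the work.

def find_clusters(et, threshold=2.0):
    """Group adjacent above-threshold cells of a 2D E_T grid into clusters.

    et: list of lists of transverse energies.
    Returns a list of clusters, each a dict with summed E_T and its cells.
    """
    n_rows, n_cols = len(et), len(et[0])
    seen = [[False] * n_cols for _ in range(n_rows)]
    clusters = []
    for r in range(n_rows):
        for c in range(n_cols):
            if seen[r][c] or et[r][c] < threshold:
                continue
            # seeded flood fill over 4-connected neighbours (local search)
            stack, cells, total = [(r, c)], [], 0.0
            seen[r][c] = True
            while stack:
                i, j = stack.pop()
                cells.append((i, j))
                total += et[i][j]
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if (0 <= ni < n_rows and 0 <= nj < n_cols
                            and not seen[ni][nj] and et[ni][nj] >= threshold):
                        seen[ni][nj] = True
                        stack.append((ni, nj))
            clusters.append({"sum_et": total, "cells": cells})
    return clusters

# Toy grid: two separate deposits.
grid = [[0.1] * 8 for _ in range(8)]
grid[1][1], grid[1][2], grid[2][1] = 30.0, 12.0, 8.0
grid[6][6], grid[6][7] = 20.0, 5.0
for cl in find_clusters(grid):
    print(f"cluster E_T = {cl['sum_et']:.1f} GeV, {len(cl['cells'])} cells")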

The reduction in the number of events which is assumed to happen in the second-level trigger comes from a better measurement of the energy, the higher granularity of the calorimeter, isolation and leakage cuts on the electron/photon candidates, the calculation of the true missing momentum instead of the crude estimate from E_x + E_y, and possibly from some crude track reconstruction for the muon trigger. All these improvements will easily bring the factor of 500 needed at this level.

After the second-level accept, the data are moved to buffer memories from which the third-level processors can collect the relevant information, i.e. cell energies and clusters found by the second-level trigger. At the third level, track-chamber information will also be ready and the vertex position known. This information can be used, for example, to separate electron and photon/π⁰ candidates. A reduction factor of at least 20 should be possible, which will bring the rate down to below 100 Hz. With event sizes similar to the 200 kbytes of the LEP experiments, achieved by the data reduction processors, it will then be possible to transfer the full data to central buffers using present-day high-speed buses such as FASTBUS, and a further reduction can then be made based on a preliminary reconstruction of the events. The final output rate to permanent storage can be as high as 10-20 Hz with foreseeable extensions of present-day technology (data transfer and recording rates are discussed further in the Appendix). However, a more severe limitation on the final output rate of events will probably be the availability of offline processing power, so that more of the event filtering task will be performed in the online system, leading to a final output rate of order 1 Hz.

4. SOFTWARE CONSIDERATIONS.

As can be deduced from the previous discussion, the collection of data from modern HEP experiments is achieved by a complex network of powerful processing elements which communicate through high-speed data-acquisition buses and computer networks. The smooth operation of these components as a coherent system must be ensured by the careful design of the software needed to run them. The reliability of software becomes an ever increasing requirement, in that the use of distributed microprocessor-based systems has meant that more and more decisions are being entrusted to the software used to operate them. In addition, the task of ensuring that the various software components are compatible is complicated by the fact that a large number of people can be involved. Moreover, the relatively long lifetimes of modern experiments imply that turnover in the composition of the online group must be anticipated, which poses additional problems for the maintenance of the software.

As the data-acquisition system has evolved to exploit advances in hardware technology, so the role of the online computer has changed emphasis. Instead of being primarily responsible for the collection and formatting of the event data, it must support, through the development of sophisticated software tools, the procedures needed to manage the rest of the data-acquisition equipment. These procedures include keeping track of the configuration of the system, supporting its initialisation under different data-taking conditions, handling error situations as they occur, and monitoring the integrity of the data recorded. The scale and complexity of the DAQ system poses particular problems in this respect. Firstly, access to all resources belonging to the readout and control systems must be managed to ensure that different activities can proceed at the same time without interfering with one another, particularly during commissioning and test periods. In addition, the large volume of data constants associated with the calibration of electronic channels, with describing different aspects of the detector and with keeping track of bookkeeping information implies that strict methods are needed to organise and manage the database. The operator interface to the system should be convenient to use and should attempt to hide the complexity of the system by the provision of many automatic checking procedures.

Currently it is becoming increasingly common for these problems to be tackled using modern software engineering techniques. These techniques cover all phases in the life-cycle of software development, including the specification of the system's intended functional behaviour, the architecture of the program structure and the detailed logic of the individual software procedures. Using these techniques, the design of the system is represented in graphical and mathematical notation rather than in a textual form, which tends to be less precise and open to ambiguity. In addition, many software products are available for checking the completeness and integrity of the software design and also for improving the efficiency with which the designs can be implemented and tested. Modern programming languages, which adhere to the principles of Structured Programming, may be used to help avoid the introduction of errors during the implementation phase.

The set of procedures followed through the life-cycle of program development constitutes a Software Methodology. There are many different methodologies available, and a survey commissioned by the Ada Joint Program Office [6] gives a good indication of each methodology's principal features and benefits. At CERN several groups are currently using Structured Analysis / Structured Design (SASD) in their software projects. The methods have generally been found to be extremely useful in providing a better understanding of the systems' intended behaviour and as a means of communicating ideas within the project teams. In particular, the deliverable items produced as a result of the design process provide a very convenient way of abstracting information and showing details. In this way the largely independent components of the project and the interfaces between them are readily identifiable and can be assigned to different members of the team. This approach to design results in a very modular program structure, which gives confidence in the reliability of the software and eases maintenance.

5. CONCLUSION

The design and construction of a triggering and data acquisition system for experiments operating at future colliders can be either trivial or very difficult, depending on the choice of collider. The effort needed to make the triggering at an e+e− or an ep collider work will be comparable to the effort invested in the triggering and data acquisition systems for the SLC, LEP and HERA experiments. This is by no means small, but it is little compared to the effort needed for the successful construction of a triggering and DAQ system for the detectors at the LHC or SSC. The improvements in speed and sophistication of electronics required for future collider detectors will be founded on developments in basic semiconductor technology. Almost all elements of the systems described in this report assume faster, more compact circuitry than is available today, and at comparable cost, while for the large fraction of the readout which will be integrated directly onto the detectors, considerations of low power consumption and radiation hardness may be at least as important. There is considerable investment within the electronics industry in the development of new materials and improved manufacturing techniques to achieve these goals. We strongly endorse the recommendation of the Task Force on Detector R&D for the SSC (SSC-SR-1021, p. 1-2) that high-energy experimentalists should become more closely involved with these developments.

APPENDIX A: Hardware Topics

A.1 Semiconductor technology in general

Improvements in speed and power dissipation by factors of 5-10 have been achieved using gallium arsenide (GaAs) instead of silicon for the fabrication of semiconductor devices, due to the higher mobility of electrons in GaAs [7]. Novel transistor designs, analogous to the familiar field-effect (FET) and bipolar transistors, have had to be developed to exploit this property, but a number of technologies are now emerging. Standard integrated logic building blocks such as counters and shift registers are available using metal-Schottky (MESFET) or junction (JFET) FETs, as are RAM memories and gate arrays in the range of 1-10k elements. Typical propagation delays of 200-300 ps per gate are quoted. Newer ideas involve the use of GaAlAs layers in the GaAs structure to produce further improvements, and devices incorporating these techniques to give still higher speeds or lower power dissipation are expected to go into production within two years. GaAs devices also offer improved radiation hardness compared with silicon. Problems exist at present in the quality of both the raw material and the device manufacturing techniques, so that readily available components in large quantities remain some way off. However, widespread R&D effort is in progress, aimed principally at military and communications applications, and this technology should be a viable choice for the detectors of the 1990s.

Silicon technology continues to improve meanwhile, with costs continually falling and the packaging density of components rising [8]. In memories, 1 Mbit chips are already available, with 4 Mbit expected within the next year. By 1995, 64 Mbit on a chip could be possible, with feature sizes well below 1 µm. The use of very large scale integration (VLSI) for the front-end readout seems essential for the next generation of experiments; custom VLSI devices can now be routinely produced by the electronics industry, with quoted turnaround times for production down to 6 months.

A.2 Flash ADCs

It is likely that important constraints on the design of the DAQ and trigger system architecture at the LHC will come from the timing characteristics of the readout and digitisation electronics for the detectors, as is currently found for the LEP and HERA experiments. The major problem in these experiments occurs in the calorimeter readout, where the non-availability or prohibitive cost of sufficiently fast high-resolution ADCs leads to the adoption of some multiplexing scheme which increases the readout time by a large factor. This problem is compounded by the fact that ADCs of the highest available resolution do not cover the dynamic range of signals required, from muon or other calibration signals to a high-energy electron in one electromagnetic tower. Schemes for extending the dynamic range of the readout must therefore be found, many of which involve further increases in readout time. While the question of readout time is particularly important for high repetition rate machines such as the LHC, techniques for dynamic-range extension will also be required at CLIC.

Our solution for the DAQ system at such a high repetition rate collider, described in Section 3, follows previous studies for the LHC and SSC in assuming that a strictly deadtimeless pipelined first-level trigger can be built. This implies that each calorimeter tower is equipped with a flash ADC of 12-16 bits resolution, sampling continuously at a rate of 80 MHz. Rates up to 200 MHz have been quoted in previous work [5], but it is not obvious that such fine sampling is needed. In the simplest scheme for achieving the necessary dynamic range, two ADCs per channel are required. An examination of the current state of the market shows that, while such devices may well be technically feasible within the timescale of an LHC detector, closer collaboration between high-energy experimentalists and industry would again be useful here to ensure that they are available with reasonable characteristics, in large numbers and at an affordable price. A true flash ADC of 12 or more bits would be a very complex device, containing for instance 4096 comparator circuits for 12 bits and the associated priority-encoding logic to form the 12-bit output number. Today's fastest high-resolution devices use a technique known as 'sub-ranging' to reduce the complexity, where the digitisation proceeds in two stages, each of much lower resolution (2 × 6 bits for a 12-bit result). 10-bit sub-ranging ADCs are available with a sampling rate of 40 MHz, or 12 bits at 20 MHz, a factor of 4 slower than required for a deadtimeless first-level trigger, and at a cost of around 4000 pounds. On the other hand, ADC technology is developing fast under pressure from military consumers, and new ideas such as the use of GaAs for extra speed are being introduced by the major manufacturers. Since the digitisation operation in a sub-ranging FADC proceeds in several stages, higher sampling rates could in principle be achieved by pipelining these stages, so that new analog samplings can be performed while previous digitisations are still in progress. In this case the rate is given by the time taken to complete one stage of the operation rather than the full digitisation time.
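The gain to be expected from pipelining the sub-ranging stages can be estimated as follows. The 20 MHz figure for a 12-bit device is the one quoted above; the split into stages of equal length is an assumption.

    # Effective sampling rate of a pipelined sub-ranging ADC (sketch).
    full_conversion_ns = 1e9 / 20e6          # 50 ns for a 12-bit conversion today
    n_stages           = 2                   # 2 x 6 bits, as in current devices
    stage_time_ns      = full_conversion_ns / n_stages
    pipelined_rate_MHz = 1e3 / stage_time_ns
    print("one stage: %.0f ns -> pipelined rate %.0f MHz" % (stage_time_ns, pipelined_rate_MHz))
    # 40 MHz with two pipelined stages; four stages of ~12 ns each would be
    # needed to reach the 80 MHz assumed for a deadtimeless first level.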

The simplest option for extending the dynamic range of the calorimeter electronics, used by a number of LEP and HERA groups, is to digitise the signal from each tower twice, with an amplification factor of order 10 between the two channels. In a first-level scheme where dead time is introduced between triggers for the digitisation (as at HERA), this implies that the dead time is doubled. In a deadtimeless scheme, as mentioned above, the only possibility is to double the number of ADCs. Alternative ways of extending the dynamic range exist, some of which may be found in HEP experiments. These include schemes for achieving a non-linear response and for dynamic switching of gain (or attenuation) factors depending on signal size. Development work on fast, high-resolution ADCs must also consider the practicability of using some of these techniques to reach the required dynamic range.
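At readout time the dual-gain scheme reduces to a simple range selection, as in the following sketch; the gain of 10 is the figure quoted above, while the 12-bit ADCs and the saturation test are assumptions made for illustration.

    # Combine a x1 and a x10 channel into one extended-range value (sketch).
    FULL_SCALE = 4095                  # 12-bit ADC assumed
    GAIN       = 10                    # amplification between the two channels

    def combine(adc_x10, adc_x1):
        # Use the high-gain channel while it is not saturated, otherwise
        # fall back on the low-gain channel (in units of the x1 channel).
        if adc_x10 < FULL_SCALE:
            return adc_x10 / GAIN      # fine resolution for small signals
        return float(adc_x1)           # coarse resolution for large signals

    print(combine(1230, 123))          # small pulse: read from the x10 channel
    print(combine(4095, 3500))         # large pulse: read from the x1 channel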

A.3 Digital signal processors

In recent years digital signal processors (DSPs) have undergone an enormous development, mainly in response to demands from the fields of speech and pattern recognition, fast Fourier transforms and digital filtering. The market for these devices is both large and rapidly expanding, producing rapid developments and price reductions because of the enormous potential sales volume. DSPs are essentially special-purpose Reduced Instruction Set Computers (RISC), optimized for fast multiplication, commonly with a Harvard architecture, i.e. separate data and instruction paths. Texas Instruments were one of the first firms to offer these devices and they remain one of the market leaders. Their TMS32020 is a typical product; it has a 200 ns cycle time and can perform a 16-bit × 16-bit multiplication in one cycle. It has 288 bytes of on-chip RAM and dissipates 1.5 W. While the present price is 250 SF, Texas Instruments promise that the price for a quantity order will have fallen to about 25 SF by 1988. Many other manufacturers, such as NEC, Fujitsu and Motorola, are now entering the field. A typical state-of-the-art device is the Motorola 56000, which has four internal buses, two 256 × 24 bit RAMs and three execution units, and can perform a 24-bit × 24-bit multiplication in one 100 ns cycle. This corresponds to more than 10 MIPS at this clock rate.

Future developments, increased sophistication and reduced prices of these devices are certain, given the enormous amount of resources being devoted to them and their potential market. These factors, together with the small size of typical DSPs (of the order of a few centimetres), make these devices very attractive as on-board computers, giving a large amount of distributed processing power to process and compact raw data at the front end of the data acquisition systems of the future.

A.4 Transputers

Transputers form a microcomputer family produced by INMOS that is equivalent in power to the 68000 series. Transputers are RISC machines with a very small instruction set and only a few registers. They have an internal memory that, unfortunately, is rather small, and a parallel address/data bus that is 16 or 32 bits wide, depending on the model. Context switching is done very rapidly compared to the 68000s. The transputers are optimised for inter-process communication, not only in their internal architecture but also externally. Apart from the parallel bus, each chip has four serial bidirectional links that are used for message passing between processes running in different CPUs. The links can operate in DMA mode and transmit data while processes in the transputers continue to run. Thanks to these properties, transputers can easily handle parallel processing. As an example, one could imagine a calorimeter trigger implementation where many transputers are used to process the detector information. Each CPU receives information on a limited region of the calorimeter and performs pattern recognition on these data using the fast parallel bus. Results are passed via the serial links and combined by separate processes running in the CPUs that the results pass through. A problem inherent in this approach is how to deal with the boundaries between the areas dealt with by different CPUs. Solutions can be envisaged where the data of those areas are made available to all (2-4) transputers concerned, or where (pre-processed) information about the boundary areas is passed via the links. Figure 9 shows the geometry of such a system.
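A toy version of this scheme is sketched below in ordinary sequential code (real transputers would run the regions as parallel processes communicating over their serial links); the region size and threshold are arbitrary illustrations.

    # Toy sketch of the transputer-grid idea: each 'transputer' searches its
    # own calorimeter region for towers above threshold, then neighbouring
    # regions exchange their border columns so that clusters crossing a
    # boundary are not lost.
    import numpy as np

    calo = np.random.poisson(0.2, size=(16, 16)).astype(float)   # fake tower energies
    regions = [calo[:, 0:8], calo[:, 8:16]]                      # two 'transputers'

    def local_seeds(region, threshold=3.0):
        return [(i, j) for i in range(region.shape[0])
                       for j in range(region.shape[1]) if region[i, j] > threshold]

    # Each processor works on its own region ...
    seeds = [local_seeds(r) for r in regions]
    # ... then the border columns are exchanged (over the serial links in a
    # real system) so that both sides can re-evaluate boundary clusters.
    boundary_sum = regions[0][:, -1] + regions[1][:, 0]
    print(len(seeds[0]), len(seeds[1]),
          "boundary towers above threshold:", int((boundary_sum > 3.0).sum()))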

A.5 Data storage devices [9]

A standard 6250 bpi tape as used in present-day experiments has a capacity of 150 Mbytes. The data rate expected from each of the LEP experiments is equivalent to 200 tapes per day. In order to store data in a more compact way, optical discs can be used. Recording on optical discs is done by a laser which burns precise holes in the sensitive surface of an optical plate, decreasing its reflectivity in a small spot. The linear recording density is < 1 µm, which is similar to that of magnetic discs, but the distance between tracks is 1-2 µm, a factor of 50 more dense. The theoretical limit for the density with short-wavelength lasers is of the order of 50 Mbit/mm². Discs are available today which have a capacity of 4 Gbytes and can be written at rates up to 3 Mbytes/s. An automatic disc-loading procedure in a jukebox system will provide fast random access to huge data samples. A system containing 100 discs will store 400 Gbytes, enough for all the data collected by one of the LEP experiments in one day of running.
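The storage figures quoted above can be cross-checked directly:

    # Cross-check of the storage figures quoted above.
    tape_capacity = 150e6                       # bytes per 6250 bpi tape
    daily_data    = 200 * tape_capacity         # "200 tapes per day" per LEP experiment
    disc_capacity = 4e9                         # bytes per optical disc
    jukebox       = 100 * disc_capacity         # 100-disc jukebox

    print("data per day: %.0f Gbyte" % (daily_data / 1e9))   # 30 Gbyte
    print("jukebox     : %.0f Gbyte" % (jukebox / 1e9))      # 400 Gbyte
    # A day's data therefore fits comfortably on one jukebox, and could be
    # written in a few hours at the quoted 3 Mbyte/s per disc.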

Table 1

Machine parameters and event rates

                                            pp               ep               e+e−
Bunch spacing                               25 ns            165 ns           170 µs
Luminosity (cm⁻² s⁻¹)                       10³³             10³²             10³³
Large cross-section                         100 mb           0.5 nb           1 nb
Large cross-section rate                    10⁸ events s⁻¹   0.05 events s⁻¹  1 event s⁻¹
Events per crossing                         2.5              8 × 10⁻⁹         1.7 × 10⁻⁴
Events observed per crossing                1.5              8 × 10⁻⁹         1.7 × 10⁻⁴
Events per trigger                          2.5              8 × 10⁻⁹         1.7 × 10⁻⁴
Typical cross-sections from new physics     < 100 pb         0.5 nb           < 1 pb
Background from beam-halo                   ≪ physics rate   10⁶ s⁻¹          0 (θ > 2°)
Background from beam-beam                   —                —                0 (θ > 2°)
Background from synchrotron radiation       —                ?                0

Table 2

Timing properties of various calorimeters

Type                                                  Jitter (small cells)   Rise time   Pulse width
Lead glass                                            ± 2 ns                 < 5 ns      ~ 40 ns
Scintillator sampling with wavelength shifter         ± 2 ns                 ~ 10 ns     70-100 ns
Scintillator sampling with fast wavelength shifter    ± 2 ns                 ~ 5 ns      20 ns
Gas sampling (MWPC)                                   20 ns/mm               ~ 10 ns     > 100 ns
Liquid argon                                          < 5 ns                 50 ns       several 100 ns
Liquid argon + methane                                < 5 ns                 25 ns       several 100 ns
Silicon sampling                                      < 5 ns                 5 ns        20 ns

References.

[1] Results from the Large Cross-section Group at this workshop.
[2] M. Haguenauer, private communication; M. Haguenauer and G. Matthiae, Total cross-section and diffractive processes at the Large Hadron Collider, Proceedings of the ECFA-CERN Workshop on a Large Hadron Collider in the LEP Tunnel, ECFA 84/85, CERN 84-10 (5 September 1984), p. 165.
[3] P. Jenni, Report from the Jet Detection Group, Proceedings of the ECFA-CERN Workshop, ECFA 84/85, CERN 84-10 (5 September 1984), p. 549.
[4] G. Altarelli, Physics of ep collisions in the TeV energy range, Proceedings of the ECFA-CERN Workshop, ECFA 84/85, CERN 84-10 (5 September 1984), p. 549.
[5] J. Garvey, Triggering at LHC, Proceedings of the ECFA-CERN Workshop, ECFA 84/85, CERN 84-10 (5 September 1984), p. 243.
[6] P. Freeman and A.I. Wasserman, Ada Methodologies: Concepts and Requirements, ACM SIGSOFT Software Engineering Notes, Vol. 8, No. 1 (January 1983), 33-97.
[7] D. Bursky, Digital GaAs ICs, Technology Report, Electronic Design, December 1985.
[8] Picosecond Electronics and Optoelectronics, Proceedings of the Topical Meeting, Lake Tahoe, Nevada, 1985, eds. G.A. Mourou, D.M. Bloom and C.H. Lee (Springer, 1986).
[9] E. Freytag, Data storage: where do we store terabytes of data?, Computing in HEP (North-Holland, Amsterdam, 1985), p. 123.

FIGURE CAPTIONS.

Figure 1. Compilation of the pp and p̄p total cross-section data, presented together with two extreme models: (ln s)² behaviour and an asymptotically constant total cross-section. The predictions for the total cross-section at 17 TeV range from 90 mb to 135 mb.

Figure 2. Cross-sections for the production of 2-, 4- and 6-jet events at √s = 17 TeV, integrated above a threshold p_T. The solid line represents the QCD prediction. The dashed curve shows the predicted 4-jet cross-section from multi-parton interactions.

Figure 3. The cross-section for observing missing transverse momentum in excess of n·σ from standard QCD sources. σ is defined as 0.5·√ΣE_T; the minimal value of ΣE_T used in the calculation is 100 GeV, giving σ = 5 GeV. The dotted curve shows the contribution from u-, d- and s-quarks, and the dashed curve the momentum imbalance from c-, b- and t-quarks. The solid line gives the cross-section from all sources.

Figure 4. Differential cross-sections for neutral-current and charged-current processes as a function of momentum transfer (Q²). The cross-sections are shown for three values of √s: 300 GeV (HERA), 1.41 TeV (LEP-LHC) and 2.0 TeV (LEP-LHC). Also given are the neutral-current event rates per day, corresponding to the integrated cross-section σ(Q² > Q₀²).

Figure 5. Fraction of beam-halo events accepted by a total transverse energy trigger as a function of the distance from the production point to the center of the experiment. The distribution is given for three values of the trigger threshold, 10 GeV, 20 GeV and 50 GeV.

Figure 6. The photon angular spectrum from the beam-beam interaction (beamstrahlung) at an e+e− machine with √s equal to 2 TeV. The rate is given as radiated power per unit solid angle. The rate observed above 200 µrad is almost zero. The electron angular distribution has a similar shape.

Figure 7. Flow of the digitized calorimeter signals through the pipeline, buffer memory and DSP system to the second- and third-level triggers.

Figure 8. Block diagram of the first-level trigger. The 100 ns delay can be adjusted by changing the length of the pipeline following the FADC.

Figure 9. Second-level trigger transputer grid. Each transputer finds energy clusters in a restricted region of the calorimeter. When the local energy clusters have been identified, the information is exchanged between neighbours and global energy clusters are found.

Figure 1    Figure 2    Figure 3    Figure 4

Figure 5    Figure 6

Figure 7  [block diagram indicating the flow of FADC-digitized signals to the second and third levels; 80 MHz clock]

Figure 8  [block diagram of the first-level trigger: FADC, shift-register pipeline, buffer memory, thresholds and summing logic for electron, jet, E_T and muon triggers]

Figure 9  [calorimeter trigger with transputers: calorimeter data from a limited area of the calorimeter to each transputer on a 32-bit bus]

DESIGN AND LAYOUT OF pp EXPERIMENTAL AREAS AT THE LHC

W. Kienzle

CERN, Geneva, Switzerland

1. INTRODUCTION

We have performed a design study of possible experimental areas around the Large Hadron Collider (LHC) in the LEP tunnel. We have concentrated mainly on pp collisions; ep collisions require separate collider runs including LEP operation. We also assume that an ep experiment should be located in a separate interaction region owing to special logistics requirements such as the electron beam path, etc. For details on the ep case, we refer the reader to the contribution of W. Bartel et al. in these Proceedings. Particular contributions to the present work came from the following members of our Study Group:
- Collider and low-beta insertions: J. Gareyte
- Experimental areas and infrastructure: G. Bachy
- Radiation levels: G. Stevenson
- Beam pipe and vacuum: B. Angerth/H. Wahl
- Small-angle experiments: M. Haguenauer
with additional contributions by T. Åkesson, M. Albrow, F. Bonaudi, A. Verdier, L. Vos, and V. Vuillemin.

As far as the choice of suitable Interaction Regions (IR) around the LEP ring is concerned (Fig. 1), we assume that IR 5 and IR 7 are possible 'unbiased' candidates; IR 1 may be suitable too, subject to possible interference with the injection system; IR 3 is ruled out as it serves the beam abort system and is also deeper underground. The even-numbered IRs 2, 4, 6, and 8 have not been considered here, since they are already occupied by LEP experiments, which may have proposals of their own for how to re-use their equipment at the LHC.

[Fig. 1 labels: IRs 2, 4, 6, 8 = LEP experiments; IR 3 = beam dump; IR 1 = injection and LHC experiment (?); IRs 5 + 7 = LHC experiments]

Fig. 1 General layout of the LEP ring with possible positions of future LHC experiments

2. DIMENSIONS OF A 'GENERIC' DETECTOR

In order to design experimental areas, we first need the dimensions (e.g. diameter × length) of a 'typical detector', and in particular we have to be sure that these dimensions are indeed needed, as every extra metre costs a lot of money. Therefore we have tried to obtain as much independent information as possible: i) our own 'reference detector' (Fig. 2), ii) the generic detector of the Central Tracking Study Group (these Proceedings), and iii) SSC detector designs from the 1986 Snowmass Workshop.

Fig. 2 Schematic structure of a generic general-purpose LHC experiment

The ingredients for a new general-purpose detector can be listed as follows (Fig. 2):

- Central tracking: gas device with narrow gaps, tubes/straws; r ≈ 1.5 m, l = ±2.5 m.

- e-γ calorimeter (25 X₀ deep): fine-grained Pb/U with TMP/TMS; σ_E/E < 10%/√E; Δθ × Δη = 1° × 0.01.

- Superconducting solenoid: r = 2 m, l = ±3 m, B ≈ 1.5 T.

- Hadron calorimeter: 6λ fine-grained U/Cu with TMP, plus a 6λ tail-catcher (Fe/gas device); σ_E/E ≈ 40%/√E.

- Muon detector ('a must'): about 4 m of Fe instrumented with gas devices (see Lausanne '84).

In addition, very likely a forward detector is needed around the beam pipe, requiring another 6 m of length.

The sum of all these items leads to a total detector size of (r = 9 m) × (l = ±18 m).

The external dimensions are in accordance with those of the Central Tracking Study Group (D. Saxon et al., these Proceedings). Furthermore, we show for comparison the two detector designs described at the SSC Workshop (Snowmass 1986, Fig. 3). The values for diameter and length are compiled in Table 1. Conclusion: the general-purpose detector needs approximately ±18 m full length from the crossing point, rather than the ±10 m assumed originally (Lausanne '84).

Fig. 3 Examples of SSC reference experiments (from the SSC Workshop, Snowmass, 1986)

Table 1

Parameters of experimental area

Design                          Detector (r × l) (m)   Hall (Ø × full l) (m)
LHC study: this report          9 × ±18                30 × 50    push-pull
Lausanne 1984                   9 × 25                 30 × 64    bypass
SSC (Snowmass 1986): Model I    8.5 × ±25              27 × 50    push-pull
SSC (Snowmass 1986): Model II   9 × ±24                32 × 70    bypass
Average dimensions for a        (r = 9) × (l = ±23)    28 × ~50   push-pull
typical detector                                       31 × 67    bypass

3. DESIGN STUDY OF TWO TYPES OF INTERSECTING REGIONS

Two types of experimental areas have been designed, assuming total dimensions of (Ø = 20 m) × (l = 40 m) for a general-purpose experiment, and making use of the LEP design computer. In both solutions, PUSH-PULL and BYPASS, we aim at a maximum of decoupling between actual LEP operation and the construction and assembly of the LHC detectors.

3.1 A mobile experiment: the PUSH-PULL solution

A large garage, a vertical cylinder of about 40 m diameter, initially decoupled from the LEP ring, serves for assembly and technical checking-out of the detector; it is equipped with a polar crane suspended from a domed roof. The layout is shown in Figs. 4a and 4b. The collision hall and the passage between the LEP ring and the garage consist of a horizontal cylinder of about 35 m diameter. At a suitable time the detector will be moved on rails (à la UA1), as a whole or in pieces, into the collision hall; later on, between runs, only the sophisticated inner detectors (e.g. the central detector and fine-grained calorimeters) are brought back into the garage, whilst the heavy outer steel frame ('muon wall') may remain in the collision hall. An electronics counting room of cylindrical shape [(Ø = 10 m) × (l = 30 m), with two floors] is situated between the garage and the collision hall, at a height which is sufficient to allow continuous access during LHC operation. Access to the garage is not permitted when the LHC is running, whilst the radiation due to LEP can be shielded quite easily. Furthermore, there are the usual access shafts for material (Ø 14 m) and an additional shaft for personnel. The total excavation volume is of the order of 120,000 m³; also, the infrastructure and the experiment itself (approx. 30,000-40,000 t) are of such magnitude that the feasibility of the push-pull solution is not obvious, at least for the case of a large general-purpose detector. It may, however, remain a valid solution for smaller experiments, such as those with a specialized detector (see below, subsection 4.2) or with refurbished and LHC-adapted detectors of the LEP or UA type. For a comparison with the SSC, Fig. 4c shows the push-pull layout described at Snowmass '86.

Fig. 4 Design study of a 'push-pull' experimental area at the LHC: a) artist's three-dimensional view; b) top view onto a horizontal plane; c) an equivalent design study for the SSC (Snowmass, 1986), with a collision hall of 26 × 50 m, an assembly hall of 45 × 36 m, and an auxiliary pit.

3.2 A fixed experiment: the BYPASS solution

In order to avoid moving the gigantic experimental apparatus at all, and to avoid interference with LEP, we propose a bypass. The idea is to build at IR 5 or 7 a stretch of additional tunnel parallel to the LEP tunnel but initially not connected to it, at a radial distance of about 20-25 m. The bypass should not reduce the LHC maximum energy; it consists of two arcs and a short straight section, the latter containing the beam separators and low-beta elements plus the experiment, covering a length of ±20 m from the intersection point. The technical feasibility of the bypass, from the LHC machine point of view, is not yet clear, and a design study is under way. An artist's view of a possible collision hall [(Ø ≈ 30 m) × (l ≈ 60 m)], complete with access pits and counting room, is shown in Fig. 5a; for dimensions and technical details, see Fig. 5b, and for a comparison with the SSC, see Fig. 5c (Snowmass '86). The collision hall of the bypass solution is somewhat longer than for push-pull, as extra space must be provided for extraction and in situ storage of the inner components of the detector. The logistics advantages of this solution are obvious, both for the detector itself and for LEP operation. The total excavated volume is about half that of solution 3.1, and the infrastructure is, of course, much simpler. Table 1 summarizes the dimensions of the experimental areas in the various solutions studied. In conclusion, a collision hall of the order of (Ø ≤ 30 m) × (l ≤ 60 m) seems adequate for a general-purpose LHC detector.

4. SPECIFIC MODES OF OPERATION

In addition to the standard LHC mode for a general-purpose detector, operating at a typical luminosity of 10³³ cm⁻² s⁻¹ with 25 ns bunch spacing and an insertion optics of β_x β_y = 1 m², we consider two extreme running conditions, at very low and very high luminosity respectively.

4.1 High-beta insertion for small-angle experiments

In order to make elastic scattering and Coulomb interference (σ_tot) experimentally accessible, a 'high-beta' beam optics is required in at least one insertion, in analogy with similar experiments at the p̄p Collider (e.g. the former UA4 experiment); however, the scattering angles in the case of the LHC are very small, typically microradians. The detectors, movable 'Roman pots' with silicon strips of a few square centimetres area, would thus be located at a distance determined by the LHC phase advance. Numerical examples of momentum transfer t (GeV/c)², scattering angle θ (µrad), and beta values (β_H in m) are given in Table 2.

Table 2

Examples of parameter values for a high-beta insertion for small-angle experiments

                         t (GeV/c)²     θ a) (µrad)     β (m)
Normal beta              1              125             1
Total cross-section      10⁻²           12.5            100
Coulomb interference     10⁻⁴           1.2             10000

a) θ = √(ε/β) / sin Δφ; ε = beam emittance; Δφ = phase advance.
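The angles in Table 2 can be checked against the small-angle relation θ ≈ √|t|/p for 8 TeV protons; this is a quick numerical check, not part of the original study.

    # Scattering angle corresponding to a momentum transfer |t| at p = 8 TeV.
    from math import sqrt
    p = 8000.0                                    # GeV/c
    for t in (1.0, 1e-2, 1e-4):                   # (GeV/c)^2, as in Table 2
        theta_urad = sqrt(t) / p * 1e6
        print("|t| = %g (GeV/c)^2  ->  theta = %.2f microrad" % (t, theta_urad))
    # 125, 12.5 and 1.25 microrad, matching the angles quoted in Table 2.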

The luminosity requirement for this type of experiment is rather modest, L = 10²⁶⁻²⁷ cm⁻² s⁻¹, and the typical amount of data needed, e.g. 10⁵-10⁶ elastic events, can be accumulated in an overall run of about a week. It would therefore not seem worthwhile to build a dedicated permanent insertion for elastic-scattering experiments, taking into account also that a permanent hybrid optics would be uncomfortable for normal LHC operation. The elastic-scattering experiment should rather be centred on a general-purpose set-up, thus obtaining additional vertex information, and data should be taken in dedicated runs with whatever high-beta optics is required for short periods of time.
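An order-of-magnitude check of this event yield is given below; the elastic cross-section of about 40 mb and the Roman-pot acceptance are assumptions, while the luminosity range is the one quoted above.

    # Rough elastic-event yield in one week (sketch with assumed inputs).
    sigma_el   = 40e-27          # cm^2 (~40 mb elastic cross-section, assumed)
    acceptance = 0.1             # fraction of elastic events in the Roman pots (assumed)
    week       = 7 * 86400.0     # s
    for L in (1e26, 1e27):       # cm^-2 s^-1
        n = sigma_el * L * acceptance * week
        print("L = %.0e  ->  %.1e accepted elastic events per week" % (L, n))
    # About 2e5 to 2e6 events, consistent with the 1e5-1e6 quoted above.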

4.2 A (very) high luminosity experiment

The experimental search for the Higgs particle can be considered as one of the prime objectives of the LHC. Amongst the various decay modes, the most unambiguous is the multimuon channel H → Z⁰Z⁰ → µ⁺µ⁻ + µ⁺µ⁻, in particular as it is experimentally accessible even with present-day detector technology and in the presence of high luminosities. However, since the rates are rather low in the 1 TeV mass range, the LHC must be pushed to its extreme luminosity limits, for example in one specifically designed low-beta insertion with a suitable 'beam-dump type' experiment: the 'Muon Ball'*). A by-product of this experiment is the search for Z′ → µ⁺µ⁻ up to masses between 5 and 6 TeV (Ellis and Pauss, these Proceedings).

*) Name coined by C. Rubbia.

Fig. 6 Schematic sketch of a multimuon spectrometer for Higgs and Z′ searches at high luminosity (overall length L = 12 m)

The 'Muon Ball' detector (Fig. 6) has a total-absorption, heavy-metal shield (e.g. 5λ of tungsten) surrounding the interaction point; there is no vertex detection at these luminosities. Muons are identified and measured in an iron yoke magnetized by a solenoidal coil; the iron is suitably laminated and equipped with tracking detectors, for example as in the design described at the LHC Workshop, Lausanne, 1984. Momentum reconstruction is done essentially in a plane perpendicular to the beams, thus making use of the sharply defined vertex at the beam crossing. The mass-independent resolution of this device matches well the rather large physical width of the Higgs particle. Table 3 summarizes the expected event rates for 100 days of running at a luminosity L of 5 × 10³⁴ cm⁻² s⁻¹, indicating that, in order to be sensitive to Higgs masses up to 1 TeV, the highest luminosity is indeed needed.

To achieve L = 5 × 10³⁴ cm⁻² s⁻¹, the standard L of 10³³ cm⁻² s⁻¹ would have to be boosted by the following improvement factors:
i) bunch spacing of 5 ns versus 25 ns → × 5;
ii) special low-beta insertion*) of β_x, β_y = 0.5 m versus 1 m → × 2;
iii) protons per bunch 10¹¹ versus 2.5 × 10¹⁰ → × 4.

Table 3

Rates and mass resolutions for Higgs production at the LHC in pp → H + X and H → Z⁰Z⁰ → µ⁺µ⁻µ⁺µ⁻

m_H (TeV)                  0.3       0.5       0.8
Events per 100 days a)     410       190       75
Γ_H/m_H (%)                3         12        28
Γ_exp/m_4µ (%) b)          12-15     12-15     12-15

a) Assuming L = 5 × 10³⁴ cm⁻² s⁻¹ and 100 days (from D. Froidevaux, these Proceedings).
b) Assuming a momentum resolution and a corresponding mass resolution.

*) The high-L insertion would be shorter than the original standard of ±10 m, owing to the compact design of the Muon Ball detector (±6 m), and due to the fact that forward calorimetry is almost certainly impossible at these radiation levels.

Figure 7 illustrates what a four-muon invariant mass spectrum may look like, depending on the mass of the Higgs particle.

Fig. 7 Example of multimuon physics: H → Z⁰Z⁰ → 4µ (from D. Froidevaux, private communication)

5. CONCLUSIONS

- General-purpose experiment: (Ø ≈ 20 m) × (l ≈ 40 m).
- Experimental hall: (Ø ≈ 30 m) × (l ≈ 60 m).
- Low-beta quadrupoles at ±20 m from the crossing point for the general-purpose detector.
- Special insertions for specific experiments:
  i) high-beta, low L, for σ_el and σ_tot, to be done in short dedicated runs;
  ii) high L > 10³⁴ cm⁻² s⁻¹ for the Higgs and Z′ searches, which needs a special low-beta insertion.
- For the bypass solution an LHC machine study is required.

ep INTERACTION REGIONS

W. Bartel*)

DESY, Hamburg, Fed. Rep. Germany

Working Group composed of

W. Bartel (DESY), G. Guignard (CERN), E. Lohrmann (DESY), K. Meier (CERN), A. Piwinski (DESY), A. Verdier (CERN)

The first two sections of the Working Group report contain a discussion of boundary conditions which will have an impact on the design of ep interaction regions at the LHC, such as the energy of the colliding beams, the luminosity, and the required length of the interaction region (IR). In the second part of the report a possible machine configuration for an ep interaction region is discussed. It is shown that it is not easy to avoid synchrotron radiation hitting superconducting magnets of the proton machine; therefore all magnets adjacent to the interaction region have to accommodate a warm beam pipe. The experimental conditions concerning beam-related background will not differ from those prevailing in the e+e− and pp modes of the LHC-LEP complex.

1. ENERGY OF THE COLLIDING BEAMS

For the following considerations the proton energy is assumed to be 8.0 TeV, although a lower energy may be chosen for measurements at low Q² in order to obtain a better x-resolution. For a fixed proton energy the ep luminosity is a function of the electron energy, as discussed in [1]. The study of ep reactions is difficult because of the fast decrease of cross-sections with increasing momentum transfer. The neutral-current cross-section, for example, decreases like

    dσ/dQ² ∝ 1/Q⁴ .

Therefore the choice of the electron energy will be a compromise between counting rate and kinematic range. Since the energy range which is accessible to experimental investigation with a luminosity larger than 2 × 10³² cm⁻² s⁻¹ lies between 1.1 TeV and 1.4 TeV centre-of-mass energy, corresponding to electron energies between 40 and 60 GeV, the present study was carried out for 50 GeV electron energy. Only for the calculation of synchrotron radiation effects was a higher electron energy of 60 GeV assumed. If the LEP energy reaches 100 GeV, the maximum ep centre-of-mass energy becomes ≈ 1.8 TeV, but the luminosity is reduced to ≈ 10³¹ cm⁻² s⁻¹. Assuming that structure functions can be measured up to a limit where one event per day is registered, the Q² range which could be covered by an LHC/LEP machine extends up to 3 × 10⁵ GeV².
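The 'one event per day' criterion translates into a minimum observable cross-section, which can be evaluated directly for the two luminosities mentioned above (a simple estimate, not a detailed rate calculation):

    # Smallest cross-section giving one event per day at a given luminosity.
    day = 86400.0                        # s
    for L in (1e31, 2e32):               # cm^-2 s^-1, the two values quoted above
        sigma_min = 1.0 / (L * day)      # cm^2
        print("L = %.0e  ->  sigma_min = %.1e cm^2 = %.2f pb"
              % (L, sigma_min, sigma_min / 1e-36))
    # ~1.2 pb at 1e31 and ~0.06 pb at 2e32; the Q^2 reach in the structure
    # functions then follows from where the falling cross-section reaches
    # this level.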

2. ep KINEMATICS

A typical deep inelastic scattering event has the configuration of a three-particle final state, with a scattered lepton, a current jet, and a target jet in the direction of the proton beam. The diagram of Fig. 1 shows a plot of longitudinal momenta versus transverse momenta with respect to the incident proton direction. The upper part of

*) Convener of the Working Group.


Fig. 1 ep kinematics, 60 GeV electrons on 8 TeV protons. Transverse momentum versus longitudinal momentum. Upper curves refer to lepton momenta, lower curves to current-jet momenta.

the plot refers to final-state leptons, whilst the lower part displays current-jet momenta. Lines of constant Q² and constant x are indicated. It is apparent that all final-state particles are strongly boosted into the direction of the proton beam, because of the imbalance between the electron and the proton energies. Detector components will therefore preferentially occupy the forward direction (proton direction). For typical momenta of the order of 1 TeV, calorimetry will play an important role in the detector design. A magnetic field with tracking chambers could complement the detector in the central region, whilst muon spectrometers with magnetized iron could be employed in the forward direction. A detector designed along these lines will need a free space of about ±10 m in the interaction region. In order to minimize the loss of particles remaining within the beam pipe, its diameter should be as small as possible. However, it is not useful to go below a value of about 150 mm, since the resolution is limited by the granularity of the forward detector.

3. INTERACTION-REGION GEOMETRY

3.1 Electron insertion

As outlined in Section 2, particle detection at small angles requires the availability of a ±10 m free space around the interaction point. Therefore, an electron insertion of the type proposed in [2], using superconducting quadrupoles and similar to the nominal low-β insertions of LEP phase 1 [3], cannot be used, since it provides a free space of only ±3.5 m. It is therefore proposed to consider a solution for which the focusing of the electron beam resembles the back-up solution for the low-β insertions (±9.0 m) of LEP phase 1 [3]. The protons are focused into the interaction region as explained in Ref. [2], where the low-β insertions for the LHC machine (±61 m) are described. The electron insertion for an ep operation mode has to be designed in such a way that, for a free space of ±10 m, the correction of the chromaticity introduced by the low-β quadrupoles is still feasible in the arcs. The criterion retained was that the vertical chromatic perturbation should not exceed that of the LEP back-up scheme, whilst the horizontal chromatic perturbation was allowed to be about twice as big as that of the LEP nominal scheme. This choice is possible, since a finite chromaticity in the horizontal plane is less critical than in the vertical direction. These constraints lead to a minimum value of the vertical beta function for electrons at the interaction point of 0.24 m. The vertical beta function for protons in the LHC is 2.8 m. The horizontal beta values at the interaction point for electrons and protons are adjusted so that the beam-beam tune shifts reach 0.03 for electrons and 0.003 for protons. With these assumptions, the luminosity has been calculated as a function of the electron-beam energy. Further parameters in this calculation were 510 proton bunches, 3 × 10¹¹ protons per bunch, a proton energy of 8 TeV, and an electron-beam current using the full available RF power (5 mA at 100 GeV). The luminosity curve thus calculated is compared with a similar curve associated with the initially proposed insertion, i.e. a free space of ±3.5 m and a vertical beta function of 0.2 m. The results, shown in Fig. 2, indicate that the present proposal leads to a luminosity reduction of the order of 22%. As an example, Table 1 summarizes some machine parameters at an electron energy of 50 GeV. A new feature of this proposal is the introduction of a long weak dipole in the interaction region, with an integrated field of ∫B dl = 0.03 E (T·m if E in GeV), the insertion quadrupoles remaining centred on the electron beam (Fig. 3). As a consequence, these quadrupoles must have a large enough aperture to allow the cone of synchrotron radiation to go through. Since the integrated gradients are about 0.67 E and 0.52 E (T if E in GeV) for the first two quadrupoles, warm magnets would have to be about 5.5 and 6 m long, with aperture diameters of about 280 and 480 mm. This leads to external diameters for normal-conducting quadrupoles of around 1.4 to 1.6 m. Since warm magnets compatible with the radiation cone appear to be so big, it seems preferable to consider superconducting quadrupoles with a warm bore. In this case, their length would only be of the order of 1.5 m, and their aperture diameters could be limited to about 230 and 300 mm. Consequently, an external diameter of the cryostat below about 1.0 m can be achieved.

Table 1

Possible parameters for ep interactions a)

                           Quads shifted,           Dipole in IR,
                           ±3.5 m free              ±10 m free
Protons
  N_p/bunch                3.0 × 10¹¹               3.0 × 10¹¹
  ε_p                      20π                      20π
  β_zp (m)                 2.8                      2.8
  β_xp (m)                 58.8                     91.0
  k_b                      510                      510
  µ_cell                   π/2                      π/2
  E_p (TeV)                8.0                      8.0
Electrons
  N_e (total)              4.4 × 10¹³               4.4 × 10¹³
  ε_x (nm)                 47.2                     49.5
  ε_z (nm)                 3.6                      3.6
  β_z (m)                  0.2                      0.24
  β_x (m)                  0.92                     1.37
  k_b                      510                      510
  µ_cell                   π/3                      π/3
  E_e (GeV)                50                       50
Luminosity                 1.76 × 10³² cm⁻² s⁻¹     1.37 × 10³² cm⁻² s⁻¹

a) Since this paper was written, some changes have been made in the design parameters for ep operation; however, the conclusions are not substantially modified. The most recent parameter list is given in the summary paper of Brianti [1].

Fig. 2 Luminosity in ep collisions between LEP and LHC for two values of the LEP β-function at the crossing point, β_ye, of 0.2 m and 0.24 m, corresponding to distances between the crossing point and the first low-β quadrupole of 3.5 m (standard LEP insertion) and 10 m ('back-up' LEP insertion), respectively.

Fig. 3 Apertures and arrangement of synchrotron-radiation masks in a ±10 m ep interaction region with weak bending in the interaction region

Since it does not appear reasonable to design the LEP quadrupoles of the ep insertion for energies above 60 GeV and to bend up the electron beam when LEP runs with e+e− collisions in phase 2, a straight bypass should be foreseen (sketched in Fig. 4).

3.2 Zero crossing angle

Vertical crossing of the electron and proton beams was studied [2], but it was disregarded for two reasons. Firstly, the vertical bending magnets which steer the LHC orbit create a large dispersion which cannot be compensated locally. Secondly, it is assumed that a non-zero crossing angle excites harmful synchrotron resonances which blow the beam up. Calculations for the SSC project have, however, shown that finite crossing angles are possible. For the LHC project, angles of the order of a milliradian would simplify the problem of synchrotron radiation penetrating into the superconducting proton machine, but probably at the expense of luminosity. Investigations of beam stability as a function of the crossing angle, equivalent to those done for the SSC, would require about 50 hours of CPU time on the CERN IBM, using the programs developed by Piwinski [4]. A zero-crossing geometry is achieved by bending up the electron beam, while the proton beam goes straight. The first bend is made about 200 m from the crossing point, with an angle of 4.43 mrad. The second bend is made in the interaction region itself with a low-field dipole and amounts to twice this value (Fig. 4). Consequently, a weak transverse field has to be introduced in the interaction region, extending over 16 to 20 m.


Fig. 4 ep interaction region at LEP and the LHC, with bypass for e+e− operation of LEP

This scheme has the advantage that the source of synchrotron radiation is inside the experimental beam pipe and that the radiation is directed away from the experiment. Further, the critical energy can be kept low. The scheme also has disadvantages. It needs larger apertures for the insertion quadrupoles, and the dipole magnet has to be integrated into the detector design even though the dipole field is too low to be useful for momentum analysis of charged particles.

3.3 Synchrotron radiation

A possible arrangement of magnets and synchrotron-radiation masks around the interaction point is sketched in Fig. 4. A 16 m long magnet with a maximum field of 0.113 T at 60 GeV electron energy is employed to bend the electron beam through 9 mrad with a radius of curvature of 1.77 × 10³ m. The critical energy in this case is

    E_c = 2.2 E³/ρ = 268 keV    (E in GeV, ρ in m).

The radiated power, which scales as E⁴/ρ², is calculated to be

    P_γ = 57.9 W/(m·mA),    P_γ(30 mA) = 14 kW.
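The quoted critical energy follows directly from the formula above; as a quick numerical check:

    # Critical energy of the synchrotron radiation from the weak bend.
    E   = 60.0        # GeV, electron energy assumed for the radiation estimates
    rho = 1.77e3      # m, bending radius quoted above
    E_c = 2.2 * E**3 / rho          # keV, using E_c = 2.2 E^3 / rho
    print("critical energy = %.0f keV" % E_c)     # ~268 keV, as quoted
    # For 50 GeV electrons the same formula would give about 155 keV.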

In Fig. 3 the fan of synchrotron radiation emitted in the bend is indicated. In a zero crossing angle geometry it is unavoidable that a fraction of the synchrotron radiation hits the aperture of magnets belonging to the proton ring. For proton magnets of 5 cm aperture, about 1.4 kW of radiated power will hit the inner walls of the beam pipe inside the magnets. Hence it is advisable not to use cold-bore superconducting downstream proton magnets close to the interaction region, because for every watt of power absorbed at 4 K between 600 and 1000 W of cooling power have to be installed at room temperature. Thus that part of the proton machine which is hit by direct synchrotron radiation has to be normal conducting or has to use warm-bore superconducting magnets with thick absorbers to stop the synchrotron radiation.

4. BACKGROUND

4.1 Synchrotron radiation

The synchrotron radiation fan emitted in the vertical bends is sharply collimated in a plane perpendicular to the machine plane, whilst the radiation from the quadrupoles appears in a cone around the beam and is much weaker. Therefore, at the present stage, only two sources of synchrotron radiation are considered: the vertical bends of the electron beam 200 m before the interaction region, and the bend in the interaction region. The radiation created in the vertical bend 200 m before the interaction point should be masked some 60 m in front of the interaction region. That part of the synchrotron radiation which accompanies the incident beam has to pass through the interaction region and the first electron quadrupoles without hitting any obstacle. This requirement determines the height of the beam pipe in the interaction region, which will be of the order of 60 mm to 100 mm. The synchrotron radiation emitted in the interaction region passes through, and is partly absorbed some 20 m behind the interaction point in a low-albedo absorber. Further absorbers are introduced at the position of the first proton quadrupoles. Design studies for synchrotron-radiation masks at HERA have shown that the background at an ep machine is lower than that of an equivalent e+e− machine, because collimators are hit only from one side by direct radiation and masks close to the experiment have to cope with reflected photons only.

4.2 High-energy background

High-energy background produced by off-momentum particles and by beam-gas scattering in the interaction region will be the same as in the case of e+e− and pp interactions.

5. POLARIZED ELECTRONS

The physics scope of ep interactions is considerably enlarged by scattering longitudinally polarized electrons off protons. In order to profit from polarized electrons, the degree of polarization has to be above 70%. That requirement can be met with a polarization time of the order of half an hour, assuming a lifetime of the electron beam of about 3 h in the absence of strong depolarizing effects in the machine. Short polarization times at 50 GeV would require the installation of wigglers in LEP. Therefore, before considering polarized beams for ep collisions in the LHC, tests with the LEP machine are necessary to study the principle. Schemes have been proposed which may lead to a sizeable polarization around 50 GeV. If transversely polarized beams are available, spin rotators have to be introduced in the ep straight sections. That should be possible without major problems.

6. COMMON INTERACTION REGIONS

An experiment designed to study pp collisions could in principle be used to investigate ep collisions as well, after some additions. The machine geometry for the two cases is, however, different, so that an interaction region for both ep and pp physics is not very attractive: switching from one mode to the other would require a major rearrangement of machine components.

REFERENCES

[1] G. Brianti, these Proceedings.
[2] G. Guignard, K. Potter and A. Verdier, Interaction regions for ep and pp collisions in the LEP tunnel, contribution to the Particle Accelerator Conf., Vancouver, 1985, and report CERN LEP-TH/85-11 (1985).
[3] G. Guignard and A. Verdier, Description of the LEP lattice version 13, configuration with 60° in the arc cells, report CERN LEP-TH/83-53 (1983).
[4] A. Piwinski, Computer simulation of the beam-beam interaction at a crossing angle, contribution to the Particle Accelerator Conf., Vancouver, 1985, and report CERN LEP-TH/85-10 (1985).

THE CLIC INTERACTION REGION

J.E. Augustin, Laboratoire de l'accélérateur linéaire, Orsay, France

At the present stage of the studies of large linear colliders for TeV energies, a final design of the interaction region is not yet possible. The present study aimed at clarifying the main issues: first, the effects of beam-beam radiation ('beamstrahlung') on the energy dispersion in the collisions and on the outgoing beam divergence; and secondly, the final focussing system necessary to achieve the required CLIC luminosity. Other subjects, such as synchrotron-radiation masking, beam polarization, and multiple intersections, were not touched upon.

1. Beamstrahlung.

The theoretical understanding of this phenomenon depends mainly on advances in a proper quantum treatment of the photon emission along the electron trajectory. Original studies used classical computations based on electrons in circular motion. It was then realized that the quantum treatment of this phenomenon had also to be applied to beam-beam radiation; this provided a more correct evaluation for high-energy colliders. Then the proper computation of the beam-particle collision as such was undertaken. The status of this work is explained and summarized in the presentation by P. Chen [1]. The conclusion is that, for the parameters of machines up to a few TeV, the present formulae are correct and the results reliable.

These formulae are the basis of a code written by K. Yokoya [2] to compute the beam-beam collisions, including the focussing ('pinch') due to beam-beam forces. It was used to study, for the present CLIC set of parameters [3], the energy dispersion in e+e− collisions and the angular spread of the outgoing electrons and photons.

The energy dispersion calculation was made in the study of the CLIC production of a possible Z recurrence [4]; the calculation shows that the broadening of the energy distribution is not likely to reduce substantially the physics possibilities of CLIC. This is due to the peculiar shape of the energy-loss distribution, with a very high peak at small losses and a long tail at much larger radiated energy. With typical CLIC parameters, half of the collisions take place within less than 2% of the nominal energy; the mean and variance are not well-suited parameters to describe such a J-type distribution (see Fig. 1 [5]). In the present study, CLIC would allow a very clear observation of a Z-like state; the peak counting rate would only be reduced by a factor of 2 to 3 by these radiation effects.

The angular spread of the outgoing electron beam, and of the radiation emitted in the collision, was studied within the calorimetry group [5]. Using the same simulation program [2], it is found that the photon background is extremely narrowly collimated in the forward direction: the radiated energy flux decreases by about 5 orders of magnitude within 300 microradians (see Fig. 2).

Fig. 1  [distribution of the collision energy w (TeV); mean w = 1.91 TeV, r.m.s. = 0.127]      Fig. 2  [angular distribution of the radiated energy, θ (µrad)]

Studies should continue to see whether indirect, secondary, or higher-order mechanisms can generate an intense low-energy, large-angle photon background. But from what is now known, both the radiated energy and the beam after collision can be taken care of by millimetre-size apertures at metre distances from the collision point. It is thus a requirement for the final-focus system to provide the corresponding beam clearance.

2. Final Focus.

The aim of CLIC is to provide physics measurements of e+e− collisions, at the necessary luminosity for meaningful rates, with minimum interference with the experiments' design, and to allow as nearly as possible 4π-steradian detectors around the crossing point.

The luminosity requires very strong focussing, with a beam envelope at the collisions of about 3 millimetres.

Besides the somewhat new studies necessary to design a quadrupole of millimetre-size bore, which seems tractable, the main uncertainty arises from the energy dispersion of the linac beam, expected to be anywhere between 2 × 10⁻³ and a few per cent.

Many ideas about crossing geometry have been put forward and still need to be studied, such as flat beams crossing at an angle, the use of bending magnets on the beam path between the linac and the crossing point, etc. In this study, we were only concerned with the final focussing of the linac beams at one common point, with enough aperture to clear the outgoing electron and photon beams. The necessary relative timing of the two linacs is assumed to be accurate.

A study of the effect of chromaticity on any focussing system was presented by F. Ruggiero [6], who showed that, strictly speaking, any achromat made of quadrupoles is defocussing, and thus useless for our purpose. Nevertheless, the actual value of the beam energy dispersion is crucial in this statement: solutions can be found up to a dispersion of a few 10⁻³. Above 0.5% or so, one will have to resort to dissipative focussing systems such as plasma lenses or secondary electron bunches [6]. An idea which needs more investigation is to take advantage of the time-energy relation in the linac beam to design a time-dependent focussing system which could thus be achromatic. Moreover, the linac designers are not yet ready to give final values for the beam energy dispersion: large values have been advocated to avoid transverse instabilities, but a more recent study [7] shows the possibility of obtaining excellent beam stability together with a very small energy dispersion. Clearly, this parameter has to be kept as small as possible to reach an efficient luminosity.

In this study, we have assumed an energy dispersion of 2 × 10⁻³, similar to that of the SLC.

Parameters can then be found for a DC quadrupole system. The resulting chromatic effects are of the same order or smaller than the beam envelope, and the desired luminosity is reached by a slight increase in the focussing strength. The following parameters are necessary:

    energy dispersion ΔE/E ≲ 2 × 10⁻³
    aperture radius: 1 mm
    free space along the beam: ±30 cm
    quadrupole and support, external diameter: ≤ 10 cm.
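A rough consistency check of the chromatic effect can be made if the ~3 mm envelope figure quoted earlier is read as the beta function at the crossing point; this interpretation, and the simple estimate below, are assumptions made for illustration.

    # Relative chromatic growth of the spot size at the crossing point (sketch).
    beta_star = 3e-3        # m (assumption: ~3 mm read as the beta function)
    L_star    = 0.30        # m, free space between quadrupole and crossing point
    dpp       = 2e-3        # relative energy spread of the linac beam

    chromatic_growth = (L_star / beta_star) * dpp
    print("relative chromatic spot growth ~ %.2f" % chromatic_growth)
    # ~0.2, i.e. smaller than the beam envelope itself, consistent with the
    # statement that the chromatic effects are of the same order or smaller.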

It is expected that the corresponding quadrupoles can be built after some research and development of known techniques. These parameters were used to evaluate the small angle properties of CLIC detectors, leading to an outline intersection region arrangement illustrated in figure 3.

Fig. 3  [outline of the intersection-region arrangement: Bhabha/luminosity detector and calorimetrized magnet within ±30 cm of the crossing point]

Clearly, the beam crossing constraints have to be studied in much greater depth in the next years, during which a realistic CLIC design will be made, but for the moment nothing appears to prevent an efficient use of the collisions.

References.

[1] P. Chen, these Proceedings.

[2] K. Yokoya, Nucl. Instrum. Methods A251 (1986) 1.

[3] K. Johnsen, these Proceedings.

[4] D. Schlatter, in J. Ellis, these Proceedings.

[5] P.T. Cox, in T. Åkesson, these Proceedings.

[6] F. Ruggiero, CERN-LEP-TH-87-11 (1987).

[7] W. Schnell, CERN-LEP-RF/87-24 (1987).

THE NATURE OF BEAMSTRAHLUNG*)

Pisin Chen

Stanford Linear Accelerator Center, Stanford University, Stanford, CA 94305, USA

ABSTRACT

The physical nature of beamstrahlung during the beam-beam interaction in linear colliders is reviewed. We first make the distinction between a dense beam and a dilute beam. We then review the characteristics of synchrotron radiation (SR) and bremsstrahlung, and argue that for a wide range of beam parameters beamstrahlung is SR in nature, even if the beam is dilute. Some issues concerning the specific conditions in beamstrahlung as SR are then discussed. Finally, we suggest that in order to suppress beamstrahlung energy loss and to improve energy resolution, it is desirable to partition a bunch into a train of bunchlets, where the length of each bunchlet is shorter than the SR convergence length.

1. INTRODUCTION

For future e⁺e⁻ linear colliders with center-of-mass energy in the TeV range and luminosity around 10³³ cm⁻² s⁻¹, it is inevitable that the e⁺e⁻ bunches be focused down to minuscule dimensions. The high density of charged particles at the interaction point provides strong electromagnetic fields as seen by the particles of the oncoming beam. The bending of particle trajectories under the influence of these EM fields is called disruption 1). During this bending the particles radiate, causing an energy loss of the beam; this is called beamstrahlung 2). Both effects, disruption and beamstrahlung, are important to the design of linear colliders 3).

While disruption with negligible energy loss, which is a purely classical phenomenon, is in principle understood (although in practice the effect is convolutional and therefore needs computer simulations for a detailed description), the nature of beamstrahlung still needs to be clarified further. In this paper we review beamstrahlung in various beam-parameter regimes. We then point out that it is desirable to partition each e⁺e⁻ bunch into a train of bunchlets with longitudinal standard deviation shorter than the SR convergence length.

2. DENSE BEAM vs. DILUTE BEAM

In the laboratory frame (also the center-of-mass frame) of a linear collider, an electron encountering a positron with an impact parameter b has an effective interaction time Δt₁ ~ b/γc, where c is the speed of light, owing to the fact that the fields associated with relativistic particles span an opening angle Δθ ~ 1/γ. In turn, the corresponding effective distance of traversal through the fields of the oncoming particle is

Δℓ₁ = c Δt₁ ~ b/γ .   (1)

*Work supported by the Department of Energy, contract DE-AC03-76SF00515.

Fig. 1. Schematic diagram of a dense beam.

2.1 Dense beam

Consider an electron encountering the entire flux of the oncoming positron bunch. The flux is roughly

Φ ≈ (N/2σ_z)(2c) = c/Δℓ₂ ,   (2)

where Δℓ₂ = σ_z/N is the mean longitudinal separation of the target particles. The target beam is considered to be dense if Δℓ₁ ≫ Δℓ₂. Taking a typical value of the impact parameter to be one standard deviation in the transverse direction, i.e. b ~ σ_r, the condition for a dense beam translates into

N σ_r / (γ σ_z) ≳ 1 .   (3)

In this case the background field provided by the particles in the oncoming bunch is continuous (see Fig. 1). For example, the Stanford Linear Collider (SLC) beam parameters at the interaction point are γ = 10⁵, number of particles per bunch N = 5 × 10¹⁰, σ_z ≈ 1 mm, and σ_r ≈ 1 μm. Thus N σ_r/(γ σ_z) ≈ 500 ≫ 1, and the beams are dense.

2.2 Dilute beam

A beam is said to be dilute if

N σ_r / (γ σ_z) < 1 .   (4)

In this case the background field becomes discrete and the test particle sees the granularity of the target bunch (see Fig. 2). For example, in the conceptual 5 TeV + 5 TeV accelerator discussed by Richter 4), γ = 10⁷, N = 4.1 × 10⁸, σ_z ≈ 10⁻³ mm and σ_r ≈ 10⁻³ μm, so that N σ_r/(γ σ_z) ≈ 0.04 < 1. The beams are therefore quite dilute.


Fig. 2. Schematic diagram of a dilute beam.

In one version of the CLIC parameters 5), where γ = 2 × 10⁶, N = 5.4 × 10⁹, σ_z = 0.5 mm and σ_r = 65 nm, we find N σ_r/(γ σ_z) ≈ 0.35 ≲ 1. Therefore the beam is marginally dilute.
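As a quick numerical cross-check (not part of the original report), the short Python sketch below evaluates the dense/dilute ratio N σ_r/(γ σ_z) of Eqs. (3)-(4) for the three parameter sets quoted in the text.

```python
# Dense/dilute-beam criterion N*sigma_r/(gamma*sigma_z), with the parameter
# sets quoted in the text (SLC, CLIC, and the Richter 5 TeV + 5 TeV example).
beams = {
    "SLC":     dict(gamma=1e5, N=5e10,  sigma_z=1e-3,   sigma_r=1e-6),
    "CLIC":    dict(gamma=2e6, N=5.4e9, sigma_z=0.5e-3, sigma_r=65e-9),
    "Richter": dict(gamma=1e7, N=4.1e8, sigma_z=1e-6,   sigma_r=1e-9),
}
for name, p in beams.items():
    ratio = p["N"] * p["sigma_r"] / (p["gamma"] * p["sigma_z"])
    regime = "dense" if ratio > 1 else "dilute"
    print(f"{name:8s}  N*sigma_r/(gamma*sigma_z) = {ratio:7.2f}  ({regime})")
# Expected: ~500 (dense), ~0.35 (marginally dilute), ~0.04 (dilute).
```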

3. SYNCHROTRON RADIATION AND BREMSSTRAHLUNG

In terms of the physical nature of beamstrahlung, two well-known radiation mechanisms come to mind, i.e. synchrotron radiation (SR) and bremsstrahlung (BR). Each mechanism has a different characteristic length.

3.1 Synchrotron radiation

By synchrotron radiation we mean the radiation of charged particles moving in circular orbits under a uniform background magnetic field of infinite extent. Quantum mechanically, the photons are emitted in a discrete manner. For each radiated photon it takes a certain convergence length ℓ_SR for the radiation process to be completed. This length is found to be

ℓ_SR ≈ (ρ/γ)(ω_c/ω)^{1/3} ,   (5)

where ρ is the radius of the orbit, ω the photon frequency, and ω_c the critical frequency (ω_c = 3cγ³/2ρ).

When ħω_c is much less than the kinetic energy of the radiating particle E = γmc², the radiated photons are soft and numerous. This corresponds to the classical regime of SR. On the contrary, when ħω_c ≫ E, the photons would take away a substantial fraction of the particle's initial energy; the conservation of energy-momentum before and after the radiation process and the non-commutativity between the photon field and the particle field then have to be properly treated, and we are in the quantum mechanical regime.

A useful Lorentz-invariant, dimensionless parameter that indicates the various regimes of SR is Υ, defined as

Υ = γ B / B_c ,   (6)

where B_c = m²c³/eħ. In the classical regime Υ < 1, whereas in the quantum regime Υ ≫ 1.

The applicability of the SR picture to the problem of beamstrahlung can be qualified by the following inequality:

Δℓ₂ = σ_z/N ≪ ℓ_SR ≪ σ_z .   (7)

When this is satisfied, the field provided by the opposite beam can be treated as homogeneous and longitudinally infinite. For the transverse dimensions similar arguments apply, i.e. we require that

ℓ_SR / γ ≪ σ_r .   (8)

3.2 Bremsstrahlung

Historically, bremsstrahlung refers to the radiation caused by the scattering of a test particle off target particles. In order for radiation to take place, it is necessary that some momentum q be transferred from the radiating particle to the target particles. The minimum of this momentum transfer, q_min, corresponds to the situation where the photon momentum k is parallel to the momentum of the radiating particle:

q_min = |p_i| − |p_f| − k ≈ ħω / (2cγγ′) ,   (9)

where γ and γ′ are the Lorentz factors of the electron before and after the emission.

Let us define q² = q∥² + q⊥², where q∥ and q⊥ are the longitudinal and transverse components, respectively. We can then distinguish two characteristic regions of q values 6). The first region is characterized by q ≈ q⊥. The momentum transfer is essentially in the direction transverse to the particle's instantaneous motion. In this region the value of q⊥ is determined only by the action of the external field (i.e. the scattering angle) and is not associated with the radiation process. In this region of classical momentum transfer the radiation is essentially that of the Born approximation.

In the second region, where q ≈ q∥ ≈ q_min, the momentum transfer is not determined by the scattering angle of the particle, and the phenomenon of quantum diffraction becomes important. From the uncertainty principle, the virtual photon that carries the minimal momentum transfer can be absorbed anywhere within the coherence length ℓ_c,

ℓ_c = ħ / q_min = 2γγ′c / ω .   (10)

In the wide range of parameters that we study in beamstrahlung, for example from the SLC and CLIC to the Richter scale, we always find that q⊥ ≫ q∥, owing to the following observation 7): for the sake of argument, let us consider the bunches as uniform cylindrical slugs of charge. The total q⊥ for a test charge with impact parameter σ_r is then

q⊥ ≈ (2e/c) ∫ dz E⊥ ≈ 2Ne² / (c σ_r) .   (11)

Thus we find

q⊥/mc ≈ 2Ne²/(mc²σ_r) ≈ 2.8 × 10² (SLC), 4.3 × 10² (CLIC), 2.0 × 10³ (Richter 5 TeV + 5 TeV), all ≫ 1 ,   (12)

whereas typically

q∥/mc ≈ ħω / (2γγ′mc²) ≪ 1 .

Therefore q⊥ ≫ q∥ and we have q ≈ q⊥. This means that the bremsstrahlung coherence length ℓ_c is irrelevant to our issue. More importantly, the applicability of bremsstrahlung to the problem of beamstrahlung lies only in the domain

σ_z ≲ ℓ_SR .   (13)

In this regime the spatial extent of the external field is too limited for synchrotron radiation to take place.
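For orientation, here is a minimal Python sketch of the estimate (12), using the identification q⊥/mc = 2N r_e/σ_r with r_e = e²/mc² and the same beam parameters as above. It is only meant to reproduce the orders of magnitude quoted in the text, not the exact values of the original calculation.

```python
# Transverse kick q_perp/(m c) ~ 2 N r_e / sigma_r, cf. Eqs. (11)-(12).
r_e = 2.82e-15  # classical electron radius [m]
cases = {
    "SLC":     dict(N=5e10,  sigma_r=1e-6),
    "CLIC":    dict(N=5.4e9, sigma_r=65e-9),
    "Richter": dict(N=4.1e8, sigma_r=1e-9),
}
for name, p in cases.items():
    q_perp_over_mc = 2.0 * p["N"] * r_e / p["sigma_r"]
    print(f"{name:8s}  q_perp/mc ~ {q_perp_over_mc:8.0f}")
# Expected orders of magnitude: ~3e2, ~5e2, ~2e3, i.e. all much larger than 1.
```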

4. BEAMSTRAHLUNG AS SYNCHROTRON RADIATION

In the parameter regime where SR is applicable to beamstrahlung (i.e. σ_z ≫ ℓ_SR), several specific conditions deserve closer examination.

4.1 Uniformity of field strength

Typically the density distribution varies both longitudinally and transversely across a bunch. For round beams, where R = σ_x/σ_y = 1, and defining σ_r = σ_x = σ_y, the distribution function is proportional to I_{R=1}:

I_{R=1} ∝ exp(−z²/2σ_z²) [1 − exp(−r²/2σ_r²)] / (r/σ_r) ,

and for flat beams (R > 1),   (14)

I_R ∝ exp(−z²/2σ_z²) | w[(x + iy)/(σ_y√(2(R²−1)))] − exp[−(x²/2σ_x² + y²/2σ_y²)] w[(x/R + iRy)/(σ_y√(2(R²−1)))] | ,

where w(ζ) is the complex error function. In turn the field strength has the same functional variation. It is in principle possible to evaluate the radiation energy loss in this varying field by carrying out calculations based on first principles. It is, however, more desirable if there exist simple scaling relations, so that the energy loss and other related physical quantities in beamstrahlung can be evaluated from the knowledge of single-particle radiation in a uniform field, namely that of Sokolov and Ternov 8).

For this purpose it is essential to define an effective SR parameter Ῡ for the entire target bunch 9). In the case of the bi-Gaussian density distribution of Eq. (14), it is found by integrating over the entire bunch that

Ῡ ≈ (5/6) N r_e λ̄_C γ / [σ_z √(σ_xσ_y)] · √R/(1 + R) ,   (15)

where r_e is the classical electron radius and λ̄_C the Compton wavelength. The local Υ(x,y,z) is then related to Ῡ via

Υ(x,y,z) = I_R(x,y,z) Ῡ .   (16)
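As an illustration of Eq. (15) in the form reconstructed above (the exact numerical coefficient of the original is uncertain), the following Python sketch evaluates Ῡ for the round-beam CLIC parameters; it lands close to the value Ῡ ≈ 0.16 quoted in subsection 4.4.

```python
import math
# Effective beamstrahlung parameter, Eq. (15), for CLIC (round beams, R = 1).
r_e, lamC = 2.82e-15, 3.86e-13   # classical electron radius, reduced Compton wavelength [m]
N, gamma  = 5.4e9, 2e6           # bunch population and Lorentz factor
sx = sy   = 65e-9                # transverse r.m.s. sizes [m]
sigma_z   = 0.5e-3               # bunch length [m]
R = sx / sy
ups = (5/6) * N * r_e * lamC * gamma / (sigma_z * math.sqrt(sx*sy)) * math.sqrt(R)/(1 + R)
print(f"Upsilon_bar(CLIC) ~ {ups:.2f}")   # ~0.15, to be compared with 0.16 in the text
```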

4.2 The effect of granularity

In the case of dilute beams, the fields are physically discrete as viewed by a test particle. Though on average the test particle bends towards the axis, locally the dilute scattering centres may deflect the particle inward or outward stochastically. This "wiggler" effect due to the granularity would therefore superimpose some ripples on the smooth trajectory associated with the global bending. The mean periodicity of the ripples is expected to be the mean separation between particles, Δℓ₂ = σ_z/N, so that the corresponding characteristic frequency is

ω_d ≈ 2γ² (2πc/Δℓ₂) .   (17)

Since we are in the regime where Δℓ₂ ≪ ℓ_SR, we have

ω_d ≫ ω_c .   (18)

In the case of the Richter scale, ħω_c ≫ E, thus both ω_c and ω_d are kinematically forbidden 10). The same conclusion was reached by Blankenbecler and Drell 11) through explicit calculations.

As for CLIC, even though ħω_c < E, one can easily verify that ω_d is still so large that ħω_d ≫ E, and would not be seen. We therefore conclude that, over a wide range of beam parameters, the effect of granularity in the case of a dilute beam will not be seen.

4.3 Finite length of the target

Given the total power of radiation P(Υ) from an electron and the photon emission rate Ṅ(Υ), one can deduce scaling laws for various physical quantities related to beamstrahlung if another parameter T is introduced. This is because, dimensionally, the fractional energy loss per electron is

δ ~ P(Υ) Δt / E ~ P(Υ) σ_z / (cE) .   (19)

It is thus useful to introduce T as the available energy per unit length 9),

T ≡ E / σ_z .   (20)

From computer simulations, Noble 9) has deduced a set of remarkably simple scaling laws for beamstrahlung with negligible disruption, based on the two parameters Υ and T, which includes the following relations for the average energy loss δ, average photon number ⟨N_γ⟩, and average photon energy ⟨ħω/E⟩:

δ = ⟨ΔE/E⟩ = (2/3) (α mc² / λ̄_C T) g(Υ) ,
⟨N_γ⟩ = [5/(2√3)] (α mc² / λ̄_C T) h(Υ) ,   (21)
⟨ħω/E⟩ = [4/(5√3)] g(Υ)/h(Υ) ,

where

g(Υ) = { Υ² , Υ ≪ 1 ;  0.556 Υ^{2/3} , Υ ≫ 1 }

and

h(Υ) = { Υ , Υ ≪ 1 ;  1.012 Υ^{2/3} , Υ ≫ 1 }

are the well-known functions for the radiation power and the emission rate in SR. For intermediate values of Υ a numerical table for g(Υ) and h(Υ) is necessary; it can be found in the literature.
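For reference, the Python sketch below simply encodes the two asymptotic branches of g(Υ) and h(Υ) exactly as quoted above; it makes no attempt at the intermediate-Υ interpolation, for which the text refers to numerical tables in the literature, and the sharp switch at Υ = 1 is only a placeholder.

```python
def g(upsilon):
    """Radiation-power function: ~Upsilon^2 (classical), ~0.556*Upsilon^(2/3) (quantum)."""
    return upsilon**2 if upsilon < 1 else 0.556 * upsilon**(2/3)

def h(upsilon):
    """Emission-rate function: ~Upsilon (classical), ~1.012*Upsilon^(2/3) (quantum)."""
    return upsilon if upsilon < 1 else 1.012 * upsilon**(2/3)

for u in (0.1, 0.5, 2.0, 10.0):
    print(f"Upsilon = {u:5.1f}   g = {g(u):8.3f}   h = {h(u):8.3f}")
```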

4.4 The effect of disruption

In reality both the e⁺ and the e⁻ beam pinch each other into smaller sizes during the collision, forcing the value of Υ to change. For disruption parameters D_x, D_y,

(1/σ_x) dσ_x/dz = −D_x/σ_z ,   (1/σ_y) dσ_y/dz = −D_y/σ_z .   (22)

It can be shown that for a given aspect ratio R, D_x = D_y/R ≡ D/R. Therefore the bunch size after penetrating a distance σ_z is related to its initial size by

σ′_x = σ_x e^{−D/R} ,   σ′_y = σ_y e^{−D} ,   (23)

and the aspect ratio is changed to

R′ = σ′_x/σ′_y = R e^{D(R−1)/R} .   (24)

The beamstrahlung parameter Ῡ in Eq. (15) is therefore modified into

Ῡ_D = [(1 + R) e^D / (1 + R e^{D(R−1)/R})] Ῡ ,   for D ≲ 1 .   (25)

To generalize this expression to arbitrary values of D, we replace the factor e^D by a general function H_D^{1/2}(D,R), whose functional behaviour can only be obtained numerically. So for arbitrary D and R we have

Ῡ_D = [(1 + R) H_D^{1/2}(D,R) / (1 + R H_D^{(R−1)/2R})] Ῡ .   (26)

In the limit of round beams, R = 1, and

Ῡ_D = H_D^{1/2} Ῡ .   (27)

This equation for the round-beam limit agrees with the corresponding expressions in the literature 12).

Taking again the example of CLIC, where D ≈ 0.91 and H_D = 3.5, we find Ῡ = 0.16 and Ῡ_D = H_D^{1/2} Ῡ ≈ 0.29. Plugging Ῡ_D into the scaling laws of Eq. (21), we find that δ = 0.10, ⟨N_γ⟩ = 2.17, and ⟨ħω/E⟩ = 0.048. Other quantities obtained from a computer simulation 13) show that the mean center-of-mass energy squared is ⟨s/s₀⟩ = 0.85, with an r.m.s. spread of s/s₀ of 0.18.
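A one-line numerical check of the round-beam relation (27) with the CLIC values quoted above (H_D = 3.5 taken from the text):

```python
import math
H_D, upsilon = 3.5, 0.16               # disruption enhancement and undisrupted Upsilon (CLIC)
upsilon_D = math.sqrt(H_D) * upsilon   # Eq. (27), round-beam limit R = 1
print(f"Upsilon_D ~ {upsilon_D:.2f}")  # ~0.30, compared with the 0.29 quoted in the text
```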

5. BUNCHLETS AND BEAMSTRAHLUNG SUPPRESSION

From the discussions above we see that there would be substantial energy loss and energy spread through beamstrahlung in future linear colliders. The available center-of-mass energy for the colliding beams would therefore be less, and the energy resolution degraded. One way to suppress the beamstrahlung is to partition a typical bunch into a train of bunchlets such that the nature of beamstrahlung departs from synchrotron radiation.

Notice that in the quantum regime of SR the beamstrahlung energy loss is δ ≈ 0.37 α (mc²/λ̄_C) Υ^{2/3}/T. Since both Υ ∝ σ_z⁻¹ and T ∝ σ_z⁻¹, it is clear that there will be less energy loss for smaller σ_z if all other parameters are fixed. Therefore, even in the range where σ_z ≫ ℓ_SR, the situation is in favor of short bunches when Υ > 1. What we are suggesting here is to go beyond this point and to work in the parameter range where σ_z < ℓ_SR. In this limit the external field becomes so short that the edge effects of the field play an essential role in the radiation process. The entire target bunch acts more like a nucleus, and the radiation becomes more bremsstrahlung-like.

First let us compare the following two situations: a magnet of length L, and a similar magnet cut into two halves. Let the two shorter magnets be separated longitudinally such that no interference between the radiation from the separate magnets occurs. In the classical regime, where Υ ≲ 1, the total power of the radiation is the same in the two cases, except that the shorter magnets tend to suppress the lower-frequency part of the spectrum in favor of higher frequencies. In the extreme limit where the original magnet is cut into a large number of short magnets with length L* < ℓ_SR, the radiation power spectrum becomes independent of ω up to a maximum frequency ω* ~ ω_c (ℓ_SR/L*) (Fig. 3). Radiation is therefore suppressed for ħω* > E = γmc², or equivalently for

Υ > (2/3)(L*/ℓ_SR) .   (28)

Under this condition, the high-frequency spectrum beyond the kinetic energy of the radiating particle is energetically forbidden, and the total radiation power is reduced.

Fig. 3. The radiation power spectrum of bunchlets in the two asymptotic limits. The cut-off frequency ω* is related to ω_c by ω* = ω_c (ℓ_SR/L*).

To invoke this radiation-suppression mechanism in beamstrahlung, let us recall that, in terms of Υ,

ρ/γ = γ λ̄_C / Υ .   (29)

For Υ > 1, and for radiated photons at the kinematic limit E, the convergence length is therefore

ℓ_SR(ħω ≈ E) ≈ (ρ/γ)(ω_c/ω)^{1/3} ≈ (3/2)^{1/3} γ λ̄_C Υ^{−2/3} .

Assuming that a bunch of length σ_z is now partitioned into n bunchlets, each of length σ*_z, the requirement for beamstrahlung suppression is then

σ*_z < (3/2)^{1/3} γ λ̄_C Υ^{−2/3} .   (30)

Next we insist that there be no constructive interference between the radiation from the separate bunchlets. For this purpose we require a photon radiated at the end of a bunchlet to travel long enough through free space that, before both the radiating particle and the photon reach the next bunchlet, there is a π/2 relative phase difference between the two.

Taking into account the Doppler shift, this translates into the following relation for interbunchlet spacing:

Δℓ* ≥ (π/2) γ λ̄_C .   (31)

For a 1 + 1 TeV collider, Δℓ* ≈ 1.2 μm. In the particular case where Υ ~ 1, the other condition [i.e. Eq. (30)] requires that σ*_z ≲ (3/4) Δℓ*.
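A small numerical sketch of conditions (30) and (31) for a 1 + 1 TeV collider (γ ≈ 2 × 10⁶), taking Υ ~ 1 as in the text; only the reduced Compton wavelength is needed as input.

```python
import math
lamC, gamma, ups = 3.86e-13, 2e6, 1.0        # reduced Compton wavelength [m], 1 TeV electrons, Upsilon ~ 1
dl_star = (math.pi / 2) * gamma * lamC                    # Eq. (31): minimum inter-bunchlet spacing
sz_star = (1.5)**(1/3) * gamma * lamC * ups**(-2/3)       # Eq. (30): maximum bunchlet length
print(f"inter-bunchlet spacing >= {dl_star*1e6:.2f} um")  # ~1.2 um
print(f"bunchlet length        <  {sz_star*1e6:.2f} um")  # ~0.9 um, roughly 3/4 of the spacing
```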

Finally, since the power spectrum of this bunchlet arrangement approaches a constant (i.e. becomes independent of ω) in the asymptotic limit, the photon emission rate is thus ∝ 1/ω, and we expect that the r.m.s. center-of-mass energy spread (i.e. the energy resolution) should also be improved, because the emission probability for hard photons is largely suppressed. Technically, it may be feasible to produce such bunchlet trains by some kind of laser or FEL at ~ 1 μm wavelength while the beam is still at reasonably low energy. More studies are necessary before one can be certain that this is a promising scheme.

ACKNOWLEDGEMENTS

The author is grateful for many helpful discussions with Dr. J.S. Bell of CERN, Dr. K. Yokoya of KEK, and Drs. T. Himel and P.B. Wilson of SLAC.

REFERENCES

1) R. Hollebeek, Nucl. Instrum. Methods 184, 333 (1981).
2) See, for example, T. Himel and J. Siegrist, in "Laser Acceleration of Particles", eds. C. Joshi and T. Katsouleas, AIP Conference Proceedings No. 130 (1985).
3) See, for example, P.B. Wilson, ibid., Ref. 2.
4) B. Richter, IEEE Trans. Nucl. Sci. 32, 3828 (1985).
5) W. Schnell, CERN-LEP-RF/86-06 and CLIC Note 13, 1986. The parameters quoted in our paper are a recent modification by Schnell.
6) V.N. Baier and V.M. Katkov, Soviet Phys. JETP 28, 807 (1969).
7) J.S. Bell, private communication.
8) A.A. Sokolov and I.M. Ternov, "Synchrotron Radiation", Pergamon Press, 1968.
9) R.J. Noble, SLAC-PUB-3871, 1986; to appear in Nucl. Instrum. Methods.
10) P. Chen and R.J. Noble, SLAC-PUB-3842, 1985.
11) R. Blankenbecler and S.D. Drell, SLAC-PUB-4186, 1987; submitted to Phys. Rev. D.
12) U. Amaldi, Nucl. Instrum. Methods A243, 312 (1986); and P.B. Wilson, SLAC-PUB-3985, 1986.
13) The computer simulation performed by the author is based on the code developed by K. Yokoya, Nucl. Instrum. Methods A251, 1 (1986).

PHYSICS AND DETECTORS AT THE LARGE HADRON COLLIDER AND AT THE CERN LINEAR COLLIDER

Ugo Amaldi CERN, Geneva, Switzerland

1. INTRODUCTION

2. LHC AND CLIC PARAMETERS 2.1 The LHC design 2.2 The CLIC concept 2.3 Luminosity comparisons

3. EXTRAPOLATION OF KNOWN PHYSICS TO THE LHC AND CLIC 3.1 The proton-proton channel 3.2 The electron-proton channel 3.3 The electron-positron channel

4. PHYSICS TARGETS AND ACHIEVABLE LIMITS 4.1 The theoretical framework and its questions 4.2 The relevant cross-sections 4.3 Overview of the discovery limits

5. DETECTORS 5.1 Track detectors at the LHC 5.2 Calorimetry for hadron colliders 5.3 Triggering and data acquisition

6. THE INTERFACE BETWEEN ACCELERATORS AND DETECTORS 6.1 The proton-proton LHC interaction regions 6.2 Electron-proton collisions at the LHC

6.3 The CLIC final focus

7. CONCLUDING REMARKS

REFERENCES

1. INTRODUCTION

The workshop held in La Thuile (Val d'Aosta, Italy) from 7 to 10 January 1987 was the end-point of the activity started many months ago by the Physics and Detector Advisory Panel chaired by John Mulvey. The Panel acts in the framework of the CERN Long Range Planning Committee (LRPC), set up by the CERN Council under the chairmanship of Carlo Rubbia. Since the beginning, the Advisory Panel has subdivided the subject among eight working groups. They are listed in Table 1 together with their conveners. On 12 and 13 January, in the CERN Auditorium, reports were presented by the conveners. At the end of the meeting the main points were combined in a summary talk, which was addressed to an audience of machine physicists, engineers, experimentalists and theorists.

Table 1

Working groups and conveners

Group                                     Convener(s)

Physics
  1. Standard Model                       G. Altarelli and D. Froidevaux
  2. Beyond the Standard Model            J. Ellis and F. Pauss
  3. Large cross-sections                 Z. Kunszt and W. Scott

Detectors
  Jet detector                            T. Åkesson
  Vertex detector and tracking            D.H. Saxon
  Particle identification                 F. Palmonari
  Triggering and data acquisition         J.R. Hansen

Interaction regions                       J.E. Augustin, W. Kienzle and W. Bartel

Leaving aside the usual apologies for the unavoidable incompleteness, in the written version I closely follow the oral presentation. After giving the main parameters of the Large Hadron Collider (LHC) and the CERN Linear Collider (CLIC) considered by the other two Advisory Panels (chaired by G. Brianti and K. Johnsen), I review the status of our knowledge and the main physics questions which are in front of us. Then, following the approach chosen by the physics working groups and using a relatively wide spectrum of possible 'physics scenarios', I summarize the limits which can be put on twelve different phenomena by performing experiments at the LHC and at CLIC. Finally, I comment on the main problems facing those who will have to build suitable detectors, and I point out the highlights among the results obtained by the Detector Groups and the Interaction Regions Group. As usual, the summary is filtered by my perception of what is really important, and only the reading of the proceedings can give a balanced view of what has been achieved.

The workshop was particularly stimulating because, for once, instead of concentrating on the physics questions which one particular accelerator can help to answer, we were obliged to discuss the relative merits of two very different colliders: the Ecm = 16 TeV proton-proton LHC to be installed in the LEP tunnel (and its electron-proton option) and CLIC, the electron-positron machine with Ecm = 2 TeV, which could be built, given enough R&D investment and time, using technologies already partially developed at CERN. At the workshop the studies were devoted to physics and detector issues without any reference either to the cost or to the time-scale of the colliders; the results of the comparison I shall present are thus an essential, but far from unique, component in the decision-making process which aims at choosing the European accelerator for the end of the century. The physics possibilities offered by the LHC were already analysed at the Lausanne workshop [1], whilst the promise of a TeV electron-positron collider had never been systematically studied, even if a few papers had discussed it before now [2, 3]. Going back to the summer of 1985, when the CERN LRPC was set up, certainly the most widespread opinion on the issue of an LHC-CLIC comparison was: 'We (almost) know how to build the LHC but not its detectors, whilst we (almost) know how to build detectors for CLIC but not the collider itself.' The work done by the Advisory Panels chaired by G. Brianti, K. Johnsen and J. Mulvey has certainly modified this simplistic vision. I shall come back to this point in the concluding remarks.

2. LHC AND CLIC PARAMETERS

2.1 The LHC design

Table 2 contains the essential parameters of the present design of the Large Hadron Collider [4]. It is probable that an even higher luminosity, up to L = 10³⁴ cm⁻² s⁻¹, can be reached.

Table 2

Main parameters of LHC proton-proton collisions

Quantity                                     Value

Centre-of-mass energy, Ecm                   16 TeV
Luminosity, L                                1.4 × 10³³ cm⁻² s⁻¹
Number of interaction regions                < 7
Filling time                                 2 h
Filling rate                                 2 d⁻¹
Time separation between two crossings        25 ns
Number of interactions per crossing          2.5
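As a rough consistency check of Table 2 (not part of the original report), the sketch below estimates the number of interactions per bunch crossing from the luminosity and bunch spacing; the inelastic cross-section of about 70 mb is an assumed value (the text later quotes a total cross-section of ~100 mb).

```python
# Interactions per crossing: n = L * sigma_inel * Delta_t
L        = 1.4e33   # luminosity [cm^-2 s^-1]
sigma_in = 70e-27   # assumed inelastic pp cross-section [cm^2] (70 mb)
delta_t  = 25e-9    # bunch spacing [s]
print(f"interactions per crossing ~ {L * sigma_in * delta_t:.1f}")   # ~2.4, vs 2.5 in Table 2
```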

The centre-of-mass energy appearing in the table corresponds to a 10 T field in the bending magnets. The superconducting magnets built for HERA, when cooled to 1.7 K, reach about 7 T. At present an intense R&D program is going on, in collaboration with various European industries, to design and construct models of the needed 10 T magnets.

LHC proton bunches can be made to collide with LEP electron (or positron) bunches at an energy Ecm = 1.3-1.7 TeV and a luminosity which decreases with energy, varying in the range L = 10³²-10³¹ cm⁻² s⁻¹. Electron and proton bunches will cross at a frequency of 6 MHz, corresponding to a time separation between two crossings of 165 ns.

2.2 The CLIC concept

By now the LHC, and its electron-proton option, are well studied and documented. Most accelerator problems have been tackled and solved, and there is no need here to go into more detail. On the other hand, electron-positron linear colliders are still at the level of 'concepts', requiring not only the understanding and solving of some essential issues, but also analytical and numerical calculations, model work, prototype development and testing. According to the CLIC Panel, three to five years of intensive work are needed to transform the CLIC concept into a design that could be used in a decision-making process [5]. Moreover, the successful running of the Ecm = 0.1 TeV SLAC Linear Collider (SLC) is an essential condition for any further development. Given the novelty of the scheme, I devote a few paragraphs to describing it. The upper part of Fig. 1 schematically shows the main components of a linear collider. A few comments are in order: (i) in most schemes the positrons are obtained by using the spent bunches; (ii) the damping rings, which are needed to reduce the invariant emittance εn of the positron bunches, are not a small component of the system and may be a few kilometres long; (iii) high gradients in the main linac are certainly useful, but a viable collider requires much more than an idea on how to produce such high gradients; (iv) the value β* of the β-function at the interaction point determines, together with εn, the luminosity [L ∝ (εn β*)⁻¹], and small values of β* are preferred; however, the 'natural' scaling law is β* ∝ √Ecm, and in all schemes the final focus poses difficult problems. I shall come back to this question in subsection 6.3.



Fig. 1 Schematic representation of a linear collider complex (damping rings, main linac of 5-1000 GeV). In practice the positron source will make use of either the accelerated or the spent bunches. The electron damping-ring system will probably not be needed, because low-emittance sources are now available and, for this reason, it is dashed in the figure. The lower part of the figure represents the two-stage accelerator proposed by W. Schnell, which uses superconducting cavities at 350 MHz (LEP 200) to power the drive beam [7].

It is generally recognized that, to achieve high luminosities (L = 10³⁴ cm⁻² s⁻¹), the ideal solution would be a fully superconducting (SC) linear collider running almost d.c., with a field frequency in the range 1000 < fRF < 1500 MHz and a quality factor Q > 3 × 10¹⁰ [6]. Unfortunately, quality factors are at present ten times smaller and the maximum gradients achieved in a single cell are < 20 MV/m, so that the total length of an Ecm = 2 TeV collider based on possible extrapolations of the available technology would be about 100 km. Whilst the struggle towards higher gradients in superconducting structures may be helped by the new type of 'warm' superconductors recently discovered, other solutions have to be pursued. New ideas have been proposed and developed [7] during the studies of the CLIC Panel; they still make use of the technology of superconducting cavities, well known at CERN, but indirectly, and can thus provide much higher gradients for acceleration in a very-high-frequency copper structure. The CLIC Panel is at present concentrating on the proposal by W. Schnell, which is schematically drawn in the lower part of Fig. 1. The superconducting (SC) cavities developed for LEP 200 (fRF = 350 MHz, λ = 85 cm) accelerate a low-energy (3-5 GeV) drive beam. Trains of electron pulses follow each other at a repetition frequency fr = 6 kHz. Each train is formed of four pulses separated by a distance λRF = 85 cm; each pulse is made of about ten very short electron bunchlets which are about 1 cm apart and contain about 4 × 10¹¹ electrons each (in the figure, for simplicity, only five bunchlets are shown). These bunchlets give energy to coupling cavities resonating at the 30 GHz frequency corresponding to the spacing λ = 1 cm of the bunchlets in a pulse. The electromagnetic field produced by the 30 GHz cavities is then fed to a copper accelerating structure of the main linac, which is about 20 cm long and 1 cm across. The average accelerating field in this structure is of the order of 80 MV/m for a field in the SC cavities of 7 MV/m, which is achievable today. The gradient in the 30 GHz structure increases in proportion to the field in the superconducting low-frequency cavities, so that about 150 MV/m can be foreseen for the near future. In the two cases (80 and 150 MV/m) the length of each linac would be about 15 km and 8 km, respectively.
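A back-of-the-envelope check of the quoted linac lengths (my own arithmetic, ignoring fill factor and other overheads, which is why the text's totals of ~15 km and ~8 km are somewhat larger):

```python
# Active length of one CLIC linac: 1 TeV per beam divided by the gradient.
E_beam = 1.0e12   # energy per beam [eV]
for gradient in (80e6, 150e6):   # accelerating field [V/m]
    print(f"gradient {gradient/1e6:5.0f} MV/m -> active length ~ {E_beam/gradient/1e3:5.1f} km")
# ~12.5 km and ~6.7 km of active structure, consistent with the quoted totals.
```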

Table 3

Main parameters of the CLIC electron-positron collider

Quantity                                    Value                  With respect to SLC

Centre-of-mass energy, Ecm                  2 TeV                  SLC × 20
Luminosity, L                               10³³ cm⁻² s⁻¹          SLC × 150
Number of interaction regions a)            1                      SLC × 1
Repetition frequency, fr                    5.8 kHz                SLC × 35
RF frequency, fRF                           30 GHz                 SLC × 10
r.m.s. bunch length, σz                     0.5 mm                 SLC / 2
Number of particles per bunch, N            5.4 × 10⁹              SLC / 10
β-value at the collision, β*                3 mm                   SLC / 2
Invariant emittance, εn                     2 × 10⁻⁶ m             SLC / 10
r.m.s. final spot radius, σ                 65 × 10⁻⁹ m            SLC / 25
Power per beam, P                           5 MW                   SLC × 50
Disruption parameter, D                     0.9                    SLC × 1
Average energy of beamstrahlung γ's         45 GeV
Beamstrahlung power per beam, Pγ            0.5 MW

a) More interaction regions can be served by sharing the luminosity.

The main parameters of the CLIC collider as foreseen at the time of the workshop are collected in Table 3. The comparison with the SLC parameters is useful in order to focus on the main challenges. In particular, a repetition frequency larger by a factor of 35, which is possible because of the use of SC cavities which run continuously, implies a roughly proportionally larger number of damping rings, such that the sum of their circumferences is a few kilometres. Either the ISR or the LEP tunnel could house the damping system [8]. The needed invariant emittance εn is similar to what can be obtained today at the best synchrotron light sources. Extrapolating from SLC values with the natural scaling law (β* ∝ √Ecm) gives, at 2 TeV, a β* eight times larger than what is needed, and the larger fractional momentum spread of CLIC complicates the design of the final focusing system. The two parameters εn and β* define the transverse dimensions of the colliding bunches (σ = √(εnβ*/γ)). At CLIC the spot should be 25 times smaller than at the SLC, which implies extremely tight alignment tolerances and very sophisticated feedback systems.

2.3 Luminosity comparisons

The power radiated at the interaction point by an electron (positron) in the intense magnetic and electric fields of the opposite bunch (beamstrahlung) gives a widening of the distribution of the electron-positron effective centre-of-mass energies Êcm, as indicated by the continuous curve of Fig. 2, computed for an energy spread at the end of each linear accelerator of ΔE/E = 0.5%. Note that in the figure the effective energy dependence of the luminosity is represented by the behaviour of the function F(Êcm), which is such that the number of interactions per second is

rate = L ∫ F(Êcm) σ(Êcm) dÊcm .   (1)

Fig. 2 Parton-parton normalized luminosities at the LHC and CLIC as a function of the effective centre-of-mass energy Êcm [10]. The exact definition of F(Êcm) is given in Eq. (1). The CLIC curve assumes an energy spread at the end of the linacs equal to 0.5% and has a FWHM of 20 GeV. (The LHC and CLIC curves have been computed by A. Nandi and P.T. Cox, respectively.)

F(Êcm) was computed at the workshop by P.T. Cox using a modified version of the Yokoya program [9]. The peak in Fig. 2 is about 20 GeV wide (FWHM) and there is a long tail at low energies. Clearly the collider is working in a 'narrow-band' regime, in spite of the relatively large average photon energy (Table 3). This is good news.

It is interesting to compare the CLIC normalized differential luminosity F(Êcm) with that of the LHC. The LHC curves of Fig. 2 show that, as is well known, at low energies the gluon-gluon luminosity is larger than the quark-quark luminosity. Moreover, the figure points out that, for a centre-of-mass subenergy equal to that of CLIC, the LHC is a quark-quark rather than a gluon-gluon collider. Photon-photon physics is already playing an important role at present electron-positron storage rings. The dotted curve of Fig. 2 shows the relevant luminosity at CLIC. As we shall see better in the following, at TeV colliders the focus will be on a similar process, in which the initial fermions (leptons or quarks) radiate intermediate vector bosons. The two lower curves of Fig. 2 give the energy dependence of the luminosity for the collision of two longitudinally polarized intermediate bosons WL at the LHC and at CLIC. It is seen that they differ by a factor of about 7, so that the rates are higher at the LHC. But the quark-quark and gluon-gluon luminosities are almost six orders of magnitude larger still, already indicating that at the LHC, for the rare phenomena which can be produced in the WLWL channel, the problem will be background rather than rate.

3. EXTRAPOLATION OF KNOWN PHYSICS TO THE LHC AND CLIC

To examine the potentialities of accelerators running in a new energy regime, it is necessary to extrapolate established knowledge to higher energies. Today we are in a situation where the Standard Model provides a reliable framework for such an extrapolation, so that one can confidently compute the backgrounds due to known processes and compare them with the expected rates of the 'new' physics. I devote the next subsections to a review of the energy extrapolation of the main quantities describing proton-proton, electron-proton and electron-positron collisions.

3.1 The proton-proton channel

The nucleon-nucleon total cross-section continues the rising trend seen at the ISR and confirmed at the pp̄ Collider. At the LHC it is expected to be σtot ≈ 100 mb (Fig. 3). The total cross-section is dominated by complicated 'soft' phenomena in which the final energetic particles form small angles with the beam direction [10]. The production of jets of hadrons is a more dangerous background for any new physics. Jets in hadron colliders are usually defined by the total transverse momentum pT with respect to the collision axis. As indicated by the lower curve of Fig. 3, the jet cross-section has a strong energy dependence, i.e. a behaviour similar to the point-like cross-section characterizing the electron-positron channel to be discussed in subsection 3.3. (The cross-section σjet plotted in Fig. 3 refers to jets having a transverse momentum larger than a constant fraction of the energy: xT = 2pT/Ecm > 0.06.) At the LHC the jet cross-section for pT > 500 GeV is 2 × 10⁻³³ cm² = 2 nb, which corresponds to a rate of 2 events/s at the nominal luminosity of L = 10³³ cm⁻² s⁻¹. This is about the rate of events which can still be written on today's magnetic tapes. Note that such a rate, being eight orders of magnitude smaller than the total interaction rate, imposes a sophisticated trigger for the on-line selection.

Fig. 3 Proton-proton cross-section as a function of the centre-of-mass energy. The dotted lines represent the extrapolations of the total cross-section. The continuous line is the cross-section for producing a jet having transverse momentum pT = xT Ecm/2 > 0.03 Ecm [10].

Jet events contain a very large number of particles: the most probable value of the number of charged particles is about 100 (Fig. 4a). These secondary hadrons form many jets, as shown in Fig. 4b. [Here, as in Ref. 10, a jet is technically defined by the conditions ΔR = (Δη² + Δφ²)^{1/2} < 0.2, θ > 5°, where η and φ are the pseudorapidity and the azimuthal angle, respectively.] The most probable number is six, without counting the two beam jets which go forward. About 1% of the events are expected to have more than 18 jets, and the corresponding rate is about 50 per hour. The jet angular distribution, plotted in Fig. 4c, is peaked forward and backward.

Fig. 4 Summary of some of the main properties of proton-proton collisions at Ecm = 16 TeV [10]: (a) distribution of the number of charged particles nch; (b) distribution of the number of jets Nj (see text for the definition of 'jet'); (c) angular distribution of the jets with respect to the beam axis. (Computed by B. Webber and W. Kittel.)
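A quick sketch of the rate arithmetic quoted in this subsection; the assumption that the quoted 1% refers to the pT > 500 GeV jet sample is my own reading, so the last number is only an order-of-magnitude comparison.

```python
# Rate arithmetic for LHC jet events (subsection 3.1).
L         = 1e33     # nominal luminosity [cm^-2 s^-1]
sigma_jet = 2e-33    # jet cross-section for pT > 500 GeV [cm^2] (= 2 nb)
rate      = L * sigma_jet                # events per second
rate_18j  = 0.01 * rate * 3600           # assuming the 1% refers to these events
print(f"jet events per second:   {rate:.1f}")        # ~2/s
print(f">18-jet events per hour: {rate_18j:.0f}")    # ~70/h, same order as the ~50/h quoted
```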

3.2 The electron-proton channel

I consider the charged-current reaction e⁻ + p → νe + X caused by the exchange of a virtual charged intermediate boson W. Figure 5 shows that at low energy the total cross-section increases proportionally to Ecm², a well-known property shared with the inverse reaction νe + p → e⁻ + X, whose cross-section increases linearly with Eν = Ecm²/2mp. At higher energy the cross-section flattens owing to the finite mass of the exchanged W. The dash-dotted curve of Fig. 5 represents the total cross-section for momentum transfers Q² larger than 10⁵ GeV², i.e. Q > 0.32 TeV; Q is nothing else than the mass of the exchanged particle and, according to the Heisenberg relation d = 2 × 10⁻¹⁷ cm·TeV/Q, corresponds to a space domain having dimensions of the order of 5 × 10⁻¹⁷ cm. At LHC energies the cross-section for Q > 0.32 TeV is such that, with L = 10³² cm⁻² s⁻¹, one expects about one event per hour.
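A one-line numerical illustration of the Heisenberg relation used above:

```python
# d ~ 2e-17 cm * TeV / Q, as quoted in the text.
for Q_TeV in (0.1, 0.32, 1.0):
    print(f"Q = {Q_TeV:4.2f} TeV  ->  d ~ {2e-17 / Q_TeV:.1e} cm")
# Q = 0.32 TeV probes distances of order 6e-17 cm (~5e-17 cm with the text's rounding).
```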

Fig. 5 Electron-proton cross-sections as a function of the centre-of-mass energy. The continuous line refers to charged-current inclusive events. The dash-dotted line is for a cut at Q² > 10⁵ GeV². (Computed by R. Rückl.) The thick dashed and continuous line gives the energy dependence of the jet cross-section for xT > 0.06. (Computed by Z. Kunszt.)

The angular distribution of the hadron jets having xT > 0.06, i.e. pT > 45 GeV at Ecm = 1.5 TeV, is shown in Fig. 6, which quantifies the well-known fact that the jets tend to keep the direction of the incoming protons: electron-proton detectors have to be asymmetric in order to measure accurately hadron jets produced within a few degrees of the proton beam.


Fig. 6 Angular distribution of the jets with respect to the proton direction at the electron-proton option of the LHC. Most of HERA physics is here confined to θj < 5°. (Computed by M. Holder.)

3.3 The electron-positron channel

The continuous curve of Fig. 7 represents what is known today of the hadronic 'annihilation' cross-section, i.e. the sum of the cross-sections of all the processes in which the positron and the electron disappear, having annihilated. The figure is taken from Ref. [8] and is based on experimental data up to Ecm ≈ 45 GeV and on the predictions of the Standard Model above 45 GeV. Since the overall trend is roughly parallel to the dashed curve, which represents the 'point-like' cross-section applicable to μ⁺μ⁻ and τ⁺τ⁻ production, the figure proves that, in the explored energy range, quarks (and leptons) are point-like particles. The four sets of peaks appearing below about 10 GeV are due to the pair-creation of quark-antiquark pairs, which together have the same quantum numbers as the electron-positron pair and, immediately after production, whirl thousands of times one around the other in a resonance state. The four sets are due to the production of unstable bound states of the quark pairs uū and dd̄ (ρ and ω), ss̄ (φ), cc̄ (J/ψ) and bb̄ (Υ). We would like to complete the figure by plotting the set of resonances due to tt̄ bound states, but at present we only know that they must lie above Ecm = 45 GeV and below Ecm ≈ 200 GeV. There are also indications that the lower limit could be as high as ~ 80 GeV. The Z peak is not only taller than the others but, according to the Standard Model, it is also of a very different nature, because the Z boson is a single elementary particle having the quantum numbers of the initial electron-positron pair and not a composite system. After climbing the Z⁰ peak, LEP 200 will open the possibility of exploring the much smaller shoulder due to the production of pairs of charged bosons: e⁺ + e⁻ → W⁺ + W⁻. The point-like cross-section decreases as Ecm⁻², so that at CLIC it is ≈ 2.2 × 10⁻³⁸ cm²: by running one third of a calendar year (T = 10⁷ s) at a luminosity L = 10³³ cm⁻² s⁻¹, only 220 pairs would be collected. Figure 7 shows that at CLIC the rate of production of W pairs in a 4π detector is 50 times larger than for μ pairs, whilst the production of single W's is 500 times larger. This cross-section increases with energy because the initial leptons (the electron and the positron) do not annihilate, but are still present in the final state as an electron (or positron) plus an antineutrino (or neutrino). It is difficult to say whether such a large production rate will be 'useful' for physics or should only be considered as a source of background.
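The event counting above is a simple product of cross-section, luminosity and running time; a small sketch, using the factors of 50 and 500 quoted in the text for W pairs and single W's:

```python
# Event counting at CLIC for the point-like cross-section quoted above.
sigma_point = 2.2e-38     # point-like (mu-pair) cross-section at 2 TeV [cm^2]
L, T        = 1e33, 1e7   # luminosity [cm^-2 s^-1] and running time [s] (one third of a year)
n_mu = sigma_point * L * T
print(f"mu pairs per run: {n_mu:.0f}")        # ~220
print(f"W pairs per run:  {50 * n_mu:.0f}")   # using the factor ~50 quoted in the text
print(f"single W's:       {500 * n_mu:.0f}")  # using the factor ~500 quoted in the text
```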

Fig. 7 Electron-positron cross-section as a function of the centre-of-mass energy. The dashed line is the point-like cross-section. The continuous line is measured up to about 45 GeV and computed above. (Figure taken from Ref. [8].) The dashed and the dotted lines increase with energy because they represent the cross-sections of phenomena in which the two initial leptons do not annihilate. (Computed by W. Scott.)

The dotted line in the top right-hand corner of Fig. 7 gives the total cross-section for the 'two-photon' channel e⁺ + e⁻ → e⁺ + e⁻ + μ⁺ + μ⁻. In this reaction the initial leptons survive and form very small angles with respect to the beam direction. The events can easily be distinguished from 'annihilation' events, and the fact that the cross-section is about seven orders of magnitude larger is not really a problem. By comparing Fig. 7 with Fig. 3 we notice that: (i) both in hadron-hadron and in electron-positron collisions the 'interesting' cross-sections of the known point-like phenomena decrease as Ecm⁻², whilst for the 'uninteresting' reactions, which send most of the particles in the direction of the beams, the cross-sections increase slightly with energy; (ii) for both the LHC and CLIC, these cross-sections are seven or eight orders of magnitude larger than the 'interesting' ones. The fact that the pp and e⁺e⁻ 'uninteresting' cross-sections differ by a factor of ~ 10⁶ is a direct consequence of the fact that strongly interacting partons are contained in a bag having a radius of about 10⁻¹³ cm and that the electromagnetic interaction is much weaker than the strong interaction. The distribution of the number of charged particles expected at CLIC is shown in Fig. 8a: the most probable value is about 60. These particles are grouped in jets, and the most probable number of jets is 4.5 (Fig. 8b). Their angular distribution (Fig. 8c) is flatter than in the proton-proton case (Fig. 4c).

Fig. 8 Summary of some of the main properties of electron-positron annihilations at Ecm = 2 TeV [10]: (a) distribution of the number of charged particles; (b) distribution of the number of jets Nj; (c) angular distribution of the jets with respect to the beam axis. (Computed by B. Webber and W. Kittel.)

4. PHYSICS TARGETS AND ACHIEVABLE LIMITS

Comparing the physics potentialities of two accelerators is a formidable task for at least three obvious, though fundamental, reasons: (i) the unknown cannot be predicted; (ii) even after having agreed on a list of 'expected' new phenomena, the relative importance is subjective; (iii) tomorrow's discovery may completely modify the 'relevance' weights given to the selected phenomena. Looking for inspiration in the past, I want to underline that almost exactly 25 years separated the only three great discoveries which were 'expected' and eventually found to be there: nuclear reactions produced by artificially accelerated protons (1932), antiprotons and antineutrons (1955-1956), and charged and neutral asthenons (1982-1983).

By linear extrapolation, I conclude that the next 'expected' discovery will take place around 2005-2007, too late to be relevant in the comparison of accelerators which should run before the end of the century. Probably history wants to tell us that it is wrong to try to compare LHC and CLIC potentialities on the basis of the 'new physics predicted' today. Still, for want of a better solution, this is what the physics groups have done; I shall here summarize the results, after having recalled the main theoretical ideas behind the choice of the expected 'new' physics.

4.1 The theoretical framework and its questions

The Standard Model gives a rationale to the existence of the weak, electromagnetic, and strong forces, but offers no explanation for the fact that we have observed only three families of fermions, the matter-particles which act as sources of the known force-particles: the photon (γ), the asthenons (W and Z), and the gluons (g). (I shall use here, as I did in the oral report and in Ref. [3], the term 'asthenon' to indicate a weak intermediate vector boson with a single word, which sounds more or less like 'photon' and 'gluon'. I am convinced that, five years after the W and Z discovery, such a symmetry in nomenclature is needed; perhaps a better neologism could be found.) A first very natural series of questions is thus:

Questions 1: Do heavier (sequential) leptons and quarks exist? What about heavier asthenons, both charged (W′) and neutral (Z′)?

Whatever the answer to these questions is, there is a fundamental problem related to the force sector. The Standard Model of the electroweak interaction is very successful in explaining a host of data and, at the same time, very unsatisfactory. This is true not only because it contains about twenty free parameters, but also because it needs an ad hoc mechanism, not yet confirmed experimentally, to justify the rest energies of the asthenons. At the fundamental level these bosons are very similar to photons. A photon has zero rest mass and thus, once emitted by a particle in a virtual process, can fly far away from its source, giving rise to a force of infinite range (Fig. 9a). Why is the range R of the weak force ~ 10⁻¹⁶ cm instead (Figs. 9b and 9c), i.e. why are the masses of the asthenons ~ 0.1 TeV? (In this case the Heisenberg relation reads R = 2 × 10⁻¹⁷ cm·TeV/0.1 TeV = 2 × 10⁻¹⁶ cm.) The answer theorists give is: because of the Higgs mechanism. According to the prevailing point of view, all space is filled by a boson field which behaves as the collection of Cooper pairs in a superconductor. The macroscopic wave function of such a collection repels the virtual photons of an external magnetic field out of the metal, so that they do not penetrate into the conductor (Meissner effect). The Higgs field has a similar effect on the asthenons, but the other way around: since it fills all space, it pushes back the asthenons radiated by leptons and quarks, so that they can be felt only at less than 10⁻¹⁶ cm from their source (Figs. 9b and 9c). The Higgs field fills the physical vacuum because its self-interaction is so strong that the state of minimum energy is reached when the average field is non-zero. As always in quantum theory, there are quanta associated with fields, thus also with the Higgs field. They are called 'Higgs particles' or simply 'Higgses'. Like the Cooper pairs of normal superconductivity, they carry no intrinsic angular momentum; in other words, they are bosons of spin J = 0. In the simplest model there is only one neutral Higgs field, but it is well possible that charged Higgs fields are also present.

Fig. 9 Leptons (and quarks) radiate virtual photons γ (a), and virtual asthenons W (b) and Z (c), but, according to the Standard Model, the Higgs field pushes back the asthenons and obliges them to remain close to their sources, so that the range of the weak force is R ≈ 10⁻¹⁶ cm and not infinite, as for the photon.

Fig. 10 (a) WW scattering and (b) neutrino-quark scattering have cross-sections which increase with the centre-of-mass energy, but for different reasons.

The above arguments do not fix the mass of the neutral Higgs quantum. One of the main arguments in this direction comes from considering WLWL scattering at very high energies (Fig. 10a). If the neutral Higgs boson did not exist, the cross-section computed in perturbation theory would increase as the square of the centre-of-mass

energy (σ ∝ Ecm²) and would violate probability conservation (unitarity, in theoretical parlance) when Ecm > 1800 GeV. If a scalar (J = 0) Higgs particle exists, it will contribute to the virtual processes which take place in the dashed region at the centre of Fig. 10a: perturbative calculations show that in this case there is no unitarity violation provided the Higgs mass is smaller than about 1 TeV. There is also the possibility that perturbation theory does not apply and that new strong forces come into play at this mass scale. Anyway, the fact that both the LHC and CLIC are sources of WLWL collisions (Fig. 2) indicates that the new accelerators will give us the opportunity of studying this fundamental process directly. The argument which gives an upper limit for the mass of the Higgs looks simple, but one may wonder whether the reasoning is sound enough. We can find some confirmation by going back to past experience and recalling that once already unitarity guided us to a correct mass scale. As mentioned in subsection 3.2, for the neutrino-quark scattering shown in Fig. 10b the old theory of weak interactions, now superseded by the Standard Model, gives a cross-section which is proportional to Ecm², so that unitarity would be violated when the energy is larger than about 300 GeV. This was a fundamental problem already thirty years ago, and, while experimentalists were collecting data around 1 GeV, theorists were bold enough to conclude that, to cure this disease, something had to happen at energies smaller than about 300 GeV. They were right, and everybody now knows that probability is conserved in weak interactions because the exchanged W has a finite mass, so that the cross-section flattens at high energy, as shown in Fig. 5. Experience is thus telling us that unitarity arguments are very powerful, and the present indication of violation in WLWL scattering around 2000 GeV, while we are experimenting around 100 GeV, should not be lightly dismissed. This long argument focuses on The Problem of today's physics [11]:

Questions 2: Does the standard neutral Higgs exist? And if yes, which is its mass? Do charged Higgses exist?

Higgs particles could be bona fide Cooper pairs, so that they can be broken into two fermions at energies of the order of 1 TeV. For this, and other reasons, some theorists contemplate composite models, in which the known 'elementary' particles are made of more fundamental fermions. If a particle is made of two other particles, one expects to find their excited states, in the same way as the cc̄ quarks form both the fundamental state J/ψ and its excited states ψ′, ψ″, etc. Thus the questions arise:

Questions 3: Are quarks and leptons composite systems of other simpler fermions? Are W, Z, and Higgs themselves composite? Do excited leptons (ℓ*) and quarks (q*) exist?


Fig. 11 A Higgs particle H gets mass by virtual emission and absorption of (a) asthenons, (b) Higgs, and (c) fermion-antifermion pairs. The virtual boson contributions are positive, whilst the fermion ones are negative.

Table 4

Particles and sparticles

Particle            Symbol   Spin      Sparticle   Symbol   Spin

Quark               q        1/2       squark      q̃        0
Lepton              ℓ        1/2       slepton     ℓ̃        0
Photon              γ        1         photino     γ̃        1/2
Gluon               g        1         gluino      g̃        1/2
Charged asthenon    W        1         wino        W̃        1/2
Neutral asthenon    Z        1         zino        Z̃        1/2
Higgs               H        0         higgsino    H̃        1/2
Graviton            G        2         gravitino   G̃        3/2

Unfortunately, a consistent and unique theoretical framework for introducing compositeness does not exist. By necessity, composite models contain various parameters, whose definition has to be closely scrutinized before they are interpreted in terms of a spatial dimension by applying the Heisenberg uncertainty relation. To increase the confusion, usually all those parameters are indicated by the letter Λ, so that one meets symbols such as Λee, Λeq, Λqq, etc., which have different meanings in different contexts. In the following I shall use for simplicity a single symbol Λ, which has the dimension of an energy and can be used roughly to define the linear dimensions d at which the composite nature of the particles would become apparent, through the Heisenberg relation d ≈ 2 × 10⁻¹⁷ cm·TeV/Λ. Not many theorists are today following the composite route. Many more pursue the idea of a fundamental Higgs, and here they meet with a difficulty: while propagating through space, a Higgs particle H emits and reabsorbs various other particles (Fig. 11) and, through their interaction, picks up weight very easily. This is a general property of spin-0 particles, since there is nothing to protect them from becoming massive. Theorists are then faced with the problem of cancelling the many different contributions to the mass increase. As is shown in Fig. 11, the contributions due to virtual bosons (W, Z, H) are positive, whilst the contributions of virtual fermion pairs (ff̄) are negative. It can be shown that a miraculous cancellation occurs if each known boson has a related fermion (and vice versa) and their couplings are in a well-defined relation. Supersymmetry gives exactly the needed pattern: for each particle of integer (half-integer) spin there is a sparticle of half-integer (integer) spin (Table 4). If the cancellation is to be effective, the masses of the sparticles should not be much larger than ≈ 1 TeV, the known limit on the Higgs mass. Other arguments also point to the existence of sparticles, but we cannot discuss them here. We can thus express the following questions:

Questions 4: Do sparticles exist? If yes, what are their masses?

In the last few years, theorists have become very interested in superstring theories, in which fundamental particles are not point-like objects but strings about 10⁻³³ cm long. Such theories give rise to a consistent treatment of quantum gravity, fulfilling a long-standing dream, and to supersymmetry; but they are not unique. Some theorists think that in the cold world in which we live such a theory even reduces naturally to the Standard Model, but with some interesting additions. A particularly fashionable model foresees the existence of an additional neutral asthenon Z′, a set of other 'normal' particles (such as a neutrino, a neutral lepton and a quark of charge -1/3), and a new kind of particle which carries at the same time the 'flavour' of a lepton and of a quark. These predictions are very definite, but not all experts agree on their necessity and, in any case, the masses of these particles are unknown. The 'leptoquark' D₀ is a particularly interesting object: it is a boson (spin J = 0) which couples in a well-defined manner to a lepton and a quark. Thus:

Questions 5: Do leptoquarks of spin 1 and spin 0 exist? In particular, does the spin-0 leptoquark D₀ exist?

In summary, theorists indicate to machine builders and experimentalists the following 'discovery targets':
i) Higgs particles, both neutral (H⁰), with mass less than ~ 1 TeV, and charged (H±);
ii) sequential leptons and quarks;
iii) sparticles with masses less than about 1 TeV;
iv) various new particles, among them a particularly puzzling leptoquark D₀;
v) a new neutral asthenon Z′ of unknown mass and, possibly, new charged asthenons W′;
vi) compositeness, which reflects itself either in various compositeness scales Λee, Λeq, Λqq, or in
vii) the existence of excited states of quarks (q*) and of leptons (for instance e*).

4.2 The relevant cross-sections
Figures 12, 13, and 14 reproduce the main contents of Figs. 3, 5, and 7 with the addition of curves representing the energy dependence of the cross-sections of some of the most important 'new' phenomena. For the proton-proton channel I have chosen the production of an asthenon Z' having a mass mZ' = 1 TeV and a neutral Higgs with mH = 0.5 TeV. Figure 12 shows that at the LHC these cross-sections are respectively four and five orders of magnitude smaller than the jet cross-section computed for a fixed value of the transverse momentum (pT = 0.25 TeV). This is the relevant background cross-section, since one wants to cut the events having a pT smaller than a fixed fraction of the produced mass. The figure shows that the signal-to-background ratio does not improve when the c.m. energy increases. Higgs production is a relatively abundant process: for a luminosity L = 10^33 cm^-2 s^-1 and mH = 0.5 TeV, the expected rate is a few events per hour [11]. The problem is that the H° decays into jets and, as discussed in the next subsection, the signal is easily hidden under the 10^5 times larger background due to normal jet events [12]. Electron-proton collisions are ideal for producing leptoquarks. This is due to the large cross-section for D0 production plotted in Fig. 13, which at LHC energies for mD = 1 TeV is even larger than the charged-current cross-section for Q > 0.32 TeV. On the other hand, the exchange of a heavy charged asthenon W' having mW' = 1 TeV has a cross-section σ ≈ 0.1 pb. If the W' is associated with a neutrino which can be distinguished from the usual electron-neutrino νe, its production rate is very low, 1 event per day, but it may still be seen. If a νe is produced, then the major effect will come from the interference between W and W' exchanges, whose contribution to the cross-section is intermediate between the ones represented in Fig. 13 by the dashed and continuous lines. In electron-positron annihilations the superstring-inspired Z' appears as an enormous peak, which I have drawn in Fig. 14 for mZ' = 1 TeV. Clearly at CLIC it would be possible not only to see it but also to study it up to the available centre-of-mass energy, i.e. mZ' = 2 TeV.

The cross-section for a Higgs of mass mH = 0.5 TeV is large, since it corresponds to about 10^3 events per year at Ecm = 2 TeV and L = 10^33 cm^-2 s^-1.

Fig. 12 Proton-proton cross-sections as a function of the centre-of-mass energy. The continuous and dash-dotted lines are the computed cross-sections for a neutral Higgs meson and a Z' of masses mH = 0.5 TeV and mZ' = 1 TeV. (Computed by the Large Cross-Section Group.) The dotted line represents the jet cross-section for a fixed pT cut, relevant to the production of masses of the order of 1 TeV. (Computed by W. Scott and W.J. Stirling.)

Fig. 13 Electron-proton cross-sections as a function of the centre-of-mass energy. The continuous curves refer to the production of a leptoquark D0 and of an asthenon W' having masses of 1 TeV. (Computed by R. Rückl.) The other curves are taken from Fig. 5.

Fig. 14 Electron-positron cross-section as a function of the centre-of-mass energy. The peak due to a 'superstring-inspired' Z' is very high. Compositeness with a form-factor scale Λ = 0.5 TeV would have striking consequences on the total cross-section. Sparticles (ẽẽ, W̃W̃ and μ̃μ̃) are expected to be produced with cross-sections which are of the order of the point-like cross-sections (dashed areas). The thick dotted line represents the cross-section for the production of a neutral Higgs having a mass of 0.5 TeV, well within the range available at CLIC.

Compositeness would be signalled by a flattening of the electron-positron total cross-section, as indicated by the dash-dotted line of Fig. 14. For a value of the parameter Λ of the order of 0.5 TeV (i.e. for distances d ≈ 4 × 10^-17 cm), the effect shown in Fig. 14 is striking, so much so as to remind us of the surprise of the physicists working in 1969 at Adone when, thanks to the pair-production of point-like quarks, they found a hadron production rate which greatly exceeded the expectations. (The parameter Λ is the one appearing in the particle form-factors, and it is about a factor of 10 smaller than the parameter appearing in the composite model used at the workshop and discussed in Ref. [13].) In Fig. 14 the shaded areas indicate the cross-section ranges of three sparticle channels produced in the annihilation of an electron-positron pair. No unique value can be given because various parameters enter the calculations [13]. In general one can state that selectron pairs are produced about 10 times more abundantly than wino pairs, whose cross-section is of the same order as the point-like one (≈ 200 events per year for L = 10^33 cm^-2 s^-1). Smuons are expected to be somewhat rarer, but are very interesting because it was shown at the workshop that, if found in the reachable mass range, their bosonic nature (spin J = 0) can be proven by measuring the angular distribution of the decay muons. This is a unique and most interesting possibility of really pinning down supersymmetry, since in hadron colliders production of SUSY particles can only be seen through a missing-transverse-energy signal and no spin can be measured.
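The orders of magnitude quoted above can be checked with a minimal numerical sketch: the Heisenberg relation converts the form-factor scale Λ into a distance, and a cross-section of the point-like (lowest-order QED μ-pair) size translates into an annual event yield. The value of ħc, the 86.8 fb/(Ecm/TeV)^2 lowest-order formula and the 10^7 s operational year are my assumptions for the illustration, not numbers taken from the working groups.

    # Minimal sketch: compositeness scale <-> probed distance, and the yearly
    # yield of a point-like e+e- channel at CLIC.
    # Assumptions: hbar*c = 0.197 GeV*fm, an operational year of 1e7 s, and
    # sigma_point = sigma(e+e- -> mu+mu-) at lowest order, 86.8 fb/(Ecm/TeV)^2.

    HBARC_CM_TEV = 1.97e-17          # hbar*c expressed in cm*TeV

    def distance_cm(lambda_tev):
        """Distance probed by a form-factor scale Lambda (Heisenberg relation)."""
        return HBARC_CM_TEV / lambda_tev      # ~2e-17 cm * TeV / Lambda

    def pointlike_events_per_year(ecm_tev, lumi_cm2_s, seconds=1.0e7):
        """Events per year for a cross-section equal to sigma(e+e- -> mu+mu-)."""
        sigma_fb = 86.8 / ecm_tev**2          # lowest-order QED estimate
        integrated_fb = lumi_cm2_s * seconds * 1.0e-39   # 1 fb = 1e-39 cm^2
        return sigma_fb * integrated_fb

    print(distance_cm(0.5))                     # ~4e-17 cm for Lambda = 0.5 TeV
    print(pointlike_events_per_year(2.0, 1e33)) # ~200 events per year at 2 TeV

The two printed numbers reproduce the d ≈ 4 × 10^-17 cm quoted for Λ = 0.5 TeV and the ≈ 200 events per year quoted for a point-like cross-section at L = 10^33 cm^-2 s^-1.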

4.3 Overview of the discovery limits
In Fig. 15 I have collected the detection limits of the LHC and CLIC. For each of the twelve phenomena the four vertical bars represent the limits achievable with (i) proton-proton collisions at the LHC; (ii) electron-proton collisions at the LHC; (iii) e+e− collisions at CLIC with L = 10^33 cm^-2 s^-1; (iv) e+e− collisions with L = 4 × 10^33 cm^-2 s^-1. The phenomena follow the order presented at the end of subsection 4.1.

1. The first histogram refers to the discovery of the neutral Higgs H° expected in the simplest form of the Standard Model. For mH ≲ 0.2 TeV the two-quark decay H° → q + q̄ gives mainly two jets, and the workshop has confirmed what was already known: at a hadron collider there is no general way to pin down such a signal. For mH > 0.2 TeV the decay H° → W+ + W− is dominant; the very detailed analysis performed at the workshop shows that at the LHC the signal is certainly visible only for mH ≲ 0.6 TeV [11, 12]. Kinematics may help to extend this range, because the main production mechanism is the fusion of two charged asthenons to form a heavy Higgs meson: W+ + W− → H°. (The luminosity for such a process is shown in Fig. 2.) For this process the quarks radiating the virtual asthenons go forward, and their jets may be tagged by calorimeters placed at angles 2° < θ < 15°. The Calorimeter Detector Group concluded that this is possible (see subsection 5.2). In this case the usual jet background is greatly reduced, and it may be possible to see (dashed line in Fig. 15.1) a mass as large as the theoretical upper limit above which perturbation theory cannot be trusted (mH = 1 TeV). As for the LHC, the production of heavy neutral Higgs at CLIC is dominated by the asthenon fusion process, whose luminosity is plotted in Fig. 2. At CLIC the rate is relatively large and, at variance with the LHC, the background is negligible up to mH ≈ 0.8 TeV. The Standard Model Working Group concluded that in electron-positron collisions masses up to mH = 0.85 TeV (1.2 TeV) can indeed be reached if the luminosity is L = 10^33 cm^-2 s^-1 (4 × 10^33 cm^-2 s^-1) [11, 12]. Most important here is the observation that the 'low' mass range mH ≲ 0.2 TeV is fully available, because the decay H° → q + q̄ is not masked by the background as at all hadron colliders. I want to underline that LEP 200 will reach mH = 0.08 TeV. In summary, as far as the upper limit of the Higgs mass is concerned, it seems that CLIC is slightly superior to the LHC. However, the great advantage of e+e− collisions stems from the possibility of covering the theoretically very important window mH < 0.2 TeV, for which an energy Ecm = 0.5 TeV and a luminosity L > 2 × 10^32 cm^-2 s^-1 should be sufficient. The main question is then: are we really sure that such a mass range cannot be covered at the LHC? The Standard Model Working Group concluded that the electron-proton mode of the LHC does not help in solving the intermediate-Higgs problem [11, 12]. In the proton-proton mode a search for rare and characteristic decays (of the type H° → γ + γ and H° → Z° + γ) may help in discriminating against background events, but the present understanding is that it is not possible to guarantee that the signal will be seen. However, this may be possible in some special cases, for instance if either (i) the top mass is so large (mt > 80-90 GeV) that, for mH ≲ 2mW, the main decay is H° → b b̄, or (ii) a fourth generation exists and the heavy-lepton decay H° → L+ + L− is searched for [11, 12, 14].
2. Charged Higgses are not foreseen in the simplest version of the Standard Model, but they are expected in any supersymmetric theory. Their search is a must. Charged Higgses are expected to decay into heavy quarks or lepton pairs, typically H+ → t + b̄. Unfortunately, at the LHC, charged Higgses cannot be found because of the high jet-jet background, so that the discovery limit appearing in Fig. 15.2 is mH± = 0. But it is worth remarking that, if a quark Q heavier than H± exists, a search for the decays Q → H± + q and Q → W + q in the same event may open an interesting window [11, 12]. At CLIC the cross-section is small but the signal is clear, and the discovery limit, plotted in Fig. 15.2, is mH± = 0.8 TeV. For a search at low masses it is convenient to reduce the collision energy Ecm so that the rate increases. Still, below mH± = 0.2 TeV the background is not easy to handle, and in Fig. 15.2 this range is indicated with a dashed line.

Fig. 15 Summary of the discovery limits expected for 12 different processes. The vertical scale is in TeV and changes by a factor of 4 (2.5) when passing from the first (second) to the second (third) line. The four dashed areas in each diagram refer to the following beam conditions, from left: proton-proton at Ecm = 16 TeV with L = 10^33 cm^-2 s^-1; electron-proton at 1.5 TeV with L = 10^32 cm^-2 s^-1; electron-positron at 2 TeV with L = 10^33 cm^-2 s^-1 (indicated for simplicity as 'low' L); electron-positron at 2 TeV with L = 4 × 10^33 cm^-2 s^-1 ('high' L). The detailed explanations of the 12 histograms are given in the text. Note that in working out the discovery limits and in compiling the figures all the quoted luminosities have been taken for granted, even if the CLIC electron-positron collider is still at the level of a 'conceptual design'.

3. At the LHC sequential heavy quarks Q are best searched for through the decay Q → W + q. The Working Group concluded that they can be seen also if mQ < mW and that, as indicated in Fig. 15.3, mQ < 0.8 TeV [12]. Owing to the limited centre-of-mass energy, the electron-proton mode gives only mQ < 0.1 TeV [2]. At CLIC the electron-positron annihilation cross-section into pairs of heavy quarks is small; to cover the full range up to mQ < 0.8 TeV it is useful to vary the energy of the collision.
4. Sequential leptons are better seen at CLIC than at the LHC, as shown in Fig. 15.4: mL < 0.8 TeV and mL < 0.5 TeV respectively [11]. At CLIC the higher luminosity will allow more stringent cuts and give a better signal-to-noise ratio, but a very detailed discussion of the detector properties is needed before concluding that the discovery range can be sizeably increased.
5. At the LHC the discovery limits of squarks and gluinos are both m = 1 TeV [13]; this value is plotted in Fig. 15.5, which refers to 'strong' sparticles. (Note that the vertical scale changes by a factor of 4 when passing from processes 1-4 to processes 5-8.) In electron-proton collisions the final states (ẽ + q̃ + X) and (ν̃ + q̃ + X) have the largest cross-sections. The discovery limits depend on the masses of the squark q̃ and the slepton l̃ (l̃ = ẽ or ν̃). The Beyond the Standard Model Working Group concluded that ml̃ + mq̃ < 0.7 TeV. For this reason, by assuming ml̃ < mq̃, I plotted mq̃ = 0.7 TeV in Fig. 15.5. At CLIC the squark-antisquark cross-section is small, of the order of the cross-section for selectron-pair production plotted in Fig. 14. Fortunately the events due to the decay q̃ → q + γ̃ have a very characteristic signature: missing energy, missing momentum, and two acollinear and acoplanar jets. The discovery limit is close to the kinematic limit, mq̃ = 0.85 TeV [12], if the CLIC luminosity is 'large'.

For L = 10^33 cm^-2 s^-1 the Working Group concluded that one cannot be sure that sparticles can be found at all. For this reason in Figs. 15.5 and 15.6 a dashed line has been used for CLIC with L = 10^33 cm^-2 s^-1.
6. Figure 15.6 shows that the discovery limits of electroweak sparticles are better at CLIC, if L = 4 × 10^33 cm^-2 s^-1, than at the LHC. In proton-proton collisions two reactions were studied in detail [13]: ẽ+ + ẽ− + X and q̃ + W̃ + X. The best discovery limit, plotted in Fig. 15.6, is mW̃ < 0.45 TeV. In the electron-proton mode the already quoted limit ml̃ + mq̃ < 0.7 TeV, combined with the assumption ml̃ < mq̃, gives ml̃ < 0.35 TeV.

The selectron-pair cross-section, plotted in Fig. 14, is the largest one among the weak sparticle channels, and the discovery limit at CLIC is mẽ < 0.85 TeV.
7. At the LHC, if the leptoquark D0 decays in the channel l± + q, which has the most favourable signature, the discovery limit could be as large as 2 TeV [13]. I have already remarked that electron-proton collisions are ideal for a leptoquark search. It is no surprise that, in spite of the limited c.m. energy, the ep mode of the LHC opens a window up to mD < 1.6 TeV (Fig. 15.7). CLIC is not as good, because D0 has spin J = 0 and the cross-section is small: high luminosity is needed to reach mD = 0.85 TeV.
8. Heavier asthenons of different sorts occur in many theoretical models. The Beyond the Standard Model Group has considered a superstring-inspired model and concluded with the discovery limits plotted in Fig. 15.8. At the LHC the most promising channels are Z' → e+ + e− and Z' → μ+ + μ−, providing [13] a conservative limit as large as mZ' ≈ 4 TeV! Indirect effects of the Z' on asymmetries measured in electron-proton collisions give a much lower limit. As is evident from the prominence of the Z' peak in Fig. 14, CLIC would discover and study new neutral asthenons up to the available energy mZ' = Ecm = 2 TeV even with 'low' luminosity.

For mZ' > Ecm and 'high' luminosity, a mass up to ≈ 4 TeV can be indirectly inferred by measuring the asymmetry in the channel e+ + e− → μ+ + μ−, as indicated by the dotted line in Fig. 15.8.
9. At the workshop no particular attention was devoted to the discovery limits of heavy charged asthenons. The entries in Fig. 15.9 have been obtained from the review paper of Llewellyn Smith [2]. (Note that for processes 9-12 the vertical scale changes by another factor of 2.5.) The LHC discovery limit is impressive: almost 5 TeV. The electron-proton limit (1.25 TeV) applies to a second W boson which would couple as the standard W boson; the much more interesting right-handed charged asthenon has a similar discovery limit. CLIC is not at all competitive in W' searches, since the limit is at about 80% of the energy available to pair-produce them: 0.8 TeV. Single W' production gives a comparable limit.
10. As anticipated, in Fig. 15.10 the vertical axis does not represent the mass of a particle but rather the discovery limit for a parameter (Λ) that has different meanings in different reactions and is roughly connected to the scale of particle compositeness through the Heisenberg relation. (The limits obtained by the Beyond the Standard Model Working Group on the parameter appearing in the four-fermion contact interaction [13] have been divided by 10 before plotting them in Fig. 15.10, so as to roughly transform them into the Λ parameter appearing in particle form-factors.) The figure shows that in this field CLIC can put much better limits than the LHC. This is mainly due to the spectacular modification of the total electron-positron cross-section, shown as an example in Fig. 14 for Λ = 0.5 TeV. In the proton-proton case, instead, the best limit comes from a careful study of the angular distribution of two-jet events.
11. If particles are made of other subparticles, one expects to see excited states of the known quarks and leptons. Quite naturally, in the discovery of excited quarks q* hadron colliders are better than electron-positron colliders.

By searching for the decay q* → q + g, the LHC discovery limit may be as large as mq* = 5 TeV (Fig. 15.11). At CLIC one can reach the kinematic limit, mq* = 2 TeV.
12. The production of an excited electron e* has not been studied in detail, but it may be that the LHC is better than CLIC. This is due to the fact that the decay mode e* → e + γ has a very clear signature, even in a hadronic environment. In the CLIC case the kinematic limit me* = 2 TeV can be extended up to about 3 TeV (dotted line of Fig. 15.12) by probing indirectly the contribution of virtual e* states to the cross-section of the reaction e+ + e− → γ + γ.

5. DETECTORS
The Working Groups have concentrated on the problems posed by the LHC environment, since to a first approximation CLIC detectors can be considered costly, but relatively simple, scaled-up versions of LEP detectors.

5.1 Track detectors at the LHC
At the LHC proton-proton bunches will collide at a frequency of 40 MHz and produce about 2.5 interactions per crossing (σtot = 100 mb, L = 10^33 cm^-2 s^-1). The inelastic interactions (σinel = 60 mb) will be 1.5 per crossing but, since the trigger system will not accept crossings without events, the average number of inelastic events per accepted trigger will be ≈ 2.2 [15]. Detectors will have to cope with these extreme conditions, never encountered before in high-energy physics experiments. At HERA, for instance, the crossing spacing is 96 ns, but the expected rate of events with particles going outside of the beam pipe is much smaller: only 10^-5 per crossing. The Vertex Detector and Tracking Group has devoted a lot of attention to the problem of running a track chamber and measuring particle momenta in such a difficult environment. Previously the problem had been studied in the United States in connection with the Superconducting Super Collider (SSC) [16]; the status of these studies was summarized at the workshop by M. Gilchriese [17]. At present the favoured solution foresees the combination of a central track detector (CTD) and a vertex detector (VXD). The VXD will need to measure the longitudinal position of the vertex of an event accurately enough (Δzv < 1 mm) to be able to assign tracks to each vertex and thus handle more than one event per trigger. The CTD, formed by many superlayers, will be a vector drift chamber, with enough on-line computer power to provide a vector for each superlayer from the hits registered in the 200 MHz flash analog-to-digital converters (FADCs). Left-right ambiguities can be eliminated on-line, and tracks from previous bunches will be sorted out with the scheme shown in Fig. 16 [18]. The Working Group is convinced that relatively simple on-line algorithms can combine the vectors from each superlayer into a track, whose information can also be used in the trigger. Formidable problems have to be solved for running such a detector at L = 10^33 cm^-2 s^-1, but both the Working Group on Vertex Detector and Tracking and the one on Particle Identification [19] concluded their work on a positive note.
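The pile-up figures quoted above follow from simple counting; the short sketch below reproduces them from the stated beam parameters. It is only an illustration of the arithmetic under a Poisson assumption, not a simulation of the trigger; the ≈ 2.2 events per accepted trigger quoted from Ref. [15] includes effects beyond this naive estimate.

    import math

    # Pile-up arithmetic for the LHC conditions quoted in the text:
    # 40 MHz bunch crossings, sigma_tot = 100 mb, sigma_inel = 60 mb,
    # L = 1e33 cm^-2 s^-1.  (1 mb = 1e-27 cm^2.)

    CROSSING_RATE = 40.0e6        # Hz
    LUMI = 1.0e33                 # cm^-2 s^-1
    MB = 1.0e-27                  # cm^2

    def mean_per_crossing(sigma_mb):
        """Average number of interactions per bunch crossing."""
        return sigma_mb * MB * LUMI / CROSSING_RATE

    mu_tot = mean_per_crossing(100.0)   # 2.5 interactions per crossing
    mu_inel = mean_per_crossing(60.0)   # 1.5 inelastic interactions

    # If only crossings with at least one inelastic event are accepted, a
    # Poisson estimate of the mean per accepted crossing is
    mu_accepted = mu_inel / (1.0 - math.exp(-mu_inel))   # ~1.9, to be compared
                                                         # with the ~2.2 of [15]
    print(mu_tot, mu_inel, mu_accepted)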

Fig. 16 The Vertex Detector and Tracking Group studied, as an example for the LHC, the use of a chamber structure similar to the one of the ZEUS central detector. (Figures taken from Ref. [18].) (a) Sector of the ZEUS CTD layout. There are nine superlayers each with eight sense layers. Five layers are axial with a design resolution

5.2 Calorimetry for hadron colliders
The LHC is the worst environment not only for track detectors but also for calorimeters. The Jet Detector Working Group concentrated on the study of a compact silicon calorimeter (COSICA), which uses silicon as the sensitive medium (as proposed a few years ago by G. Barbiellini and P.G. Rancoita) and uranium as the inert material. Careful Monte Carlo calculations indicate that the response of such a calorimeter to electrons and pions can be equalized by sandwiching the silicon detectors with 100 μm thick polyethylene layers [20]; of course the experimental proof is now needed. A silicon calorimeter shares with liquid-argon and warm-liquid ionization chambers the essential property that it can be absolutely calibrated with electronic pulses and/or radioactive sources. Moreover, its mechanical construction is simple and the segmentation relatively easy. The cost, however, is still high. Figure 17 reproduces the schematic layout of COSICA at the LHC [20].

Fig. 17 Structure of the Compact Silicon Calorimeter considered by the Jet Detector Working Group. At angles smaller than about 8° silicon detectors cannot survive and a different technique is needed. (Figure taken from Ref. [20].)

Silicon detectors and their electronics are sensitive to radiation. The Working Group has computed the integrated doses expected at the LHC and concluded that the limiting dose is reached at angles θ = 7° with respect to the beam direction. For this reason a silicon calorimeter cannot operate safely around θ ≈ 5°, the angular region in which the quarks radiating virtual asthenons should be tagged to extend the discovery limit of neutral Higgses (subsection 4.3.1). Different, lower-performance calorimeters have to be used for this very important task; solutions have been proposed.

5.3 Triggering and data acquisition
This is a most challenging enterprise at the LHC, and the Working Group on Triggering and Data Acquisition discussed it in great detail. The trigger has to reduce the event rate by a factor of 10^7, and this can be obtained with a three-stage trigger if each stage reaches a rejection factor of about 500. The external superlayer of the CTD discussed in subsection 5.1 may be used to trigger on large-transverse-momentum tracks [18], but here I concentrate on calorimetric triggers. For the first-level trigger, the Triggering Group worked on the hypothesis that the ~ 5 × 10^5 (~ 5 × 10^3) electromagnetic (hadronic) calorimetric cells will be reduced to ~ 5 × 10^3 (~ 10^3) by adding the signals from 10 × 10 (2 × 2) cell matrices. A special output of each cell must be clipped to 25 ns, as shown in Fig. 18a [15]. In the 25 ns time interval following a bunch crossing, the analog signals from the electromagnetic (hadronic) calorimeter cells are added (with weights sinθ) to form the basic electron/photon and jet signals. As shown in Fig. 18a, the next 25 ns are used to sum the weighted signals from the hadron calorimeter matrix with the corresponding signals from the electromagnetic matrix and obtain the transverse-energy (ET) signal. Then 25 ns are needed to obtain a signal proportional to (Ex + Ey), by summing the transverse energy ET from each matrix weighted with (cosφ + sinφ). (This is used to form a rough missing-transverse-energy trigger.) The scheme of Fig. 18a allows thresholds to be applied, within about 100 ns, to signals proportional to the electromagnetic and hadronic energies deposited in localized regions of the calorimeter, to ΣET and (Ex + Ey), and to the muon trigger, which is built in parallel.

Fig. 18 Block representation of the three levels of the calorimetric trigger considered by the Triggering and Data Acquisition Group. (Figures taken from Ref. [15].) (a) Operations to be performed on the clipped signals of the calorimeter cells to obtain the first-level trigger. The shift register is used as a digital pipeline to keep the data whilst the summing circuit forms the trigger. The time scale of the operations is indicated at the bottom. (b) Block diagram indicating the flow of FADC signals to the second- and the third-level triggers.

To take into account the length of the cables, the digital pipeline, represented by the shift register of Fig. 18a, has to be 200 ns deep. For the higher trigger levels the signals from the single cells have to be used. Each cell has to be equipped with a 12-bit to 16-bit FADC, which samples continuously at twice the collision rate, i.e. at 80 MHz. Note that today's 10-bit ADCs running at a sampling rate of 40 MHz cost about 10 kSF! Electronics develops very fast, but it is clear that a close collaboration with industry has to be started now if one wants to reduce the cost of a single channel to an affordable level. The flow of the calorimeter data to the second- and third-level triggers is shown in Fig. 18b [15]. The 12.5 ns shift registers are used here also as digital pipelines; they are necessary because the time needed for a decision is much longer than the bunch spacing. About 12 FADC samplings (for a total time of 150 ns) are enough to correctly integrate the charge of any modern calorimeter, subtract the pedestal, and spot events following each other within a few bunches. The second-level processor has the full digitized calorimeter information available. Multiple buffering of events between the first-level and the second-level trigger will be necessary to keep the dead-time small. A grid of transputers, or a similar type of processor, could find clusters in a few microseconds [15]. Afterwards, the data are moved to buffer memories from which the third-level processor can collect the relevant information, including data from the track chambers to compute the vertex position. The above sketchy description of a possible trigger architecture indicates the complexity of the tasks and the many problems which have to be solved to fully use the LHC luminosity of the order of 10^33 cm^-2 s^-1. However, the conclusions of the Working Group are not as pessimistic as one could have expected; the feeling of the experts involved was that, given enough time and investment, viable solutions to the major problems can be found.
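To make the first-level arithmetic concrete, the fragment below sketches in software the kind of weighted sums described above: sinθ-weighted cell-matrix energies give the transverse energy, and φ-dependent weights give two components used for a rough missing-transverse-energy requirement. The cell granularity, the example values and the thresholds are invented for illustration; the real scheme of Ref. [15] forms these sums in pipelined analog and digital hardware within about 100 ns.

    import math

    # Illustrative first-level calorimeter trigger sums (software sketch only;
    # the real system described in the text is pipelined hardware).
    # Each "matrix" is a summed group of calorimeter cells at angles (theta, phi).

    def trigger_sums(matrices):
        """matrices: list of (energy_GeV, theta_rad, phi_rad) for summed cell groups."""
        et_total = 0.0
        ex = ey = 0.0
        for energy, theta, phi in matrices:
            et = energy * math.sin(theta)   # sin(theta)-weighted transverse energy
            et_total += et
            ex += et * math.cos(phi)        # components combined below into a
            ey += et * math.sin(phi)        # rough missing-ET estimate
        et_miss = math.hypot(ex, ey)
        return et_total, et_miss

    # Hypothetical event: two nearly back-to-back jets plus a soft forward deposit.
    matrices = [(300.0, 1.2, 0.1), (280.0, 1.9, 0.1 + math.pi), (50.0, 0.2, 2.0)]
    et_total, et_miss = trigger_sums(matrices)
    accept = et_total > 200.0 and et_miss < 100.0   # illustrative thresholds (GeV)
    print(round(et_total, 1), round(et_miss, 1), accept)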

6. THE INTERFACE BETWEEN ACCELERATORS AND DETECTORS
6.1 The proton-proton LHC interaction regions
Figure 19 shows a general layout of the LEP ring. The interaction regions (IRs) free and suitable for proton-proton physics are IR5 and IR7, together with the four even-numbered regions which are at present devoted to LEP experiments.

Fig. 19 Interaction regions in the LEP tunnel [21].

As remarked by the Interaction Regions Working Group [21], each of these collaborations may have proposals of their own on how to re-use their equipment at the LHC. IR3 is not available, because it is used for the beam abort system. IR1 may also be used, but the interference with the injection system has to be clarified. However, IR1 could be used for a dedicated electron-proton detector. At the Workshop two types of proton-proton IRs have been examined [21]: (i) a push-pull experimental area, consisting of a garage, in which the detector is mounted, and of a collision hall (Fig. 20a); (ii) a bypass solution, which allows the detector to be mounted from the beginning in its final position and avoids interference with LEP (Fig. 20b). The Working Group favours the second solution because it does not interfere with LEP running, the excavated volume is about half, and the infrastructure is much simpler. Of course, one has to add the excavation of the bypass, which has not yet been studied in detail and will certainly increase the cost.

Fig. 20 (a) Push-pull experimental area at the LHC. (b) Bypass solution for a detector dedicated to proton-proton collisions at the LHC. (Figures taken from Ref. [21].)

6.2 Electron-proton collisions at the LHC
Since most of the particles go in the direction of the proton beam (Fig. 6), an electron-proton interaction region has to be designed in such a way as to have ±10 m of free space around the interaction point. In Ref. [22] the Working Group proposes a new solution, which leads to a luminosity reduction of only about 20% with respect to the previous design based on a free space of ±3.5 m. For electron-proton collisions at (60 GeV + 8.0 TeV), i.e. for Ecm = 1.4 TeV, the foreseen luminosity is now L = 10^32 cm^-2 s^-1. This is obtained with the beams colliding head-on, so that the electron beam has to be brought to the level of the proton beam, which runs 90 cm higher in the LEP tunnel. A zero crossing angle is obtained by bending up the electron beam about 200 m from the crossing point. The second bend is made in the interaction region itself with a weak horizontal field (B = 0.1 T) which extends over 16-20 m. The scheme sketched in Fig. 21, taken from Ref. [22], has the advantage that the synchrotron radiation has a low critical energy (≈ 300 keV).

Fig. 21 Electron-proton dedicated interaction region, with bypass for the electron-positron operation of LEP. (From Ref. [22].)

However, a fraction of the radiation hits the aperture of the magnets belonging to the proton ring. To avoid problems, these magnets either have to be normal-conducting or must use a warm bore. Since the detector layout for electron-proton collisions is so special, the use of a single interaction region for both electron-proton and proton-proton collisions is complicated and will have to be studied further, should experimenters have sufficient interest in such an option.
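The quoted centre-of-mass energy can be cross-checked with the usual relation for head-on collisions of ultrarelativistic beams, s ≈ 4 EeEp; the two-line sketch below, in which the particle masses are neglected (my simplifying assumption), reproduces the ≈ 1.4 TeV figure for 60 GeV electrons on 8 TeV protons.

    import math

    def ep_cm_energy_tev(e_electron_gev, e_proton_tev):
        """Ecm = sqrt(4 Ee Ep) for head-on, ultrarelativistic beams (masses neglected)."""
        s_gev2 = 4.0 * e_electron_gev * e_proton_tev * 1000.0
        return math.sqrt(s_gev2) / 1000.0

    print(ep_cm_energy_tev(60.0, 8.0))   # ~1.39 TeV, the value quoted above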

6.3 The CLIC final focus
Very high field gradients are required if the magnets of the final focus system have to focus a 1 TeV beam with β* ≈ 3 mm, even smaller than the value foreseen for the SLC (β* = 5 mm). Since, as discussed in subsection 2.2, the natural scaling law is β*

Fig. 22 Angular distribution of the energy of the electrons (positrons) and of the photons radiated by beamstrahlung in the CLIC interaction region [20] (1 TeV beams, σ = 65 nm, σz = 0.5 mm, fr = 5.8 kHz; the angle θ is in μrad). (Calculations made using the program of Ref. [9] by P.T. Cox.)

different. Waiting for more accurate and complete calculations, which also take into account secondary and tertiary effects, at present it can be concluded that for head-on collisions the quadrupoles must have an opening certainly larger than ≈ 5 × 10^-4 rad, i.e. a diameter larger than ≈ 0.5 mm at a distance of about 1 m from the crossing point. This is almost feasible with permanent magnets, but then the energy spread has to be very small (ΔE/E = 10^-3), otherwise chromatic effects increase the transverse dimensions of the bunches too much. Since one expects ΔE/E = 10^-2, the conclusion is that tip-fields definitely larger than 2 T are needed [24]. Plasma lenses, which focus in both planes simultaneously, may provide a solution. 'Flat' bunches crossing at an angle have also been considered [25], the advantage being that the disrupted beam could avoid the opposite quadrupole without loss in luminosity. For these reasons the CLIC final focus is still one of the main areas of study and concern.

7. CONCLUDING REMARKS
In the introduction I quoted the opinion widely spread in our community less than two years ago: 'We (almost) know how to build the LHC but not its detectors, whilst we (almost) know how to build detectors for CLIC but not the collider itself.' I believe that the present status of the CLIC concept and the work done by the Detector Groups (coupled with the results of similar studies made for the SSC [16, 17]) justify the statement that, given three to five years of intense R&D, it is highly probable that one will be able to design detectors able to utilize hadron-hadron luminosities of the order of 10^33 cm^-2 s^-1, as well as detectors for TeV electron-positron colliders. However, construction costs and time schedules cannot at present be realistically estimated. The first conclusion is that Europe should give the highest priority to R&D projects in these two fields.

Before touching upon the issue of the LHC-CLIC physics comparison, I want to put forward three caveats. As far as the LHC is concerned, the workshop did not consider the very abundant forward production of heavy flavours, and in particular of b-quarks, which could be used to study rare decays and CP-violation effects, as proposed for the SSC [16]. In connection with CLIC, the present parameter list is such that the bunches are heavily disrupted after the collision and only one experiment can run at any given time. Since at the LHC more detectors could run simultaneously, a comparison based on discovery limits does not do full justice to the other physics issues which can be tackled simultaneously by different detectors at a hadron collider. As a third point, I want to underline the statement made at the beginning of Section 4: comparing LHC and CLIC potentialities on the basis of 'predicted new' physics is probably unjustified; this is the only approach we can adopt, but the 'unknown and unpredicted' have, in the past, always been much more rewarding. The discovery limits of the LHC and CLIC for predicted new physical phenomena are summarized in Fig. 15, which displays at once the richness of the physics potential of both accelerators and the complementarity between the two approaches. Whilst CLIC appears to be better in the search for Higgs particles and in the possibility of finding (or excluding) a compositeness scale, the LHC has its strong points in the searches for new asthenons, Z' and W' (the CERN p̄p Collider docet), and for leptoquarks decaying into charged leptons. The limits which can be reached by the LHC on excited quarks are definitely larger, whilst CLIC promises more in the search for heavy leptons. For sparticle searches the competition slightly favours the LHC for discovering strongly-interacting sparticles and CLIC for electroweakly-interacting sparticles. However, for checking SUSY theories CLIC is superior, because one can better measure the masses and spins of sparticles. Moreover, all SUSY schemes imply the existence of charged Higgses, and these can be seen at CLIC but are very difficult to fish out of the hot environment of hadron-hadron collisions. The second conclusion is that the two colliders are rich in physics potential and complementary, and thus a balanced world-wide program should foresee the construction of one accelerator of each sort. This conclusion is supported by well-known examples of the past, which point to another kind of complementarity that cannot be read from the histograms of Fig. 15: hadron-hadron colliders have been good at exploring really new territories, whilst electron-positron colliders have been necessary to map them in detail.

For some channels the discovery limits computed for CLIC at 'low' (L = 10^33 cm^-2 s^-1) and 'high' (L = 4 × 10^33 cm^-2 s^-1) luminosity differ appreciably. In particular, at the lower luminosity the leptoquark D0 cannot be seen, whilst Fig. 15 shows that the discovery ranges of neutral Higgses, of a new heavy asthenon Z', and of compositeness are sizeably extended if L = 4 × 10^33 cm^-2 s^-1. The third conclusion is that CLIC must aim at reaching at least L = 4 × 10^33 cm^-2 s^-1 per interaction region [13]. More generally, one can state that an electron-positron collider of c.m. energy Ecm must provide, at each interaction region, a luminosity

L > (Ecm/TeV)^2 × 10^33 cm^-2 s^-1
in order to make good use of the physics potential opened up by the available energy [3]. Such a luminosity corresponds to the production of about 1000 μ+μ− pairs per year. If an advanced, general-purpose detector which can measure well all types of particles cannot be built, and two of them have to be foreseen, then the collider has to provide a luminosity twice as large, to be shared between the two interaction regions where the two complementary detectors can be located. The set of parameters that is at present considered for CLIC, and summarized in Table 3, corresponds to L = 10^33 cm^-2 s^-1. It is clear that, by requiring a luminosity at least four times larger, a heavy burden is put on the shoulders of the machine physicists who are working on the CLIC concept. But the needs have now been analysed and spelled out; further machine studies have to take them into account. (I recall that multibunch schemes have been proposed [5] and their implementation could give the required factor, but they introduce new problems which have not yet found a convincing solution.)
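The origin of the 1000-pair figure can be seen from the point-like μ-pair cross-section: since σ(e+e− → μ+μ−) falls as 1/Ecm^2 while the required luminosity grows as Ecm^2, the annual yield is the same at every energy. The short sketch below assumes the lowest-order cross-section 86.8 fb/(Ecm/TeV)^2 and an operational year of 10^7 s; both are my illustrative assumptions.

    def mu_pairs_per_year(ecm_tev, seconds=1.0e7):
        """Annual mu-pair yield at the luminosity L = (Ecm/TeV)^2 * 1e33 cm^-2 s^-1."""
        sigma_fb = 86.8 / ecm_tev**2                 # lowest-order QED cross-section
        lumi = ecm_tev**2 * 1.0e33                   # cm^-2 s^-1, the stated requirement
        integrated_fb = lumi * seconds * 1.0e-39     # 1 fb = 1e-39 cm^2
        return sigma_fb * integrated_fb              # ~870, i.e. about 1000

    print(mu_pairs_per_year(2.0))   # the Ecm dependence cancels: same answer at any energy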

Is it possible to go beyond the three conclusions listed above and integrate all the information produced at the workshop into a 'choice' between LHC and CLIC? No, because to do this, at least one prejudice and two further dimensions have to be added to the collection of histograms of Fig. 15. The prejudice has to do with an estimate of the 'probability' that one (or more) of the twelve processes we have considered is realized in nature; the added dimensions refer to the cost of each project and to its time-scale. I believe that most theorists would agree on the statement that the search for Higgs particles is a more relevant physics problem today than hunting for new asthenons. In fact, in ordering the twelve physics targets of Fig. 15 I tried, as much as possible, to follow present theoretical prejudices in order of decreasing 'probability'. From this point of view, clearly limited to the consideration of expected 'new' physics, CLIC has to be preferred to the LHC because it promises to perform better on more important problems. (But the enthusiasts should not forget that a LEP 200 discovery of, for instance, a neutral Higgs with mH = 80 GeV would immediately influence the physics priorities.) On the other hand, cost and time scale carry us out of the simple bidimensional space of the physics comparison considered at the workshop and summarized in Fig. 15. By looking only at these extra dimensions, the LHC has to be preferred to CLIC, which needs between three and five years of R&D to become a design [5], accompanied by a cost estimate, which I do not think will be lower than the LHC cost. To make a definite choice many other arguments have to be considered, in particular the SSC project recently endorsed in the United States and the needs of a balanced and timely world-wide program. During the next months many committee meetings and coffee-table discussions will be exploring this multidimensional space and, I believe, will make frequent use of the careful mapping of the physics plane done at this workshop. The members and the Chairman of the Detector Physics and Advisory Panel and all the participants in this workshop have to be warmly thanked for providing this essential instrument. Moreover, I want to express my personal gratitude to the conveners of the working groups, for passing on to me and explaining an enormous amount of information, and to all those who have been quoted in the figure captions, for their help in collecting the data needed for the plotting of the graphs.

REFERENCES

[1] Proc. ECFA-CERN Workshop on a Large Hadron Collider in the LEP Tunnel, Lausanne and Geneva, 1984 (ECFA 84/85, CERN 84-10, Geneva, 1984).
[2] C.H. Llewellyn Smith, Physics at future high energy colliders, Proc. XXIII Int. Conf. on High Energy Physics, Berkeley, 1986, and University of Oxford, Dept. of Theoretical Physics, preprint 72/86 (1986).
[3] U. Amaldi, Energy and luminosity requirements for the next generation of linear colliders, Proc. Symposium on Critical Issues in the Development of New Linear Colliders, Madison, 1986, and preprint CERN-EP/86-210 (1986).
[4] G. Brianti, The Large Hadron Collider in the LEP tunnel, these Proceedings.
[5] K. Johnsen, Linear e+e− colliders, these Proceedings.
[6] U. Amaldi, Phys. Lett. 61B (1976) 313.
    D.H. Rice, Linear collider design based on a fixed cost, Cornell report CLNS 85/708 (1985).
    R. Sundelin, A 2 TeV centre-of-mass e+e− linear collider, Cornell report CLNS-85/709 (1985).
    U. Amaldi, H. Lengeler and H. Piel, Linear colliders with superconducting cavities, report CERN EF 86-8 and CLIC Note 15 (1986).
[7] W. Schnell, A two-stage RF linear collider using a superconducting drive linac, CERN-LEP-RF/86-06 and CLIC Note 13 (1986).
    U. Amaldi and G. Pellegrini, Linear colliders driven by a superconducting linac-FEL system, CLIC Note 16 (1986).
[8] U. Amaldi, Nucl. Instrum. Methods A243 (1986) 312.
[9] K. Yokoya, Nucl. Instrum. Methods A251 (1986) 1, and KEK Report 85-09 (1985).
[10] Z. Kunszt, Large cross-section processes, these Proceedings and references therein.
[11] G. Altarelli, The Standard Theory Group: general overview, these Proceedings and references therein.
[12] D. Froidevaux, Experimental studies in the Standard Theory Group, these Proceedings and references therein.
[13] J. Ellis and F. Pauss, Beyond the Standard Model, these Proceedings, preprint CERN-TH 4682/87, and references therein.
[14] G. Kane, to be published in Proc. First Rencontre de Physique de la Vallée d'Aoste, 1-7 March 1987.
[15] D. Delicaris et al., Report from the Working Group on Triggering and Data Acquisition, these Proceedings and references therein.
[16] Proc. 1984 Summer Study on the Design and Utilization of the SSC, eds. R. Donaldson and J.G. Morfin, Snowmass, CO (1984).
[17] M.G.D. Gilchriese, Contribution to the Workshop.
[18] D.H. Saxon, Vertex detection and tracking, these Proceedings and references therein.
[19] F. Palmonari et al., Particle identification at the TeV scale in pp, ep and ee collisions, these Proceedings.
[20] T. Åkesson et al., Detection of jets with calorimeters at future accelerators, these Proceedings and references therein.
[21] W. Kienzle, Design and layout of pp experimental areas at the LHC, these Proceedings and references therein.
[22] W. Bartel et al., Electron-proton interaction regions, these Proceedings and references therein.
[23] J.E. Augustin, CLIC intersection regions, these Proceedings.
[24] W. Schnell, Can the first quadrupole of a classical final focus system clear the disrupted beam?, CLIC Note 27 (1986).
[25] R. Palmer, Low emittance for colliders, Contribution to the Workshop on Low Emittance Beams, Brookhaven National Laboratory, 1987.

LIST OF PARTICIPANTS

T. Akesson CERN, EP Division M. Albrow Rutherford Appleton Laboratory, Oxfordshire G. Altarelli University of Rome U. Amaldi CERN, EP Division J.E. Augus t in Laboratoire de l'Accélérateur Linéaire, Orsay R. Batley Queen Mary College, London G. Bellettini INFN, San Piero a Grado, Pisa A. Benvenuti University of Bologna H.J. Besch University of Siegen Ph. Bloch CERN, EP Division F.W. Bopp University of Siegen G. Brianti CERN, DG P.J. Burrows University of Oxford M. Chen Stanford University, California T. Cox Rockefeller University, New York J. Dainton University of Liverpool J.P. De Brion CEN-Saclay, Gif-sur-Yvette D. Delikaris Collège de France, Paris M. Delia Negra CERN, EP Division D. Denegri CEN-Saclay, Gif-sur-Yvette M. Dittmar CERN, EP Division J. Dorenbosch NIKHEF, Amsterdam Y. Ducros CEN-Saclay, Gif-sur-Yvette J. Ellis CERN Theory Division G. Fidecaro CERN EP Division G. Flügge III Physikalisches Institut, RWTH Aachen B. Foster University of Bristol E. Franco University of Rome D. Froidevaux Laboratoire de l'Accélérateur Linéaire, Orsay J. Gareyte CERN, SPS Division M. Gilchriese Lawrence Berkeley Laboratory, Berkeley, CA M. Greco Frascati, Rome M. Haguenauer Ecole polytechnique, Palaiseau J.R. Hansen Niels Bohr Institute, Copenhagen P. Hansen Niels Bohr Institute, Copenhagen N. Harnew University of Oxford J. Harvey Rutherford & Appleton Laboratory, Oxford G. Heath University of Oxford M. Holder University of Siegen P. Igo-Kemenes University of Heidelberg D. Imrie Brunei University, Uxbridge, Middlesex G. Jarlskog CERN and University of Lund P. Jenni CERN, EP Division K. Johnsen CERN, LEP Division D.P. Kelsey CERN, EP Division V.P. Kenny US Department of Energy, Washington DC - 354 -

W. Kienzle CERN, EP Division E.W. Kittel University of Nijmegen H. Kowalski DESY, Hamburg Z. Kunszt ETH, Zurich B. Mansoulié CEN-Saclay, Gif-sur-Yvette K. Meier CERN, EP Division B. Meie G. Marconi Physics Institute, Rome J-P. Mendiburu Collège de France, Paris M.N. Minard LAPP, Annecy J. Mulvey University of Oxford A. Nandi University of Oxford F.L. Navarria University of Bologna D. Notz DESY, Hamburg F. Palmonari University of Bologna F. Pauss CERN, EP Division M. Perrottet Theoretical Physics Centre, Luminy, Marseille P. Petiau Ministry of Research and Technology, Paris M.G. Pia INFN A. Putzer CERN and University of Heidelberg P.G. Rancoita University of Milan F. Richard Laboratoire de l'Accélérateur Linéaire, Orsay R. RUckl DESY, Hamburg F. Ruggiero CERN, LEP Division J. Rus s Carnegie Mellon University, Pittsburgh J. Sacton University of Brussels J. Sass CERN, EP Division D.H. Saxon Rutherford Appleton Laboratory, Oxfordshire D. Schaile CERN, EP Division J. Schukraft CERN, EP Division W. Scott Oliver Lodge Laboratory, Liverpool G. Smadja CEN-Saclay, Gif-sur-Yvette S. Stapnes CERN, EP Division M. Steuer CERN and MIT G. Stevenson CERN, TIS W.J. Stirling University of Durham H. Taureg CERN, EF Division M. Tonutti III Physics Institute, RWTH Aachen D. Treille CERN, EP Division A. Verdier CERN, LEP Division K. Wacker III Physics Institue, RWTH Aachen A. Wagner University of Heidelberg B. Webber University of Cambridge N. Wermes CERN, EP Division R. Wigmans NIKHEF, Amsterdam P. Zerwas I Physics Institute, RWTH Aachen J. Zsembery CEN-Saclay, Gif-sur-Yvette C. Zupancic University of Munich