DELPHI Collaboration 89-92 DAS 101


DELPHI EVENT TAGGING

M. Dam (Oslo), A. De Angelis (Udine), P. Eerola (Helsinki), R. Fruhwirth (Vienna), P. Gavillet (CERN), P.O. Hulth (Stockholm), M. Innocente (Udine), B. King (Liverpool), J.P. Laugier (Saclay), J. Maillard (CDF), M. Pimenta (Lisboa), P. Privitera (Bologna), M. Tarantino (Udine), J. Varela (Lisboa), C. Werner (Lisboa), M. Zito (Genova), R. Zukanovich (CDF)

Abstract

The DELPHI event tagging is an application designed to check the raw data output, to perform a fast on-line pattern recognition of the events, and to provide a classification for them. The application will be implemented on the on-line 3081/E emulator system. In this note we describe the general structure of the application, the detector algorithms and the implementation, and we summarize the performance tests.

Contents

1 Introduction

2 3081/E Emulators

3 Description of the General Steering
  3.1 The Initialization Step (T4INIT)
  3.2 The Processing Step (T4LOOP)
  3.3 The Termination Step (T4END)

4 Detector algorithms
  4.1 TPC
    4.1.1 The T4TPC Algorithm
    4.1.2 Some performance tests
  4.2 Outer Detector
    4.2.1 Introduction
    4.2.2 Information provided by the Outer Detector
    4.2.3 Brief description of the method used
    4.2.4 Summary
  4.3 HPC
    4.3.1 General structure of the program
    4.3.2 Test of performance
  4.4 EMF
    4.4.1 Input/Output
    4.4.2 User access to thresholds and parameters
    4.4.3 Performance
  4.5 Hadron Calorimeter
    4.5.1 Input/Output
    4.5.2 User access to thresholds and parameters
    4.5.3 Performance
  4.6 SAT
    4.6.1 Adjustable thresholds and variables
  4.7 Forward Chambers (A and B)

5 Global performance and timing tests

6 FADO implementation

7 Graphic interface
  7.1 Graphic representation of elements

1 Introduction

The event tagging software for DELPHI essentially consists of:

- A set of detector modules able to perform a fast pattern recognition starting from the raw data (possibly taking advantage of the data preprocessing from the 1st, 2nd and 3rd level triggers). These modules provide a simplified "reconstruction" of the event and a flag telling whether the individual detector is able to classify the event as a "real event" by itself.

- A general steering driving the sequence of the calls to the detector modules (possibly calling only a subgroup of modules). The steering is designed to optimize the calls to the modules so as to classify an event with the minimum of CPU time. In the final version of the tagging this logic will be controlled in a FADO ([FAD.1], [FAD.2]) environment.

At the end of the tagging processing, an event can be classified as cosmic, beam gas, other background, or Z, possibly specifying the Z decay channel for which it is a candidate. The tagging software will run on a battery of three 3081/E emulators working in parallel, sited in the DAS central partition [EMU.1]. The DELPHI tagging software is coupled to a graphic interface for the visual scanning of the events.

2 3081/E Emulators

The 3081/E emulator is a processor which emulates an IBM System 370 computer. It is designed to execute any High Energy Physics application program, giving results that are bit-for-bit identical with those obtained on an IBM System 370 mainframe. It is a reduced instruction set machine, running its own unique microcode which is produced from IBM code by a TRANSLATOR program. The 3081/E emulator has the following hardware features:

1. Modular architecture consisting of:

   - Five execution units (control and register board, floating point add/subtract card, integer card, multiply card, divide card)
   - One interface card
   - Memory cards

2. Separate program and data memory, which ensures that a program cannot overwrite its own instructions

3. Memory configurable in 0.5 Mbytes units up to 7 Mbytes

4. Full support for FORTRAN 77, including double precision 64 bit floating point arithmetic and character type instructions, but excluding I/O instructions.

5. Pseudo-dual-port interfacing to external busses (Fastbus in DELPHI)

6. High speed external data transfer capability. The 3081/E has 64 bit internal data paths and a 120 ns clock.

7. A performance equivalent to one IBM 370/168 unit. Advantage is taken of several hardware design features to pipeline program instructions at the translate step.

Referring to the expected rates (input rate of 5 Hz, output rate of 2 Hz), with 3 to 4 emulators the processing time in each 3081/E must be less than 500 ms per event [EMU.1]. Compared to the DELPHI off-line analysis program DELANA, which uses as much as 16 Mbytes of memory to load the complete database for each detector, the tagging program must use at most 4 Mbytes of memory, depending on the hardware configuration of the DELPHI 3081/E emulators. Thus only a partial database for each detector, with a few geometrical and calibration constants, can be loaded. Our program needs, for the moment, 0.5 Mbyte of program memory and 3.0 Mbytes of data memory.
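The 500 ms budget quoted above follows directly from the expected rates; a one-line sketch of the arithmetic (Python, purely illustrative, numbers from the text):

```python
# Back-of-the-envelope check of the per-event time budget.
input_rate_hz = 5.0          # expected input rate to the emulator farm
n_emulators = 3              # emulators working in parallel (3 to 4 foreseen)

# Each emulator receives input_rate_hz / n_emulators events per second,
# so the time available per event is the inverse of that rate.
budget_s = n_emulators / input_rate_hz
print(budget_s)              # 0.6 s with 3 emulators, hence the < 500 ms target
```

With four emulators the budget grows to 0.8 s, which is why 500 ms per event leaves a comfortable safety margin.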

3 Description of the General Steering

The DELPHI 4th-level Tagging process is divided into three main steps: the initialization step T4INIT, the processing step T4LOOP, and the termination step T4END.

3.1 The Initialization Step (T4INIT)

This first step takes care of the following:

1. Initialization of ZEBRA, used as memory management system (Call MZEBRA); creation of a special division to receive the Raw Data (Call MZDIV); creation of a link area (Call MZLINT) for:
   - Raw Data
   - Detector algorithms (TPC, HCAL, HPC, EMF, OD, SAT, FCAB, ...)

2. Initialization of TANAGRA if requested. [*XTANAGRA] (Call TSTLIM)

3. Initialization of HBOOK4 if requested. [*HIST] (Call HLIMIT)

4. Initialization of the Input/Output files, depending on the computer: [*VAX, *IBM, *EMUL] (Call FZFILE)
   a) Block size definition. (FZDEF sequence)
   b) I/O mode choice. (Native (= default) or Exchange)
   c) Fortran Read/Write mode or QIO package choice.

5. Initialization of TANAGRA files if requested. [*XTANAGRA] (Call TOPTN)

6. Detector algorithms choice by defining a vector IT4ORD: (Call T4ORDI)
   a) One can choose the detector algorithms to be activated in the process (= n means yes, = -1 means no; n is the detector number for T4ORDI);
   b) One can change the order of calling the detectors in the process. Ex: IT4ORD(i) = n means that detector algorithm number n is called in the i-th position.

8. Initialization of the Tagging Statistics.

9. Initialization of the general purpose Commons.
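The IT4ORD convention of item 6 can be illustrated with a small sketch (Python rather than the actual FORTRAN; the detector-number assignment below is a made-up example, not the real T4ORDI numbering):

```python
# Illustrative IT4ORD handling: IT4ORD(i) = n calls detector algorithm n in
# the i-th position, IT4ORD(i) = -1 disables the slot.
# The detector numbering used here is hypothetical.
DETECTOR_NAME = {1: "TPC", 2: "EMF", 3: "HCAL", 4: "HPC",
                 5: "OD", 6: "SAT", 7: "FCAB"}

def call_sequence(it4ord):
    """Return the detector algorithms in the order they would be called."""
    return [DETECTOR_NAME[n] for n in it4ord if n != -1]

# Example: activate only TPC, OD and HPC, in that order.
print(call_sequence([1, 5, 4, -1, -1, -1, -1]))   # ['TPC', 'OD', 'HPC']
```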

3.2 The Processing Step (T4LOOP)

This is the core of the tagging processing: the loop on events. Here we:

1. Read the event into the Raw Data division. (Call T4READ)

2. Loop on the chosen algorithms (order defined in T4ORDI):

   CALL T4TPC  (if TPC ) .... Time Projection Chamber algorithm
   CALL T4EMF  (if EMF ) .... Forward Electromagnetic Calorimeter algorithm
   CALL T4HCAL (if HC  ) .... Hadron Calorimeter algorithm
   CALL T4HPC  (if HPC ) .... Barrel Electromagnetic Calorimeter algorithm
   CALL T4OD   (if OD  ) .... Outer Detector algorithm
   CALL T4SAT  (if SAT ) .... Small Angle Tagger algorithm
   CALL T4FCAB (if FCAB) .... Forward Chambers A and B algorithm

   Each algorithm outputs a flag IFu (u = TPC, EMF, ...). One has:

   IFu = -1 means no data (or bad data) from the corresponding detector for this event;
   IFu = 0 means no physically relevant clusters (or tracks) seen by the corresponding detector for this event;
   IFu = 1 means physically relevant clusters (tracks) seen by the corresponding detector for this event.

   Each detector fills a common T4vRES (v = T, E, C, H, S, O and F for TPC, EMF, HCAL, HPC, SAT, OD and FCAB respectively) giving:
   - the number of clusters (tracks) for the event;
   - for each cluster (track), some relevant information such as E (p), θ, φ (see the appendix on the output format).

3. Try to take a decision on the final tagging of the event (Call T4PHYS) by combining the calorimetric information and the TPC tracks (this subroutine is not yet fully coded, as it will depend strongly on the tagging philosophy, to be clearly defined in the future). A FADO implementation is foreseen to perform this tagging decision in a more general way.

4. Get the tagging information given by each detector. (Call T4BITS)

5. Write tagged events on stream 1, all events on stream 2. (Call FZOUT)
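The per-detector flag convention above can be mimicked in a few lines (a sketch only; the helper name is invented):

```python
# Sketch of the IFu flag convention described above:
#   IFu = -1  no data (or bad data)
#   IFu =  0  nothing physically relevant seen
#   IFu =  1  relevant clusters/tracks seen
def detectors_with_signal(flags):
    """flags: dict detector name -> IFu.  Return detectors with IFu = 1."""
    return sorted(name for name, f in flags.items() if f == 1)

flags = {"TPC": 1, "EMF": 0, "HCAL": -1, "HPC": 1}
print(detectors_with_signal(flags))   # ['HPC', 'TPC']
```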

3.3 The Termination Step (T4END)

Here we:

1. Close the input/output ZEBRA files. (Call FZENDI/FZENDO)

2. Get the ZEBRA statistics on the memory occupation. (Call MZEND)

3. Print the statistics on the tagged events.

In summary:

- To run a DELPHI Level-4 Tagging process one essentially needs two PAM files, T4ALGO(n) PAM and TPCANA(n) PAM, where (n) = current version number (plus the TANAGRA PAM if one wants to run with the *XTANAGRA option), and the CERN Library (general CERN library + ZEBRA + HBOOK). At the moment we work with version 25 of 01/09/89.

- A graphic option [*GRAFICS] which permits the scanning of events is also available (see the T4GRAF description).

This option is very useful in two particular cases:
- spying on events during data taking;
- scanning events of the tagged file on a VAXstation.

4 Detector algorithms

To allow the event tagging, some (or all) of the detector modules for "1st stage" pattern recognition are called in routine QNEXT, in the order defined by the user. Each module provides as output a set of "TE-like" banks (we will often refer to them simply as "TEs" throughout). The output for detector u is stored in common /T4uRES/, which contains the total number of TEs (NTEu) and a list of TE information, made of a "standard" part and a "detector dependent" part. The structure of the TE information is (for the j-th TE):

   T4uTE(j,1:13)  - Reserved for FADO
   T4uTE(j,14)    - Charge
   T4uTE(j,15)    - R_in (of the reference point) in cm
   T4uTE(j,16)    - θ_in (rad)
   T4uTE(j,17)    - φ_in (rad)
   T4uTE(j,18)    - p (or E) in GeV
   T4uTE(j,19)    - θ (of the track element) in rad
   T4uTE(j,20)    - φ (rad)
   T4uTE(j,21)    - Free
   T4uTE(j,22)    - ΔR_in
   T4uTE(j,23)    - Δθ_in
   T4uTE(j,24)    - Δφ_in
   T4uTE(j,25)    - Δp (or ΔE)
   T4uTE(j,26)    - Δθ
   T4uTE(j,27)    - Δφ
   T4uTE(j,28:29) - Free
   T4uTE(j,30)    - Flag (= 1 if in the barrel region, = 2 if in the forward region)
   T4uTE(j,31)    - Number of detector dependent words following

A location is filled with -999. when the detector is not able to provide the corresponding information. The pattern recognition of the detectors can make use of parameters and thresholds that can be modified; these pattern recognition parameters are stored in common /T4uPAR/. The attribution of a standalone "tagging flag" to an event is in general a function of a set of tagging thresholds, which are stored in common /T4uTAG/. In the following, a short description of the detector modules is provided, with emphasis on the parameters and thresholds that can be set by the user, on the detector dependent part of the output, and on the performance tests.
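As an illustration of the TE layout above, a sketch (in Python, not the FORTRAN common) that builds one 31-word record, with -999. marking information the detector cannot provide:

```python
# Build one TE record following the 31-word layout above; any word the
# detector cannot fill stays at -999.  (Illustrative helper, not DELPHI code.)
MISSING = -999.0

def make_te(r_in=MISSING, theta_in=MISSING, phi_in=MISSING, p=MISSING,
            barrel=True, extra=()):
    te = [MISSING] * 31               # words 1..31 stored at indices 0..30
    te[14] = r_in                     # word 15: R_in of the reference point (cm)
    te[15] = theta_in                 # word 16: theta_in (rad)
    te[16] = phi_in                   # word 17: phi_in (rad)
    te[17] = p                        # word 18: p (or E) in GeV
    te[29] = 1.0 if barrel else 2.0   # word 30: barrel/forward flag
    te[30] = float(len(extra))        # word 31: n. of detector dependent words
    return te + list(extra)

te = make_te(r_in=198.0, p=12.5)
print(te[17], te[29], te[15])         # 12.5 1.0 -999.0
```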

4.1 TPC

The T4TPC algorithm is the 4th-level Tagging algorithm for the Time Projection Chamber ([TPC.1], [TPC.2]). It uses the same TPC clusterization routines as DELANA, but the pattern recognition and track fit are done by the polar inversion method ([TPC.3]).

4.1.1 The T4TPC Algorithm

T4TPC is the TPC Emulator's steering routine. It performs pattern recognition and fit by the polar inversion method. An important feature of this method is its perfect symmetry with respect to the z axis. The input parameters for T4TPC are the TPC raw data retrieved from the ZEBRA structure produced by real data taking or by DELSIM. Its general layout is given by the following subroutines:

- T4TINI -- initialization of commons: called once per run;
- TPBOOK -- initialization of HBOOK4 [if *HIST], also called once per run;
- T4READ -- read the event from the raw data ZEBRA bank;
- ETPAT1 -- calculation and calibration of the time clusters (z coordinate);
- ETPAT2 -- calculation of the pad clusters (x and y coordinates);
- ETPAT3 -- pattern recognition and track fit using Maillard's method.

The Clusterization in T4TPC. The clusterization here is done by a call to ETPAT1 and ETPAT2, as in DELANA. At the moment these routines are used in T4TPC with some simple modifications to neglect the wire corrections, by deleting the calls to ETWICL and ETPOIW, but we still foresee further changes in ETPAT1 to speed up the clusterization in two ways: by computing a simple barycentre, and by clusterizing only inside small cones defined by the other detectors processed before the TPC. Clusterization is very time consuming in T4TPC: ETPAT1 and ETPAT2 together take about the same time as ETPAT3, and in some cases twice the time of ETPAT3, for typical DELSIM qqbar events. Schematically the clusterization is done by:

- ETPAT1 -- calls the routine ETRECO, which calculates the parameters of the time clusters by calling ETFADC (which converts codes to amplitudes), ETCLAS (which searches for local peaks inside the clusters) and ETPARA (which performs a parabolic adjustment on the three amplitudes AMPR);

- ETPAT2 -- calls the routine ETPCLU, which processes the pad clusterization in one row by contiguity checks, and the routine ETPOIN, which calculates the space coordinates of the clusters.

ETPAT1 and ETPAT2 take around the same processing time per event, about 0.8 s on a VAX 750 for a DELSIM qqbar event.

The Pattern Recognition and Track Fit in T4TPC. The TPC is basically divided into two symmetric parts: the forward TPC (z > 0) and the backward TPC (z < 0). Each part is subdivided into 6 sectors of 60 degrees each, and each sector into 16 pad rows. All this suggests a simple way of localizing pad clusters in the TPC by using a two-dimensional matrix NDETCL(i,j), where 1 ≤ i ≤ 96 and j = 1, 2: j = 1 tells us that we are talking about clusters in the forward region, whereas j = 2 tells us that we are in the backward region; as for i, it completes the positioning of the clusters by giving the sector and pad row. We allow a maximum of 1000 clusters to be found in the TPC. Our recent experience with real events has shown that a maximum of 500 clusters should be enough in practice, but we still want a confirmation of this fact before making any change in the code. In a general way we have:

- ETPAT3 -- calls ETCLER: the interface between the TD structure and the space point structure used by the polar inversion algorithm. This routine fills a local common /ETCTRC/ with the cluster coordinates and angular positions. We use the matrix NDETCL as a pointer into each region of the detector (forward/backward), each sector and each pad row, as explained earlier in this section; this matrix contains the number of clusters in each sector row. We also introduce here IAD1CL and IAD2CL, the first and last addresses of the clusters.

For DELSIM we fill a vector NUTRA that contains the number of the generated track associated with the hit, in a local sense (from 1 to 200, as for the moment we allow up to 200 tracks to be reconstructed). If one wants the DELPHI number corresponding to this track, one finds that information in the vector NOM(NUTRA). We also fill a vector ITRATO(NUTRA) with the number of pad rows hit by this track, and ITRATZ(NUTRA) with the number of reconstructed track(s) for this generated track.

  -- calls ETMPAT, the polar inversion main routine, which we discuss in detail in the next item.
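The NDETCL bookkeeping can be sketched as follows (Python; the exact packing of sector and pad row into the index i is our assumption, the note only fixes the ranges):

```python
# Hypothetical encoding of the NDETCL(i, j) indices: 6 sectors of 16 pad rows
# give i in 1..96, and j distinguishes forward (z > 0) from backward (z < 0).
def cluster_index(sector, row):
    assert 1 <= sector <= 6 and 1 <= row <= 16
    return 16 * (sector - 1) + row      # i in 1..96 (assumed packing)

def side_index(z):
    return 1 if z > 0 else 2            # j = 1 forward, j = 2 backward

print(cluster_index(1, 1), cluster_index(6, 16), side_index(-12.0))  # 1 96 2
```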

The Polar Inversion Method: ETMPAT. The polar inversion method (ref. [TPC.3]), developed in view of reconstructing the TPC tracks pointing to the interaction vertex (VERMAX is the maximal distance from the vertex allowed; at present it is set to 6.5 cm), relies heavily on geometric considerations. The two beams, e+ and e-, cross along the z axis inside an interaction region V centred at the origin O. The particles produced by their interaction have helical trajectories, with the helix axis parallel to the magnetic field B. These helices projected on the x-y plane are simply circles that pass through the same point, which in a first approximation is the origin of the interaction. The principle of the reconstruction [TPC.4] is quite simple:

- we observe that all the circles which are the projections on the x-y plane of the particle trajectories pass through the same point which, in a first approximation, may be taken as the centre of the TPC;

- we make a polar inversion of centre O and power R0 of all the TPC clusters. This transforms all circles into straight lines, with slope equal to the angle at the vertex of the interaction and with distance to the origin equal to the inverse of Pt, the transverse momentum;

- we see that the algorithm must, in summary, find, interpolate and fit two straight lines: one in the x-y plane and the other in the z-φ plane.

The polar inversion algorithm's main steering routine is ETMPAT. This routine performs the pattern recognition and the track fit for each TPC event. There are two points one has to keep in mind about our code: the minimum number of clusters in a track is 3 (NLIMIT), and tracks are constrained to point to the interaction vertex. ETMPAT operates in three steps:

1. one searches for a good track candidate, first by selecting a straight line if possible (ETDROI), and then by selecting a helix; in this second case one first finds two clusters that belong to the same helix (ETSEL2) and then looks for other clusters in the other pad rows that may fit the same helix (ETSABE);

2. one confirms the track by fitting the track parameters (ETFIT) and by a second search for points on the pad rows that may still belong to that track (ETSABE);

3. one makes a final fit (ETFIT/ETFITR), stores the track found and fitted (ETCRTE) and eliminates the points belonging to this track (ITACH is then set, for each cluster belonging to the track, to the track number).

If we find at least one track with Pt > PTCUT we set the TPC flag, IFTPC = 1. For the pilot run we set PTCUT = 0.5 GeV, but we shall be more demanding later on and increase this value up to a few GeV. All this is done by calls to the following routines:

- ETPOLA -- this subroutine performs the polar inversion of all clusterized space points; after this step xclust, yclust are points in the polar inverted plane, and zclust contains the absolute value of the z coordinate. The corresponding points in the real plane are nevertheless kept in the vectors xclus0, yclus0 and zclus0. The polar inversion power used by ETPOLA is R0 = 2000. This corresponds to doing

     xclust(i) = R0 xclus0(i) / (xclus0(i)^2 + yclus0(i)^2)
     yclust(i) = R0 yclus0(i) / (xclus0(i)^2 + yclus0(i)^2)

- ETCART -- this subroutine takes care of the symmetry relative to the y-z plane in order to make the pattern recognition efficient. We make this correction also on the sector limits:

     zclust(i) = -zclust(i)
     tclust(i) = sign(π, tclust(i)) - tclust(i)

  Note: here tclust(i) is in fact the angle φ in the x-y plane!

- ETDROI -- here we take clusters in the external pad rows as what we call a "pivot" (IT1), and from this pivot cluster we try to find other clusters distributed inside a certain fork given by TETX and TETZ (parameters obtained by a dynamical calculation of errors) and at a minimal space distance DDEE from a possible straight line drawn from the pivot cluster towards the vertex. We also estimate at this point the number of rows that should have been hit by a track (NHMAN), and we calculate, for the straight lines we find, the number of rows actually hit by that track (NHMAX). After finding the straight line one compares NHMAX with NHMAN: if NHMAX ≥ NHMAN - 1, one considers this a perfect track and goes directly to ETFIT and ETCRTE, to fit the track and save the track information in a common. If NHMAX ≥ NHMAN/2, one considers this a good track candidate and performs a preliminary fit (ETFIT) before going to ETSABE to search for other clusters that may belong to the track, make a final fit and go to ETCRTE. In any other case we carry on and search for helices (ETSEL2).

- ETSEL2 -- this subroutine selects two pivot clusters creating a track candidate; in other words, it looks for two points that fit a helix pointing to the interaction vertex. The flag ICORRE is set to 0 when one enters this routine; then one calculates the vertex angle of the possible track in the x-y polar inverted plane (the same as in the x-y real plane) and uses this value in the z-φ relation to see if the track comes from the vertex region. In the positive case one sets ICORRE = 1, meaning this is a track candidate.

- ETSABE -- here one looks for other clusters that may belong to the track candidate found by ETDROI or by ETSEL2. We calculate for this track the number of hit rows (NH) and the number of rows that should be hit (ITROU). We select the clusters by requiring them to lie as close as possible to the track candidate, using for that a fork on z, a fork on φ, a fork on the distance in the polar inverted plane, a fork on the distance in the z-φ plane and a fork on the distance in real space (x, y, z). One has basically 3 loops:

1. Loop on the pad rows: for each pad row one calculates the intersection of the row with the track. If there is no intersection one sets the flag ICOISE = 3 (if there is some way to recover a point) or 10 (if there is no way to recover a point, because we are already out of the bounds of the TPC). If the intersection exists but the point lies at the border of the TPC (meaning either its z coordinate is, taking into account the errors, around the length of the TPC, or the R for this point is around the radius of the first row of the TPC), we set ICOISE = 2.

2. Loop on the sectors: one looks whether the φ of the intersecting cluster lies, taking into account errors, between two sectors, in which case one sets ICOISE = 1.

3. Loop on the clusters of a row: the point is rejected if the pattern recognition has found it before (ITACH ≠ 0). After looking in each projection, x-y and z-φ, within a fork of error, we take the point of this row nearest in space to the track.

If ICOISE = 0 the point is a "good-normal" point; otherwise it is not. So we must distinguish between all the ICOISE cases by attributing a different weight to each case; this is precisely the role of GAMT. Also, the forks used depend on whether it is the first (SA(1)) or the second (SA(2)) time one calls ETSABE.

- ETFIT -- it takes the points (xclust, yclust, zclust) that were found to belong to a track, via the NBOBO array, and makes a linear fit in two planes: x-y and z-φ. It also calculates the track parameters, but at this point one does not impose the vertex constraint (this could be done here exactly as in ETSEL2).

- ETCRTE -- in this routine one calculates the track parameters in the usual DELANA frame: one can find, for example, px, py, pz for the track at the entry of the TPC or, if one puts delta = 0, the same quantities at the interaction vertex. We fill the T4TRES common with all the interesting output information for every track found in each event, following the T4ALGO output format specifications.

A number of histograms and debug printings are available at debug level 4 (IDEBLV = 4). This may be helpful if one wants to follow all the steps of the pattern recognition for a given event.
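The geometric fact the method rests on, namely that the inversion performed by ETPOLA maps a circle through the origin onto a straight line, is easy to check numerically (a standalone sketch, not DELPHI code):

```python
import math

# Polar inversion as written above: x' = R0*x/(x^2+y^2), y' = R0*y/(x^2+y^2).
R0 = 2000.0

def invert(x, y):
    r2 = x * x + y * y
    return R0 * x / r2, R0 * y / r2

# Sample a circle of radius 50 cm through the origin, centred at (50, 0):
# such a circle satisfies x^2 + y^2 = 100*x, so every inverted point has
# x' = R0/100 = 20, i.e. the image is the vertical line x' = 20.
points = [(50 + 50 * math.cos(t), 50 * math.sin(t)) for t in (0.3, 0.9, 1.7, 2.4)]
inverted = [invert(x, y) for x, y in points]

xs = [p[0] for p in inverted]
print(max(xs) - min(xs) < 1e-9)   # True: the inverted points are collinear
```

The slope and intercept of the image line then carry the track's direction at the vertex and 1/Pt, which is exactly what ETDROI and ETFIT extract.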

4.1.2 Some performance tests

T4TPC is a quite fast and powerful algorithm. We obtained very good results during the pilot run, both for its efficiency in tagging real events (it tagged all the Z0 candidates seen in DELPHI, including the Bhabha event of run 386, providing a background rejection of 99%) and for the time it takes to process events (in run 385, TPC+OD together took 0.380 s per event on a VAXstation 3100). A timing and efficiency study was done by P. Eerola on simulated events, giving the following results for the TPC (on a VAX 3100):

   event type | time spent (s) | efficiency of tagging
   -----------+----------------+----------------------
   e+e-       |     0.201      |        86.2%
   μ+μ-       |     0.218      |        83.8%
   τ+τ-       |     0.641      |        83.8%
   qqbar      |     2.464      |        89.5%
   bhabha     |     0.041      |         4.8%

The efficiency in track finding and momentum reconstruction is satisfactory.

4.2 Outer Detector

4.2.1 Introduction

The presence of track candidates in the tracking chambers of DELPHI is one of the handles available in deciding on a fast selection of interesting events. To this end, a program has been

written to decide if any track candidates exist in the Outer Detector. The idea is that this code forms part of the program running in the emulators to select events for the hot-line connection to DELICE. The code used in this program for the Outer Detector is, in fact, a subset of the full track reconstruction running in DELANA. The intention is to find collections of hits forming patterns which are likely to have arisen from a track, but not to do the full track reconstruction through to the point where a fit is performed. Since the program is intended to run in an environment where the database (and other DELPHI software) is not installed, only the raw measurements are used and no pedestal subtractions are performed. In other words, the only information used is the presence or absence of a hit in a given cell. Drift distances are not calculated.

4.2.2 Information provided by the Outer Detector

The steering program calls SUBROUTINE T4OD(NREQ,IFLAG) once for each event. NREQ is an input argument which is used to indicate the number of tracks required before the event is flagged good. This is done in the following way:

e If NREQ is negative all tracks in the OD are reconstructed. The flag IFLAG is set to indicate a good event if 3 or more tracks have been found.

e If NREQ is positive, the flag is set once NREQ tracks have been found. At this point further track finding in the OD is halted and the program returns to the calling routine. This is useful if it is necessary to speed up the program.

IFLAG is set equal to 1 if the event is flagged good in the OD; otherwise it is zero. If an error occurred during the processing of an event, this is indicated by a negative value of IFLAG, whose value indicates the fault which occurred. These can be trivial (error 813 means simply that there were no hits in the OD) or serious (error 812 means that the raw data buffer was inconsistent in some way). Further information on the reconstructed tracks can be found in the COMMON /T4ORES/ NTEO, T4OTE(20,100), which contains the following information for the NTEO reconstructed tracks:

- T4OTE(ITRK, 15) = Radial distance to the reconstructed track.

- T4OTE(ITRK, 16) = Theta of the reconstructed track.
- T4OTE(ITRK, 17) = Phi of the reconstructed track.

- T4OTE(ITRK, 22) = Error on the radius.
- T4OTE(ITRK, 23) = Error on Theta.
- T4OTE(ITRK, 24) = Error on Phi.
- T4OTE(ITRK, 30) = 1. to indicate that the track is in the barrel region.

All angles are given in the range 0 to TWOPI and all distances in centimetres. Unused locations of the T4OTE array are filled with -999999. Since the program is intended to be fast, no information on the track direction in the OD is given, simply the position of the start of the track in the OD. The DELPHI coordinate frame is used throughout. Additionally, the COMMON /EOSREG/ contains, amongst other things, three arrays XTRKS(IRMAX), YTRKS(IRMAX) and ZTRKS(IRMAX), which contain the APPROXIMATE x, y and z coordinates of each of the track candidates in array locations 1 to NTRKS. The coordinates are approximate for two reasons: firstly, no track fit has been performed and, secondly, since the geometry file containing the exact location of the OD is not present, the detector is assumed to be in its ideal position.
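The NREQ logic described in this section amounts to a simple counting loop with an early exit; a sketch (Python, not the actual FORTRAN, with the track finding itself mocked as a list of booleans):

```python
# Sketch of the T4OD flag logic: NREQ < 0 reconstructs all tracks and flags
# the event good if 3 or more are found; NREQ > 0 stops as soon as NREQ
# tracks have been found (faster).
def t4od_flag(track_candidates, nreq):
    """track_candidates: iterable of booleans (one per candidate pattern)."""
    if nreq < 0:
        return 1 if sum(1 for ok in track_candidates if ok) >= 3 else 0
    found = 0
    for ok in track_candidates:
        if ok:
            found += 1
            if found >= nreq:
                return 1          # early exit: halt further track finding
    return 0

regions = [True, False, True, True, False]
print(t4od_flag(regions, -1), t4od_flag(regions, 2))   # 1 1
```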

4.2.3 Brief description of the method used

The program loops around the OD and defines regions of hits when it locates a hit. The region is left open and further hits are assigned to it until no more adjacent hits are found; the region is then closed, and a new region is opened when more hits are found. Once a region has been defined, the pattern of hits in it is inspected. If the pattern is consistent with the passage of a track, it is accepted as a possible track candidate and its approximate position is calculated.
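The region-building step described above can be sketched in a few lines (Python; cell numbering is simplified to a single index, whereas the real OD is segmented in modules and layers):

```python
# Open a region at the first hit, extend it while the hits stay adjacent,
# close it at the first gap and open a new region at the next hit.
def hit_regions(hit_cells):
    """hit_cells: sorted cell numbers containing a hit -> [(first, last), ...]"""
    regions = []
    for cell in hit_cells:
        if regions and cell == regions[-1][1] + 1:
            regions[-1] = (regions[-1][0], cell)   # adjacent: extend region
        else:
            regions.append((cell, cell))           # gap: open a new region
    return regions

print(hit_regions([3, 4, 5, 9, 10, 14]))   # [(3, 5), (9, 10), (14, 14)]
```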

4.2.4 Summary

In summary, a small program has been written to give a rapid count of the number of track candidates in the Outer Detector. This code represents the first step used in the full off-line reconstruction and as such does not perform a full track fit, but gives the approximate track simply by looking at patterns of hits in the OD. The steering program can define the number of tracks to be searched for, after which further reconstruction of an event is halted, or it can request all OD tracks to be reconstructed.

4.3 HPC

The task of this program is to perform a fast and simple scan of the HPC raw data in order to tag "interesting" events and to offer information useful for correlation with other detectors (e.g. with TPC tracks for electron identification). Given the high granularity of the HPC (255 samplings x 128 pads x 144 modules), the size of the raw data is such that most of the CPU time is spent just reading and decoding. As a first approach to condensing this information, we have decided to evaluate the energy deposited in each module. Modules with signal can then be clustered to recover the energy spread over a large area of the detector. A study of the energy deposited in HPC modules in a typical qq event shows that the granularity chosen is adequate for studying the large scale structure of the jets. We are also working on an upgraded version ([HPC.1]) with much higher spatial resolution, to be used for more sophisticated analysis.

4.3.1 General structure of the program

The program consists of:

a) initialization routines T4HPRI, T4HTGI, T4HGEO, filling the commons T4HPAR, T4HTAG, GEOHPC, which contain the internal parameters, the tagging thresholds and the HPC geometry information;

b) subroutine T4HPCL: it reads and decodes the raw data according to the HPC data format (see ref. [HPC.2]). For each module, the total charge deposited and the number of charge strings with maximum above a threshold are computed;

c) subroutine T4HAN: it selects modules with signal as those with at least 600 MeV and 5 charge strings, in order to reject noise or signal coming from the natural radioactivity of the lead. Starting from modules with at least 1.8 GeV, it builds clusters with the 8 first neighbours and computes for each cluster the energy, θ, φ and the errors on these quantities. It is possible to deactivate this procedure, in which case each module above 1.8 GeV is considered a cluster;

d) subroutine T4HTG: it tags an event as "good" if i) there is at least one cluster with more than 5 GeV, or ii) there are at least 2 clusters with more than 3 GeV each and with an acollinearity of less than 35°.

The numerical values reported above are the default ones. The corresponding variables are:

Variables in the common T4HPAR

HPCPR6   Average calibration factor
HPCPR7   Threshold for significant energy inside a module (default = 0.6 GeV)
HPCPR8   Threshold for the energy of the cluster pivot module (default = 1.8 GeV)
HPCPR9   Flag for activating the clustering procedure (default = 0., meaning clustering on)

Variables in the common T4HTAG

HPCTH1   Threshold for tagging in the case of at least one cluster (default = 5. GeV)
HPCTH2   Threshold for tagging in the case of at least two clusters (default = 3. GeV)
HPCTH3   Maximum allowed acollinearity in the case of two clusters (default = 35. degrees)

The input and output of the program are respectively:

INPUT : HPC raw data

OUTPUT : IFLAG status word. If = 1, the event has been tagged as "good". If = 0, the event has not been tagged. If = -1, a format error was found in the data structure (or there were no data).
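As an illustration of the selection and clustering steps of T4HAN described above, the following Python sketch mimics the logic (the data layout, the module indexing and all function names are our assumptions; the original code is FORTRAN):

```python
# Illustrative sketch of the T4HAN logic, not the FORTRAN code itself.
HPCPR7 = 0.6   # GeV, significant-energy threshold per module
HPCPR8 = 1.8   # GeV, threshold for a cluster pivot module
MINSTR = 5     # minimum number of charge strings

def select_modules(modules):
    """Keep modules with enough energy and charge strings (noise rejection)."""
    return {rc: m for rc, m in modules.items()
            if m["e"] >= HPCPR7 and m["nstrings"] >= MINSTR}

def build_clusters(selected):
    """Seed clusters from pivot modules and attach the 8 first neighbours."""
    clusters, used = [], set()
    # most energetic pivots first
    for (r, c), m in sorted(selected.items(), key=lambda kv: -kv[1]["e"]):
        if m["e"] < HPCPR8 or (r, c) in used:
            continue
        members = [(r, c)]
        used.add((r, c))
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                n = (r + dr, c + dc)
                if n != (r, c) and n in selected and n not in used:
                    members.append(n)
                    used.add(n)
        clusters.append({"e": sum(selected[n]["e"] for n in members),
                         "modules": members})
    return clusters
```

With clustering deactivated (HPCPR9 ≠ 0.), each selected module above HPCPR8 would simply become its own cluster.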

4.3.2 Test of performance

The program has been tested on MC and real events. The timing in qq events was found to be ~ 60 msec on a VAX 8600. The energy resolution of the clusters can be parametrized as ΔE/E ≈ 30%/√E (E in GeV). This result is in agreement with beam data analysed in a similar way ([HPC.3]). We also analysed data taken during the LEP pilot run: the program tagged ~ 1% of the data taken (~ 8000 events). The efficiency of the tagging turned out to be close to 80%, i.e. the HPC geometrical acceptance. We verified with a graphical scanning that these events indeed corresponded to e.m. showers, coming mainly from cosmic ray interactions.
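Numerically, the parametrization above gives about 10% relative resolution for a 9 GeV cluster (a trivial sketch; the function name is ours):

```python
import math

def hpc_rel_resolution(e_gev):
    """Relative HPC energy resolution: dE/E = 30% / sqrt(E), E in GeV."""
    return 0.30 / math.sqrt(e_gev)

# hpc_rel_resolution(9.0) is approximately 0.10, i.e. 10 %
```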

4.4 EMF

The aim of the module is to find the clusters in the EMF and, for each cluster, to reconstruct the total energy and the position of the particle originating the shower. The cluster search/fit is divided into two stages. Although both have been tested, only the first one is at present implemented in the program.

1. The first stage retrieves in subroutine T4ERET the RAW data from the ZEBRA structure. After that, an array of cells corresponding to the EMF counters is scanned (subroutine T4ECLU). All the counters above a threshold of ITHREL MeV (set in common /T4EPAR/, default = 30 MeV) are grouped into subsets of contiguous counters. Finally, the clusters with at least one counter above a threshold of ITHREH MeV (set in common /T4EPAR/, default = 300 MeV) are considered as "physical clusters". For each of them, the total energy and the centre of gravity are computed. For the calculation of the centre of gravity, the central positions of the front faces of the counters are projected at |z| = 288.5 cm and approximated by a rectangular array. A counter is considered noisy if:

• it belongs to a cluster of energy > 70 GeV, or

• it carries more than a fraction RATC (default = 0.96) of the energy of its cluster.

The total length of the code (including data retrieval) is ~ 450 FORTRAN lines.

2. The second stage searches for a structure (i.e. for the presence of local maxima) inside the clusters. If the evidence for such a structure is found to be statistically significant, the energy is shared among subclusters. Finally, the positions of the centres of gravity are corrected by means of an algorithm for bias reduction (see ref.[EMF.1]): the centre of gravity is in fact a biased estimator of the position of the particle originating the shower. The total length of the code is ~ 250 FORTRAN lines.
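The two-threshold cluster search of the first stage can be sketched as follows (a Python illustration, not the FORTRAN T4ECLU code; the counter addressing and the neighbourhood definition are assumptions of the example):

```python
# Illustrative sketch of the two-threshold EMF cluster search.
ITHREL = 30    # MeV: a counter above this enters a cluster
ITHREH = 300   # MeV: a cluster needs one counter above this

def find_clusters(counters):
    """counters: {(ix, iy): energy_MeV}. Return 'physical' clusters."""
    hot = {k for k, e in counters.items() if e > ITHREL}
    clusters, seen = [], set()
    for seed in hot:
        if seed in seen:
            continue
        stack, members = [seed], []
        seen.add(seed)
        while stack:                      # flood-fill contiguous counters
            ix, iy = stack.pop()
            members.append((ix, iy))
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    n = (ix + dx, iy + dy)
                    if n in hot and n not in seen:
                        seen.add(n)
                        stack.append(n)
        if max(counters[m] for m in members) > ITHREH:
            etot = sum(counters[m] for m in members)
            # energy-weighted centre of gravity, in counter indices
            cog = (sum(m[0] * counters[m] for m in members) / etot,
                   sum(m[1] * counters[m] for m in members) / etot)
            clusters.append({"e": etot, "cog": cog})
    return clusters
```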


4.4.1 Input/Output

The input and output of the program are respectively:

INPUT :

• EMF RAW data (see above).

OUTPUT:

IFLAG Status word. If = 1, the event is good and at least one cluster in the EMF has an energy greater than a threshold of ENETHR MeV (set in common /T4ETAG/, default = 3000. MeV). If = 0, the event is good but no cluster in the EMF has an energy greater than ENETHR MeV. If = -1, bad input data or algorithm failure (or no input data at all). Algorithm failure is signalled by a diagnostic.

ITOTEN Total energy deposited in the EMF (MeV).

NCLDEF Number of clusters found (also in common /T4ERES/ as NTEE).

T4ETE Standard output information. The quantities Δθj and Δφj represent the size of the cluster; the definition of the "error" is thus very conservative. The estimated errors in the hypothesis that the shower comes from a single particle are given in the detector dependent information. In addition to the standard output, for each cluster j the following are provided:

T4ETE(j,31) Number of detector dependent items following (= 6)
T4ETE(j,32) x coordinate of the centre of gravity (mm)
T4ETE(j,33) y coordinate of the centre of gravity (mm)
T4ETE(j,34) z coordinate of the centre of gravity (mm)
T4ETE(j,35) Δx (cm) calculated in the hypothesis that the shower comes from a single particle (see ref.[EMF.2])
T4ETE(j,36) Δy (cm) calculated in the hypothesis that the shower comes from a single particle
T4ETE(j,37) Energy of the pivot counter in the cluster (GeV)

4.4.2 User access to thresholds and parameters

The user can access the thresholds through the commons /T4EPAR/ and /T4ETAG/. Their contents are respectively:

T4EPAR :

ITHREH High threshold for the definition of a cluster (see above). ITHREL Low threshold for the definition of a cluster (see above).

IEPEDE Offset for the energy content of the counters. Set by default to 1000 MeV, for reasons of compatibility with the simulation.

T4ETAG :

ENETHR Tagging threshold (MeV) on the total energy of the most energetic cluster (see above).

4.4.3 Performance

The first stage has been tested on ~ 1000 Bhabha events, ~ 200 simulated qqbar events, and 200 single photons for each of a set of energies of 1, 2, 5 and 10 GeV, output by DELSIM32. In addition, tests have been done with the simulation files in the directory [EVENTS.CAL], in order to allow comparisons with DELANA (see ref.[EMF.3]). In the run, the code has been tested on around 10000 events. The average timing is ~ 40 ms/event on a VAX 8600 for qqbar events, and ~ 20 ms/event for single particle events. The second stage has been tested on ~ 100 simulated single photons (average timing ~ 20 ms/event on a VAX 8600). Some results of the output quality check of the first stage are presented in the following.

a. Position resolution

The error on the reconstructed position, expressed in cm, can be parametrized as

Δx ≈ (1.0/√E)_stat ⊕ 0.8_sys

(the systematics come from the bias of the algorithm and from approximations in the geometry of the detector).

b. Energy resolution

The error on the reconstructed energy, for a "true" incident energy E expressed in GeV, can be parametrized as

ΔE/E ≈ (8%/√E)_stat ⊕ sys

(the systematics come from the uncertainty and fluctuations of the calibration constants and from energy/position correlations). The quality of the reconstruction of "clean" 3 GeV photons from DELSIM is comparable with that of DELANA at the same energy.

4.5 Hadron Calorimeter

The Hadron Calorimeter (HCAL) fourth level tagging algorithm reconstructs clusters of energy depositions. The algorithm processes the raw data word by word. Clusters are formed from active towers which have a common side or a common corner (in 3D). Towers which have the same θ and φ angle but one or more supertowers between them are also joined into a cluster. Muon candidates are those clusters which have an energy deposition both in the 3rd and 4th

supertower layers. An event is tagged by the HCAL if the total energy is above a threshold (HADTH1) or if there is at least one muon candidate.

The HCAL steering routine T4HCAL calls the initialization routine T4CINI for the first event of the run. For each event, the length of the data structure is checked and, if it is OK, the steering routine for the data processing, T4CPRO, is called. Finally, T4HCAL calls the output routine T4CHF, which prints out a summary of the event and fills histograms, if needed. The steering routine for the data processing, T4CPRO, first calls a routine which checks the format of the data (T4CBEG). The raw data are decoded in the routine T4CDEC and the clusters are formed in T4CCLU. The output common and the event flag are filled in the routine T4CFIL.
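The cluster-formation rule described above (common side or corner in 3D, plus joining towers with the same θ and φ across supertower layers) can be sketched in Python (the real code is the FORTRAN routine T4CCLU; the tower addressing (itheta, iphi, ilayer) and all names are assumptions of this example):

```python
# Illustrative sketch of the HCAL cluster-formation rule.
def same_cluster(a, b):
    """True if towers a and b = (itheta, iphi, ilayer) are linked."""
    dth, dph, dly = (abs(a[i] - b[i]) for i in range(3))
    if max(dth, dph, dly) == 1:          # common side or corner in 3D
        return True
    return dth == 0 and dph == 0         # same theta/phi, any depth gap

def form_clusters(towers):
    """Group active towers into clusters using the rule above."""
    clusters = []
    for t in towers:
        linked = [c for c in clusters if any(same_cluster(t, m) for m in c)]
        merged = [t] + [m for c in linked for m in c]
        clusters = [c for c in clusters if c not in linked] + [merged]
    return clusters

def is_muon_candidate(cluster):
    """Muon candidate: deposition in both the 3rd and 4th layer."""
    layers = {t[2] for t in cluster}
    return 3 in layers and 4 in layers
```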

4.5.1 Input/Output

Input:

• Hadron Calorimeter Raw Data

Output:

• Hadron Calorimeter Event Flag IT4HAC

  — IT4HAC = -1 : corrupted data structure
  — IT4HAC = 0 : event not tagged
  — IT4HAC = 1 : tagged event

• Hadron Calorimeter T4 Output Common /T4CRES/

  — NTEC Number of clusters
  — T4CTE(I,1-30) Standard output giving the energy and direction of the cluster
  — T4CTE(I,31) Energy deposition in the first supertower layer
  — T4CTE(I,32) Energy deposition in the second supertower layer
  — T4CTE(I,33) Energy deposition in the third supertower layer
  — T4CTE(I,34) Energy deposition in the fourth supertower layer
  — T4CTE(I,35) Number of active towers in the first supertower layer
  — T4CTE(I,36) Number of active towers in the second supertower layer
  — T4CTE(I,37) Number of active towers in the third supertower layer
  — T4CTE(I,38) Number of active towers in the fourth supertower layer
  — T4CTE(I,39) θ of a muon on exit from the HCAL
  — T4CTE(I,40) φ of a muon on exit from the HCAL

4.5.2 User access to thresholds and parameters

The internal parameters needed in the HCAL algorithm are defined in the common /T4CPAR/. This common includes the following parameters:

• MUCLIB Calibration constant for muons, in units of GeV/ADC.

• HACLIB Calibration constant for hadrons, in units of GeV/ADC.

• IOFF Offset of the Octopus cards in the HCAL FASTBUS crate.

• ITHREJ(500), IPHREJ(500), ISLREJ(500) θ, φ and supertower number (internal numbering) of the noise channels which are to be rejected.

• NREJ Number of noise channels.

• IREJ(500) Coded addresses of the noise channels.

The tagging threshold is set in the common /T4CTAG/:

• HADTH1 Total energy threshold (GeV) for the HCAL.

4.5.3 Performance

The performance of the algorithm was tested with simulated events (DELSIM33). The calibration parameters were MUCLIB = 0.06 GeV/ADC count and HACLIB = 0.20 GeV/ADC count, and the total energy threshold was HADTH1 = 6 GeV. The calibration parameters were obtained from the North Area beam test in 1988 ([HAC.1], [HAC.2]). For qq events the timing was 20 ms per event on VXDEL1 (VAX 6200). The corresponding time in the emulators is about 75 %, i.e. 15 ms. The average number of active towers was 21. The average total energy per event was 21.3 GeV. The average energy per cluster was 3.0 GeV; the cluster energies peak at zero, but the distribution has a long tail. The average number of clusters was 7.7. 31 % of the events had one muon candidate, and 8 % of the events were identified with two muons. The overall efficiency for tagging the qq events was 94 %. The timing for μ+μ− events was 10 ms on VXDEL1, and 95 % of the events were tagged. τ-pairs were tagged with an efficiency of 74 %; the timing was the same as for muon pairs.

4.6 SAT

The aim of the SAT tagging algorithm, as it is now, is solely to tag Bhabha candidates. This is done by comparing the energy deposits in back-to-back azimuthal sectors. As in the SAT trigger, the SAT has been divided into 24 overlapping 30-degree sectors. Large single arm deposits are also tagged. The module consists of the following routines:

T4SAT The local steering routine. Here the SAT raw data are scanned. If the data come from the calorimeter, control is given to T4SDEC.

T4SDEC The SAT calorimeter data are decoded, the pedestals, IPED, are subtracted, and the result is stored in the array IADC(IEL,IARM). It is assumed that the pedestals are of approximately equal size for all channels.

T4SSEL Here the selection of events is done. First, the energy is summed in each arm inside 24 overlapping 30-degree azimuthal sectors. Only channels with a signal above twice the assumed noise level, NOISE, contribute to the sum. An event is denoted a "double arm Bhabha" if there are energy deposits above the threshold value, SATTH1, in both arms in a back-to-back combination of double sectors. If this is not the case, single arm deposits are checked against the threshold value, SATTH2. Both "double arm Bhabhas" and single arm deposits are tagged as interesting events.
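The sector logic of T4SSEL described above can be sketched as follows (a Python illustration; the per-channel input format, the energy units and all names are assumptions of the example; the real code works on ADC counts):

```python
# Illustrative sketch of the SAT double-arm / single-arm selection.
NSEC = 24          # overlapping 30-degree sectors, spaced by 15 degrees
SATTH1 = 23.0      # GeV, "double arm Bhabha" threshold
SATTH2 = 32.0      # GeV, single arm threshold
NOISE = 1.0        # assumed noise level (GeV-equivalent in this sketch)

def sector_sums(deposits):
    """deposits: list of (phi_deg, energy) in one arm. Sum the energy in
    the 24 overlapping sectors; only channels above twice the noise
    level contribute."""
    sums = [0.0] * NSEC
    for phi, e in deposits:
        if e <= 2.0 * NOISE:
            continue
        for isec in range(NSEC):
            if (phi - 15.0 * isec) % 360.0 < 30.0:
                sums[isec] += e
    return sums

def tag_sat(arm_a, arm_b):
    """Return 'double', 'single' or None for one event."""
    sa, sb = sector_sums(arm_a), sector_sums(arm_b)
    for i in range(NSEC):
        j = (i + NSEC // 2) % NSEC       # back-to-back sector
        if sa[i] > SATTH1 and sb[j] > SATTH1:
            return "double"
    if max(sa) > SATTH2 or max(sb) > SATTH2:
        return "single"
    return None
```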

4.6.1 Adjustable thresholds and variables

The principal way of tuning T4SAT is by adjusting the two thresholds in the common /T4STAG/:

SATTH1: The threshold energy (in GeV) for a “double arm Bhabha”. Default value: 23 GeV. Should not be set higher than the DELANA luminosity module (EXLUMI) cut: 0.75 times the beam energy. Rather, if the rate should turn out to be too high, it could be scaled down at the steering level.

SATTH2: The threshold energy (in GeV) for single arm deposits. Default value: 32 GeV.

In addition, the variables in the common /T4SVAR/ should correspond to the quality of the data coming from the on-line system:

IPED: Pedestals in ADC counts.

NOISE: Noise in ADC counts.

Note: Both of these values are assumed to be equal for all channels.

4.7 Forward Chambers (A and B)

The code for the Forward Chambers is at present being rewritten.

5 Global performance and timing tests

During the October/November runs we processed ~ 5·10^4 events from the pit, to check the robustness and the efficiency of the code. The timing for filtering (i.e. for flagging an event as good against background, which is different from a complete reconstruction) is at present of the order of 0.7 s/event on a VaxStation 3100.

6 FADO implementation

The steering of the T4 tagging system will in the near future be performed by a Fado program ([FAD.1], [FAD.2]). FADO 2.0 is a high level language that provides a simple and concise way to define physics criteria for event tagging. Its syntax is based on mathematical logic and set theory, as this was found to be the most appropriate framework to describe the properties of single HEP events. The language is one of the components of the FADO tagging system. The system also implicitly implements a mechanism to selectively reconstruct the event data that are needed to fulfil the physics criteria, following the speed requirements of the on-line data acquisition system.

A complete programming environment is now under development, which will include a syntax directed editor, an incremental compiler, a debugger and a configurer. This last tool can be used to transport the system into the context of other HEP applications, namely off-line event selection and filtering.

A Fado program is mainly organized in Reaction Blocks, each one specifying in a concise way the tagging conditions that must be fulfilled by the events which are candidates for a given reaction. These tagging conditions are executed on lists of elementary objects (the tracks or clusters reconstructed by the detectors - e.g. TPC tracks), of selected objects (objects obeying some special properties - e.g. TPC tracks with PT > 5 GeV/c), or of composite objects (constructed from associations between detector tracks or clusters - e.g. fitted tracks between TPC and OD).

The Fado editor (based on VAX LSE) is used to create a Fado program, which is subsequently compiled to produce a numerical vector loaded into a special common of the Fado steering program. The Fado steering program (resident on the emulators) drives the calls to the T4 reconstruction algorithms and performs the tagging logic. For each tagged reaction a bit is set in the T4 output, which is added to the raw data. At the end of the run a block of statistics is provided.
The complete chain is now running in parallel with the first T4 simplified steering and will be fully installed in a near future.
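To make the object-list semantics concrete, here is a toy example in plain Python (not FADO syntax; the object fields, the matching criterion and the tagging condition are all invented for illustration):

```python
# Toy illustration of elementary, selected and composite object lists.
tpc_tracks = [{"pt": 7.2}, {"pt": 1.1}, {"pt": 5.6}]   # elementary objects
od_hits    = [{"pt": 7.0}, {"pt": 0.9}]

# selected objects: TPC tracks with PT > 5 GeV/c
hard_tracks = [t for t in tpc_tracks if t["pt"] > 5.0]

# composite objects: TPC-OD associations (matched within 0.5 GeV/c here)
matches = [(t, h) for t in hard_tracks for h in od_hits
           if abs(t["pt"] - h["pt"]) < 0.5]

# a reaction block fires when its condition on the lists holds
tagged = len(hard_tracks) >= 2 and len(matches) >= 1
```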

7 Graphic interface

The tagging program is coupled to a GKS-3D based graphic interface, optionally called in routine QNEXT after the pattern recognition detector modules. The graphic package allows:

• To display the "TE's" of the detectors for which the fast pattern recognition has been executed, and the clusters found in the TPC.

• To retrieve the information on the reconstructed elements in the detectors, simply by picking them.

The graphic interface makes use of the WIG library (ref.[GRA.1]) for windows management, and is thus largely device independent. After the event visualization, the user can decide to change the event flag from inside the graphic mode.

7.1 Graphic representation of elements

The various elements of the pattern recognition for 4th level trigger are represented in the work area as follows:

• TPC track elements are polylines joining the clusters (white points) from which they are fitted.

• Electromagnetic calorimeter TE's are represented as ellipses.

• Hadron calorimeter TE's are represented as rectangles.

• OD, FCA and FCB data are represented as white crosses.

In the cases in which a detector provides information on the energy/momentum of the particle, its TE's are coloured according to a colour/energy conversion map displayed in the main window. The intervals of this map, as well as the colours, can be set by the user. The graphic interface makes use of two external definition files:

• an interface modals file (T4GRAF.DEF) defining the settings of the workstation (colours, window characteristics and dimensions, representations of the elements, ...);

• a menu definition file (MENU.DEF) defining the menus.

One can find examples of how to interact with these files in order to change the default settings in ref.[GRA.2].

References

EMU.1 Workshop on Delphi 4th level trigger (SACLAY 26-27 Nov. 1987).

TPC.1 DELPHI Technical Proposal, DELPHI 83-66/1, 17/05/83.

TPC.2 D. Delikaris, Thèse, Orsay 1986.

TPC.3 M.Crozon and J.Maillard, A fast algorithm to reconstruct tracks in TPC, DELPHI 83/58 PROG, 20 April 1983.

TPC.4 J.Maillard, Algorithme rapide de reconstruction de trace pour la TPC de DELPHI, LPC 84-21, 1984.

HPC.1 P.Privitera and M. Zito, in preparation.

HPC.2 H. Burmeister et al., The HPC data format, DELPHI 88-38 DAS 81 PROG 109.

HPC.3 R. Contri, private communication.

EMF.1 G.A. Akopdjanov et al., Nucl. Instr. and Meth. 140 (1977) 441.

EMF.2 A. De Angelis and F. Mazzone, to be published in NIM A.

EMF.3 P. Checchia et al., Check of calorimeters offline programs, DELPHI 89-17 PROG 132.

HAC.1 H. Herr et al., The results of the HFM beam test (π+, e+ runs), DELPHI 89-56 CAL 69, 1989.

HAC.2 E. Veitch et al., Muon identification efficiencies from the HFM experiment, DELPHI 89-57 PHYS 48, 1989.

FAD.1 R.Bernard et al., FADO Emulators Tagging System, DELPHI 88-55 DAS 87, 1988.

FAD.2 C.Werner et al., FADO 2.0: A high Level Tagging language, Computing in High Energy Physics conference, Oxford 1989.

GRA.1 M. Innocente, WIG, Udine Report 89/01/AA.

GRA.2 A. De Angelis and M. Innocente, EMFGRA (graphic interface for EMF), DELPHI 88-75 PROG-118.
