Jets + Missing Energy Signatures at the Large Hadron Collider


Jets + Missing Energy Signatures at the Large Hadron Collider

DISSERTATION

Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University

By Khalida S. Hendricks, M.S.
Graduate Program in Physics
The Ohio State University
2019

Dissertation Committee:
Linda Carpenter, Advisor
Amy Connolly
Annika Peter
Antonio Boveia

© Copyright by Khalida S. Hendricks 2019

Abstract

In this work we consider new ways to use jets plus missing energy signatures in searches at the Large Hadron Collider.

We study the Higgs boson (h) decay to two light jets at the 14 TeV High-Luminosity LHC (HL-LHC), where a light jet (j) represents any non-flavor-tagged jet from the observational point of view. We estimate the achievable bounds on the decay branching fractions through the associated production V h (V = W±, Z). As a reasonable estimate, we focus only on the boosted region of high pT(h) and the three leptonic decay channels of the vector boson. We find that with 3000 fb−1 of data at the HL-LHC, we should expect approximately 1σ statistical significance on the SM V h(gg) signal in this channel. This corresponds to a reachable upper bound BR(h → jj) ≤ 4 × BR_SM(h → gg) at 95% confidence level. A consistency fit also leads to an upper bound BR(h → cc̄) < 15 × BR_SM(h → cc̄) at 95% confidence level. The estimated bounds may be further strengthened by adopting multivariate analyses or by adding other production channels.

We then consider some simple machine learning techniques applied to the same channels. We use both a fully connected neural network (FCN) and a convolutional neural network (CNN) on a dataset statistically identical to the one used for the cuts-based analysis of the Higgs decay to light jets.
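As a rough, self-contained illustration of the counting exercise behind branching-fraction estimates of this kind (the yields below are placeholders, not the thesis's actual event counts): a Gaussian counting significance Z = S/√B grows like √L with integrated luminosity, and a significance Z for a SM-rate signal translates into an upper bound of roughly 1.96/Z on the signal strength μ = BR/BR_SM at 95% confidence level.

```python
import math

def significance(s, b):
    """Gaussian counting significance, Z = S / sqrt(B)."""
    return s / math.sqrt(b)

def scale_significance(z, lumi_ratio):
    """S and B both grow linearly with luminosity, so Z grows like sqrt(L)."""
    return z * math.sqrt(lumi_ratio)

def upper_limit_mu(z, z_cl=1.96):
    """Approximate 95% CL upper bound on the signal strength mu = BR / BR_SM,
    treating 1/Z as the Gaussian uncertainty on mu (statistics only)."""
    return z_cl / z

# Placeholder yields: 100 signal events on a 10,000-event background
z_now = significance(100.0, 10000.0)            # Z = 1.0
z_hl = scale_significance(z_now, 3000 / 300)    # ~3.2 at 10x the luminosity
mu_95 = upper_limit_mu(z_now)                   # bound of ~2x the SM rate
```

This statistics-only sketch shows the parametric scaling; the bound of 4 × BR_SM(h → gg) quoted above comes from the full analysis, which includes background shapes and systematic effects.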
We found that both networks improved upon the cuts-based results in two of the three signal channels and roughly matched the cuts-based analysis in the third. This improved the significance of the analysis from 0.59 for the cuts-based analysis to 0.61 using the FCN and 0.62 using the CNN.

Finally, we consider the HL-LHC discovery potential in the 3 ab−1 data set for gluinos in the gluino-weakino associated production channel. We propose a search in the jets plus missing energy channel which exploits kinematic edge features in the reconstructed transverse mass of the gluino. We find that for squark masses in the 2 TeV range we have 5σ discovery potential for gluino masses in the range of 2.4 to 3 TeV, competitive with the projections for discovery potential in the gluino pair production channel.

Acknowledgments

There are many people to whom I owe sincere gratitude for their contributions to my work and education. I would first like to thank my advisor, Linda Carpenter, for her patience, mentorship, and guidance throughout my graduate career. During the course of my research, I had the privilege of collaborating with Tao Han, Zhuoni Qian, and Ning Zhou; I would like to thank them all for their insight and patience. I would like to thank Richard Furnstahl for his enthusiastic and generous support in so many areas, from homework help to obscure coding issues to career advice. I would also like to thank Jesi Goodman for the advice, encouragement, and guidance which she continued to give generously even after moving on from her postdoctoral position at OSU to pursue her own career. I have been lucky to have Sushant More, Russell Colburn, and Humberto Gilmer as my office mates over the years. They have provided much assistance, from discussing physics to helping find bugs in code to working together on homework, and have helped maintain sanity.
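The transverse-mass observable behind the kinematic edge described in the abstract can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name and inputs are mine, and the thesis's actual reconstruction of the gluino system from jets plus missing energy is more involved): for a heavy parent decaying to a visible system plus one invisible particle, mT is bounded above by the parent mass, which is what produces the edge.

```python
import math

def transverse_mass(m_vis, pt_vis, phi_vis, met, phi_met):
    """Transverse mass of a visible system (invariant mass m_vis, transverse
    momentum pt_vis at azimuthal angle phi_vis) paired with missing
    transverse energy met at azimuthal angle phi_met.  For a parent decaying
    to this visible system plus one massless invisible particle, the mT
    distribution ends at the parent mass, giving a kinematic edge."""
    et_vis = math.sqrt(m_vis ** 2 + pt_vis ** 2)
    mt_sq = m_vis ** 2 + 2.0 * (et_vis * met
                                - pt_vis * met * math.cos(phi_vis - phi_met))
    return math.sqrt(max(mt_sq, 0.0))

# Back-to-back massless visible system and MET, 100 GeV each: mT = 200 GeV
edge_like = transverse_mass(0.0, 100.0, 0.0, 100.0, math.pi)
```

Filling a histogram of this quantity over many events and looking for the endpoint is the spirit of the proposed search; the choice of which jets enter the visible system is the analysis-dependent part.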
Finally, I would like to thank my family and many friends who have provided support and encouragement over the course of my entire educational career. I would especially like to thank my father, John Hendricks, for his continuous and unconditional support throughout my life, even as I took many unexpected directions and detours.

Vita

October 13, 1978: Born, Los Alamos, NM
May 2013: B.S., North Carolina State University, Raleigh, North Carolina

Publications

Increasing Discovery Threshold in Rare SUSY Scenarios Part I: Gluinos. Linda M. Carpenter, Khalida Hendricks, arXiv:1812.08406 (2018).

Higgs Boson Decay to Light Jets at the LHC. Linda M. Carpenter, Tao Han, Khalida Hendricks, Zhuoni Qian, Ning Zhou, Phys. Rev. D 95, 053003 (2017).

Pion Momentum Distributions in the Nucleon in Chiral Effective Theory. M. Burkardt, K. S. Hendricks, Chueng-Ryong Ji, W. Melnitchouk, A. W. Thomas, Phys. Rev. D 87, 056009 (2013).

Fields of Study

Major Field: Physics
Studies in: Collider Phenomenology, Higgs Physics

Table of Contents

Abstract
Acknowledgments
Vita
List of Figures
List of Tables

Chapters

1 Introduction
  1.1 The Standard Model
  1.2 Problems with the Standard Model
  1.3 Higgs Physics at the Large Hadron Collider
    1.3.1 Overview of the Large Hadron Collider
    1.3.2 SM Higgs Couplings: Measurements and Searches
  1.4 Supersymmetry
    1.4.1 The Minimal Supersymmetric Standard Model
    1.4.2 Breaking SUSY
    1.4.3 Additional Problems solved by SUSY
    1.4.4 Current LHC SUSY searches
  1.5 Machine Learning
    1.5.1 How Artificial Neural Networks Work
  1.6 Summary

2 Higgs Decay to Light Jets at the Large Hadron Collider
  2.1 Introduction
  2.2 Signal and Background Processes
  2.3 Signal Selection
    2.3.1 ℓ+ℓ− + jj channel
    2.3.2 ℓ± + E̸T + jj channel
    2.3.3 E̸T + jj channel
    2.3.4 Background control
  2.4 Alternative Discriminants with Missing Energies
  2.5 Results and Discussion
    2.5.1 Signal significance
    2.5.2 Bounds on the branching fractions and correlations with h → bb̄, cc̄
    2.5.3 Bounds on light-quark Yukawa couplings
  2.6 Summary and Conclusions

3 Applying Basic Machine Learning Techniques to Collider Phenomenology
  3.1 Introduction
  3.2 Data Preparation
  3.3 The Network
  3.4 Analysis and Results
    3.4.1 Results: 2-lepton channel
    3.4.2 Results: 1-lepton channel
    3.4.3 Results: 0-lepton channel
    3.4.4 Combined Results
  3.5 Outlook and future work

4 Increasing the Discovery Potential Using Rare SUSY Scenarios: Gluinos
  4.1 Introduction
  4.2 Production Modes
  4.3 Event kinematics and SUSY parameter space
  4.4 Cuts-based analysis
  4.5 Results
  4.6 Conclusions

5 Conclusion

Bibliography

Appendices

A Machine Learning Data
  A.1 Feature Key
  A.2 Correlation Tables
    A.2.1 2-lepton Correlations
    A.2.2 1-lepton Correlations
    A.2.3 0-lepton Correlations

List of Figures

1.1 The primary Higgs production channels at the LHC. (a) The primary production channel for the Higgs boson at the LHC, gluon fusion. (b) The second largest production channel is vector boson fusion. (c) Associated production with a vector boson. (d) Associated production with a tt̄ pair.
1.2 Higgs to massless gauge bosons via heavy intermediate particles.
1.3 Higgs pair production at the LHC.
1.4 1-loop corrections to the Higgs mass. (a) The fermion correction to the Higgs mass given by Eq. 1.53. (b) & (c) The scalar corrections to the Higgs mass given by Eq. 1.54.
1.5 Gauge interaction "near miss" in the SM, left, and SUSY unification, right. The kink in the right graph shows where SUSY appears, altering the coupling strengths to bring them together. Image from LEP.
1.6 Gluino mass limits in various channels from ATLAS.
1.7 Gluino mass limits for a particular SUSY model with particular choices for sparticle masses and other parameters.
1.8 Basic function diagram of an Artificial Neural Network.
1.9 The feedback loop of a neural network. Image credit [39].
1.10 An illustration of how the lower-level nodes in a CNN look for broad patterns of lines and curves in order to classify objects in an image. To recreate what the CNN "sees", the algorithm output was interrupted early in the training cycle [40].