Institut für Neuroinformatik, Ruhr-Universität Bochum


Internal Report (IRINI), Dezember

Object Recognition with a Sparse and Autonomously Learned Representation Based on Banana Wavelets

Norbert Krüger†, Gabriele Peters†, Christoph von der Malsburg†‡

† Ruhr-Universität Bochum, Institut für Neuroinformatik, Bochum, Germany
‡ University of Southern California, Dept. of Computer Science and Section for Neurobiology, Los Angeles, CA, USA

(Supported by grants from the German Ministry for Science and Technology, NEUROS and Electronic Eye.)

Abstract

We introduce an object recognition system based on the well-known Elastic Graph Matching (EGM), with significant improvements over earlier versions. Our basic features are banana wavelets, which are generalized Gabor wavelets: in addition to the qualities frequency and orientation, banana wavelets have the attributes curvature and size. Banana wavelets can be metrically organized, and a sparse and efficient representation of object classes is learned utilizing this metric organization. Learning is guided by a sensible amount of a priori knowledge in the form of basic principles. The learned representation is used for fast matching; a significant speed-up is achieved by hierarchical processing of features. Furthermore, the manual construction of ground truth is replaced by an automatic generation of suitable training examples using motor-controlled feedback. We motivate the biological plausibility of our approach by utilizing concepts inspired by brain research, such as hierarchical processing and the metrical organization of features, and argue against too detailed a modelling of biological processing.

Introduction

In this paper we describe a novel object recognition system in which representations of object classes can be learned automatically. The learned representations allow a fast and effective location and identification of objects in complicated scenes.

Our object recognition system is based on three pillars. Firstly, our preprocessing is based on the idea of sparse coding. Secondly, effective learning is guided by a priori constraints covering fundamental structure of the visual world. Thirdly, we use Elastic Graph Matching (EGM) for the location and identification of objects.

A sparse representation can be defined as a coding of an object by a small number of binary features taken from a large feature space: a certain feature is useful for coding only a small subset of objects and is not applicable to most others. Sparse coding has biologically motivated advantages, such as minimizing the wiring length needed for forming associations. Baum et al. […] point to the increase of associative memory capacity provided by a sparse code. Olshausen & Field […] argue that the retinal projection of the three-dimensional world has a sparse structure, and that a sparse code therefore meets the principle of redundancy reduction by reducing higher-order statistical correlations of the input. In addition to the reasons mentioned above, our matching algorithm achieves a significant speed-up by exploiting the fact that only a small number of features is required in our sparse representation of an object. For a more detailed discussion of sparse coding we refer to […].

Our representation of a certain view of an object class comprises only the important features. These are extracted from different examples (see figure 1, i–iv).

The central assumption of our learning algorithm is that a priori knowledge, applied to the system in the form of general principles and mechanisms, is necessary. Learning is inherently faced with the bias-variance dilemma. If the starting configuration of the system is very general, it can learn from and specialize to a wide variety of domains, but it will in general have to buy this advantage with many internal degrees of freedom. This is a serious problem: the number of examples needed to train a system scales very badly with the system's size, quickly leading to totally unrealistic learning times; or else, with a limited set of training examples, the system will trivially adapt to their accidental peculiarities and fail to generalize properly to new examples. This is the variance problem. On the other hand, if the initial system has few degrees of freedom, it may be able to learn efficiently, but unless the system is designed with much specific insight into the domain at hand (the solution we criticized above), there is great danger that the structural domain spanned by those degrees of freedom does not cover the given domain of application at all. This is the bias problem.

Figure 1: i)–iv) Different examples of cans and faces used for learning. v) The learned representations.

We propose that a priori knowledge is needed to overcome the bias-variance dilemma. The challenge is to attain generality while avoiding the extreme of equipping the system with manually constructed, domain-specific knowledge, such as geometry and physics in general, or even the geometric and physical structure of the objects themselves. We have formulated a number of a priori principles to reduce the dimension of the search space and to guide learning, i.e., to handle the variance problem. We assume that we can avoid the bias problem because of the general applicability of these principles. All these principles are concerned with the selection of important features from a predefined feature space (P1, P2, P3) and with the structure of that space (P4). In […] we have already made use of the following principles:

P1 (Locality): Features referring to different locations are treated as independent.

P2 (Invariance): Features are preferred which are invariant under a wide range of object transformations.

P3 (Minimal Redundancy): Features should be selected for minimal redundancy of information.

Here we introduce a principle P4 as an important additional constraint:

P4 (Local Feature Assumption): Significant features of a local area of the two-dimensional projection of the visual world are localized curved lines.

We formalize P4 by extending the concept of Gabor wavelets (see, e.g., […]) to banana wavelets (section "The Banana Space"). To the parameters frequency and orientation we add curvature and size (see figure 2). An object can be represented as a configuration of a few of these features (figure 1, v); it can therefore be coded sparsely. The space of banana wavelet responses can be understood as a metric space, its metric representing the similarity of features. This metric is utilized both for learning a representation of objects and for recognizing these objects during the matching procedure. The banana wavelet responses can be derived from Gabor wavelet responses by hierarchical processing, to gain speed and reduce memory requirements (see section […]).

A set of examples of a certain view of an object class (figure 1, i–iv) is used to learn a sparse representation (sections […]) which contains only the important features, i.e., features which are robust against changes of background and illumination or slight variations in scale and orientation. This sparse representation allows objects to be located quickly and effectively (see section […]) using EGM.

Our system has certain analogies to the visual system of vertebrates. There is evidence for curvature-sensitive features processed in a hierarchical manner in early stages; sparse coding is discussed as a coding scheme used in the visual system; and the metric organization of features seems to play an important role for information processing in the brain. Instead of modelling brain areas in detail, we aim to apply in our artificial object recognition system some basic concepts inspired by brain research, such as sparse coding, hierarchical processing, and the metrical organization of features. We think a system does not necessarily need to contain neurons or Hebbian plasticity to be called biologically motivated. Maybe we miss the important aspects of information processing in the brain by looking at too detailed a level. After all, humans did not build planes with feathers, but the observation of birds inspired the understanding of the basic principles of flying, which are used by every airplane. For a more detailed discussion of the analogy to biology we refer to […].

To enable both a rough understanding of the basic ideas of the approach and a detailed description of the algorithm, this paper can be read in two modes: for every subsection we first give a short summary and then a more detailed description beginning with the phrase "Formally speaking" or "More formally". The reader may skip the latter parts for a rough understanding or on a first reading.

Figure 2: Relation between Gabor wavelets and banana wavelets. Left: four examples of Gabor wavelets, which differ in frequency and direction only. Right: examples of banana wavelets related to the Gabor wavelets on the left. Banana wavelets are described by two additional parameters, curvature and size.

The Banana Space

In this section we describe our realization of principle P4: a feature generation based on banana wavelets and its metric organization in the banana space. P4 gives us a significant reduction of the search space: instead of allowing, e.g., all linear filters as possible features, we restrict ourselves to a small subset. Considering the risk of a wrong feature
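The banana wavelet family described above, a Gabor wavelet extended by curvature and size, can be sketched as a bent Gabor kernel. The parameterization below is a minimal illustration, not the report's exact kernel: a Gaussian envelope multiplies a cosine carrier whose axis is displaced quadratically by the curvature parameter, so that curvature 0 and elongation 1 recover an ordinary Gabor wavelet; the constants (sigma, the quadratic bending rule) are assumptions made for the sketch.

```python
import numpy as np

def banana_wavelet(size=64, k=0.5, alpha=0.0, curvature=0.0,
                   elongation=1.0, sigma=8.0):
    """Sketch of a banana wavelet kernel (illustrative parameterization).

    Coordinates are rotated by `alpha` into the filter frame; the
    along-wave axis is then displaced by `curvature` times the squared
    cross axis, which turns straight wave fronts into arcs.  `elongation`
    stretches the envelope along the (curved) line, realizing the "size"
    attribute.  With curvature=0 and elongation=1 this reduces to an
    ordinary Gabor wavelet.
    """
    coords = np.arange(size) - (size - 1) / 2.0   # symmetric pixel grid
    x, y = np.meshgrid(coords, coords)
    # rotate into the filter's own coordinate frame
    u = np.cos(alpha) * x + np.sin(alpha) * y
    v = -np.sin(alpha) * x + np.cos(alpha) * y
    # bend the carrier axis: larger |curvature| -> more strongly curved line
    u_bent = u + curvature * v ** 2
    envelope = np.exp(-(u_bent ** 2 + (v / elongation) ** 2) / (2 * sigma ** 2))
    kernel = envelope * np.cos(k * u_bent)
    return kernel - kernel.mean()   # remove DC so responses ignore mean brightness

straight = banana_wavelet(curvature=0.0)                   # an ordinary Gabor
curved = banana_wavelet(curvature=0.02, elongation=2.0)    # bent and elongated
```

Convolving an image with a bank of such kernels over frequency, orientation, curvature and size yields the discretized banana feature space the text refers to.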
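The definition of a sparse representation used above, an object coded by a small number of binary features drawn from a large feature space, can be illustrated in a few lines. The thresholding rule and the overlap similarity below are assumptions for the sketch, not the report's learning algorithm:

```python
import numpy as np

def sparse_binary_code(responses, threshold):
    """A feature belongs to the representation iff its response magnitude
    exceeds a threshold; the code is binary, so an object is described by
    *which* few features fire, not by graded values (threshold choice is
    illustrative)."""
    return responses > threshold

def overlap(code_a, code_b):
    """Toy similarity between two sparse binary codes: number of shared
    active features."""
    return int(np.logical_and(code_a, code_b).sum())

rng = np.random.default_rng(0)
# toy "responses" over a large discretized feature space
# (positions x frequencies x orientations x curvatures x sizes, flattened)
responses = rng.exponential(scale=1.0, size=5000)
code = sparse_binary_code(responses, threshold=4.0)
sparsity = code.mean()   # fraction of active features; small for a sparse code
```

Because only the small active subset has to be stored and matched, such a code directly yields the memory and matching speed-up argued for in the text.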
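The hierarchical derivation of banana wavelet responses from Gabor wavelet responses mentioned above can be sketched as pooling precomputed Gabor magnitudes along the curved line: each segment of the arc looks locally like a short oriented edge, so the Gabor map with the matching orientation is read out at each sample point. The sampling scheme and the averaging rule below are assumptions; the report's exact combination rule is not reproduced here.

```python
import numpy as np

def banana_from_gabors(gabor_mag, n_orient, x0, y0, alpha, curvature,
                       length=9):
    """Approximate one banana response by pooling Gabor responses.

    `gabor_mag` has shape (n_orient, H, W): magnitude maps for n_orient
    evenly spaced orientations (already computed once for the image, which
    is where the speed and memory savings come from).  Points are sampled
    along an arc through (x0, y0); at each point the map whose orientation
    best matches the local tangent of the arc is read out, and the values
    are averaged.
    """
    H, W = gabor_mag.shape[1:]
    total, count = 0.0, 0
    for t in np.linspace(-length / 2, length / 2, length):
        # arc in the filter frame: cross-axis offset grows quadratically
        u, v = t, curvature * t ** 2
        x = int(round(x0 + np.cos(alpha) * u - np.sin(alpha) * v))
        y = int(round(y0 + np.sin(alpha) * u + np.cos(alpha) * v))
        if not (0 <= x < W and 0 <= y < H):
            continue
        # local tangent angle of the arc, mapped to the nearest Gabor orientation
        tangent = alpha + np.arctan(2 * curvature * t)
        idx = int(round(tangent / (np.pi / n_orient))) % n_orient
        total += gabor_mag[idx, y, x]
        count += 1
    return total / max(count, 1)

# toy check: only the 0-degree Gabor map is active everywhere
gabor_mag = np.zeros((4, 32, 32))
gabor_mag[0] = 1.0
r_aligned = banana_from_gabors(gabor_mag, 4, 16, 16, alpha=0.0, curvature=0.0)
r_rotated = banana_from_gabors(gabor_mag, 4, 16, 16, alpha=np.pi / 2, curvature=0.0)
```

A straight filter aligned with the active orientation pools full responses, while the same filter rotated by 90 degrees pools none, which is the selectivity the metric organization of the banana space builds on.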