Learning Hidden Quantum Markov Models

Siddarth Srinivasan (Georgia Institute of Technology), Geoff Gordon (Carnegie Mellon University), Byron Boots (Georgia Institute of Technology)

Abstract

Hidden Quantum Markov Models (HQMMs) can be thought of as quantum probabilistic graphical models that can model sequential data. We extend previous work on HQMMs with three contributions: (1) we show how classical hidden Markov models (HMMs) can be simulated on a quantum circuit, (2) we reformulate HQMMs by relaxing the constraints for modeling HMMs on quantum circuits, and (3) we present a learning algorithm to estimate the parameters of an HQMM from data. While our algorithm requires further optimization to handle larger datasets, we are able to evaluate our algorithm using several synthetic datasets generated by valid HQMMs. We show that our algorithm learns HQMMs with the same number of hidden states and predictive accuracy as the HQMMs that generated the data, while HMMs learned with the Baum-Welch algorithm require more states to match the predictive accuracy.

1 INTRODUCTION

We extend previous work on Hidden Quantum Markov Models (HQMMs), and propose a novel approach to learning these models from data. HQMMs can be thought of as a new, expressive class of graphical models that have adopted the mathematical formalism for reasoning about uncertainty from quantum mechanics. We stress that while HQMMs could naturally be implemented on quantum computers, we do not need such a machine for these models to be of value. Instead, HQMMs can be viewed as novel models inspired by quantum mechanics that can be run on classical computers.

In considering these models, we are interested in answering three questions: (1) how can we construct quantum circuits to simulate classical Hidden Markov Models (HMMs); (2) what happens if we take full advantage of this quantum circuit instead of enforcing the classical probabilistic constraints; and (3) how do we learn the parameters for quantum models from data?

The paper is structured as follows: first we describe related work and provide background on quantum information theory as it relates to our work. Next, we describe the hidden quantum Markov model, compare our approach to previous work in detail, and give a scheme for writing any hidden Markov model as an HQMM. Finally, our main contribution is the introduction of a maximum-likelihood-based unsupervised learning algorithm that can estimate the parameters of an HQMM from data. Our implementation is slow to train HQMMs on large datasets and will require further optimization. Instead, we evaluate our learning algorithm for HQMMs on several simple synthetic datasets by learning a quantum model from data and filtering and predicting with the learned model. We also compare our model and learning algorithm to maximum likelihood for learning hidden Markov models, and show that the more expressive HQMM can match HMMs' predictive capability with fewer hidden states on data generated by HQMMs.

2 BACKGROUND

2.1 Related Work

Hidden Quantum Markov Models were introduced by Monras et al. [2010], who discussed their relationship to classical HMMs, and parameterized these HQMMs using a set of Kraus operators. Clark et al. [2015] further investigated HQMMs, and showed that they could be viewed as open quantum systems with instantaneous feedback. We arrive at the same Kraus operator representation by building a quantum circuit to simulate a classical HMM and then relaxing some constraints.
Our work can be viewed as extending previous work by Zhao and Jaeger [2010] on norm-observable operator models (NOOMs) and Jaeger [2000] on observable-operator models (OOMs). We show that HQMMs can be viewed as complex-valued extensions of NOOMs, formulated in the language of quantum mechanics. We use this connection to adapt the learning algorithm for NOOMs in Zhao and Jaeger [2007] into the first known learning algorithm for HQMMs, and demonstrate that the theoretical advantages of HQMMs also hold in practice.

Schuld et al. [2015a] and Biamonte et al. [2016] provide general overviews of quantum machine learning, and describe relevant work on HQMMs. They suggest that developing algorithms that can learn HQMMs from data is an important open problem. We provide just such a learning algorithm in Section 4.

Other work at the intersection of machine learning and quantum mechanics includes Wiebe et al. [2016] on quantum perceptron models and learning algorithms. Schuld et al. [2015b] discuss simulating a perceptron on a quantum computer.

2.2 Belief States and Quantum States

Classical discrete latent variable models represent uncertainty with a probability distribution using a vector ~x whose entries describe the probability of being in the corresponding system state. Each entry is real and non-negative, and the entries sum to 1. In general, we refer to the run-time system component that maintains a state estimate of the latent variable as an 'observer', and we refer to the observer's state as a 'belief state.'

In quantum mechanics, the quantum state of a particle A can be written using Dirac notation as |ψ⟩_A, a column vector in some orthonormal basis (the row vector is the complex-conjugate transpose ⟨ψ|_A = (|ψ⟩_A)†), with each entry being the 'probability amplitude' corresponding to that system state. The squared norm of the probability amplitude for a system state is the probability of observing that state, so the sum of squared norms of the probability amplitudes over all system states must be 1 to conserve probability. For example, |ψ⟩ = [1/√2  −i/√2]^T is a valid quantum state, with basis states 0 and 1 having equal probability |1/√2|² = |−i/√2|² = 1/2. However, unlike classical belief states such as ~x = [1/2  1/2]^T, where the probability of different states reflects ignorance about the underlying system, a pure quantum state like the one described above is the true description of the system.

But how can we describe classical mixtures of quantum systems ('mixed states'), where we also have classical uncertainty about the underlying quantum states? Such information can be captured by a 'density matrix.' Given a mixture of N quantum systems, each with probability p_i, the density matrix for this ensemble is defined as follows:

    ρ̂ = Σ_{i=1}^{N} p_i |ψ_i⟩⟨ψ_i|    (1)

The density matrix is the general quantum equivalent of the classical belief state ~x and has diagonal elements representing the probabilities of being in each system state. Consequently, the normalization condition is tr(ρ̂) = 1. The off-diagonal elements represent quantum coherences, which have no classical interpretation. The density matrix ρ̂ is Hermitian and can be used to describe the state of any quantum system.

The density matrix can also be extended to represent the joint state of multiple variables, or that of 'multi-particle' systems, to use the physical interpretation. If we have density matrices ρ̂_A and ρ̂_B for two qudits (d-state quantum systems, akin to qubits or 'quantum bits', which are 2-state quantum systems) A and B, we can take the tensor product to arrive at the density matrix for the joint state of the particles, ρ̂_AB = ρ̂_A ⊗ ρ̂_B. As a valid density matrix, the diagonal elements of this joint density matrix represent probabilities; tr(ρ̂_AB) = 1, and the probabilities correspond to the states in the Cartesian product of the basis states of the composite particles. In this paper, the joint density matrix will serve as the analogue of the classical joint probability distribution, with the off-diagonal terms encoding extra 'quantum' information.
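To make these definitions concrete, here is a minimal NumPy sketch (our illustration, not code from the paper) that builds the density matrix of Eq. (1) for a two-state mixture, checks the tr(ρ̂) = 1 and Hermiticity conditions described above, and forms a joint state via the Kronecker (tensor) product ρ̂_AB = ρ̂_A ⊗ ρ̂_B.

```python
import numpy as np

# Two pure states of a 2-state system: |0> and the example state [1/sqrt(2), -i/sqrt(2)]^T.
psi_0 = np.array([1.0, 0.0], dtype=complex)
psi_1 = np.array([1.0 / np.sqrt(2), -1j / np.sqrt(2)])

# Eq. (1): density matrix of an ensemble, rho = sum_i p_i |psi_i><psi_i|.
p = [0.5, 0.5]
rho_A = sum(p_i * np.outer(psi, psi.conj()) for p_i, psi in zip(p, [psi_0, psi_1]))

assert np.isclose(np.trace(rho_A).real, 1.0)   # normalization: tr(rho) = 1
assert np.allclose(rho_A, rho_A.conj().T)      # rho is Hermitian

# Joint state of two independent particles: rho_AB = rho_A (tensor) rho_B.
rho_B = np.outer(psi_1, psi_1.conj())          # a pure state written as a density matrix
rho_AB = np.kron(rho_A, rho_B)
print(np.diag(rho_AB).real)                    # diagonal entries are the joint probabilities
```

Note that the off-diagonal entries of rho_A above are nonzero: these are the quantum coherences that a classical belief state cannot carry.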
Given the joint state of a multi-particle system, we can examine the state of just one or a few of the particles using the 'partial trace' operation, where we 'trace over' the particles we wish to disregard. This lets us recover a 'reduced density matrix' for a subsystem of interest. The partial trace for a two-particle system ρ̂_AB, where we trace over the second particle to obtain the state of the first particle, is:

    ρ̂_A = tr_B(ρ̂_AB) = Σ_j ⟨j|_B ρ̂_AB |j⟩_B    (2)

For our purposes, this operation will serve as the quantum analogue of classical marginalization.

Finally, we discuss the quantum analogue of 'conditioning' on an observation. In quantum mechanics, the act of measuring a quantum system can change the underlying distribution, i.e., it collapses the state to the observed state in the measurement basis. This is represented mathematically by applying von Neumann projection operators (denoted P̂_y in this paper) to the density matrices describing the system. One can think of the projection operator as having ones in the diagonal entries corresponding to observed system states and zeros elsewhere. If we only observe part of a larger system, the system collapses to the states where that subsystem had the observed result. For example, suppose we have the following density matrix for a two-state two-particle system with basis {|0⟩_A|0⟩_B, |0⟩_A|1⟩_B, |1⟩_A|0⟩_B, |1⟩_A|1⟩_B}:

    ρ̂_AB = [ 0.25    0       0      0
             0       0.25   −0.25   0
             0      −0.25    0.25   0
             0       0       0      0.25 ]    (3)

Suppose we measure the state of particle B, and find it to be in state |1⟩_B; the sketch after Table 1 carries out the corresponding update.

Table 1: Comparison between classical and quantum representations

  Description               Classical representation       Quantum analogue                         Description
  Belief state              ~x                             ρ̂                                        Density matrix
  Joint distribution        ~x_1 ⊗ ~x_2                    ρ̂_X1 ⊗ ρ̂_X2                             Multi-particle density matrix
  Marginalization           ~x = Σ_y (~x ⊗ ~y)             ρ̂_X = tr_Y (ρ̂_X ⊗ ρ̂_Y)                 Partial trace
  Conditional probability   P(~x | y) = P(y, ~x) / P(y)    tr_Y (P̂_y ρ̂_XY P̂_y†) ∝ P(states | y)   Projection + partial trace
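Continuing the sketch above (again our illustration rather than the paper's implementation; the helper partial_trace_B is an assumed name), the following NumPy code realizes the two quantum analogues summarized in Table 1: marginalization via the partial trace of Eq. (2), and conditioning by applying the projector for the observation |1⟩_B and renormalizing.

```python
import numpy as np

# Joint density matrix of Eq. (3); basis order |00>, |01>, |10>, |11> (particle A first).
rho_AB = np.array([[0.25,  0.00,  0.00, 0.00],
                   [0.00,  0.25, -0.25, 0.00],
                   [0.00, -0.25,  0.25, 0.00],
                   [0.00,  0.00,  0.00, 0.25]])

def partial_trace_B(rho, d_A, d_B):
    """Eq. (2): trace over particle B, rho_A = sum_j <j|_B rho_AB |j>_B."""
    return np.trace(rho.reshape(d_A, d_B, d_A, d_B), axis1=1, axis2=3)

print(partial_trace_B(rho_AB, 2, 2))     # quantum 'marginal' of A: here the maximally mixed state

# Conditioning on the measurement outcome B = 1: the projector has ones on the
# diagonal entries whose basis states have B = 1 (indices 1 and 3).
P_1 = np.diag([0.0, 1.0, 0.0, 1.0])
collapsed = P_1 @ rho_AB @ P_1.conj().T
collapsed = collapsed / np.trace(collapsed)   # renormalize so probabilities sum to 1
print(partial_trace_B(collapsed, 2, 2))       # reduced state of A given the observation
```

The renormalization step plays the role of dividing by P(y) in the classical conditioning row of Table 1.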