
Proceedings of Student-Faculty Research Day Conference, CSIS, Pace University, May 8th, 2020

Performing Quantum Computer Vision Tasks on IBM Quantum Computers and Simulators

Raj Ponnusamy
Seidenberg School of CSIS, Pace University, Pleasantville, New York 10570
Email: [email protected]

Abstract—Computer vision is an interdisciplinary scientific field that deals with how computers can gain a high-level understanding of digital images or videos and automate tasks that the human visual system can do. With recent advancements in quantum computation, it has been found that quantum computation has great advantages in computer vision tasks such as acquiring, processing, analyzing, and understanding digital images, and extracting high-dimensional data from the real world for decision-making applications, because of its parallel processing characteristics, which offer the potential of exponential speed-up compared to classical algorithm execution. This speed-up can play a big role in machine learning for vision, where training a model is usually very slow, and in processing images and videos. In recent years there has been a lot of research on quantum image processing and machine learning, including the representation and classification of an image on a quantum computer. For quantum computer vision, quantum image representation plays a key role, as it substantively determines the kinds of processing tasks that can be performed and how well they can be performed. In this paper, we first discuss the Flexible Representation of Quantum Images (FRQI) method and implement it to store and retrieve MNIST handwritten digits on an IBM quantum device. We then use the Quantum Support Vector Machine (QSVM) model running on IBM quantum devices to classify the MNIST image data set. This demonstrates the potential of using quantum computers for vision tasks such as object classification for highly efficient image and video processing in the big data era. The results obtained in this paper could be used in further quantum computer vision, image processing, and machine learning applications.

Index Terms—Quantum Computer Vision, Quantum Image Representation, IBM QisKit, Quantum Machine Learning.

I. INTRODUCTION

Computer vision tasks such as data retrieval, encoding, and storage are processes considered easy and commonplace in classical computing. However, they remain challenging in the quantum realm. One of these challenges is storing and processing image data efficiently. Due to the decoherence issue and for better utilization of the quantum communication channel, a representation making use of quantum properties is essential for a useful treatment of images, putting mathematical devices like the quantum wavelet transform to use. On the other hand, a model needs more data for better accuracy, which increases training time. For example, in a classical activity recognition task, around 1,000 videos are needed to train a classifier, and with modern deep learning algorithms the training time can become even longer. Because of that, researchers usually use Graphics Processing Units (GPUs) to make the computations feasible. A next-generation approach to reducing the storage and computation time of those algorithms is to utilize quantum computers. The purpose of this paper is to study these tasks by quantizing classical computer vision tasks and addressing some open questions.
Many machine learning problems rely on linear algebra, since there are efficient ways to compute matrix operations by representing the data as matrices. Quantum computing makes some linear algebra computations faster and therefore implicitly improves classical machine learning tasks. An example of this is fast matrix inversion, which has been used in generating the hyperplane for the QSVM.

After going through the literature, we can infer that storing and retrieving images on a quantum computer remains largely a nascent engineering application, and there are limits to the existing methods for practical use.

We will implement the FRQI quantum image representation algorithm to store the MNIST data set on IBM Quantum devices and retrieve the results, and then apply the QSVM algorithm to classify this data set. The results of our testing are touched on, as well as the implications of the progress and shortcomings of our approach. We finish with a discussion of what we can look forward to in the future of this field and for ourselves as we continue our research on this topic.

II. BACKGROUND

A. Flexible Representation of Quantum Images

Computer vision tasks are expensive in terms of computational complexity, and a fundamental task is to represent the image. On a classical computer, representing an image pixel by pixel requires considerable computational resources, and to process the image we use the Fast Fourier Transform to speed up processing in the frequency domain based on the convolution theorem. On quantum computers, however, we have the ability to store N bits of classical information in only $\log_2 N$ quantum bits (qubits); we have efficient image processing techniques such as the Quantum Fourier Transform and the Quantum Wavelet Transform, which improve on the Fast Fourier Transform of classical computers; and we can take advantage of quantum properties like entanglement when an image is represented in a quantum state.

Various methods have been researched and explored to represent an image in a quantum state. Venegas-Andraca and Bose [1] first introduced the qubit lattice method to represent each pixel in a quantum state, which is a purely hypothetical and impractical design. Other methods, such as the Flexible Representation of Quantum Images (FRQI) [2], the Novel Enhanced Quantum Representation (NEQR) [3], and Quantum Image Representation Through the Two-Dimensional Quantum States and Normalized Amplitude (2D-QSNA) [5], take advantage of quantum properties and focus on the number of qubits necessary to store an image efficiently. FRQI and NEQR are the best among these algorithms, as they use a low number of qubits to represent an image and require few gates to implement operations on the stored images. However, these algorithms have limitations: they cannot be used on large, highly detailed images and cannot be extended to rectangular images. 2D-QSNA addresses some of these limitations, but it has other challenges in isolating the quantum states in order to work. In this paper we implement the FRQI method, so we look at how it works.

In the FRQI representation, the image is stored in a state given by the following:

$$|I(\theta)\rangle = \frac{1}{2^{n}}\sum_{i=0}^{2^{2n}-1}\left(\cos\theta_i\,|0\rangle + \sin\theta_i\,|1\rangle\right)\otimes|i\rangle \qquad (1)$$

in which $\theta_i$ encodes the colors of the image and $|i\rangle$ encodes the positions of the pixels in the image. Getting from the initial state $|0\rangle^{\otimes(2n+1)}$ to the FRQI state requires the use of a unitary operation, which we will call P. First, Hadamard gates are applied to each of the position qubits in the initial state; then controlled rotation gates transform the current state into the FRQI state. Referring to the controlled rotations as R and the Hadamard gates as H, we can say that P = RH. This process uses $2n$ Hadamard gates and $2^{2n}$ controlled rotations, where each controlled rotation can be implemented by $C^{2n}(R_y(2\theta))$ and NOT operations. It has also been shown that $C^{2n}R_y(2\theta)$ can be broken down into $2^{2n}-1$ simple rotations $R_y(2\theta/2^{2n-1})$ and $R_y(-2\theta/2^{2n-1})$, together with $2^{2n}-1$ NOT operations. Therefore, the total number of simple operations needed to get from the initial state to the FRQI state is $2n + 2^{2n}\left(2^{2n-1}-1+2^{2n-1}-2\right) = 2^{4n} - 3\left(2^{2n}\right) + 2n$, which is quadratic in $2^{2n}$. The number of gates used to reach the FRQI state can still be quite large considering how many pixels large images require. In order to reduce the number of simple gates required, a process called Quantum Image Compression can be used, based on the image representation called the real ket devised by Latorre et al.
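To make the preparation P = RH concrete, the following is a minimal Qiskit sketch for a 2×2 image (n = 1: two position qubits plus one color qubit). The helper name frqi_2x2, the mapping of 8-bit grayscale values to angles, and the example pixel values are our own illustrative assumptions rather than the circuit used in the experiments; each pixel gets one multi-controlled $R_y(2\theta_i)$ on the color qubit, with NOT gates selecting the pixel position.

```python
import numpy as np
from qiskit import QuantumCircuit
from qiskit.circuit.library import RYGate
from qiskit.quantum_info import Statevector

def frqi_2x2(thetas):
    """Prepare the FRQI state of a 2x2 image (n = 1).

    thetas: four angles in [0, pi/2], one per pixel, encoding grayscale
    intensity. Qubits 0-1 form the position register, qubit 2 is the
    color qubit.
    """
    qc = QuantumCircuit(3)
    qc.h([0, 1])  # 2n Hadamards: uniform superposition over pixel positions
    for i, theta in enumerate(thetas):
        # Flip position qubits so pixel i becomes the all-ones control pattern
        bits = format(i, "02b")
        for q, bit in enumerate(reversed(bits)):
            if bit == "0":
                qc.x(q)
        # C^{2n} R_y(2*theta_i) on the color qubit, controlled on both position qubits
        qc.append(RYGate(2 * theta).control(2), [0, 1, 2])
        for q, bit in enumerate(reversed(bits)):
            if bit == "0":
                qc.x(q)
    return qc

# Example: encode pixel intensities g_i in [0, 255] as theta_i = (g_i / 255) * pi/2
pixels = np.array([0, 85, 170, 255])
thetas = pixels / 255 * np.pi / 2
state = Statevector.from_instruction(frqi_2x2(thetas))
print(state.probabilities_dict())  # amplitudes follow cos/sin(theta_i) per position |i>
```

Measuring such a circuit many times and estimating the color-qubit probabilities conditioned on each position recovers approximations of the $\theta_i$, which is how a stored image would be retrieved.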
B. Quantum Support Vector Machines

Support Vector Machines (SVM) are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis. An SVM finds a hyperplane in an N-dimensional space that distinctly classifies the data points. To separate the two classes of data points there are many possible hyperplanes that could be chosen, and the algorithm finds the plane that has the maximum margin using a cost function. The equation of the hyperplane is $\vec{w}\cdot\vec{x}_i - b$. The problem is solved by the following optimization framework: $\min_{\vec{w},b}\|\vec{w}\|$ s.t. $y_i(\vec{w}\cdot\vec{x}_i - b)\ge 1$. The dual formulation is the following:

$$\max_{\vec\alpha} L(\vec\alpha) = \sum_{j=1}^{M} y_j\alpha_j - \frac{1}{2}\sum_{j,k=1}^{M}\alpha_j\,\vec{x}_j\cdot\vec{x}_k\,\alpha_k \quad \text{s.t.}\ \sum_{j=1}^{M}\alpha_j = 0,\ y_j\alpha_j \ge 0 \qquad (2)$$

For nonlinear classification, a kernel function $K_{jk} = k(x_j, x_k)$ is introduced, and by Mercer's theorem $\sum_{j,k=1}^{M}\alpha_j\,x_j\cdot x_k\,\alpha_k$ can be transformed into the form $\sum_{j,k=1}^{M}\alpha_j K_{jk}\alpha_k$. By using Hamiltonian simulation, $e^{-i\hat{K}\Delta t}$ is obtained, where $\hat{K} = K/\operatorname{tr}(K)$. The algorithm solves the least-squares SVM problem defined as the following:

$$\min_{w,b,e} J(w,b,e) = \frac{1}{2}w^{T}w + \frac{\gamma}{2}\sum_{k=1}^{N} e_k^{2} \quad \text{s.t.}\ y_k\left(w^{T}\phi(x_k) + b\right) = 1 - e_k \qquad (4)$$

When the dual problem is solved, one gets the following result:

$$F\begin{pmatrix} b \\ \vec\alpha \end{pmatrix} = \begin{pmatrix} 0 & \vec{1}^{T} \\ \vec{1} & K + \gamma^{-1}I \end{pmatrix}\begin{pmatrix} b \\ \vec\alpha \end{pmatrix} = \begin{pmatrix} 0 \\ \vec{y} \end{pmatrix} \qquad (5)$$

Then, by using the Hamiltonian simulation technique, $e^{-i\hat{F}\delta t}$ can be constructed, and the system is solved by the HHL algorithm [7]. The state for the variables is then defined in the computational basis as the following:

$$|b,\vec\alpha\rangle = \frac{1}{C}\left(b|0\rangle + \sum_{k=1}^{M}\alpha_k|k\rangle\right) \qquad (6)$$

where C is the normalization factor. In the classification part, the aim is to classify the query state defined as the following:

$$|\tilde{x}\rangle = \frac{1}{N_{\bar{x}}}\left(|0\rangle|0\rangle + \sum_{k=1}^{M}|\vec{x}|\,|k\rangle|\vec{x}_k\rangle\right) \qquad (7)$$

where $N_{\bar{x}}$ is the normalization factor. Finally, the training data is constructed by using an oracle.
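To connect Eqs. (4)-(6) to something concrete, below is a minimal classical sketch that builds the matrix $F$ of Eq. (5) from a kernel matrix and solves $F(b,\vec\alpha)^{T} = (0,\vec y)^{T}$ with NumPy; the quantum algorithm performs this linear solve with HHL after Hamiltonian simulation of $\hat F$. The function names, the linear kernel, the value of $\gamma$, and the toy data standing in for dimensionality-reduced MNIST features are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, kernel=lambda a, b: np.dot(a, b)):
    """Solve the least-squares SVM system F [b; alpha] = [0; y] of Eq. (5) classically.

    numpy's dense solver stands in for the HHL step of the quantum algorithm.
    """
    M = len(y)
    K = np.array([[kernel(xj, xk) for xk in X] for xj in X])  # kernel matrix K_jk
    F = np.zeros((M + 1, M + 1))
    F[0, 1:] = 1.0                      # top row: (0, 1^T)
    F[1:, 0] = 1.0                      # first column: (0, 1)^T
    F[1:, 1:] = K + np.eye(M) / gamma   # K + gamma^{-1} I
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(F, rhs)
    return sol[0], sol[1:]              # b, alpha

def lssvm_classify(x, X, b, alpha, kernel=lambda a, b: np.dot(a, b)):
    """Decision function sign(sum_k alpha_k k(x_k, x) + b)."""
    return np.sign(sum(a * kernel(xk, x) for a, xk in zip(alpha, X)) + b)

# Toy two-class data standing in for dimensionality-reduced MNIST features
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
b, alpha = lssvm_train(X, y)
print([lssvm_classify(x, X, b, alpha) for x in ([0.15, 0.15], [0.85, 0.85])])
```

The resulting $b$ and $\vec\alpha$ define the decision function $\operatorname{sign}\left(\sum_k \alpha_k\,k(\vec x_k,\vec x) + b\right)$, which is the quantity that the classification of the query state in Eq. (7) is designed to estimate on the quantum device.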