Convolution and Applications of Convolution
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
Distributions: The Evolution of a Mathematical Theory
Historia Mathematica 10 (1983), 149-183. By John Synowiec, Indiana University Northwest, Gary, Indiana 46408.

Summary. The theory of distributions, or generalized functions, evolved from various concepts of generalized solutions of partial differential equations and generalized differentiation. Some of the principal steps in this evolution are described in this paper.

1. Introduction. The founder of the theory of distributions is rightly considered to be Laurent Schwartz. Not only did he write the first systematic paper on the subject [Schwartz 1945], which, along with another paper [1947-1948], already contained most of the basic ideas, but he also wrote expository papers on distributions for electrical engineers [1948]. Furthermore, he lectured effectively on his work, for example, at the Canadian Mathematical Congress [1949]. He showed how to apply distributions to partial differential equations [1950b] and wrote the first general treatise on the subject [1950a, 1951]. Recognition of his contributions came in 1950 when he was awarded the Fields Medal at the International Congress of Mathematicians, and in his acceptance address to the congress Schwartz took the opportunity to announce his celebrated "kernel theorem" [1950c].
Convolution! (CDT-14), Luciano da Fontoura Costa
To cite this version: Luciano da Fontoura Costa. Convolution! (CDT-14). 2019. HAL Id: hal-02334910, https://hal.archives-ouvertes.fr/hal-02334910. Preprint submitted on 27 Oct 2019.

Luciano da Fontoura Costa, [email protected], São Carlos Institute of Physics, DFCM/USP. October 22, 2019.

Abstract. The convolution between two functions, yielding a third function, is a particularly important concept in several areas including physics, engineering, statistics, and mathematics, to name but a few. Yet, it is not often so easy to be conceptually understood, as a consequence of its seemingly intricate definition. In this text, we develop a conceptual framework aimed at hopefully providing a more complete and integrated conceptual understanding of this important operation. In particular, we adopt an alternative graphical interpretation in the time domain, allowing the shift implied in the convolution to proceed over the free variable instead of the additive inverse of this variable. In addition, we discuss two possible conceptual interpretations of the convolution of two functions: (i) the 'blending' of these functions, and (ii) a quantification of the 'matching' between those functions.
The Convolution Theorem
Module 2: Signals in Frequency Domain. Lecture 18: The Convolution Theorem.

Objectives. In this lecture you will learn the following: we shall prove the most important theorem regarding the Fourier Transform, the Convolution Theorem; we are going to learn about filters; the proof of the Convolution Theorem for the Fourier Transform; the dual version of the Convolution Theorem; and Parseval's theorem.

The Convolution Theorem. We shall in this lecture prove the most important theorem regarding the Fourier Transform: the Convolution Theorem. It is this theorem that links the Fourier Transform to LSI systems and opens up a wide range of applications for the Fourier Transform. We shall motivate its importance with an application example.

Modulation. Modulation refers to the process of embedding an information-bearing signal into a second carrier signal. Extracting the information-bearing signal is called demodulation. Modulation allows us to transmit information signals efficiently. It also makes possible the simultaneous transmission of more than one signal with overlapping spectra over the same channel; that is why we can have so many channels being broadcast on radio at the same time, which would have been impossible without modulation. There are several ways in which modulation is done. One technique is amplitude modulation, or AM, in which the information signal is used to modulate the amplitude of the carrier signal. Another important technique is frequency modulation, or FM, in which the information signal is used to vary the frequency of the carrier signal. Let us consider a very simple example of AM. Consider the signal x(t) which has the spectrum X(f) as shown. Why such a spectrum? Because it's the simplest possible multi-valued function.
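The convolution theorem discussed above (convolution in time corresponds to multiplication of spectra) can be checked numerically. The following minimal NumPy sketch uses illustrative signals of my own choosing, not the lecture's example:

```python
import numpy as np

# Two short real signals (illustrative values only).
x = np.array([1.0, 2.0, 3.0, 0.0, 0.0, 0.0, 0.0, 0.0])
h = np.array([0.5, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])

# Circular convolution of x and h, computed directly from the definition.
n = len(x)
y_direct = np.array([sum(x[k] * h[(i - k) % n] for k in range(n)) for i in range(n)])

# Convolution theorem: DFT(x circularly convolved with h) = DFT(x) * DFT(h) element-wise.
y_via_fft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

print(np.allclose(y_direct, y_via_fft))  # True, up to floating-point error
```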
Proper Base Change for Separated Locally Proper Maps (arXiv:1404.7630v2 [math.AT], 5 Nov 2014)
Olaf M. Schnürer and Wolfgang Soergel.

Abstract. We introduce and study the notion of a locally proper map between topological spaces. We show that fundamental constructions of sheaf theory, more precisely proper base change, projection formula, and Verdier duality, can be extended from continuous maps between locally compact Hausdorff spaces to separated locally proper maps between arbitrary topological spaces.

Contents: 1. Introduction; 2. Locally proper maps; 3. Proper direct image; 4. Proper base change; 5. Derived proper direct image; 6. Projection formula; 7. Verdier duality; 8. The case of unbounded derived categories; 9. Reminders from topology; 10. Reminders from sheaf theory; 11. Representability and adjoints; 12. Remarks on derived functors; References.

1. Introduction. The proper direct image functor f! and its right derived functor Rf! are defined for any continuous map f : Y → X of locally compact Hausdorff spaces, see [KS94, Spa88, Ver66]. Instead of f! and Rf! we will use the notation f(!) and f! for these functors. They are embedded into a whole collection of formulas known as the six-functor formalism of Grothendieck. Under the assumption that the proper direct image functor f(!) has finite cohomological dimension, Verdier proved that its derived functor f! admits a right adjoint f^!. In this article we introduce in 2.3 the notion of a locally proper map between topological spaces and show that the above results even hold for arbitrary topological spaces if all maps whose proper direct image functors are involved are locally proper and separated. (Olaf Schnürer was supported by the SPP 1388 and the SFB/TR 45 of the DFG; Wolfgang Soergel was supported by the SPP 1388 of the DFG.)
MATH 418, Assignment #7
1. Let r, s, t be positive real numbers such that r + s + t = 1. Suppose that f, g, h are nonnegative measurable functions on a measure space (X, S, µ). Prove that

$$\int_X f^r g^s h^t \, d\mu \;\le\; \|f\|_1^r \, \|g\|_1^s \, \|h\|_1^t .$$

Proof. Let $u := g^{s/(s+t)} h^{t/(s+t)}$. Then $f^r g^s h^t = f^r u^{s+t}$. By Hölder's inequality we have

$$\int_X f^r u^{s+t} \, d\mu \;\le\; \left( \int_X (f^r)^{1/r} \, d\mu \right)^{\!r} \left( \int_X (u^{s+t})^{1/(s+t)} \, d\mu \right)^{\!s+t} = \|f\|_1^r \, \|u\|_1^{s+t} .$$

For the same reason, we have

$$\|u\|_1 = \int_X u \, d\mu = \int_X g^{\frac{s}{s+t}} h^{\frac{t}{s+t}} \, d\mu \;\le\; \|g\|_1^{\frac{s}{s+t}} \, \|h\|_1^{\frac{t}{s+t}} .$$

Consequently,

$$\int_X f^r g^s h^t \, d\mu \;\le\; \|f\|_1^r \, \|u\|_1^{s+t} \;\le\; \|f\|_1^r \, \|g\|_1^s \, \|h\|_1^t .$$

2. Let $C_c(\mathbb{R})$ be the linear space of all continuous functions on $\mathbb{R}$ with compact support.

(a) For $1 \le p < \infty$, prove that $C_c(\mathbb{R})$ is dense in $L_p(\mathbb{R})$.

Proof. Let $X$ be the closure of $C_c(\mathbb{R})$ in $L_p(\mathbb{R})$. Then $X$ is a linear space. We wish to show $X = L_p(\mathbb{R})$. First, if $I = (a, b)$ with $-\infty < a < b < \infty$, then $\chi_I \in X$. Indeed, for $n \in \mathbb{N}$, let $g_n$ be the function on $\mathbb{R}$ given by $g_n(x) := 1$ for $x \in [a + 1/n, \, b - 1/n]$, $g_n(x) := 0$ for $x \in \mathbb{R} \setminus (a, b)$, $g_n(x) := n(x - a)$ for $a \le x \le a + 1/n$, and $g_n(x) := n(b - x)$ for $b - 1/n \le x \le b$.
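As a quick sanity check of the inequality in Problem 1 (not part of the assignment), one can test it numerically on a finite set with counting measure; the sample values and exponents below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Discrete measure space: X = {0, ..., N-1} with counting measure.
N = 1000
r, s, t = 0.2, 0.3, 0.5           # positive exponents with r + s + t = 1
f, g, h = rng.random((3, N))      # nonnegative "measurable functions" on X

lhs = np.sum(f**r * g**s * h**t)                      # integral of f^r g^s h^t
rhs = np.sum(f)**r * np.sum(g)**s * np.sum(h)**t      # ||f||_1^r ||g||_1^s ||h||_1^t
print(lhs <= rhs + 1e-12)         # True: the inequality holds on this sample
```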
Detection and Estimation Theory: Introduction to ECE 531, Mojtaba Soltanalian (UIC)
The course. Lectures are given Tuesdays and Thursdays, 2:00-3:15pm. Office hours: Thursdays 3:45-5:00pm, SEO 1031. Instructor: Prof. Mojtaba Soltanalian, office: SEO 1031, email: [email protected], web: http://msol.people.uic.edu/. Course webpage: http://msol.people.uic.edu/ECE531. Textbook(s): Fundamentals of Statistical Signal Processing, Volume 1: Estimation Theory, by Steven M. Kay, Prentice Hall, 1993, and (possibly) Fundamentals of Statistical Signal Processing, Volume 2: Detection Theory, by Steven M. Kay, Prentice Hall, 1998, available in hard copy form at the UIC Bookstore. Style: graduate course with active participation.

Introduction. Let's start with a radar example! (Quiz: you can actually explain it in ten seconds!) Applications in transportation, defense, medical imaging, life sciences, weather prediction, tracking and localization. "The strongest signals leaking off our planet are radar transmissions, not television or radio. The most powerful radars, such as the one mounted on the Arecibo telescope (used to study the ionosphere and map asteroids), could be detected with a similarly sized antenna at a distance of nearly 1,000 light-years." — Seth Shostak, SETI.

Estimation. Traditionally discussed in statistics. Estimation in signal processing: digital computers, ADC/DAC (sampling), signal/information processing. The primary focus is on obtaining optimal estimation algorithms that may be implemented on a digital computer. We will work on digital signals/datasets which are typically samples of a continuous-time waveform.
Fast Ewald Summation for Electrostatic Potentials with Arbitrary Periodicity (arXiv:1712.04732v2 [math.NA], 17 Jan 2021)
D. S. Shamshirgar, J. Bagge, and A.-K. Tornberg, KTH Mathematics, Swedish e-Science Research Centre, 100 44 Stockholm, Sweden.

A unified treatment for fast and spectrally accurate evaluation of electrostatic potentials subject to periodic boundary conditions in any or none of the three spatial dimensions is presented. Ewald decomposition is used to split the problem into a real-space and a Fourier-space part, and the FFT-based Spectral Ewald (SE) method is used to accelerate the computation of the latter. A key component in the unified treatment is an FFT-based solution technique for the free-space Poisson problem in three, two or one dimensions, depending on the number of non-periodic directions. The computational cost is furthermore reduced by employing an adaptive FFT for the doubly and singly periodic cases, allowing for different local upsampling factors. The SE method will always be most efficient for the triply periodic case, as the cost of computing FFTs will then be the smallest, whereas the computational cost of the rest of the algorithm is essentially independent of periodicity. We show that the cost of removing periodic boundary conditions from one or two directions out of three will only moderately increase the total runtime. Our comparisons also show that the computational cost of the SE method in the free-space case is around four times that of the triply periodic case. The Gaussian window function previously used in the SE method is here compared to a piecewise polynomial approximation of the Kaiser-Bessel window function. With this new window function, runtimes for the SE method can be further reduced.
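To make the Ewald decomposition mentioned in the abstract concrete, here is a small illustrative sketch (my own, not code from the paper) of the classical split of the 1/r Coulomb kernel into a rapidly decaying real-space part and a smooth part that is handled in Fourier space; `alpha` is an assumed splitting parameter:

```python
import numpy as np
from scipy.special import erf, erfc

def ewald_split(r, alpha=1.0):
    """Split 1/r into a short-range part (erfc) and a smooth long-range part (erf).

    The short-range part decays quickly and is summed directly in real space;
    the smooth part is what FFT-based schemes such as Spectral Ewald treat in
    Fourier space. `alpha` is a hypothetical splitting parameter.
    """
    short_range = erfc(alpha * r) / r
    long_range = erf(alpha * r) / r
    return short_range, long_range

r = np.linspace(0.1, 5.0, 50)
sr, lr = ewald_split(r, alpha=2.0)
print(np.allclose(sr + lr, 1.0 / r))  # True: the two parts recover 1/r exactly
```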
Z-Transformation - I
Digital signal processing, Lecture 5: z-transformation — I. Produced by Qiangfu Zhao (since 1995), all rights reserved (DSP-Lec05).

Review of last lecture:
• Fourier transform and inverse Fourier transform: time-domain and frequency-domain representations; understand the "true face" (latent factor) of some physical phenomenon. Key points: definition of the Fourier transformation, definition of the inverse Fourier transformation, condition for a signal to have a FT.
• The sampling theorem: if the highest frequency component contained in an analog signal is f0, the sampling frequency (the Nyquist rate) should be at least 2×f0.
• Frequency response of an LTI system: the Fourier transformation of the impulse response. Physical meaning: frequency components to keep (~1) and those to remove (~0). Theoretic foundation: the convolution theorem.

Topics of this lecture (Chapter 6 of the textbook):
• The z-transformation.
• Convergence region of the z-transform.
• Relation between the z-transform and the Fourier transform.
• Relation between the z-transform and difference equations.
• Transfer function (system function).

z-transformation:
• The Laplace transformation is important for analyzing analog signals/systems.
• The z-transformation is important for analyzing discrete signals/systems.
• For a given signal x(n), its z-transformation is defined by

$$Z[x(n)] = X(z) = \sum_{n=0}^{\infty} x(n)\, z^{-n} \qquad (6.3)$$

where z is a complex variable.

One-sided and two-sided z-transform: the z-transform defined above is called the one-sided z-transform.
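To connect definition (6.3) with a concrete computation, here is a minimal sketch (illustrative, not from the lecture) that evaluates the one-sided z-transform of a truncated geometric signal x(n) = aⁿ and compares it with the closed form 1/(1 − a z⁻¹):

```python
import numpy as np

def one_sided_z_transform(x, z):
    """Evaluate X(z) = sum_{n>=0} x[n] z^{-n} for a finite-length signal x.

    This is a direct (truncated) evaluation of definition (6.3); the signal
    and the evaluation point below are illustrative choices.
    """
    n = np.arange(len(x))
    return np.sum(x * z ** (-n.astype(float)))

a = 0.5
x = a ** np.arange(200)             # x(n) = a^n, truncated to 200 samples
z = 2.0 + 1.0j                      # a point with |z| > |a|, inside the ROC

approx = one_sided_z_transform(x, z)
exact = 1.0 / (1.0 - a / z)         # closed form of the geometric series
print(abs(approx - exact) < 1e-12)  # True: the truncated sum matches
```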
Basics on Digital Signal Processing
z-transform — Digital Filters. Vassilis Anastassopoulos, Electronics Laboratory, Physics Department, University of Patras.

Outline of the lecture: 1. The z-transform; 2. Properties; 3. Examples; 4. Digital filters; 5. Design of IIR and FIR filters; 6. Linear phase.

z-Transform. The DFT takes us from time to frequency,

$$X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-j\frac{2\pi}{N} nk},$$

with the complex wave $e^{j\omega} = e^{j 2\pi k/N}$ (of amplitude $|e^{j\omega}| = 1$) as the transformation tool, whereas the z-transform takes us from time to the (complex) z-domain,

$$X(z) = \sum_{n=0}^{N-1} x(n)\, z^{-n},$$

with the complex wave $z = \rho\, e^{j\omega}$ (of amplitude ρ changing(?) with time) as the transformation tool. The z-transform is more general than the DFT.

For $z = e^{j\omega}$, i.e. ρ = 1, we work on the unit circle and the z-transform degenerates into the Fourier transform. (Figure: the unit circle |z| = 1 in the complex plane z = R + jI.) The DFT is an expression of the z-transform on the unit circle. The quantity X(z) must exist with finite value on the unit circle, i.e. it must possess a spectrum with which we can describe a signal or a system.

z-Transform convergence. We are interested in those values of z for which X(z) converges; this region of convergence (ROC) should contain the unit circle. Why is it so? Writing

$$X(z) = \sum_{n=0}^{N-1} x(n)\, z^{-n} = \sum_{n=0}^{N-1} x(n)\, (1/z^{n}),$$

we see that at z = 0, X(z) diverges. The values of z for which X(z) diverges are called poles of X(z). (Figure: the ROC in the z-plane, shown relative to the unit circle and a circle of radius |a|.)
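The claim that the DFT is an expression of the z-transform on the unit circle can be verified directly; the following short sketch (with an illustrative signal, not taken from the slides) evaluates the finite z-transform at z = e^{j2πk/N} and compares it with NumPy's FFT:

```python
import numpy as np

x = np.array([1.0, 2.0, 0.5, -1.0])      # illustrative finite-length signal
N = len(x)
n = np.arange(N)

# z-transform X(z) = sum_n x(n) z^{-n}, evaluated on the unit circle z = e^{j 2*pi*k/N} ...
X_on_circle = np.array([np.sum(x * np.exp(2j * np.pi * k / N) ** (-n)) for k in range(N)])

# ... coincides with the DFT of x, as computed by the FFT.
print(np.allclose(X_on_circle, np.fft.fft(x)))  # True
```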
CSE 152: Computer Vision Manmohan Chandraker
Lecture 15: Optimization in CNNs.

Recap: engineered versus learned features. Convolutional filters are trained in a supervised manner by back-propagating classification error. (Figure: a hand-engineered pipeline, image → feature extraction → pooling → classifier → label, contrasted with a CNN, image → stacked convolution + pool layers → dense layers → label. Jia-Bin Huang and Derek Hoiem, UIUC.)

Two-layer perceptron network (slide credit: Pieter Abbeel and Dan Klein). Neural networks; non-linearity; activation functions; multi-layer neural networks; from fully connected to convolutional networks, where each unit in the next layer looks at a local patch of the image (convolutional layer; slide: Lazebnik).

Spatial filtering is convolution (Convolutional Neural Networks; slides credit: Efstratios Gavves). 2D spatial filters are applied over the whole image with weight sharing. Insight: images have similar features at various spatial locations!

Key operations in a CNN: convolution (learned), producing feature maps from the input image; a non-linearity such as the Rectified Linear Unit (ReLU); and spatial (max) pooling. Convolution acts as a feature extractor. (Source: R. Fergus, Y. LeCun; slide: Lazebnik.)

Pooling operations:
• Aggregate multiple values into a single value
• Invariance to small transformations
• Keep only most important information for next layer
• Reduces the size of the next layer
• Fewer parameters, faster computations
• Observe larger receptive field in next layer
• Hierarchically extract more abstract features
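To make the key operations above concrete, here is a minimal NumPy sketch of one convolution → ReLU → max-pool stage; the image, filter values, and function names are illustrative, not taken from the course:

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2D convolution: slide the flipped kernel over the image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    flipped = kernel[::-1, ::-1]
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * flipped)
    return out

def relu(x):
    """Rectified Linear Unit non-linearity."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max pooling over size x size blocks."""
    h, w = x.shape[0] // size * size, x.shape[1] // size * size
    blocks = x[:h, :w].reshape(h // size, size, w // size, size)
    return blocks.max(axis=(1, 3))

image = np.random.rand(8, 8)                  # toy input image
kernel = np.array([[1.0, 0.0, -1.0]] * 3)     # illustrative edge-like filter
feature_map = max_pool(relu(conv2d(image, kernel)))
print(feature_map.shape)                      # (3, 3)
```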
The Discrete-Time Fourier Transform and Convolution Theorems: A Brief Tutorial
Yi-Wen Liu, 25 Feb 2013 (revised 21 Sep 2015).

1. Definitions and interpretation.

1.1 Units. Throughout this semester, we will use the integer-valued variable n as the time variable for discrete-time signal processing; that is, n = −∞, ..., −1, 0, 1, ..., ∞. In this convention, the unit of n is dimensionless, but be aware that in reality the sampling period is 1/fs, where fs denotes the sampling rate. In some textbooks, 1/fs is denoted as T, and its unit is seconds. In this course we do not do this, thereby favoring conciseness of notation.

1.2 Definition of DTFT. The discrete-time Fourier transform (DTFT) of a discrete-time signal x[n] is a function of frequency ω defined as follows:

$$X(\omega) \;\triangleq\; \sum_{n=-\infty}^{\infty} x[n]\, e^{-j\omega n}. \qquad (1)$$

Conceptually, the DTFT allows us to check how much of a tonal component at frequency ω is in x[n]. The DTFT of a signal is often also called a spectrum. Note that X(ω) is complex-valued. So, the absolute value |X(ω)| represents the tonal component's magnitude, and the angle ∠X(ω) tells us the phase of the tonal component. Also note that Eq. (1) defines the DTFT as the inner product between the signal and the complex exponential tone e^{−jωn}. Because complex exponentials {e^{−jωn}, ω ∈ [−π, π]} form a complete family of orthogonal bases for (practically) all signals of interest, what Eq. (1) essentially does is to project x[n] onto the space spanned by all complex exponentials.
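Eq. (1) can be evaluated numerically for a finite-length signal (taking x[n] = 0 outside the recorded samples); the following sketch, with an illustrative test tone, is an added illustration rather than part of the tutorial:

```python
import numpy as np

def dtft(x, omegas):
    """Evaluate X(omega) = sum_n x[n] e^{-j omega n} for a finite-length x[n],
    assuming x[n] = 0 outside 0 <= n < len(x)."""
    n = np.arange(len(x))
    return np.array([np.sum(x * np.exp(-1j * w * n)) for w in omegas])

x = np.cos(2 * np.pi * 0.1 * np.arange(64))   # 64-sample tone at omega ~ 0.2*pi
omegas = np.linspace(-np.pi, np.pi, 512)
X = dtft(x, omegas)

peak = omegas[np.argmax(np.abs(X))]           # |X(omega)| peaks at the tone's frequency
print(round(peak / np.pi, 2))                 # ~ +/- 0.2
```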
CS1114 Section 6: Convolution
February 27th, 2013.

1 Convolution. Convolution is an important operation in signal and image processing. Convolution operates on two signals (in 1D) or two images (in 2D): you can think of one as the "input" signal (or image), and the other (called the kernel) as a "filter" on the input image, producing an output image (so convolution takes two images as input and produces a third as output). Convolution is an incredibly important concept in many areas of math and engineering (including computer vision, as we'll see later).

Definition. Let's start with 1D convolution (a 1D "image," also known as a signal, can be represented by a regular 1D vector in Matlab). Let's call our input vector f and our kernel g, and say that f has length n and g has length m. The convolution f ∗ g of f and g is defined as:

$$(f * g)(i) = \sum_{j=1}^{m} g(j) \cdot f(i - j + m/2)$$

One way to think of this operation is that we're sliding the kernel over the input image. For each position of the kernel, we multiply the overlapping values of the kernel and image together, and add up the results. This sum of products will be the value of the output image at the point in the input image where the kernel is centered. Let's look at a simple example. Suppose our input 1D image is:

f = [10 50 60 10 20 40 30]

and our kernel is:

g = [1/3 1/3 1/3]

Let's call the output image h.
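The excerpt breaks off before computing h. As a hedged illustration (in Python rather than the Matlab mentioned above, and assuming the 3-tap kernel is centered with zero padding at the borders), a direct implementation of the sliding-window description gives:

```python
import numpy as np

def conv1d_centered(f, g):
    """1D convolution with the kernel centered on each sample of f.

    For each position i, multiply overlapping values of the (flipped) kernel
    and the signal and sum them, treating samples outside f as zero. Indexing
    here is 0-based Python, rather than the handout's 1-based Matlab convention.
    """
    n, m = len(f), len(g)
    half = m // 2
    h = np.zeros(n)
    for i in range(n):
        for j in range(m):
            k = i - j + half          # signal index overlapped by kernel tap j
            if 0 <= k < n:
                h[i] += g[j] * f[k]
    return h

f = np.array([10.0, 50.0, 60.0, 10.0, 20.0, 40.0, 30.0])
g = np.array([1/3, 1/3, 1/3])
print(np.round(conv1d_centered(f, g), 2))
# [20.   40.   40.   30.   23.33 30.   23.33]
```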