Denoising Criterion for Variational Auto-Encoding Framework


Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17)

Daniel Jiwoong Im, Sungjin Ahn, Roland Memisevic, Yoshua Bengio*
Montreal Institute for Learning Algorithms, University of Montreal, Montreal, QC, H3C 3J7
{imdaniel,ahnsungj,memisevr,<findme>}@iro.umontreal.ca

*CIFAR Senior Fellow
Copyright (c) 2017, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

Denoising autoencoders (DAE) are trained to reconstruct their clean inputs with noise injected at the input level, while variational autoencoders (VAE) are trained with noise injected in their stochastic hidden layer, with a regularizer that encourages this noise injection. In this paper, we show that injecting noise both in the input and in the stochastic hidden layer can be advantageous, and we propose a modified variational lower bound as an improved objective function in this setup. When the input is corrupted, the standard VAE lower bound involves marginalizing the encoder conditional distribution over the input noise, which makes the training criterion intractable. Instead, we propose a modified training criterion that corresponds to a tractable bound when the input is corrupted. Experimentally, we find that the proposed denoising variational autoencoder (DVAE) yields better average log-likelihood than the VAE and the importance weighted autoencoder on the MNIST and Frey Face datasets.

Introduction

Variational inference (Jordan et al. 1999) has been a core component of approximate Bayesian inference, along with the Markov chain Monte Carlo (MCMC) method (Neal 1993). It is popular among researchers and practitioners because the problem of learning an intractable posterior distribution is formulated as an optimization problem, which has several advantages over MCMC: (i) we can easily take advantage of many advanced optimization tools (Kingma and Ba 2014a; Duchi, Hazan, and Singer 2011; Zeiler 2012), (ii) training by optimization is usually faster than MCMC sampling, and (iii) unlike MCMC, where it is difficult to decide when to stop sampling, the stopping criterion of variational inference is clearer.

One remarkable recent advance in variational inference is the use of an inference network (also known as a recognition network) as the approximate posterior distribution (Kingma and Welling 2014; Rezende and Mohamed 2014; Dayan et al. 1995; Bornschein and Bengio 2014). Unlike traditional variational inference, where different variational parameters are required for each latent variable, in the inference network the approximate posterior distribution for each latent variable is conditioned on an observation, and the parameters are shared among the latent variables. Combined with advances in training techniques such as the re-parameterization trick and REINFORCE (Williams 1992; Mnih and Gregor 2014), it became possible to train variational inference models efficiently on large-scale datasets.

Despite these advances, it remains a major challenge to obtain a class of variational distributions that is flexible enough to accurately model the true posterior distribution. For instance, in the variational autoencoder (VAE), in order to achieve efficient training, each dimension of the latent variable is assumed to be independent of the others and modeled by a univariate Gaussian distribution whose parameters (i.e., the mean and the variance) are obtained by a nonlinear projection of the input using a neural network. Although the VAE performs well in practice for rather simple problems such as generating small and simple images (e.g., MNIST), it is desirable to relax this strong restriction on the variational distributions in order to apply it to more complex real-world problems. Recently, there have been efforts in this direction. (Salimans, Kingma, and Welling 2015) integrated MCMC steps into variational inference such that the variational distribution becomes closer to the target distribution as it takes more MCMC steps inside each iteration of variational inference. Similar ideas, but applying a sequence of invertible and deterministic nonlinear transformations rather than MCMC, were also proposed by (Dinh, Krueger, and Bengio 2015) and (Rezende and Mohamed 2015).

On the other hand, the denoising criterion, where the input is corrupted by adding some noise and the model is asked to recover the original input, has been studied extensively for deterministic generative models (Seung 1998; Vincent et al. 2008; Bengio et al. 2013). These studies showed that the denoising criterion plays an important role in achieving good generalization performance (Vincent et al. 2008) because it makes nearby data points on the low-dimensional manifold robust against the presence of small noise in the high-dimensional observation space (Seung 1998; Vincent et al. 2008; Rifai 2011; Alain and Bengio 2014; Im, Belghazi, and Memisevic 2016). Therefore, it seems a legitimate question to ask whether the denoising criterion (where we add noise to the inputs) can also be advantageous for the variational auto-encoding framework, where the noise is added to the latent variables but not to the inputs, and if so, how we can formulate the problem for efficient training. Although how to combine the two has not been studied in depth, there is some evidence of its usefulness. For example, (Rezende and Mohamed 2014) pointed out that injecting additional noise into the recognition model is crucial to achieve the reported accuracy on unseen data, advocating that in practice denoising can also help regularize probabilistic generative models.

In this paper, motivated by the DAE and the VAE, we study the denoising criterion for variational inference based on recognition networks, which we call the variational auto-encoding framework throughout the paper. Our main contributions are as follows. We introduce a new class of approximate distributions where the recognition network is obtained by marginalizing the input noise over a corruption distribution, and thus provides the capacity to obtain a more flexible approximate distribution class, such as a mixture of Gaussians. Because applying this approximate distribution to the standard VAE objective makes training intractable, we propose a new objective, called the denoising variational lower bound, and show that, given a sensible corruption function, it is (i) tractable and efficient to train, and (ii) easily applicable to many existing models such as the variational autoencoder, the importance weighted autoencoder (IWAE) (Burda, Grosse, and Salakhutdinov 2015), and neural variational inference and learning (NVIL) (Mnih and Gregor 2014). In the experiments, we empirically demonstrate that the proposed denoising criterion for the variational auto-encoding framework helps to improve the performance of both the variational autoencoder and the importance weighted autoencoder (IWAE) on the binarized MNIST dataset and the Frey Face dataset.

Variational Autoencoders

The variational autoencoder (Kingma and Welling 2014; Rezende and Mohamed 2014) is a particular type of variational inference framework that is closely related to our focus in this work (see Appendix for background on variational inference). With the VAE, the posterior distribution is defined as pθ(z|x) ∝ pθ(x|z)p(z). Specifically, we define a prior p(z) on the latent variable z ∈ R^D, which is usually set to an isotropic Gaussian distribution N(0, σI_D). Then, we use a parameterized distribution to define the observation model pθ(x|z).

One interesting aspect of the VAE is that the approximate distribution q is conditioned on the observation x, resulting in the form qφ(z|x). Similar to the generative network, we use a neural network for qφ(z|x), with x and z as its input and output, respectively. The variational parameter φ, which is also the set of weights of the neural network, is shared among all observations. We call this network qφ(z|x) the inference network, or recognition network.

The objective of the VAE is to maximize the following variational lower bound with respect to the parameters θ and φ:

    log pθ(x) ≥ E_{qφ(z|x)} [ log ( pθ(x, z) / qφ(z|x) ) ]                      (1)
              = E_{qφ(z|x)} [ log pθ(x|z) ] − KL( qφ(z|x) || p(z) ).            (2)

Note that in Eqn. (2), we can interpret the first term as the reconstruction accuracy of an autoencoder with noise injected in the hidden layer (the output of the inference network), and the second term as a regularizer that enforces the approximate posterior to be close to the prior and maximizes the entropy of the injected noise.

Earlier approaches to training this type of model were based on the variational EM algorithm: in the E-step, fixing θ, we update φ such that the approximate distribution qφ(z|x) comes close to the true posterior distribution pθ(z|x), and then in the M-step, fixing φ, we update θ to increase the marginal log-likelihood. However, with the VAE it is possible to backpropagate through the variational parameters φ by using the re-parameterization trick (Kingma and Welling 2014), considering z as a function of i.i.d. noise and of the output of the encoder (such as the mean and variance of the Gaussian). Armed with the gradient on these parameters, the gradient on the generative network parameters θ can readily be computed by backpropagation, and thus we can jointly update both φ and θ using efficient optimization algorithms such as stochastic gradient descent.

Although our exposition in the following proceeds mainly with the VAE for simplicity, the proposed method can be applied to a more general class of variational inference methods that use an inference network qφ(z|x). This includes other recent models such as the importance weighted autoencoder (IWAE), neural variational inference and learning (NVIL), and DRAW (Gregor et al. 2015).

Denoising Criterion in Variational Framework
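The variational lower bound and the re-parameterization trick described above can be sketched numerically. The following is a minimal NumPy illustration, not the authors' implementation: the linear encoder/decoder, the layer sizes, and the Bernoulli observation model are all illustrative assumptions standing in for the paper's neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def elbo_per_example(x, enc_mu, enc_logvar, decode):
    """One-sample Monte Carlo estimate of the lower bound in Eqn. (2).

    z is re-parameterized as z = mu + sigma * eps with eps ~ N(0, I),
    so gradients can flow through mu and sigma (re-parameterization trick).
    """
    mu, logvar = enc_mu(x), enc_logvar(x)
    eps = rng.standard_normal(mu.shape)
    z = mu + np.exp(0.5 * logvar) * eps       # noise injected in the hidden layer
    p = decode(z)                             # Bernoulli mean for each pixel
    # First term of Eqn. (2): reconstruction log-likelihood log p(x|z)
    recon = np.sum(x * np.log(p) + (1.0 - x) * np.log(1.0 - p))
    # Second term: KL(q(z|x) || N(0, I)), closed form for diagonal Gaussians
    kl = 0.5 * np.sum(mu**2 + np.exp(logvar) - 1.0 - logvar)
    return recon - kl

# Toy linear "networks" -- hypothetical stand-ins for the paper's MLPs.
D_x, D_z = 8, 2
W_mu = rng.normal(scale=0.1, size=(D_z, D_x))
W_lv = rng.normal(scale=0.1, size=(D_z, D_x))
W_de = rng.normal(scale=0.1, size=(D_x, D_z))

enc_mu = lambda x: W_mu @ x
enc_logvar = lambda x: W_lv @ x
decode = lambda z: 1.0 / (1.0 + np.exp(-(W_de @ z)))   # sigmoid -> Bernoulli mean

x = (rng.random(D_x) > 0.5).astype(float)              # a binarized input
bound = elbo_per_example(x, enc_mu, enc_logvar, decode)
print(bound)
```

Averaging this quantity over a dataset and ascending its gradient with respect to both the encoder weights (φ) and decoder weights (θ) is the joint update described above; the denoising criterion of this paper would additionally corrupt x before it enters the encoder.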