18th European Signal Processing Conference (EUSIPCO-2010), Aalborg, Denmark, August 23-27, 2010

SIGNAL RECONSTRUCTION FROM NOISY, ALIASED, AND NONIDEAL SAMPLES: WHAT LINEAR MMSE APPROACHES CAN ACHIEVE

Alvaro Guevara and Rudolf Mester
Visual Sensorics and Information Processing Lab (VSI)
Computer Science Dept., Goethe University, Frankfurt, Germany
Email: {alvaro.guevara, [email protected]}

ABSTRACT

This paper addresses the problem of interpolating a (non-bandlimited) signal from a discrete set of noisy measurements obtained from non-δ sampling kernels. We present a linear estimation approach, assuming the signal is given by a continuous model for which first and second order moments are known. The formula provides a generalization of the well-known discrete-discrete Wiener-style estimator, but does not necessarily involve Fourier domain considerations. Finally, some experiments illustrate the flexibility of the method under strong noise and aliasing effects, and show how the input autocorrelation, the sampling kernel and the noise process shape the form of the optimal interpolating kernels.

1. INTRODUCTION

Sixty years after Shannon's key contribution, we observe a revived interest in sampling theory, mainly due to new developments in, e.g., spline-based signal processing [1, 2, 3] and estimation theory [4]. The essential message of the present paper is that even for non-ideal sampling, that is, for non-bandlimited signals and/or for sampling kernels that strongly deviate from Dirac δ-pulses, a linear reconstruction that is optimal in the least squares (LS) sense is possible and attractive. This statement should appear obvious and self-evident to anybody trained in statistical signal processing, but there is very little work that actually goes along that line. This linear MMSE solution can be achieved in a much more straightforward way than may appear from earlier publications on reconstruction from non-ideal sampling. For this approach, the input signals are assumed to be realizations of a wide-sense stationary (WSS) process, and the first and second order moment functions – the autocorrelation function (ACF) – must be given, or estimated. We do not see many practical situations where these very mild requirements cannot be met. The range of application of this approach includes situations where we do not deal with bandlimited signals, that is, even if the conditions of the Whittaker-Shannon-Kotel'nikov (WSK) theorem are not met. The price to pay for that is (obviously) that a perfect reconstruction is not possible, but a simple, straightforward MSE-optimal solution may be an attractive goal in many situations anyway.

We stress that the essential mathematical principles employed here date back to Gauss and Wiener, and that the decisive point is to use these principles in an unbiased manner. The richness of this theory lies partly in the fact that it is easily adaptable to different tasks in signal and image processing, such as optimal filtering [5]. We focus in this work on the reconstruction from regularly spaced noisy discrete samples, also known as smoothing or approximation [6].

Statistical approaches to reconstruction from samples have received little attention in the signal and image processing literature during the recent two decades, whereas early extensive work using Wiener filtering (e.g. [7] or [8]) seems to be forgotten or pushed aside, possibly due to the extensive usage of formulations in the Fourier domain, which do not really simplify the exposition and practical application in the case of discrete signals. However, more recently, statistical reconstruction methods seem to be regaining attention. These methods can be classified as discrete or continuous approaches. On the discrete side, the work of Leung et al. [9] on image interpolation compares the performance of several ACF image models for both ideal and nonideal sampling. Shi and Reichenbach [10] derive the Wiener filter for 2-D images in the frequency domain, and propose a parametric Markov random field to model the ACF from the sampled (low resolution) data. On the continuous counterpart, Ruiz-Alzola et al. [6] present a comparison between Kriging (a quite popular interpolation method in geostatistics) and Wiener filtering, based on finite sets of noisy samples. A short section in a contribution by Ramani et al. [11] determines the filter in the frequency domain.

In contrast to that, we proceed as follows: we first find the optimal MMSE estimator in the case where only a finite number of samples is available. This is also done in [6], but in contrast to that paper we also provide the connection to the case of infinitely many discrete samples. The formula obtained is shown to be equivalent to the frequency domain version presented in [12], but both its derivation and its final structure are simpler than in [12]. A short section provides the link between the proposed formula and the WSK theorem. For completeness, we review the usual discrete-discrete approach, as found for instance in [13, 8], and illustrate with some experimental results.

2. NOTATION AND FUNDAMENTAL ASSUMPTIONS

In this work, we denote discrete signals with brackets, e.g., c[k], k ∈ Z, and continuous signals with parentheses, e.g., s(x), x ∈ R. The continuous-space Fourier transform of a signal s(x) is written S(ω), and the discrete-space Fourier transform of a sequence c[k] is written C(e^{jω}). Discrete convolution is indicated with an asterisk (∗) and for its continuous counterpart a star (⋆) is employed. We write f̄(x) (resp. p̄[k]) for the time-reversed function f̄(x) = f(−x) (resp. p̄[k] = p[−k]). Each of the stochastic processes considered here is a zero-mean wide-sense stationary (WSS) process, unless explicitly stated otherwise.
© EURASIP, 2010  ISSN 2076-1465  1291

For such a process {s}, we denote by r_ss(d) its autocorrelation (or autocovariance) function (ACF), namely,

    r_ss(d) ≝ E[s(x) · s(x + d)],   x arbitrary.

For a random vector g, C_g denotes its covariance matrix,

    C_g ≝ Cov[g, g] = E[g · gᵀ].

For reasons of compact and simple presentation, we address the problem in a single dimension; the generalization to higher dimensions is straightforward. The process of sampling and reconstruction can be summarized as follows: the input signal s(x) is sampled with a sampling device characterized by the analysis kernel¹ a(x) and the sampling raster width D. Thus, if we define the function z(x) as the convolution between the signal s and the kernel a, i.e.,

    z(x) = (s ⋆ ā)(x),   (1)

then the samples g[k] are given by

    g[k] = z(kD) + v[k],   k ∈ Z,

where v represents a zero-mean noise term with known covariance function r_vv.

Figure 1: The block diagram represents the sampling-reconstruction problem. (Blocks: analysis kernel ā(x), sampling with δ(x − kD), additive noise v[k], digital filter, reconstruction kernel ϕ(x); signals s(x), g[k], u[k], ŝ(x).)

The samples are called ideal (or the sampling process is said to be ideal) if the analysis kernel a(x) is equal to the Dirac impulse δ(x). In this case, and in the absence of noise, the sample g[k] agrees with the signal value s(kD).

The reconstruction aims at finding a linear estimate ŝ(x) of s(x) from the samples g[k] of the form

    ŝ(x) = ∑_{k=−∞}^{+∞} g[k] · r_rec(x − kD),   (2)

that is optimal in the least squares sense, that is, the second power Q of the error signal should be minimized. We call the function r_rec the reconstruction kernel.

3. OBTAINING THE COVARIANCE FUNCTIONS

For readers familiar with linear estimation theory, it is not at all surprising that the optimal reconstruction ŝ depends only on the covariances Cov[s(x), s(x̃)] = r_ss(x − x̃), Cov[s(x), g[k]], and Cov[g[k], g[ℓ]]. In this section, we express the last two covariances in terms of the signal autocorrelation function r_ss, the sampling kernel a, and the noise autocorrelation r_vv, by adapting results from the theory of linear systems with stochastic inputs.

Eq. 1 is the defining expression for a linear shift-invariant (LSI) system with stochastic input s(x), impulse response ā, and output z(x). According to the well-known correlation formulas for LSI systems (cf. [15, p. 272]), a straightforward adaptation to our current setting shows that

    r_sz(t) = (r_ss ⋆ a)(t),   (4)
    r_zz(t) = (r_ss ⋆ a ⋆ ā)(t).   (5)

Assume furthermore that the noise and the signal are uncorrelated. Then we obtain

    Cov[g[k], s(x)] = Cov[z(kD), s(x)] = (r_ss ⋆ a)(x − kD),   (6)

as a consequence of eq. 4. Similarly,

    Cov[g[i], g[ℓ]] = Cov[z(iD), z(ℓD)] + Cov[v[i], v[ℓ]]
                    = (r_ss ⋆ a ⋆ ā)((ℓ − i)D) + r_vv[ℓ − i],   (7)

following eq. 5.

4. MMSE ESTIMATION IN THE MIXED CONTINUOUS-DISCRETE CASE

4.1 Estimation from finite noisy samples

Let us assume that we observe K samples of the signal s, which are assembled in the vector g = (g[N₁], ..., g[N_K])ᵀ. There are no special requirements on the choice of the sampling locations N_i, but in our case we assume that they are equally spaced. For each point x, we design the estimate ŝ(x) to depend linearly on the sample vector g as follows:

    ŝ(x) = wᵀ(x) · g,   (8)

where the weighting vector w is to be chosen so as to minimize the mean-square error Q = E[(s(x) − ŝ(x))²]. Introducing the vector

    f(x) ≝ (f₁(x), ..., f_K(x))ᵀ
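The excerpt breaks off while introducing the vector f(x). For orientation only (this step is standard linear estimation, not quoted from the paper): minimizing Q yields the normal equations C_g · w(x) = f(x), with f_i(x) = Cov[s(x), g[N_i]] from eq. 6 and C_g from eq. 7. A minimal Python/numpy sketch of this finite-sample estimator in the ideal-kernel special case — a = δ (so r_ss ⋆ a = r_ss and r_ss ⋆ a ⋆ ā = r_ss), white noise r_vv[m] = σ²·δ[m], and an illustrative Gaussian ACF; all of these modeling choices are assumptions for the demo, not the paper's experimental setup:

```python
import numpy as np

def r_ss(d, ell=1.0):
    """Illustrative Gaussian ACF model (an assumption, not from the paper)."""
    return np.exp(-0.5 * (np.asarray(d, float) / ell) ** 2)

def mmse_interpolate(x, sample_pos, g, sigma2):
    """Linear MMSE estimate s_hat(x) = w(x)^T g  (eq. 8), with
    w(x) solving C_g w = f(x), where for an ideal kernel a = delta:
      f_i(x)    = r_ss(x - N_i)                       (eq. 6)
      (C_g)_il  = r_ss((N_l - N_i)) + sigma2*(i == l) (eq. 7)."""
    sample_pos = np.asarray(sample_pos, float)
    f = r_ss(x - sample_pos)                                  # eq. 6
    C = r_ss(sample_pos[:, None] - sample_pos[None, :]) \
        + sigma2 * np.eye(len(sample_pos))                    # eq. 7
    w = np.linalg.solve(C, f)                                 # normal equations
    return w @ np.asarray(g, float)                           # eq. 8
```

With σ² = 0 and x placed exactly at a sample location, f(x) coincides with a column of C_g, so w(x) is a unit vector and the estimate reproduces that sample; with σ² > 0 the estimator trades interpolation for smoothing, which is the behavior the paper's experiments explore.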
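The introduction notes that the ACF "must be given, or estimated." As a side illustration of the estimation route (again an aside, not part of the paper), a biased sample estimate of r_ss for a zero-mean discrete-time WSS realization can be sketched as:

```python
import numpy as np

def sample_acf(s, max_lag):
    """Biased sample estimate of r_ss[d] for lags d = 0..max_lag of a
    zero-mean sequence s; dividing by N (not N - d) keeps the estimated
    ACF sequence positive semidefinite."""
    s = np.asarray(s, float)
    N = len(s)
    return np.array([np.dot(s[:N - d], s[d:]) / N for d in range(max_lag + 1)])
```

For the alternating sequence [1, -1, 1, -1] this gives r[0] = 1 and r[1] = -0.75, the negative lag-1 correlation one expects from a sign-flipping signal.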