Blind Deconvolution of Backscattered Ultrasound Using Second-Order Statistics

Total Pages: 16

File Type: PDF, Size: 1020 KB

Blind Deconvolution of Backscattered Ultrasound using Second-Order Statistics

Carlos E. Davila and Tao He
Electrical Engineering Department, Southern Methodist University
Dallas, Texas 75275-0338
E-mail: [email protected]
Phone: (214) 768-3197
Fax: (214) 768-3573

Abstract

A method for blind deconvolution of ultrasound based on second-order statistics is proposed. This approach relaxes the requirement made by other blind deconvolution methods that the tissue reflectivity be statistically white or broad-band. Two methods are described; the first is a basic second-order deconvolution method, while the second incorporates an optimally weighted least squares solution. Experimental results using synthetic and actual ultrasound data demonstrate the potential benefits of this method.

Submitted to IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, July 23, 2001

1 Background

Ultrasound is used extensively in medicine as a relatively inexpensive approach for non-invasively imaging organ structures and measuring blood flow. The basic premise behind ultrasonic imaging is that an ultrasonic pulse transmitted through tissue will be reflected or backscattered, particularly at distinct tissue boundaries. The underlying tissue structures can then be imaged by measuring the reflections of the transmitted pulse at each tissue boundary. While models for ultrasonic backscattering can be quite complex, a simple model that is often used is based on a linear convolution

$$r(t) = \iiint \frac{e^{-2az}}{z}\, g(x,y,z)\, s(x,y)\, p\!\left(t - \frac{2z}{c}\right) dx\, dy\, dz \qquad (1)$$

where s(x, y) is the lateral distribution of the transmitted ultrasound wave, c is the speed of sound through the tissue (assumed constant), a is the average attenuation constant of the tissue in the ultrasound propagation path, and g(x, y, z) is the tissue reflectivity. The variables x, y, and z are 3-dimensional spatial position variables, and t is time. This convolutional model is valid in a weakly scattering medium (no secondary or higher-order reflections) consisting of anisotropic scatterers whose size is small relative to the wavelength of the ultrasound [1, 2, 3, 4, 5]. Lateral distortion in the (x, y) plane can be reduced by using a finely focused beam generated by a phased array of transducers [6, 7, 8, 9]; moreover, distance-dependent attenuation can be compensated by multiplying the received reflected signal by cte^{2act}, or more typically, by passing the received signal through a static log-like nonlinearity [10]. Assuming that lateral distortion and attenuation have been compensated, the reflected pulse can be expressed as a one-dimensional convolution

$$r(t) = \int g(z)\, p\!\left(t - \frac{2z}{c}\right) dz \qquad (2)$$

The resolution along the axial direction is limited by the spatial spread of the transmitted pulse; if the pulse were a true delta function, then the reflected pulse would correspond exactly to the tissue reflectivity. In practice the transmitted pulse deviates from an ideal impulse, and closely spaced tissue boundaries will not be resolved. For this reason deconvolution methods have been used to improve the axial resolution of ultrasound scans. If the transmitted pulse is known or can be easily measured, then a straightforward least squares or Wiener filter-based estimation can be used to solve for the tissue reflectivity. A number of deconvolution approaches based on the assumption that the pulse is known have been published [1, 11, 12, 13, 14, 15].
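As an illustration of this non-blind baseline (not part of the original paper), the following Python sketch simulates the discretized model r(n) = g(n) * p(n) + v(n) with a hypothetical Gaussian-modulated pulse and a synthetic spike-train reflectivity, then applies a frequency-domain Wiener deconvolution that assumes the pulse is known exactly. The pulse shape, noise level, and regularization constant are illustrative assumptions rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic reflectivity: a few isolated tissue boundaries (illustrative values)
K = 200
g = np.zeros(K)
g[[40, 55, 120, 160]] = [1.0, -0.6, 0.8, -0.4]

# Hypothetical transmitted pulse: a Gaussian-modulated sinusoid standing in for p(t)
L = 33
n = np.arange(L) - L // 2
p = np.exp(-0.5 * (n / 4.0) ** 2) * np.cos(0.6 * np.pi * n)

# Received signal following the discretized model r(n) = g(n) * p(n) + v(n)
N = K + L - 1
r = np.convolve(g, p) + 0.01 * rng.standard_normal(N)

# Frequency-domain Wiener deconvolution, assuming the pulse p is known
P = np.fft.rfft(p, N)
R = np.fft.rfft(r, N)
nsr = 1e-2                            # noise-to-signal regularization (tuning assumption)
g_hat = np.fft.irfft(np.conj(P) * R / (np.abs(P) ** 2 + nsr), N)[:K]

# Agreement is partial: only the portion of g(n) inside the pulse passband is recoverable
print("correlation with true reflectivity:", np.corrcoef(g, g_hat)[0, 1])
```

Even with the pulse known exactly, the printed correlation stays well below one because the band-limited pulse only allows the in-band part of the reflectivity to be recovered.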
In practice, the transmitted pulse is difficult to measure since this would require a second transducer. Even if a second transducer were present, it is not clear where this transducer could be placed in order to accurately measure the transmitted pulse. The pulse could be measured off-line using a target designed to behave like a point reflector; however, this would fail to incorporate temporal variations in the pulse shape, and the target may not adequately model a true point reflector. For this reason most recent approaches to ultrasound deconvolution have been based on so-called "blind" methods, wherein the tissue reflectivity is estimated without any knowledge of the ultrasonic pulse or probing signal.

If the transmitted pulse is minimum phase, and the tissue reflectivity is modeled as white noise, then it is possible to use the second-order statistics of the reflected pulse to estimate the pulse signal [16, 5]. Given an estimate of the pulse, a Wiener filter or any one of the aforementioned deconvolution methods can subsequently be used to estimate the tissue reflectivity. Unfortunately, the transmitted pulse is often not minimum phase [17, 18]. Several authors have used homomorphic deconvolution, wherein the complex cepstrum of the reflected ultrasound is used to estimate the transmitted pulse [19, 17, 20, 21]. Again, the pulse estimation step must be followed by a conventional deconvolution step. The complex cepstrum used in homomorphic deconvolution is difficult to measure if the transmitted pulse has zeros close to the unit circle [22]. In this case, it is necessary to multiply the received signal by an exponential function in order to migrate unstable zeros away from the unit circle [22]. Moreover, the homomorphic deconvolution approach also assumes that the tissue reflectivity can be modeled as wide-band (white) noise. Other investigators have used higher-order statistics to estimate the transmitted ultrasound pulse [18, 23]. This approach assumes the tissue reflectivity can be modeled as white, non-Gaussian noise. In particular, the reflectivity kurtosis should be an impulse [24, 25, 26]. While it appears likely that tissue reflectivities are non-Gaussian [18], there do not appear to be any definitive studies which suggest that reflectivity can be modeled as white noise. Moreover, higher-order methods require a large amount of data to obtain reasonable estimates of the higher-order statistics and are computationally intensive.

Both homomorphic and higher-order statistics deconvolution methods assume the tissue reflectivity can be modeled as white noise. Any deviation from this assumption is likely to have a detrimental effect on the quality of the deconvolution. The assumption that tissue reflectivity can be modeled as white noise appears to be questionable for a large class of tissue structures that undergo continuous (rather than abrupt) changes in acoustic impedance. The relationship between acoustic impedance h(z) and reflectivity has been studied in [27] and is given by

$$g(z) = \frac{1}{4z}\,\frac{\delta h(z)}{\delta z} \qquad (3)$$

This relationship takes into account higher-order reflections, which are usually ignored in convolutional models of backscattering. The relationship in (3) predicts that tissues which undergo a continuous variation in acoustic impedance with depth are unlikely to have white noise reflectivity. Another complicating factor with the use of homomorphic and higher-order deconvolution methods is the assumption that the transmitted ultrasound pulse has a time-invariant shape.
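The implication of (3) can be checked numerically. The short sketch below (an illustration, not material from the paper) uses a hypothetical smoothly varying impedance profile h(z), discretizes the derivative in (3) with a finite difference, and inspects the normalized autocorrelation of the resulting reflectivity; a white sequence would show essentially a single lag-zero peak, whereas this reflectivity stays correlated over many lags.

```python
import numpy as np

# Hypothetical depth axis and a smoothly varying acoustic impedance profile h(z)
z = np.linspace(1e-3, 40e-3, 2000)                 # depth in metres (avoid z = 0)
h = 1.5e6 + 2.0e5 * np.tanh((z - 20e-3) / 5e-3)    # gradual impedance transition

# Discrete version of Eq. (3): g(z) = (1 / 4z) * dh(z)/dz via a finite-difference derivative
g = np.gradient(h, z) / (4.0 * z)

# Normalized autocorrelation of the reflectivity; white noise would collapse to a
# single dominant peak at lag zero, whereas this profile stays correlated for many lags
g0 = g - g.mean()
acf = np.correlate(g0, g0, mode="full")
acf = acf[acf.size // 2:] / acf[acf.size // 2]
print("autocorrelation at lags 0, 10, 50:", acf[0], acf[10], acf[50])
```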
It is well known that the pulse shape changes with depth due to frequency-dependent attenuation [3].

In this paper, a method for ultrasound deconvolution based on second-order statistics is described. This approach offers several advantages over the two blind deconvolution methods mentioned above. First, no assumptions are made about the statistical nature of the tissue reflectivity. In theory, the reflectivity can be either broad-band or narrow-band without affecting the viability of the deconvolution. Second-order statistics can also be estimated more accurately and with fewer computations than higher-order statistics. Second-order deconvolution methods are not sensitive to the location of the zeros of the transmitted ultrasound, so no exponential weighting or other kinds of preprocessing are necessary. Another appealing characteristic of this method is that it estimates the tissue reflectivity in a single step. In other words, it is not necessary to first estimate the transmitted ultrasound pulse prior to doing the deconvolution step. The deconvolution step is subject to errors, even when the pulse estimation is good, due to the presence of noise in the reflected signal. The method to be described is closely related to second-order blind deconvolution of communications channels [28, 29, 30]. However, since the transmitted ultrasound has finite duration, the equations which model the autocorrelation matrix of the reflected ultrasound do not possess multiple Toeplitz matrices, which can lead to sensitivity problems in blind second-order deconvolution of communications channels [31, 32, 33].

2 Second-Order Blind Deconvolution

If the spatial variable z and the time variable t in (2) are discretized, then the convolution integral can be expressed as a standard discrete-time convolution

$$r(n) = g(n) * p(n) + v(n) \qquad (4)$$

where:

• g(n): tissue reflectivity to be estimated, length K,
• p(n): transmitted ultrasound pulse, length L,
• r(n): reflected pulse, length N ≡ K + L − 1,
• v(n): measurement and modelling error, length N.

In matrix form, r(n) can be expressed as r = Gp + v, where

$$\mathbf{r} = [\, r(0) \;\; r(1) \;\; \cdots \;\; r(N-1) \,]^T \qquad (5)$$

$$\mathbf{p} = [\, p(0) \;\; p(1) \;\; \cdots \;\; p(L-1) \,]^T \qquad (6)$$

$$\mathbf{v} = [\, v(0) \;\; v(1) \;\; \cdots \;\; v(N-1) \,]^T \qquad (7)$$

and

$$\mathbf{G} = \begin{bmatrix}
g(0) & 0 & \cdots & 0 \\
g(1) & g(0) & \cdots & 0 \\
\vdots & & \ddots & \\
g(L-1) & g(L-2) & \cdots & g(0) \\
\vdots & & \ddots & \\
g(K-1) & g(K-2) & \cdots & g(K-L) \\
0 & g(K-1) & \cdots & g(K-L+1) \\
\vdots & & \ddots & \vdots \\
0 & 0 & \cdots & g(K-1)
\end{bmatrix} \qquad (8)$$

The matrix G has dimension N × L. The transmitted ultrasound p(n) is assumed to be persistently exciting, meaning that $R_p \equiv E[\mathbf{p}\mathbf{p}^T]$ has full rank [34].
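The structure of G in (8) can be reproduced directly. The sketch below (illustrative code, not from the paper) builds the N × L convolution matrix from a reflectivity sequence and verifies that Gp equals the linear convolution g(n) * p(n); the random test vectors and lengths are placeholders.

```python
import numpy as np

def convolution_matrix(g, L):
    """N x L matrix of Eq. (8), N = K + L - 1: column j holds g shifted down by j,
    so that G @ p is the linear convolution of g and p."""
    K = len(g)
    N = K + L - 1
    G = np.zeros((N, L))
    for j in range(L):
        G[j:j + K, j] = g
    return G

rng = np.random.default_rng(1)
K, L = 200, 33                        # illustrative lengths, not values from the paper
g = rng.standard_normal(K)            # placeholder reflectivity
p = rng.standard_normal(L)            # placeholder pulse

G = convolution_matrix(g, L)
r = G @ p                             # noiseless r = G p
assert np.allclose(r, np.convolve(g, p))
print("G has shape", G.shape, "and rank", np.linalg.matrix_rank(G))
```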
Recommended publications
  • Blind Image Deconvolution of Linear Motion Blur
    Blind image deconvolution of linear motion blur Florian Brusius, Ulrich Schwanecke, and Peter Barth RheinMain University of Applied Sciences, Wiesbaden, Germany Abstract. We present an efficient method to deblur images for information recognition. The method is successfully applied directly on mobile devices as a preprocessing phase to images of barcodes. Our main contribution is the fast identification of blur length and blur angle in the frequency domain by an adapted Radon transform. As a result, the barcode recognition rate of the deblurred images has been increased significantly. Keywords: Blind deconvolution, image restoration, deblurring, motion blur estimation, barcodes, mobile devices, Radon transform 1 Introduction Increasingly, mobile smartphone cameras are used as alternative input devices providing context information, most often in the form of barcodes. The processing power of smartphones has reached a state that allows recognising all kinds of visually perceptible information, like machine-readable barcode tags and theoretically even printed text, shapes, and faces. This makes the camera act as a one-click link between the real world and the digital world inside the device [9]. However, to reap the benefits of this method the image has to be correctly recognised under various circumstances. This depends on the quality of the captured image and is therefore highly susceptible to all kinds of distortions and noise. The photographic image might be over- or underexposed, out of focus, perspectively distorted, noisy or blurred by relative motion between the camera and the imaged object. Unfortunately, all of those problems tend to occur even more on very small cameras.
  • Blind PSF Estimation and Methods of Deconvolution Optimization
    Blind PSF estimation and methods of deconvolution optimization Yu A Bunyak1, O Yu Sofina2 and R N Kvetnyy2 1InnoVinn Inc. Vinnitsa, Ukraine 2Vinnitsa National Technical University, Vinnitsa, Ukraine E-mail: [email protected] Abstract. We have shown that the left side null space of the autoregression (AR) matrix operator is the lexicographical presentation of the point spread function (PSF), on the condition that the AR parameters are common to the original and blurred images. A method of inverse PSF evaluation with a regularization functional as the function of surface area is offered. The inverse PSF was used for primary image estimation. Two methods of original image estimate optimization were designed based on maximum entropy generalization of the sought and blurred images' conditional probability density and regularization. The first method uses balanced variations of convolution and deconvolution transforms to obtain an iterative schema of image optimization. The variations balance was defined by dynamic regularization based on the condition of iteration process convergence. The regularization has a dynamic character because it depends on current and previous image estimate variations. The second method implements the regularization of deconvolution optimization in a curved space with a metric defined on the image estimate surface. It is based on the invariance of the target functional to fluctuations of the optimal argument value. The given iterative schemas have faster convergence in comparison with known ones, so they can be used for reconstruction of high-resolution image series in real time. 1. Introduction Many modern applications need real-time reconstruction of high-resolution images of several million pixels in size, which are corrupted by defocusing, medium penetration, camera jitter and other factors. Usually, the model of corruption is presented as a convolution of the original image signal and a point spread function (PSF) [5, 23, 27].
  • Chapter 15 - BLIND SOURCE SEPARATION: Principal & Independent Component Analysis
    HST-582J/6.555J/16.456J Biomedical Signal and Image Processing Spring 2008 Chapter 15 - BLIND SOURCE SEPARATION: Principal & Independent Component Analysis © G.D. Clifford 2005-2008 Introduction In this chapter we will examine how we can generalize the idea of transforming a time series into an alternative representation, such as the Fourier (frequency) domain, to facilitate systematic methods of either removing (filtering) or adding (interpolating) data. In particular, we will examine the techniques of Principal Component Analysis (PCA) using Singular Value Decomposition (SVD), and Independent Component Analysis (ICA). Both of these techniques utilize a representation of the data in a statistical domain rather than a time or frequency domain. That is, the data are projected onto a new set of axes that fulfill some statistical criterion, which implies independence, rather than a set of axes that represent discrete frequencies such as with the Fourier transform, where the independence is assumed. Another important difference between these statistical techniques and Fourier-based techniques is that the Fourier components onto which a data segment is projected are fixed, whereas PCA- or ICA-based transformations depend on the structure of the data being analyzed. The axes onto which the data are projected are therefore discovered. If the structure of the data (or rather the statistics of the underlying sources) changes over time, then the axes onto which the data are projected will change too. Any projection onto another set of axes (or into another space) is essentially a method for separating the data out into separate components or sources which will hopefully allow us to see important structure more clearly in a particular projection.
  • Revisiting Bayesian Blind Deconvolution
    Journal of Machine Learning Research 15 (2014) 3775-3814 Submitted 5/13; Revised 5/14; Published 11/14 Revisiting Bayesian Blind Deconvolution David Wipf [email protected] Visual Computing Group Microsoft Research Building 2, No. 5 Danling Street Beijing, P.R. China, 100080 Haichao Zhang [email protected] School of Computer Science Northwestern Polytechnical University 127 West Youyi Road Xi'an, P.R. China, 710072 Editor: Lawrence Carin Abstract Blind deconvolution involves the estimation of a sharp signal or image given only a blurry observation. Because this problem is fundamentally ill-posed, strong priors on both the sharp image and blur kernel are required to regularize the solution space. While this naturally leads to a standard MAP estimation framework, performance is compromised by unknown trade-off parameter settings, optimization heuristics, and convergence issues stemming from non-convexity and/or poor prior selections. To mitigate some of these problems, a number of authors have recently proposed substituting a variational Bayesian (VB) strategy that marginalizes over the high-dimensional image space leading to better estimates of the blur kernel. However, the underlying cost function now involves both integrals with no closed-form solution and complex, function-valued arguments, thus losing the transparency of MAP. Beyond standard Bayesian-inspired intuitions, it thus remains unclear by exactly what mechanism these methods are able to operate, rendering understanding, improvements and extensions more difficult. To elucidate these issues, we demonstrate that the VB methodology can be recast as an unconventional MAP problem with a very particular penalty/prior that conjoins the image, blur kernel, and noise level in a principled way.
  • Deep Wiener Deconvolution: Wiener Meets Deep Learning for Image Deblurring
    Deep Wiener Deconvolution: Wiener Meets Deep Learning for Image Deblurring Jiangxin Dong Stefan Roth MPI Informatics TU Darmstadt [email protected] [email protected] Bernt Schiele MPI Informatics [email protected] Abstract We present a simple and effective approach for non-blind image deblurring, combining classical techniques and deep learning. In contrast to existing methods that deblur the image directly in the standard image space, we propose to perform an explicit deconvolution process in a feature space by integrating a classical Wiener deconvolution framework with learned deep features. A multi-scale feature refinement module then predicts the deblurred image from the deconvolved deep features, progressively recovering detail and small-scale structures. The proposed model is trained in an end-to-end manner and evaluated on scenarios with both simulated and real-world image blur. Our extensive experimental results show that the proposed deep Wiener deconvolution network facilitates deblurred results with visibly fewer artifacts. Moreover, our approach quantitatively outperforms state-of-the-art non-blind image deblurring methods by a wide margin. 1 Introduction Image deblurring is a classical image restoration problem, which has attracted widespread attention [e.g., 1, 3, 9, 10, 56]. It is usually formulated as y = x ∗ k + n, (1) where y, x, k, and n denote the blurry input image, the desired clear image, the blur kernel, and image noise, respectively. ∗ is the convolution operator. Traditional methods usually separate this problem into two phases, blur kernel estimation and image restoration (i.e., non-blind image deblurring).
  • Blind Deconvolution and Adaptive Algorithms for De-Reverberation
    Master Thesis Electrical Engineering with Emphasis on signal processing BLIND DECONVOLUTION AND ADAPTIVE ALGORITHMS FOR DE-REVERBERATION Suryavamsi Atresya Uppaluru Supervisor: Dr. Nedelko Grbic Examiner: Dr. Benny Sällberg Department of Signal Processing School of Engineering (ING) Blekinge Institute of Technology This thesis is submitted to the School of Engineering at Blekinge Institute of Technology (BTH) in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering with emphasis on Signal Processing. Contact Information: Author: Suryavamsi Atresya Uppaluru E-mail: [email protected] Supervisor: Dr. Nedelko Grbic School of Engineering (ING) E-mail: [email protected] Phone: +46 455 38 57 27 Examiner: Dr. Benny Sällberg School of Engineering (ING) E-mail: [email protected] Phone: +46 455 38 55 87 School of Engineering Blekinge Institute of Technology Internet: www.bth.se/ing 371 79 Karlskrona Phone: +46 455 38 50 00 Sweden. Fax: +46 455 38 50 57 ACKNOWLEDGEMENT I would like to express sincere gratitude to our Supervisor Dr. Nedelko Grbic for his sagacious guidance and scholarly advice which enabled us to complete the thesis. I humbly thank the examiner Dr. Benny Sällberg for the encouragement and support. I express my deep gratitude to my parents in particular and my family who always reinforced every effort of mine. I would like to thank God and my friends for giving me constant support in every endeavor of mine. I would like to express my gratitude towards everyone who contributed their precious time and effort to help me, with whom I could complete my thesis successfully.
  • Channel Equalization and Blind Deconvolution
    Advanced Digital Signal Processing and Noise Reduction, Second Edition. Saeed V. Vaseghi Copyright © 2000 John Wiley & Sons Ltd ISBNs: 0-471-62692-9 (Hardback); 0-470-84162-1 (Electronic) 15 CHANNEL EQUALIZATION AND BLIND DECONVOLUTION 15.1 Introduction 15.2 Blind-Deconvolution Using Channel Input Power Spectrum 15.3 Equalization Based on Linear Prediction Models 15.4 Bayesian Blind Deconvolution and Equalization 15.5 Blind Equalization for Digital Communication Channels 15.6 Equalization Based on Higher-Order Statistics 15.7 Summary Blind deconvolution is the process of unravelling two unknown signals that have been convolved. An important application of blind deconvolution is in blind equalization for restoration of a signal distorted in transmission through a communication channel. Blind equalization has a wide range of applications, for example in digital telecommunications for removal of intersymbol interference, in speech recognition for removal of the effects of microphones and channels, in deblurring of distorted images, in dereverberation of acoustic recordings, in seismic data analysis, etc. In practice, blind equalization is only feasible if some useful statistics of the channel input, and perhaps also of the channel itself, are available. The success of a blind equalization method depends on how much is known about the statistics of the channel input, and how useful this knowledge is in the channel identification and equalization process. This chapter begins with an introduction to the basic ideas of deconvolution and channel equalization. We study blind equalization based on the channel input power spectrum, equalization through separation of the input signal and channel response models, Bayesian equalization, nonlinear adaptive equalization for digital communication channels, and equalization of maximum-phase channels using higher-order statistics.
  • Independent Component Analysis and Nongaussianity for Blind Image Deconvolution and Deblurring
    Integrated Computer-Aided Engineering 15 (2008) 219–2 219 IOS Press Independent component analysis and nongaussianity for blind image deconvolution and deblurring Hujun Yin and Israr Hussain School of Electrical and Electronic Engineering, The University of Manchester, Manchester M60 1QD, UK E-mail: [email protected], [email protected] Abstract. Blind deconvolution or deblurring is a challenging problem in many signal processing applications as signals and images often suffer from blurring or point spreading with unknown blurring kernels or point-spread functions as well as noise corruption. Most existing methods require certain knowledge about both the signal and the kernel and their performance depends on the amount of prior information regarding both. Independent component analysis (ICA) has emerged as a useful method for recovering signals from their mixtures. However, ICA usually requires a number of different input signals to uncover the mixing mechanism. In this paper, a blind deconvolution and deblurring method is proposed based on the nongaussianity measure of ICA as well as a genetic algorithm. The method is simple and does not require prior knowledge regarding either the image or the blurring process, but is able to estimate or approximate the blurring kernel from a single blurred image. Various blurring functions are described and discussed. The proposed method has been tested on images degraded by different blurring kernels and the results are compared to those of existing methods such as the Wiener filter, regularization filter, and the Richardson-Lucy method. Experimental results show that the proposed method outperforms these methods. 1. Introduction the restoration process may become straightforward or even trivial.
  • Research Article PSF Estimation Via Gradient Cepstrum Analysis for Image Deblurring in Hybrid Sensor Network
    Hindawi Publishing Corporation International Journal of Distributed Sensor Networks Volume 2015, Article ID 758034, 11 pages http://dx.doi.org/10.1155/2015/758034 Research Article PSF Estimation via Gradient Cepstrum Analysis for Image Deblurring in Hybrid Sensor Network Mingzhu Shi1 and Shuaiqi Liu2 1 College of Electronic and Communication Engineering, Tianjin Normal University, Tianjin 300387, China 2College of Electronic and Information Engineering, Hebei University, Baoding 071000, China Correspondence should be addressed to Mingzhu Shi; [email protected] Received 4 December 2014; Accepted 13 January 2015 Academic Editor: Qilian Liang Copyright © 2015 M. Shi and S. Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. In hybrid sensor networks, information fusion from heterogeneous sensors is important, but quite often information such as images is blurred. Single image deblurring is a highly ill-posed problem and is usually regularized by alternately estimating the point spread function (PSF) and recovering the blurred image, which leads to high complexity and low efficiency. In this paper, we first propose an efficient PSF estimation algorithm based on gradient cepstrum analysis (GCA). Then, to verify the accuracy of the strategy, the estimated PSFs are used for an image deconvolution step, which exploits a novel total variation model coupled with a gradient fidelity term. We also adopt an alternating direction method (ADM) numerical algorithm with rapid convergence and high robustness to optimize the energy function. Both synthetic and real blur experiments show that our scheme can estimate the PSF rapidly and produce comparable results without long time consumption.
  • Contrasts, Independent Component Analysis, and Blind Deconvolution Pierre Comon
    Contrasts, Independent Component Analysis, and Blind Deconvolution Pierre Comon To cite this version: Pierre Comon. Contrasts, Independent Component Analysis, and Blind Deconvolution. International Journal of Adaptive Control and Signal Processing, Wiley, 2004, 18 (3), pp. 225–243. 10.1002/acs.791. hal-00542916 HAL Id: hal-00542916 https://hal.archives-ouvertes.fr/hal-00542916 Submitted on 3 Dec 2010 HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. Contrasts, Independent Component Analysis, and Blind Deconvolution P. Comon I3S, CNRS-UNSA, Sophia-Antipolis SUMMARY A general definition of contrast criteria is proposed, which induces the concept of trivial filters. These optimization criteria enjoy identifiability properties, and aim at delivering outputs satisfying specific properties, such as statistical independence or a discrete character. Several ways of building new contrast criteria are described. It is then briefly elaborated on practical numerical algorithms. Copyright © 2004 John Wiley & Sons, Ltd. Key words: Independent Component Analysis (ICA), Contrast criteria, Blind Deconvolution and Equalization, MIMO linear systems
  • Deconvolved Image Restoration from Autocorrelations Daniele Ancora* and Andrea Bassi
    Deconvolved Image Restoration from Autocorrelations Daniele Ancora* and Andrea Bassi Abstract—Recovering a signal from auto-correlations or, equivalently, retrieving the phase linked to a given Fourier modulus, is a wide-spread problem in imaging. This problem has been tackled in a number of experimental situations, from optical microscopy to adaptive astronomy, making use of assumptions based on constraints and prior information about the recovered object. In a similar fashion, deconvolution is another common problem in imaging, in particular within the optical community, allowing high-resolution reconstruction of blurred images. Here we address the mixed problem of performing the auto-correlation inversion while, at the same time, deconvolving its current estimation. To this end, we propose an I-divergence optimization, driving our formalism into a widely used iterative scheme, inspired by Bayesian-based approaches. We demonstrate the method recovering the signal from blurred auto-correlations, [...] methods based on alternating projections, protocols based on optimization and iterative approaches inspired different inversion strategies. Although the field is in continuous progress, Shechtman et al. provides an extensive review on the topic [11]. Measurements relying on the estimation of the object's auto-correlation might be corrupted from blurring, which is given by the limited bandwidth of the detection system or limited by diffraction. In this case, two consecutive inverse problems (deconvolution and phase retrieval) should be solved. Here, instead, we propose an iterative procedure that allows one to obtain a deconvolved image from auto-correlation measurements, solving both problems at the same time.