List of Abbreviations

STEM Scanning Transmission Electron Microscopy

PSF Point Spread Function

HAADF High-Angle Annular Dark-Field

Z-Contrast Atomic Number-Contrast

ICM Iterative Constrained Methods

RL Richardson-Lucy

BD Blind Deconvolution

ML Maximum Likelihood

MBD Multichannel Blind Deconvolution

SA Simulated Annealing

SEM Scanning Electron Microscopy

TEM Transmission Electron Microscopy

MI Mutual Information

PRF Point Response Function

Registration and 3D Deconvolution of STEM Data

M. MERCIMEK*, A. F. KOSCHAN*, A. Y. BORISEVICH‡, A.R. LUPINI‡, M. A. ABIDI*, & S. J. PENNYCOOK‡. *IRIS Laboratory, Electrical Engineering and Computer Science, The University of Tennessee, 1508 Middle Dr, Knoxville, TN 37996 ‡Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831

Key words. 3D deconvolution, Depth sectioning, Electron microscopy, 3D Reconstruction.

Summary A common characteristic of many practical imaging modalities is that the imaging process distorts the signal or scene of interest: the principal data is veiled by noise and less important signals. We split the 3D enhancement process into two separate stages. Deconvolution refers in general to the improvement of signal fidelity by reversing the degradation process, while registration aims to overcome significant misalignment of the same structures in lateral planes along the focal axis. The dataset studied was acquired with a VG HB603U STEM operated at 300 kV, equipped with a Nion aberration corrector and producing a beam with a diameter of 0.6 Å. The initial experiments described how aberration-corrected Z-Contrast imaging in STEM can be combined with the enhancement process towards better visualization with a depth resolution of a few nanometers.

I. INTRODUCTION

3D characterization of both biological samples and non-biological materials provides nano-scale resolution in three dimensions, permitting the study of complex relationships between structure and function. In microscopic imaging, a collection of 2D images of an object taken at different planes through optical depth sectioning is one of the bases for reconstructing the specimen in 3D space. Although the 3D resolution of the tomography method is superior, optical depth sectioning has advantages in terms of reduced sample irradiation, fewer required images, and lower processing time [Nellist et al. (2006)]. As 3D images can be assembled by piling up consecutive sections, lateral views can be presented to the observer. Especially for researchers in the electron microscopy field, revealing atomic arrangements through visual correction of the specimen is indispensable for gaining knowledge in several areas: first-principles calculations, chemical reactivity measurements, electrical properties, point defects, and optical properties.

Rather than just visualizing a 3D sparse point cloud, two main approaches can be adopted [Diaspro et al. (1996)]. The first approach is to extract distinctive features from the set of 2D images. The images segmented using simple binary thresholding and digital filtering can be used to produce a 3D representation of the primitives. These primitives, such as polygon meshes or contours, are then rendered using conventional surface rendering techniques (Fig.1). The second approach is to transform the set of acquired images into another set of images such that the blurring effects are removed by modeling the image formation system, and to apply volumetric rendering afterwards. A major advantage of the latter approach is that the 3D volume can be displayed without any knowledge of the geometry of the dataset and hence without intermediate conversion to a surface representation. The entire dataset is preserved in volume rendering, including internal structures and details that may be lost when the data is reduced to geometric structures with surface rendering.

In order to achieve a good representation of the structures, the non-uniform behavior of the 3D transfer function of the imaging system should be corrected. Moreover, during optical depth sectioning, displacements result in field-of-scan drifts that are mostly, but not purely, rigid across different focal planes. These correction methods are called deconvolution and registration techniques respectively and are outlined in the following sections. The purpose of this paper is to examine these two sources of degradation impairing the observed Atomic Number-contrast (Z-contrast) mode signals of the STEM imaging system.

Fig.1: Surface rendering towards 3D reconstruction for an amorphous structure of gold particles on a carbon substrate. a) Raw 2D slice image of the material at an in-focus plane, b) Image after digital filtering and contour extraction operations, c)-d) 3D model estimate obtained by applying triangulation to the distinctive edge points of all slices.

II. IMAGING CHARACTERISTICS

An imaging system deforms the signal of interest: the signal is degraded by noise, blur, and the presence of other extraneous data. Separating the data stream into its useful components is essential to produce an improved evaluation. The following linear model can precisely represent the degradation of the true image or object function,

g(n1,n2,n3) = f(n1,n2,n3) ⊗ h(n1,n2,n3) + n(n1,n2,n3) (1)

where (n1,n2,n3) represents discrete voxel coordinates of the image frame, g(n1,n2,n3) is the observed image, f(n1,n2,n3) is the true image, h(n1,n2,n3) is the image formation function or Point Spread Function (PSF), ⊗ is the convolution operator, and n(n1,n2,n3) is the additive noise. For real applications, (1) can be described as a convolution integral,

g(x,y,z) = ∫∫∫ f(x−ξ, y−η, z−ζ) h(ξ,η,ζ) dξ dη dζ + n(x,y,z) (2)

in real-world coordinates. Since in most physical applications data is stored and displayed electronically, form (1) is used for computations. This model enables a simple multiplicative-additive representation in the spatial-frequency (Fourier) domain,

G(1,2 ,3 )  F(1,2 ,3 )  H(1,2 ,3 )  N(1,2 ,3 ) (3) Assuming that the imaging process is linear and shift-invariant a single function completely can describe the functioning of the instrument at any point in the specimen. Observed image can be considered superposition of the emitted signals (fluorescence microscopy), or transmitted signals (incoherent Z- contrast imaging in STEM) as a result of „illuminating‟ each point of the sample with PSF. The convolution essentially shifts the PSF in 3D space so that it is centered at each point in the specimen and then sums the contributions of all these shifted PSFs [McNally et all. (1999)]. Although fundamentally different, laser scanning , wide-field microscopy, scanning electron microscopy, transmission electron microscopy, scanning transmission electron microscopy, aerial imaging, , are all based on the same process of image formation produced by a point source object. It can be characterized both theoretically and experimentally, and a precise understanding of the physical parameters specific to each imaging system forms the basis of most deconvolution algorithms. In electron microscopy, takes its name from light optics; too spherical shapes of the early . Spherical aberration causes rays at higher angles to be over-focused. Chromatic aberration causes rays at different energies (different wavelengths) of electrons to be focused differently. These aberrations are difficult to describe perfectly in theoretically terms and cause a breakdown of axial symmetry and shift invariance properties.

In Z-contrast mode of the STEM, the image is an incoherent one, which is simply the true image blurred by an amount independent of the specimen. Mathematically, the observed image is given in Pennycook et al. (2003) as;

I(R)  O(R)  P 2  N(R) (4) R represents coordinates of the data, O is the object function or true image function, often represented simply as a function of Z -atomic number for each atom-, and P 2 is the effective probe intensity profile, including any broadening within the crystal. The incident electron beam is placed on one point of the sample and the electrons are scattered by the atoms are collected at the high-angle annular dark-field (HAADF) detector. The probe then moves to the next pixel and the scattered electrons are collected again at the detector. In STEM imaging as an example, a probe of atomic dimensions is scanned across a specimen. The annular detector located after specimen detects transmitted highly scattered intensity reaching, expressed as Z-contrast imaging. This kind of an image only represents the probability that the electrons strike a certain position on the detector. Electrons have wave-like properties, with wavelengths much less than that of the visible light. The de Broglie wavelength for electrons was defined as;

λ = 1.22 / √E (5)

where λ is the wavelength in nm and E is the energy of the electrons in eV. The naked eye can distinguish details down to 0.1-0.2 mm. For a 100 keV electron probe the wavelength is ~0.004 nm (4 pm). Considering that visible light has wavelengths between 400 nm and 700 nm, a much better resolution, defined as the smallest distance between two points that can be distinguished, can be achieved by using electrons instead. The Rayleigh criterion describes it as;

d = 0.61 λ / (n sin θ) (6)

where n is the refractive index of the medium surrounding the lens, and θ is the semi-angle of the electron probe with the z axis, or the acceptance angle. The small electron wavelength yields a resolution appropriate for seeing atomic structures. The major difficulties of this imaging system which we have to overcome are;
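Equations (5) and (6) can be checked with a short computation; the 10 mrad semi-angle used below is an assumed illustrative value, not a parameter from the experiments.

```python
import math

def electron_wavelength_nm(E_eV):
    # Non-relativistic approximation of eq. (5): lambda (nm) = 1.22 / sqrt(E in eV).
    return 1.22 / math.sqrt(E_eV)

def rayleigh_resolution_nm(wavelength_nm, n=1.0, theta_rad=0.01):
    # Rayleigh criterion of eq. (6): d = 0.61 * lambda / (n * sin(theta)).
    return 0.61 * wavelength_nm / (n * math.sin(theta_rad))

lam = electron_wavelength_nm(100e3)   # 100 keV probe
print(lam)                            # ~0.0039 nm, i.e. ~4 pm, as in the text
print(rayleigh_resolution_nm(lam))    # resolution at an assumed 10 mrad semi-angle
```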

1. Every 2D slice contains considerable intensity information that does not represent an object located at that slice depth. In practice, many pixels appear to have a significant value; however, some of them should be removed from the scene by an algorithm. This superfluous data makes it very difficult to reconstruct edges and/or guess a threshold value that would yield useful, simplified data to work with.

2. Some of the simplest approaches to deconvolution actually ignore the imaging formula altogether. These approaches use the information in only a single focal plane, so the method must be applied independently to each focal-plane image. Thus, the simple 2D methods work by boosting only the higher spatial frequencies in the specimen. However, most specimens are a complex mixture of low and high spatial frequencies, so these 2D filtering methods run the risk of removing components of interest within the specimen. True image recovery requires a 3D operation that depends on what we know about the imaging modality.

3. In STEM there is a large difference between depth and lateral resolution due to physical constraints. This difference leads to an anisotropic 3D image quality. In a good experimental setup, the main peak of the PSF should span more than one slice. Since the depth resolution is about a few nanometers, the particles under investigation should have a diameter at least at the depth-resolution level.

4. The resolution limit for electron microscopy was set by spherical and chromatic aberrations; in recent years aberration correction has opened a pathway bringing atomic sensitivity to what was once a nano-imaging system and is now a pico-imaging system. Nevertheless, although highly sophisticated studies have addressed optical depth sectioning based 3D reconstruction using the Z-contrast imaging mode of STEM, due to the physical limitations a good 3D model of the material has not yet been completely achieved.

III. REGISTRATION

The images we are dealing with can be of objects with a size of only a few nm in STEM imaging. In 3D, the object is sectioned plane by plane in a process involving the continuous displacement of either the focal plane or the objective lens along the optical axis. During depth sectioning these displacements result in field-of-scan shifts between different focal planes. This can be an extremely important problem when a beam with a diameter of 0.6 Å is obtained and then used for scanning the specimen. The significant misalignment of the same structures in the lateral plane (x-y) along the focal axis (z) is thus an important problem. Pair-wise image registration is the problem of matching two or more overlapping scans to build a consistent model, thus estimating the relative pose of one set with respect to the others. Given two or more scans and an initial guess for a transformation that will bring one set (the source) to the correct pose in the coordinate system of the other set (the target), the output is a refined transformation of the slices in the lateral plane. The main steps of most registration algorithms are indicated in Zitova & Flusser (2003) as;

1. Feature detection. Salient and distinctive objects are manually or, preferably, automatically detected.

2. Feature matching. The correspondence between the features detected in the sensed image and those detected in the reference image is established.

3. Transform model estimation. The parameters of the mapping functions are computed by means of the established feature correspondences.

4. Image resampling and transformation. The sensed image is transformed by means of the mapping functions. Image values at non-integer coordinates are computed by an appropriate interpolation technique.
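For rigid lateral shifts, the four steps above can be sketched in a minimal feature-based form. The peak detector, threshold, and nearest-neighbour matching below are illustrative simplifications that assume bright, well-separated features, not the pipeline used in this work.

```python
import numpy as np
from scipy.ndimage import maximum_filter, shift

def detect_peaks(img, threshold=0.5):
    # Step 1 -- feature detection: local maxima above a threshold.
    peaks = (img == maximum_filter(img, size=5)) & (img > threshold)
    return np.argwhere(peaks)

def estimate_translation(ref, sensed):
    # Steps 2-3 -- match each sensed peak to its nearest reference peak
    # and estimate a rigid translation as the mean displacement.
    p_ref = detect_peaks(ref)
    p_sen = detect_peaks(sensed)
    disps = [p_ref[np.linalg.norm(p_ref - p, axis=1).argmin()] - p for p in p_sen]
    return np.mean(disps, axis=0)

# Synthetic slice with two bright "atoms"; the sensed image is a shifted copy.
ref = np.zeros((64, 64))
ref[20, 20] = ref[40, 45] = 1.0
sensed = shift(ref, (3, -2), order=1)   # a known field-of-scan shift

t = estimate_translation(ref, sensed)
aligned = shift(sensed, t, order=1)     # step 4 -- resample the sensed image
print(t)                                # recovers (-3, 2), undoing the shift
```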

Image registration algorithms are classified mainly as either intensity-based methods or feature-based methods. Intensity-based methods compare intensity patterns without attempting to extract salient features of the datasets, while feature-based methods find correspondence between distinctive image features such as corners, edges, and curves. Intensity-based methods are applied globally over the datasets. Feature-based methods establish correspondence between a number of points in the images and estimate a transformation model to map the points in the target image to their correspondences in the reference image. Registration methods show great diversity in how the main pipeline steps are implemented, and the implementation of each registration step has its typical problems. The detected feature sets in the reference and target images must have enough common elements, even in situations when the images do not cover exactly the same scene or when there are object occlusions or other unexpected changes. The detection methods should have good localization accuracy and should be insensitive to the assumed image degradation. Up to a point, imaging system distortions can be tolerated; ideally, however, the algorithm should be able to detect the same features in all projections of the scene regardless of the particular image deformation.

IV. DECONVOLUTION TECHNIQUES

The deconvolution process can be considered a post-processing step. There are in general three generic ways to obtain the PSF given in (1): computing the PSF from a mathematical model, experimentally measuring the PSF as the image of a small fluorescent microsphere or bead, and simultaneously estimating the specimen and the PSF from the recorded image. The accuracy and quality of the PSF are essential to ensure the correct performance of any deconvolution algorithm. The best imaging systems have PSFs which are symmetrical about the z axis and composed of circularly symmetric diffraction rings. Noise, incorrect estimates of aberrations, and incorrect scaling of the PSF may cause major artefacts in the restored image. The noise present in the image considerably limits the efficiency of deconvolution algorithms. It must be taken into account, either directly in the algorithms themselves or by adapted filtering. Noise reduces the likelihood of detecting small and highly attenuated objects.

Various other approaches have been proposed depending upon the assumptions made about the particular degradations and image models. In general, there are architectures such as nonlinear perceptrons, radial basis functions, projection pursuit nets, hinging hyper-planes, probabilistic nets, random nets, high-order nets, and wavelet transforms [Si-Yang (2006)] that implement some form of deconvolution. Due to the low portability of these models, computational methods are much more common. Regardless of the technique chosen, data collection must be optimized and an appropriate deconvolution algorithm selected; that is, the deconvolution process is directly linked to the image formation process. Image deconvolution is an ill-posed problem, and a priori information is needed; a priori information is used to restore well-posedness [Nourrit et al. (2005)]. In this part, different aspects of deconvolution methods are discussed to clarify the appropriate selection of a deconvolution method. 3D deconvolution methods can be classified as;

1. Linear (direct) methods
2. Nonlinear iterative methods
3. Statistical methods
4. Blind and semi-blind methods.

In classical linear image restoration the PSF is assumed to be known and the noise is mostly neglected; the degradation can be inverted using one of many known restoration methods, such as inverse filtering, Wiener filtering, least-squares filtering, or recursive Kalman filtering [Kundur & Hatzinakos (1996)]. The convolution operation can be constructed as a matrix multiplication, where one of the inputs is converted into a Toeplitz matrix. This kind of matrix-vector multiplication is common only in 1D and 2D linear deconvolution applications. The formulation of deconvolution as a linear algebraic problem allows well-established regularization techniques such as Tikhonov regularization [Starck et al. (2002)] and edge-preserving Total Variation regularization [Rudin et al. (1992), Bioucas-Dias et al. (2006)]. Linear regularized methods are very attractive from a computational point of view when the PSF functions are known. They are fast to compute; however, restoration by traditional linear techniques produces artefacts such as oscillations or 'ringing' around sharp changes in intensity in the image, as well as negative pixel values, which lead to mistakes in the interpretation of the images [Razaz & Nicholson (2000)].
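As a concrete, if simplified, example of the linear class, a Wiener-style deconvolution can be written directly in the Fourier domain; the regularisation constant k below stands in for the unknown noise-to-signal power ratio and is an assumed value.

```python
import numpy as np
from numpy.fft import fftn, ifftn, ifftshift

def wiener_deconvolve(g, h, k=1e-4):
    # Linear Wiener-type filter: F = G * conj(H) / (|H|^2 + k).
    H = fftn(ifftshift(h))               # PSF re-centred at the origin
    F = fftn(g) * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(ifftn(F))

# Blur a point source with a centred Gaussian PSF, then restore it.
f = np.zeros((32, 32)); f[16, 16] = 1.0
yy, xx = np.mgrid[-16:16, -16:16]
h = np.exp(-(yy**2 + xx**2) / (2 * 2.0**2)); h /= h.sum()
g = np.real(ifftn(fftn(f) * fftn(ifftshift(h))))   # noiseless circular blur

est = wiener_deconvolve(g, h)
print(g.max(), est.max())   # the restored peak is much sharper than the blurred one
# Note: est can contain small negative values -- the 'ringing' discussed above.
```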

Incorporating additional information leads to constrained deconvolution methods, in which our estimate of the specimen is forced to satisfy one or more physical constraints, such as being non-negative and having a finite size or support constraint, which relies on a blurred image being larger than either the true image or the PSF. Another important constraint, employed in Law & Lane (1996), is to assume that the spectrum of the unknown PSF is a low-pass filter, whereupon the convolution can be assumed to be a low-resolution image of the true object. Such methods are generally called iterative constrained methods (ICM). They differ from classical linear deconvolution methods in that the constraints are enforced during an iterative process and, unlike with linear methods, the solution is approached gradually. The main problem with such iterative methods is that convergence is not always guaranteed. Optimization techniques can be added to the iteration steps to determine the optimal iteration number at which to stop the algorithm. Jansson's and Van Cittert's methods are the two most commonly used methods of the ICM class.
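A minimal 1D sketch of the ICM idea, using the Van Cittert update with a non-negativity constraint; the iteration count and relaxation factor are illustrative choices.

```python
import numpy as np
from scipy.signal import fftconvolve

def van_cittert(g, h, iters=100, beta=1.0):
    # Van Cittert iteration: f <- f + beta * (g - h (convolved with) f),
    # followed by projection onto the non-negativity constraint.
    f = g.copy()
    for _ in range(iters):
        f = f + beta * (g - fftconvolve(f, h, mode='same'))
        f = np.clip(f, 0.0, None)       # physical constraint: intensity >= 0
    return f

# Two unit spikes blurred by a normalised Gaussian kernel.
f_true = np.zeros(64); f_true[20] = f_true[40] = 1.0
x = np.arange(-8, 9)
h = np.exp(-x**2 / (2 * 1.5**2)); h /= h.sum()
g = fftconvolve(f_true, h, mode='same')

est = van_cittert(g, h)
print(g.max(), est.max())   # the constrained iteration sharpens the peaks
```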

The next degree of sophistication is statistical image restoration. In addition to the physical constraints above, statistical methods add statistical information about the noise present during imaging. Considering the nature of such an imaging system, deconvolution is seen as a statistical estimation problem and is solved by any one of the existing methods for statistical estimation. Among these, the most widely used are maximum-likelihood and maximum-entropy methods [Starck et al. (2002)]. Maximum likelihood deconvolution is a more recent, improved subset of the iterative constrained algorithms. If the true image is assumed to be corrupted with Poisson noise, the true image is updated using expectation maximization, commonly called the Richardson-Lucy (RL) method [Lucy (1974), Richardson (1972)]. The Maximum Likelihood (ML) method is considered computationally very expensive when applied to 2D reconstructions, and is therefore impractical for 3D cases [Razaz & Nicholson (2000)].

Blind deconvolution (BD) methods can be divided into two main types [Yitzhaky et al. (1999)]. Direct methods are performed straightforwardly in one step without iterative or recursive operations; blur identification is performed separately from, and prior to, the restoration operation, and its result is presented as an input to a known classical linear image restoration method in the next step. Indirect methods involve iterative techniques for the blur identification and image restoration processes; simultaneous estimates of the PSF and the true image are calculated based on statistical or deterministic priors on the object function. Another classification puts BD methods into four categories. In parametric BD methods, the form of the PSF is assumed in advance (e.g., motion blur, defocus, or atmospheric turbulence); instead of finding the PSF itself, one tries to estimate the parameters of its model, with the obvious advantage of having a smaller number of variables [Bronstein et al. (2005)]. Iterative BD methods were first proposed in Ayers & Dainty (1988) and can be considered in the class of non-parametric iterative constrained methods. By definition, no parametric form is assumed for the PSF function. After a random initial guess is made for the true image, the PSF can be estimated using the degraded image and the image estimate. The first BD attempts were based on single-channel formulations. In Sroubek et al. (2007) the authors claimed their method was the first to perform deconvolution and resolution enhancement simultaneously with a multichannel blind deconvolution (MBD) method. Statistical BD methods are based on statistical or deterministic priors on the source image, the blurring kernel, and the noise. Statistical methods can sometimes be a base for BD algorithms when we are totally unaware of the PSF function.
Algorithms extending the basic RL algorithm to also estimate the unknown PSF, by alternately iterating on each of the unknowns, were proposed in Fish et al. (1995) and Biggs & Andrews (1997). In heuristic BD methods, the problem is viewed as an optimization problem and the restored image is obtained by minimizing a cost function with heuristic methods such as stochastic relaxation, simulated annealing (SA), and genetic algorithms. In these BD algorithms both the PSF and the true image can be estimated without any assumptions or a priori constraints. In Chen et al. (2000) the conventional SA algorithm is accelerated through a pyramid scheme: beginning from low-resolution estimates, the estimates are expanded and used as the initial values of the next layer.
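The alternating blind-RL idea can be caricatured in 1D; the window slicing, initialisations, and iteration counts below are illustrative assumptions (this is a schematic sketch in the spirit of such algorithms, not the published method), and convergence is not guaranteed, as noted above.

```python
import numpy as np
from scipy.signal import fftconvolve

def blind_rl(g, psf_size, outer=10, inner=5, eps=1e-12):
    # Alternate RL updates: fix the image estimate and update the PSF,
    # then fix the PSF and update the image. Both stay non-negative;
    # the PSF is renormalised to unit sum after every update.
    f = g.copy()
    h = np.full(psf_size, 1.0 / psf_size)        # flat initial PSF guess
    for _ in range(outer):
        for _ in range(inner):                   # PSF update (image fixed)
            blurred = fftconvolve(f, h, mode='same') + eps
            corr = fftconvolve(g / blurred, f[::-1], mode='same')
            c0 = len(corr) // 2
            h = h * corr[c0 - psf_size // 2 : c0 + psf_size // 2 + 1]
            h = np.clip(h, 0.0, None)
            h = h / (h.sum() + eps)
        for _ in range(inner):                   # image update (PSF fixed)
            blurred = fftconvolve(f, h, mode='same') + eps
            f = f * fftconvolve(g / blurred, h[::-1], mode='same')
            f = np.clip(f, 0.0, None)
    return f, h

f_true = np.zeros(64); f_true[20] = f_true[40] = 1.0
x = np.arange(-8, 9)
h_true = np.exp(-x**2 / (2 * 1.5**2)); h_true /= h_true.sum()
g = fftconvolve(f_true, h_true, mode='same')

f_est, h_est = blind_rl(g, psf_size=17)
print(h_est.sum(), f_est.min())   # PSF stays normalised, estimates non-negative
```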

V. DATASET AND EXPERIMENTAL RESULTS

Conventional light microscopes use a series of glass lenses to bend light waves and create a magnified image. In SEM, magnified images are created using electrons instead of the light waves of optical microscopy. In TEM, a beam of electrons is passed through the sample, and image-forming lenses create a direct image on a viewing screen (as in a bright-field microscope). STEM is a type of TEM in which the electrons pass through the specimen but, as in scanning electron microscopy, the electron optics focus the beam into a narrow spot which is scanned over the sample in a raster. In STEM, the focusing is done before the electrons reach the specimen, forming a very tiny probe (of the order of 1 Å) which is scanned over the sample. The scattered electrons are then collected in order to form the image as a function of position. Samples composed of heavy atoms or particles randomly distributed in amorphous or off-axis light matrices should provide the best vehicle for demonstrating the depth-sensing abilities of STEM, because propagation of the beam through the sample results in a simple broadening of the probe.

Sub-angstrom beams have recently become possible in electron microscopy through advances in aberration-corrected STEM. 0.3 Å is the quantum mechanical limit to the resolution of a microscope. Aberration correction of the probe-forming lens in the STEM provides not only a significant improvement in transverse resolution but also brings depth resolution at the nanometer scale. Aberration correction therefore opens up the possibility of 3D imaging by optical depth sectioning (Fig.2). Various sensors are located in the scanning system to detect various characteristics of the specimen. During depth sectioning, at each slice the high-angle electron scattering contributes to an HAADF image. This imaging reveals the Z-contrast information of the material: the brightness of the atoms is approximately proportional to Z².

ORNL microscopes: the HB603U STEM, operated at 300 kV and equipped with a Nion aberration corrector, produces a beam with a diameter of 0.6 Å. The corrected electron probe enables not only better spatial resolution but also a higher peak intensity on single atoms.

To counter the degradations due to the field-of-scan shifts, an area-based image registration scheme based on the intensity values of the pixels can be a rather good choice for microscopy data. Mutual Information (MI), or relative entropy, is a basic concept from information theory measuring the statistical dependence between two random variables, or the amount of information that one variable contains about the other [Maes et al. (1996)], and is a leading technique in multimodal registration. The basic MI registration criterion states that the MI of the image intensity values of corresponding voxel or pixel pairs is maximal when the images are geometrically aligned. When physical structures align, information is "shared" between the images. Because no assumptions are made regarding the nature of the relationship between the image intensities in the two images, this criterion is very general and powerful and can be applied automatically, without prior segmentation, to a large variety of applications.
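The MI criterion can be computed directly from a joint intensity histogram; the bin count below is an arbitrary illustrative choice.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    # I(A;B) = H(A) + H(B) - H(A,B), from the joint histogram of intensities.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    entropy = lambda q: -np.sum(q[q > 0] * np.log(q[q > 0]))
    return entropy(px) + entropy(py) - entropy(p)

rng = np.random.default_rng(1)
img = rng.random((64, 64))
shifted = np.roll(img, 5, axis=0)          # a misaligned copy

print(mutual_information(img, img))        # maximal: the images share everything
print(mutual_information(img, shifted))    # much lower for the misaligned pair
```

A registration loop would slide one image over the other and keep the offset that maximises this quantity, matching the criterion stated above.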

Fig 2: Sketch of the basic principle of optical depth sectioning of a sample by acquisition of a through-focal series.

Assuming the imaging system is linear, the probe function preserves its symmetric shape through optical depth sectioning, and the noise is additive rather than multiplicative, we can use the PSF (or point response function (PRF); the same concept in different disciplines) to find an approximate solution for the object function of the specimen (Fig.4). A rough PSF is obtained through aberration correction tests. Nevertheless, since finding the real PSF of the imaging system is hard, we aim to find an approximation to the solution. Considering that the degradation follows the scheme shown in Fig.3, we should apply the reverse operation, going from the degraded image and the PSF to the true image.

We adopted the deconvolution method proposed in Dougherty (2005), which is an iterative non-negative least squares solver. It is a linear deconvolution method in which the PSF is known and we try to recover the true image from the degraded image. The method involves solving equation (7); the derivative of that equation w.r.t. x gives the approximated solution for the true image. To solve the equation, Dougherty applied an iterative algorithm with regularization to overcome the fluctuations arising from the PSF noise. The method is iterative because the non-negativity constraint is checked on the approximated true image at each step. The method shows a strong similarity to Wiener filtering. After applying the deconvolution method, the 3D models in Fig.5 are obtained.
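The flavour of such a solver can be sketched as a projected-gradient non-negative least-squares iteration. This is a generic scheme in the spirit of the method, not Dougherty's exact algorithm, and the step size and Tikhonov-style regularisation weight lam are assumed values.

```python
import numpy as np
from scipy.signal import fftconvolve

def nnls_deconvolve(g, h, iters=200, step=1.0, lam=1e-3):
    # Minimise ||h (convolved with) f - g||^2 + lam * ||f||^2 subject to f >= 0
    # by gradient descent with projection onto the non-negative orthant.
    f = np.clip(g, 0.0, None)
    h_flip = h[::-1]
    for _ in range(iters):
        resid = fftconvolve(f, h, mode='same') - g
        grad = fftconvolve(resid, h_flip, mode='same') + lam * f
        f = np.clip(f - step * grad, 0.0, None)
    return f

f_true = np.zeros(64); f_true[20] = f_true[40] = 1.0
x = np.arange(-8, 9)
h = np.exp(-x**2 / (2 * 1.5**2)); h /= h.sum()
g = fftconvolve(f_true, h, mode='same')

est = nnls_deconvolve(g, h)
print(g.max(), est.max())   # peaks sharpen while staying non-negative
```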

Fig.3: a) x-z section of the 3D PSF, b) Several x-y sections of the 3D PSF at -80, -60, -40, -20, 0, 20, 40, 60, 80 Å defocus planes

Fig.4: 3D convolution scheme of a slice

Fig 5: 2D projections of depth-color-coded 3D models. a) Raw, b) Aligned, c) Aligned & deconvolved data

VI. CONCLUSIONS

In this research, a unique dataset obtained with a high-resolution electron microscope was studied. An iterative non-negative least squares solver, proposed by Dougherty, was employed to approximate the true image function. Two main problems affecting the 3D visualization of the data were handled: misalignments of the depth sections and degradations seen in the image model.

A significant visual enhancement is obtained. Since a simplified PSF function was assumed, the real deconvolution process is only approximated. The shift-invariance assumption does not generally hold for 3D deconvolution problems; that is to say, the PSF usually varies with depth, and hence so-called shift-varying deconvolution should be considered in further studies.

VII. REFERENCES

Ayers, G. R., Dainty J. C., “Iterative blind deconvolution method and its applications”, Optics Letters, vol. 13, no.7, pp. 547-549, July 1988.

van Benthem, K., Lupini, A. R., Oxley, M. P., Findlay, S. D., Allen, L. J., Pennycook, S. J., "Three-dimensional ADF imaging of individual atoms by through-focal series scanning transmission electron microscopy", Ultramicroscopy, Volume 106, Issues 11-12, Proceedings of the International Workshop on Enhanced Data Generated by Electrons, October-November 2006, Pages 1062-1068.

Biggs, D.S.C., Andrews, M., “Iterative blind deconvolution of extended objects”, International Conference on Image Processing, Proceedings, Volume 2, Page(s): 454 - 457, 26-29 Oct. 1997.

Bioucas-Dias, J.M. Figueiredo, M.A.T. Oliveira, J.P. “Total variation image deconvolution: a majorization- minimization approach,” in Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '06), vol. 2, Toulouse, France, May 2006.

Bronstein, M. M., Bronstein, A. M., Zibulevsky M., Zeevi, Y. Y., “Blind Deconvolution of Images Using Optimal Sparse Representations”, IEEE Transactions On Image Processing, Vol. 14, No.6, June 2005.

Chen, Y.-W., Enokura, T., Mendoza, N., and Nakao, Z., “A pyramid model for blind deconvolution based on simulated annealing”, Industrial Electronics Society, IECON 2000. 26th Annual Conference of the IEEE Volume 4, Page(s): 2843-2848, 2000.

Diaspro, A. Beltrame, F. Fato, M. Ramoino, P., “Characterizing biostructures and cellular events in 2D/3D using wide-field and confocal microscopy”, IEEE Engineering in Medicine and Biology Magazine, Volume: 15, Issue: 1, Page(s): 92-100, 1996.

Dougherty, R.P., "Extensions of DAMAS and benefits and limitations of deconvolution in beamforming", 11th AIAA/CEAS Aeroacoustics Conference, Monterey, California, May 23-25, 2005

Fish, D. A., Brinicombe, A. M., Pike, E. R., Walker, J. G., “Blind deconvolution by means of the Richardson-Lucy algorithm”, Journal of the Optical Society of America A, Volume 12, No. 1, Pages 58-65, 1995.

Kundur, D., Hatzinakos, D., “Blind image deconvolution”, Signal Processing Magazine, IEEE, Volume 13, Issue 3, Page(s):43 – 64, May 1996.

Law, N. F., Lane, R. G. “Blind deconvolution using least squares minimization”, Optics Communications, Volume 128, Issue 4-6, pp. 341-352, 1996.

Lucy, L. B., “An iterative technique for rectification of observed distributions”, The Astronomical Journal, vol.79, no. 6, pp.745–765, 1974.

Maes, F., Collignon, A., Vandermeulen, D., Marchal, G., and Suetens, P., “Multimodality Image Registration by Maximization of Mutual Information”, Workshop on Mathematical Methods in Biomedical Image Analysis (MMBIA '96), p. 0014, 1996.

McNally, J. G., Karpova, T., Cooper J., and Conchello J. A., “Three-dimensional imaging by deconvolution microscopy”, METHODS, vol.19, pp.373-385, 1999.

Nellist, P. D., Behan, G., Kirkland, A. I., Hetherington, C. J. D., “Confocal operation of a transmission electron microscope with two aberration correctors”, Applied Physics Letters, Vol.: 89, Issue 12, id.:124105, 2006.

Nourrit, V., Vohnsen, B., and Artal, P., “Blind deconvolution for high-resolution confocal scanning laser ophthalmoscopy”, Journal of Optics A: Pure and Applied Optics, vol. 7, no.10, pp.585–592, 2005.

Pennycook, S. J. , Lupini A. R., Borisevich, A., Varela ,M., Peng Y., Nellist, P. D., Duscher, G., Buczko, R. and Pantelides, S. T., “Transmission Electron Microscopy: Overview and Challenges”, Characterization And Metrology For ULSI Technology: 2003 International Conference On Characterization And Metrology For ULSI Technolog- AIP Conference Proceedings, Volume 683, pp. 627-633, 2003.

Razaz, M. and Nicholson, S., “3D blind image reconstruction using combined nonlinear and statistical techniques”, Signal Processing X: Theories and Applications, vol. 4, pp. 932-935, 2000.

Richardson, W. H., “Bayesian-based iterative method of image restoration”, Journal of the Optical Society of America, vol.62, no.1, pp.55–59, 1972.

Rudin, L. I., Osher, S. and Fatemi, E., “Nonlinear total variation based noise removal algorithms”, Proceedings of the 11th Annual International Conference of the Center for Nonlinear Studies, Physica D, Vol. 60, pp. 259-268, Nov 1992.

Sibarita, J.-B., “Deconvolution Microscopy”, Advances in Biochemical Engineering-Biotechnology, Vol. 95, Rietdorf (ed.), Microscopy Techniques, Springer Verlag, pp.201-244, 2005.

Si-Yang, G. “Blind Deconvolution Images Using Optimal Sparse Representation”, Journal of Communication and Computer, Volume 3, No.11, Nov. 2006.

Sroubek, F., Cristobal, G., Flusser, J., “A Unified Approach to Super-resolution and Multi-channel Blind Deconvolution”, Image Processing, IEEE Transactions on, Vol.16, Issue 9, Page(s):2322–2332, 2007

Starck, J., Pantin, E., and Murtagh, F., “Deconvolution in Astronomy: A Review”, Publications of the Astronomical Society of the Pacific, vol. 114, pp. 1051-1069, Oct. 2002

Yitzhaky, Y., Milberg, R., Yohaev, S., and Kopeika, N. S., “Comparison of direct blind deconvolution methods for motion-blurred images”, Applied Optics, vol. 38, no. 20, pp. 4325-4332, 1999.

Zitova, B. and Flusser, J. “Image registration methods: a survey”, Image and Vision Computing,Vol. 21, Page(s): 977-1000, 2003.