List of Abbreviations

STEM  Scanning Transmission Electron Microscopy
PSF  Point Spread Function
HAADF  High-Angle Annular Dark-Field
Z-Contrast  Atomic Number Contrast
ICM  Iterative Constrained Methods
RL  Richardson-Lucy
BD  Blind Deconvolution
ML  Maximum Likelihood
MBD  Multichannel Blind Deconvolution
SA  Simulated Annealing
SEM  Scanning Electron Microscopy
TEM  Transmission Electron Microscopy
MI  Mutual Information
PRF  Point Response Function

Registration and 3D Deconvolution of STEM Data

M. MERCIMEK*, A. F. KOSCHAN*, A. Y. BORISEVICH‡, A. R. LUPINI‡, M. A. ABIDI*, & S. J. PENNYCOOK‡

*IRIS Laboratory, Electrical Engineering and Computer Science, The University of Tennessee, 1508 Middle Dr, Knoxville, TN 37996
‡Materials Science and Technology Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831

Key words. 3D deconvolution, Depth sectioning, Electron microscope, 3D reconstruction.

Summary

A common characteristic of many practical imaging modalities is that the imaging process distorts the signals or scene of interest: the principal data are veiled by noise and by less important signals. We split the 3D enhancement process into two separate stages. Deconvolution, in general, refers to improving the fidelity of electronic signals by reversing the convolution process, while registration aims to overcome significant misalignment of the same structures in lateral planes along the focal axis. The dataset studied here was acquired with a VG HB603U STEM operated at 300 kV and equipped with a Nion aberration corrector, producing a beam with a diameter of 0.6 Å. The initial experiments describe how aberration-corrected Z-contrast imaging in STEM can be combined with the enhancement process towards better visualization, with a depth resolution of a few nanometers.

I. INTRODUCTION

3D characterization of both biological samples and non-biological materials provides nano-scale resolution in three dimensions, permitting the study of complex relationships between structure and function. In microscopic imaging, a collection of 2D images of an object taken at different planes through optical depth sectioning is one of the bases for reconstructing the specimen in 3D space. Although the 3D resolution of tomography is superior, optical depth sectioning has advantages in terms of reduced sample irradiation, fewer required images, and lower processing time [Nellist et al. (2006)]. Since a 3D image can be assembled by piling up consecutive sections, lateral views can be presented to the observer. Especially for researchers in the electron microscopy field, revealing atomic arrangements through visual correction of the specimen is indispensable for gaining knowledge in several areas: first-principles calculations, chemical reactivity measurements, electrical properties, point defects, and optical properties. Rather than just visualizing a sparse 3D point cloud, two main approaches can be adopted [Diaspro et al. (1996)]. The first approach is to extract distinctive features from the set of 2D images. The images segmented using simple binary thresholding and digital filtering can be used to produce a 3D representation of primitives such as polygon meshes or contours, which are then rendered using conventional surface-rendering techniques (Fig. 1). The second approach is to transform the set of acquired images into another set in which the blurring effects are removed by modeling the image formation system, and to apply volumetric rendering afterwards.
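The first, surface-rendering approach can be sketched in a few lines of Python. This is only an illustrative pipeline under stated assumptions: Gaussian smoothing stands in for the "digital filtering" step, Otsu's method for the binary threshold, and the function and variable names (extract_slice_contours, focal_series) are hypothetical, not from the paper.

```python
from scipy import ndimage
from skimage import filters, measure

def extract_slice_contours(slice_img, sigma=1.0):
    """Digital filtering + binary thresholding + contour extraction for one 2D slice."""
    smoothed = ndimage.gaussian_filter(slice_img.astype(float), sigma=sigma)  # digital filtering
    thresh = filters.threshold_otsu(smoothed)                                 # simple binary threshold
    binary = smoothed > thresh
    # The contours (polylines) are the primitives that would later be stacked and
    # triangulated into a surface model, as in Fig. 1.
    return measure.find_contours(binary.astype(float), level=0.5)

# Illustrative usage: apply to each slice of the focal series and keep the
# per-depth contour sets for surface reconstruction.
# contours_per_slice = [extract_slice_contours(s) for s in focal_series]  # focal_series: hypothetical 2D slice stack
```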
A major advantage of the latter, volume-rendering approach is that the 3D volume can be displayed without any knowledge of the geometry of the dataset and hence without intermediate conversion to a surface representation. Volume rendering preserves the entire dataset, including internal structures and details that may be lost when the data are reduced to geometric primitives for surface rendering. In order to achieve a good representation of the structures, the non-uniform behavior of the 3D transfer function of the imaging system should be corrected. Moreover, during optical depth sectioning, displacements of the scanned field result in drifts that are mostly, but not purely, rigid and differ between focal planes. These correction methods are called deconvolution and registration, respectively, and are outlined in the following sections. The purpose of this paper is to examine these two sources of degradation impairing the observed Atomic Number-contrast (Z-contrast) signals of the STEM imaging system.

Fig. 1: Surface rendering towards 3D reconstruction for an amorphous structure of gold particles on a carbon substrate. a) Raw 2D slice image of the material at an in-focus plane, b) image after digital filtering and contour extraction, c)-d) 3D model estimate obtained by applying triangulation to the distinctive edge points of all slices.

II. IMAGING CHARACTERISTICS

An imaging system deforms the signal of interest: the signal is degraded by noise, blur, and the presence of other extraneous data. Separating the data stream into its useful components is essential to produce an improved evaluation. The following linear model can precisely represent the degradation of the true image or object function,

g(n_1, n_2, n_3) = f(n_1, n_2, n_3) \otimes h(n_1, n_2, n_3) + n(n_1, n_2, n_3)    (1)

where (n_1, n_2, n_3) represents the discrete voxel coordinates of the image frame, g(n_1, n_2, n_3) is the observed image, f(n_1, n_2, n_3) is the true image, h(n_1, n_2, n_3) is the image formation function or Point Spread Function (PSF), \otimes denotes the convolution process, and n(n_1, n_2, n_3) is the additive noise. For real applications, (1) can be written as a convolution integral in real-world coordinates,

g(x, y, z) = \iiint f(x - \xi, y - \eta, z - \zeta)\, h(\xi, \eta, \zeta)\, d\xi\, d\eta\, d\zeta + n(x, y, z)    (2)

Since in most physical applications the data are stored and displayed electronically, form (1) is used for computations. This model enables a simple multiplicative-additive representation in the spatial-frequency (Fourier) domain,

G(\omega_1, \omega_2, \omega_3) = F(\omega_1, \omega_2, \omega_3)\, H(\omega_1, \omega_2, \omega_3) + N(\omega_1, \omega_2, \omega_3)    (3)

Assuming that the imaging process is linear and shift-invariant, a single function can completely describe the behavior of the instrument at any point in the specimen. The observed image can be considered a superposition of the emitted signals (fluorescence microscopy) or transmitted signals (incoherent Z-contrast imaging in STEM) obtained by 'illuminating' each point of the sample with the PSF. The convolution essentially shifts the PSF in 3D space so that it is centered at each point in the specimen and then sums the contributions of all these shifted PSFs [McNally et al. (1999)]. Although fundamentally different, laser scanning confocal microscopy, wide-field microscopy, scanning electron microscopy, transmission electron microscopy, scanning transmission electron microscopy, aerial imaging, and astronomy are all based on the same process of image formation produced by a point source object.
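The forward model of Eqs. (1)-(3) can be illustrated with a minimal Python sketch. The volume, PSF, and noise level below are synthetic placeholders (an anisotropic Gaussian stands in for the actual probe profile); the point is only to show convolution with the PSF plus additive noise, carried out as multiplication in the Fourier domain.

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (32, 64, 64)                      # (n3, n2, n1): depth, y, x

# True object f: a few isolated point scatterers in an otherwise empty volume.
f = np.zeros(shape)
f[16, 32, 32] = f[10, 20, 40] = 1.0

# Placeholder PSF h: a normalized Gaussian, elongated along the focal (z) axis.
zz, yy, xx = np.indices(shape)
cz, cy, cx = (s // 2 for s in shape)
h = np.exp(-(((zz - cz) / 4.0) ** 2 + ((yy - cy) / 2.0) ** 2 + ((xx - cx) / 2.0) ** 2))
h /= h.sum()

# Eqs. (1)/(3): convolution in real space == multiplication in Fourier space.
H = np.fft.fftn(np.fft.ifftshift(h))      # PSF spectrum (kernel recentered at the origin)
G = np.fft.fftn(f) * H                    # F * H
g = np.real(np.fft.ifftn(G)) + 0.01 * rng.standard_normal(shape)   # observed image with noise n
```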
The PSF can be characterized both theoretically and experimentally, and a precise understanding of the physical parameters specific to each imaging system forms the basis of most deconvolution algorithms. In electron microscopy, spherical aberration takes its name from light optics, where it arose from the overly spherical shape of early lenses. Spherical aberration causes rays at higher angles to be over-focused. Chromatic aberration causes rays at different energies (different wavelengths) of electrons to be focused differently. These aberrations are difficult to describe perfectly in theoretical terms and cause a breakdown of the axial symmetry and shift-invariance properties. In the Z-contrast mode of the STEM, the image is an incoherent one, which is simply the true image blurred by an amount independent of the specimen. Mathematically, the observed image is given in Pennycook et al. (2003) as

I(R) = O(R) \otimes |P(R)|^2 + N(R)    (4)

where R represents the coordinates of the data, O is the object function or true image function, often represented simply as a function of Z (the atomic number of each atom), and |P(R)|^2 is the effective probe intensity profile, including any broadening within the crystal. The incident electron beam is placed on one point of the sample, and the electrons scattered by the atoms are collected at the high-angle annular dark-field (HAADF) detector. The probe then moves to the next pixel and the scattered electrons are collected again at the detector. In STEM imaging, a probe of atomic dimensions is scanned across the specimen, and the annular detector located after the specimen collects the highly scattered transmitted intensity reaching it; this is expressed as Z-contrast imaging. Such an image only represents the probability that the electrons strike a certain position on the detector. Electrons have wave-like properties, with wavelengths much shorter than those of visible light. The de Broglie wavelength for electrons is given by

\lambda = \frac{1.22}{\sqrt{E}}    (5)

where E is the energy of the electrons in eV and \lambda is in nm. The naked eye can distinguish details down to about 0.1-0.2 mm. For a 100 keV electron probe the wavelength is ~0.004 nm (4 pm). Considering that visible light has wavelengths between 400 nm and 700 nm, an electron microscope can achieve much better resolution, defined as the smallest distance between two points that can be distinguished. The Rayleigh criterion describes it as

r = \frac{0.61 \lambda}{n \sin \beta}    (6)

where n is the refractive index of the medium surrounding the lens and \beta is the semi-angle of the electron probe with respect to the z axis, or the acceptance angle. The small electron wavelength yields a resolution appropriate for seeing atomic structures. The major difficulties of this imaging system that we have to overcome are:

1. Every 2D slice contains a great deal of intensity information that does not represent an object located at that slice depth. In practice, many pixels appear to have significant values, yet some of them should be removed from the scene by an algorithm. This superfluous data makes it very difficult to reconstruct edges and/or to guess a threshold value that yields useful, simplified data to work with.

2. Some of the simplest approaches to deconvolution actually ignore the imaging formula altogether.
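As a quick numerical check of Eqs. (5) and (6) above, the short sketch below reproduces the ~4 pm wavelength quoted for a 100 keV probe and estimates the Rayleigh-limited resolution. The semi-angle value is an assumed illustrative number, not a parameter taken from the experiment.

```python
import math

E = 100e3                                  # electron energy in eV (100 keV probe)
wavelength_nm = 1.22 / math.sqrt(E)        # Eq. (5): ~0.0039 nm, i.e. ~4 pm

n = 1.0                                    # refractive index (vacuum)
beta = 0.02                                # probe semi-angle in rad (assumed ~20 mrad for illustration)
resolution_nm = 0.61 * wavelength_nm / (n * math.sin(beta))   # Eq. (6): ~0.12 nm

print(f"lambda ~ {wavelength_nm * 1000:.1f} pm, Rayleigh limit ~ {resolution_nm * 1000:.0f} pm")
```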