This is a repository copy of Ptychography. White Rose Research Online URL for this paper: http://eprints.whiterose.ac.uk/127795/ Version: Accepted Version

Book Section: Rodenburg, J.M. orcid.org/0000-0002-1059-8179 and Maiden, A.M. (2019) Ptychography. In: Hawkes, P.W. and Spence, J.C.H., (eds.) Springer Handbook of Microscopy. Springer Handbooks. Springer. ISBN 9783030000684. https://doi.org/10.1007/978-3-030-00069-1_17

This is a post-peer-review, pre-copyedit version of a chapter published in Hawkes P.W., Spence J.C.H. (eds) Springer Handbook of Microscopy. The final authenticated version is available online at: https://doi.org/10.1007/978-3-030-00069-1_17.

Ptychography

John Rodenburg and Andy Maiden

Abstract: Ptychography is a computational imaging technique. A detector records an extensive data set consisting of many interference patterns obtained as an object is displaced to various positions relative to an illumination field. A computer algorithm of some type is then used to invert these data into an image.
It has three key advantages: it does not depend upon a good-quality lens, or indeed on using any lens at all; it can obtain the image wave in phase as well as in intensity; and it can self-calibrate, in the sense that errors arising in the experimental set-up can be accounted for and their effects removed. Its transfer function is in theory perfect, with resolution being wavelength-limited. Although the main concepts of ptychography were developed many years ago, it has only recently (over the last ten years) become widely adopted. This chapter surveys visible-light, X-ray, electron, and EUV ptychography as applied to microscopic imaging. It describes the principal experimental arrangements used at these various wavelengths. It reviews the most common inversion algorithms that are nowadays employed, giving examples of meta code to implement these. It describes, for those new to the field, how to avoid the most common pitfalls in obtaining good-quality reconstructions. It also discusses more advanced techniques such as modal decomposition and strategies to cope with 3D multiple scattering.

1) Introduction

We have an object, possibly a very small object, and we want to make a magnified image of it. There are various strategies open to us. For many years the answer was to make one or more lenses as accurately as possible, and arrange them as accurately as possible in a microscope column (Figure 1a). New possibilities opened up with the advent of computers. If the image has minor systematic faults arising from the physical hardware, we can process it in the computer to improve upon it (Figure 1b). Alternatively, we can build flexible, adaptive optics that are controlled by a feedback loop, via detailed quantification of the distortion in the image.
One of the biggest breakthroughs in transmission electron imaging relies on computationally measuring lens aberrations, and then compensating for them using complicated non-round lenses driven by dozens of variable currents (Figure 1c, and see Chapter **EDITOR** in this volume). Similar compensation strategies have been employed in astronomical telescopes and in many other fields of optics.

Figure 1: The computational imaging paradigm. a) The conventional microscope. b) Errors in aberrated or imperfect optics are corrected by post-processing. c) Errors in the optics are measured in a detector plane. Variable optics are then adjusted to improve the image via a feedback computation. d) Ptychography: the detector measures something rich in information, but not the image. A decoding computation is used to form the final image. The system is a type of transmission line.

A more radical conceptual leap is to realise that perhaps we should not worry about our image at all. All we need is a detector lying somewhere in an optical system that records something rich in information (Figure 1d), whether an image or not. Now what we need is optics that encodes information about our object as efficiently as possible before it reaches the detector. Once we have those data, we can decode them (assuming we know something about the encoding optics) and computationally generate our image. This imaging paradigm is nowadays widely called 'computational imaging.' A communication channel replaces the optical transfer function of the traditional lens: structural information is transferred via three steps: physical encoding; detection; and finally a decoding algorithm. Ptychography is a method of computational imaging.
It employs a source of radiation (light or matter waves of arbitrary wavelength), an object that scatters that radiation, and a detector. We will see that we can have all sorts of optical elements between the source and the object, and between the object and the detector: the variety of modern implementations of ptychography is enormous. But it has five defining properties:

(1) There must be an optical component, usually but not always the object itself, that can move laterally relative to whatever is illuminating it.

(2) The detector must be in an optical plane in which the radiation scattered from the optical component has intermixed to create an interference pattern: usually a diffraction pattern, but more generally any interference pattern, possibly even an image.

(3) The detector collects at least two interference patterns, arising from a minimum of two known, different lateral physical offsets of the illumination with respect to the object or other relevant optical component (modern implementations can use hundreds or thousands of lateral offsets). The offsets must be arranged so that adjacent areas of illumination overlap with one another.

(4) The source of radiation must be substantially (but not necessarily wholly) coherent.

(5) The image of the object is generated by a computer algorithm, which solves for the phase of the waves that arrived at the detector, even though the detector itself was only able to measure the intensity (flux or power) of the radiation that impinged upon it.

If we were designing a normal communication channel, say a telephone transmission line, the very last thing we would ever choose to do is to have this catastrophic disposal of phase information right in the middle of the system. But that is what happens with light, X-ray and electron detectors. An important strength of ptychography is that it can handle this regrettable 'phase problem' with no effort at all. One thing is clear.
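The phase problem of property (5) is easy to demonstrate numerically. The short sketch below is an illustration added here (not part of the chapter's meta code); the array sizes and the NumPy far-field model are arbitrary choices. It propagates an exit wave to a far-field detector and shows that the recorded intensity is blind to the phase of the detected wave:

```python
import numpy as np

# Illustrative sketch of the phase problem: a detector records only
# intensity |psi|^2, so information carried purely in the phase of the
# detected wave is lost at the point of measurement.
rng = np.random.default_rng(seed=0)

# A pure-phase object: it does not absorb, it only shifts phase.
obj = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(64, 64)))
probe = np.ones((64, 64))                 # uniform coherent illumination

exit_wave = probe * obj                   # wave leaving the object
farfield = np.fft.fft2(exit_wave)         # propagate to a far-field detector
intensity = np.abs(farfield) ** 2         # what the detector actually measures

# A globally phase-shifted exit wave produces an identical recorded pattern:
shifted = np.fft.fft2(exit_wave * np.exp(1j * 1.234))
print(np.allclose(intensity, np.abs(shifted) ** 2))  # True: the phase is gone
```

Ptychography recovers the lost phase by combining many such patterns recorded at overlapping illumination positions, as required by property (3).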
Unlike the immediacy of a conventional microscope, ptychography puts a huge obstruction between the microscopist and the image. First, we must wait while at least the two interference patterns are recorded: the experiment takes time. Second, we have to rely on the computer to reconstruct the image from the data. The data usually look nothing at all like the object of interest: we must wholly trust a computer algorithm to deliver our results, something that unnerves quite a lot of scientists. So why would anyone want to use such a roundabout way of creating an image? There are three key benefits of the technique.

(1) It does not need a lens. Most implementations of ptychography do in fact use a lens, but the lens can be of very poor quality: it will not affect the final (high) resolution of the ptychographic image. This has been the driving motive for X-ray ptychography, where high-resolution lenses are hard to make and very costly. X-ray ptychography nowadays routinely improves upon lens-based X-ray imaging resolution by a factor of between 3 and 10. Ironically, the original motive of ptychography was to overcome the resolution limit of electron lenses, but aberration correctors (Chapter **EDITOR**) now provide such high resolution (fractions of an atomic diameter) that extra ptychographic resolution has little to offer, at least at the time of writing. Of course, at visible-light wavelengths, lens optics is a mature field, already offering wavelength-limited resolution.

(2) It produces a complex image (that is, both the modulus and phase of the exit wavefield at the back of the object). Image phase is an elusive thing, which is nevertheless crucial for imaging transparent objects, like live biological cells, that do not absorb radiation but do introduce a phase change in the wave that passes through them.
Consequently, all sorts of optical methods have been developed over the last century for expressing the phase in an image, for example Zernike contrast, differential phase contrast, holography, or processing of through-focal series. However, it is a matter of fact that ptychography produces extraordinarily good phase images: the transfer function of the technique is, at least under most circumstances, almost perfect. This phase signal remains a pressing need at all wavelengths, and is the main motive for ptychography at visible-light and electron wavelengths. It is also the key to the success of high-resolution X-ray ptycho-tomography (see section 6.1).
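The inversion algorithms that deliver these phase images are reviewed later in the chapter. As a foretaste, the sketch below is an illustration added here, not the chapter's meta code: it applies a single ePIE-style object update (in the style of Maiden and Rodenburg's extended ptychographical iterative engine) at one scan position, under the simplifying assumptions of a known, unit-modulus probe and step size alpha = 1. Real reconstructions loop this update over hundreds of overlapping positions and many iterations:

```python
import numpy as np

# One ePIE-style object update at a single scan position (sketch only).
rng = np.random.default_rng(seed=1)

true_obj = np.exp(1j * rng.uniform(0.0, 0.5, size=(32, 32)))           # weak phase object
probe = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(32, 32)))      # unit-modulus probe

# Simulated measurement: far-field intensity of the true exit wave.
measured_I = np.abs(np.fft.fft2(probe * true_obj)) ** 2

# Start from an uninformative guess and apply one update.
obj = np.ones((32, 32), dtype=complex)
alpha = 1.0                                # update step size

exit_wave = probe * obj
F = np.fft.fft2(exit_wave)
# Fourier constraint: keep the computed phase, impose the measured modulus.
F_revised = np.sqrt(measured_I) * np.exp(1j * np.angle(F))
exit_revised = np.fft.ifft2(F_revised)
# ePIE object update: push the object towards explaining the revised exit wave.
obj = obj + alpha * np.conj(probe) / np.max(np.abs(probe) ** 2) * (exit_revised - exit_wave)

# After one update the model reproduces the measured modulus exactly at
# this position (because |probe| = 1 and alpha = 1 in this sketch).
print(np.allclose(np.abs(np.fft.fft2(probe * obj)), np.sqrt(measured_I)))  # True
```

Note that satisfying the measured modulus at a single position does not fix the object: it is the overlap between many scan positions that makes the solution unique.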