
Basics of Light Microscopy & Imaging
Special edition of Imaging & Microscopy • Research • Development • Production

www.gitverlag.com

GIVE YOUR EYES TIME TO ADJUST TO THE HIGH LIGHT EFFICIENCY. OLYMPUS SZX2.

If you’re used to other stereo microscopes, you’d better be prepared before you start working with the new Olympus SZX2 range. The peerless resolution and exceptional light efficiency enable you to see with brightness and brilliance like never before, while providing optimum protection for your specimens at the same time. All this makes Olympus SZX2 stereo microscopes the perfect tools for precise fluorescence analysis on fixed and living samples. Importantly, the innovations present in the SZX2 series go far beyond fluorescence. The optimised 3-D effect offers superior conditions for specimen preparation and precise manipulation. The unique ultra-flat LED light base provides bright and even illumination of the entire field of view at a constant colour temperature. Moreover, the ComfortView and new trinocular observation tubes make sure your work is as relaxed as it is successful. The future looks bright with Olympus.

For more information, contact: Olympus Life and Material Science Europa GmbH Phone: +4940237735426 E-mail: [email protected] www.olympus-europa.com

Olympus Microscopy Introduction In this wide-ranging review, Olympus microscopy experts draw on their extensive knowledge and many years of experience to provide readers with a highly useful and rare overview of the combined fields of microscopy and imaging.

By discussing both these fields together, this review aims to help clarify and summarise the key points of note in this story. Encompassing the creation of highly detailed specimen views through to final image capture and processing, the story will also be backed up by useful insights and examples, as well as handy hints and tips along the way.

To make it easy to dip in and dip out of this comprehensive overview, the review is clearly structured into several chapters. These take the reader from explanations of the physics of light, colour and resolution, through details of contrast and fluorescence techniques, to finish up with a discussion on 3D imaging. Highlighting and summarising key points made within this review are a series of information boxes. These also aim to provide useful insight on techniques, as well as expert advice on set-up.

Content
Editorials 2–3
Trust the Colours 4
The Resolving Power 12
Contrast and Microscopy 22
Shining Fluorescence Details 34
3D Imaging 42
Contact and Acknowledgement 52
Imprint 52

Dear Reader

“A Huge Number of Techniques in a Relatively Short Space”

Although microscopes are becoming more and more easy to use, it still remains important to have an appreciation of the fundamental principles that determine the resolution and contrast seen in microscope images. This series of articles, by authors from Olympus, aims to provide an overview of the physical principles involved in microscope image formation and the various commonly used contrast mechanisms, together with much practically oriented advice. Inevitably it is impossible to cover any of the many aspects in great depth. However, their approach, which is to discuss applications and provide practical advice for many aspects of modern day microscopy, will prove attractive to many.

The articles begin by setting the scene with a discussion of the factors that determine the resolving power of the conventional microscope. This permits the introduction of important concepts such as numerical aperture, the Airy disc, the difference between depth of field and depth of focus, and the concept of parfocality. Common contrast mechanisms such as darkfield and phase contrast are then introduced, followed by differential interference contrast and polarisation contrast imaging. Throughout the discussion, the importance of digital image processing is emphasised and simple examples such as histogram equalisation and the use of various filters are discussed.

The contrast mechanisms above take advantage of the fact that both the amplitude and phase of the light are altered as it passes through the specimen. Spatial variations in the amplitude (attenuation) and/or the phase are used to provide the image contrast. Another extremely important source of image contrast is fluorescence, whether it arises naturally or as a result of specific fluorescent labels having been deliberately introduced into the specimen. The elements of fluorescence microscopy and techniques of spectral unmixing are discussed, and brief mention is made of more advanced techniques where spatial variations in, for example, fluorescence lifetime are used to provide image contrast. Finally, techniques to provide three-dimensional views of an object, such as those afforded by stereo microscopes, are discussed together with a very brief mention of the confocal microscope. The authors have attempted the very difficult task of trying to cover a huge number of techniques in a relatively short space. I hope you enjoy reading these articles.

Tony Wilson Ph.D
Professor of Engineering Science
University of Oxford
United Kingdom

“Unique presentation of technical aspects in connection with image processing”

As a regular reader of “Imaging & Microscopy,” the title of this issue, “Basics of Light Microscopy & Imaging,” must certainly seem familiar to you. From March 2003 to November 2005, we had a series of eleven articles with the same name and content focus.

In collaboration with the authors, Dr Manfred Kässens, Dr Rainer Wegerhoff, and Dr Olaf Weidlich of Olympus, a lasting “basic principles” series was created that describes the different terms and techniques of modern light microscopy and the associated image processing and analysis. The presentation of technical aspects and applications in connection with image processing and analysis was unique at that time and became the special motivation of the authors.

For us, the positive feedback from the readers on the individual contributions time and again confirmed the high quality of the series. This was then also the inducement to integrate all eleven articles into a comprehensive compendium in a special issue. For this purpose, the texts and figures have been once more amended or supplemented and the layout redesigned. The work now before you is a guide that addresses both the novice and the experienced user of light microscopic imaging. It serves as a reference work, as well as introductory reading material.

A total of 20,000 copies of “Basics of Light Microscopy & Imaging” were printed, more than double the normal print run of “Imaging & Microscopy”. With this, we would like to underline that we are just as interested in communicating fundamental information as in the publication of new methods, technologies and applications.

Enjoy reading this special issue.

Dr. Martin Friedrich
Head Imaging & Microscopy
GIT Verlag, A Wiley Company
Germany

“You Can Get Keywords for Expanding Your Knowledge”

There is a great merit for publishing “Basics of LIGHT MICROSCOPY and IMAGING: From Colour to Resolution, from Fluorescence to 3D Imaging”. It is a great merit since it is very important to define, to clarify, and to introduce pivotal elements for the comprehension and successive utilisation of optical concepts useful for microscopical techniques and imaging. The niche still occupied by optical microscopy, within a scenario where resolution “obsession” plays an important role, is mainly due to the unique ability of light-matter interactions to allow temporal three-dimensional studies of specimens. This is of key relevance since the understanding of the complicated and delicate relationship existing between structure and function can be better realised in a 4D (x-y-z-t) situation. In the last years, several improvements have pushed optical microscopy, from confocal schemes [1-6] to multiphoton architectures, from 7-9 fold enhancements in resolution to single molecule tracking and imaging, allowing the term optical nanoscopy to be coined [7]. Advances in biological labelling, such as the ones promoted by the utilisation of visible fluorescent proteins [8-12] and of overwhelming strategies like the “F” techniques (FRAP – fluorescence recovery after photobleaching, FRET – fluorescence resonance energy transfer, FCS – fluorescence correlation spectroscopy, FLIM – fluorescence lifetime imaging microscopy) [4, 13-15], place optical microscopy in a predominant position over other microscopical techniques. So it becomes mandatory for newcomers, and more in general for all those researchers looking for answers about their biological problems that can be satisfied by using the optical microscope, to have a good starting point for finding the optimal microscopical technique and for understanding what can be done and what cannot be done. This long note on Basics can be a good starting point. It brings the reader through different concepts and techniques. The reader can get the keywords for expanding her/his knowledge. There are some important concepts, like the concept of resolution and the related sampling problems, spectral unmixing and photon counting, that are introduced for further readings. This article reports some interesting examples and a good link between the different mechanisms of contrast, from DIC to phase contrast until fluorescence methods. Treatment is not rigorous, but it keeps the audience interested and is sufficiently clear. I read it with interest even if I would prefer to have more modern bibliographic references to amplify the achievable knowledge.

Alberto Diaspro Ph.D
Professor, University of Genoa
Italy

References
[1] Amos B. (2000): Lessons from the history of light microscopy. Nat Cell Biol. 2(8), E151-2.
[2] Wilson T., Sheppard C. (1984): Theory and Practice of Scanning Optical Microscopy. Academic Press, London.
[3] Diaspro A. (2001): Confocal and Two-Photon Microscopy: Foundations, Applications, and Advances. Wiley-Liss, New York.
[4] Pawley JB. (2006): Handbook of Biological Confocal Microscopy, 3rd edition. Plenum-Springer, New York.
[5] Hawkes PW., Spence JCH. (2006): Science of Microscopy. Springer, New York.
[6] Masters BR. (2006): Confocal Microscopy and Multiphoton Excitation Microscopy: The Genesis of Live Cell Imaging. SPIE Press Monograph Vol. PM161, USA.
[7] Hell SW. (2003): Toward fluorescence nanoscopy. Nat Biotechnol. 21, 1347.
[8] Jares-Erijman EA., Jovin TM. (2003): FRET imaging. Nat Biotechnol. 21(11), 1387.
[9] Tsien RY. (2006): Breeding and building molecules to spy on cells and tumors. Harvey Lect. 2003-2004, 99, 77.
[10] Tsien RY. (1998): The green fluorescent protein. Annu Rev Biochem. 67, 509.
[11] Pozzan T. (1997): Protein-protein interactions. Calcium turns turquoise into gold. Nature 388, 834.
[12] Diaspro A. (2006): Shine on ... proteins. Microsc Res Tech. 69(3), 149.
[13] Periasamy A. (2000): Methods in Cellular Imaging. Oxford University Press, New York.
[14] Becker W. (2005): Advanced Time-Correlated Single Photon Counting Techniques. Springer, Berlin.
[15] Periasamy A., Day RN. (2006): Molecular Imaging: FRET Microscopy and Spectroscopy. An American Physiological Society Book, USA.

Imprint

Published by
GIT VERLAG GmbH & Co. KG
Rösslerstrasse 90
64293 Darmstadt, Germany
Tel.: +49 6151 8090 0
Fax: +49 6151 8090 133
www.gitverlag.com

In Cooperation with
Olympus Life and Material Science Europa GmbH
Wendenstrasse 14–18
D-20097 Hamburg, Germany
Tel.: +49 40 23772-0
Fax: +49 40 23 77 36 47
www.olympus-europa.com

Sector Management: Anna Seidinger, [email protected]
Journal Management: Dr. Martin Friedrich, [email protected]
Authors: Dr. Rainer Wegerhoff, [email protected]; Dr. Olaf Weidlich, [email protected]; Dr. Manfred Kässens, [email protected]
Editorial Assistance: Lana Feldmann, [email protected]
Production Manager: Dietmar Edhofer, [email protected]
Layout: Ruth Herrmann, [email protected]
Litho and Coverdesign: Elke Palzer, [email protected]
Circulation: 20,000 copies
Printed by: Frotscher Druck, Riedstrasse 8, 64295 Darmstadt, Germany

The publishing house is granted the exclusive right, with regard to space, time and content, to use the works/editorial contributions in unchanged or edited form for any and all purposes any number of times itself, or to transfer the rights for the use of other organizations in which it holds partnership interests, as well as to third parties. This right of use relates to print as well as electronic media, including the Internet, as well as databases/data carriers of any kind.

All names, designations or signs in this issue, whether referred to and/or shown, could be trade names of the respective owner.

Trust the Colours

Light

How do we describe light that reaches our eyes? We would probably use two key words: brightness and colour. Let us compare these expressions to the way light is characterised in natural sciences: it is looked upon as an electromagnetic wave having a specific amplitude and a distinct wavelength. Both the workaday and the scientist’s perspective essentially mean the same. The wave’s amplitude gives the brightness of the light, whereas the wavelength determines its colour. Figure 1 shows how colour and wavelength are related.

Fig. 1: Visible light spectrum.

Colour

Colour always arises from light. There is no other source. The human eye can perceive light in the colours of a wavelength range between 400 and 700 nm. But instead of having just one distinct colour, light is usually composed of many fractions with different colours. This is what the spectrum of a light implies. For example, look at the spectrum of our life source, the central sun, which emits light of all different colours (fig. 2).

Fig. 2: Spectrum of the sun, and spectra of common sources of visible light.

White light

All colours of a light source superimpose. So light sources have a colour dependent on the predominant wavelength palette. A candle, for example, may look yellowish, because the light mainly comprises wavelengths in the 560–600 nm range. Light from sources emitting in the whole spectral range with somewhat comparable amounts appears as white light.

Green grass

What makes us see objects which are not light sources themselves? There are different processes which happen when light meets physical matter. They are called reflection, refraction and absorption. They all happen together, but usually one process dominates. See for example grass or other dense physical matter: what we see when observing grass is mainly reflected light. But why does grass appear to be green? The reason is that grass reflects only portions of the white daylight. At the same time it absorbs the red and blue portions of the daylight. Thus the green light remains.

Colour models

Much that we know and observe related to colour can be mathematically described by different colour models, i.e. in different colour spaces. These are what the tokens HSI, RGB, and CYMK stand for. Each of these models gives a different perspective and is applied in fields where it can be used more easily than the other models. But nevertheless each model describes the same physical reality.

HSI

Let us stick to the grass and assume it is fresh spring grass. How would we describe it in everyday language? We would characterise it as green – and not as blue or orange. This is the basic hue of a colour. We would probably characterise it additionally as “full” or “deep” – and not as pale. This is the saturation of a colour. Then one would describe it as bright – and not as dark. This is the brightness or intensity of a colour. Hue, Saturation and Intensity form the HSI colour model. The Munsell Colour Tree (fig. 3) gives a three-dimensional geometrical representation of this model. Here, the hue value is represented by an angle, the saturation by a radius from the central axis, and intensity or brightness by a vertical position on the cylinder.

RGB

The HSI model is suitable to describe and discern a colour. But there is another colour model, which better reflects how our human perception mechanism works. The human eye uses three types of cone cell photo receptors which are sensitive to light in the red V(R), green V(G), and blue V(B) spectral ranges, respectively. These colours are known as primary colours. The clue is that all of the existing colours can be produced by adding various combinations of these primary colours. For example, the human eye perceives an equal amount of all the three colours as white light. The addition of an equal amount of red and blue light yields magenta light, blue and green yields cyan, and green and red yields yellow (fig. 4). All the other colours are generated by stimulation of all three types of cone cells to a varying degree. The practical application of the RGB model is many-fold: for example, besides human perception, digital cameras, monitors, and image file formats also function in a way which can be described by the addition of the three primary colours.

Fig. 3: Munsell Colour Tree.
Fig. 4: Primary colours.

CYM

The overlap of the three primary additive colours red, green, and blue creates the colours cyan (C), yellow (Y), and magenta (M). These are called complementary or primary subtractive colours, because they are formed by subtraction of red, green, and blue from white. In this way, yellow light is observed when blue light is removed from white light. Here, all the other colours can be produced by subtracting various amounts of cyan, yellow, and magenta from white. Subtraction of all three in equal amount generates black, i.e. the absence of light. White cannot be generated by the complementary colours (fig. 4). The CYM model and its more workable extension, the CYMK model, find their applications in the technology of optical components such as filters, as well as for printers, for example.
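As a small illustration of how these colour spaces relate, the following Python sketch converts a single RGB colour into its subtractive CYM complement and into a hue/saturation/brightness description. It is a minimal sketch only: the function names are ad hoc, and the standard-library HSV conversion is used here as a stand-in for the HSI idea.

```python
# Minimal sketch: one RGB colour expressed in the CYM and HSI-like colour spaces.
# All values are normalised to the range 0..1.
import colorsys

def rgb_to_cym(r, g, b):
    # Subtractive complements: cyan removes red, magenta removes green, yellow removes blue.
    c, m, y = 1.0 - r, 1.0 - g, 1.0 - b
    return c, y, m   # returned in the "CYM" order used in the text

def rgb_to_hsi_like(r, g, b):
    # HSV from the standard library stands in for hue/saturation/intensity here.
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return h * 360.0, s, v   # hue as an angle, like on the Munsell tree

# "Fresh spring grass": a saturated, bright green.
grass = (0.2, 0.8, 0.2)
print(rgb_to_cym(*grass))       # high cyan and yellow, little magenta -> green remains
print(rgb_to_hsi_like(*grass))  # hue ~120 degrees, high saturation, high brightness
```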

Colour shift

Let us now examine a colour phenomenon which astonishes us in daily life: the colour shift. This also gives us the chance to take the first step into light microscopy and look closer into its typical light sources: the halogen bulb, the xenon arc lamp and the mercury arc lamp. It is a common experience to buy a pair of black trousers that are obviously dark blue when you look at them back home. This inconvenient phenomenon of colour shift is not restricted to the blue trousers. The so-called “white light” that is generated by a halogen bulb at a microscope differs a lot from light from a xenon burner. At first glance it is the intensity that is obviously different, but even if you reduce the light intensity of a xenon burner, the halogen light will give you a more yellowish impression when projected onto a white surface. Furthermore, dimming the halogen bulb light can make the colour even more red-like. This can be easily observed at the microscope if you focus on a white specimen area and increase the light power slowly. The image observed will change from a yellow-red to a more bluish and very bright image. This means that with increasing power, the intensity or availability of different wavelengths (colours) has been changed.

 • Basics of light Microscopy & Imaging Trust the Colours Fig. 5a-c is showing the emission spectra of three typically used light sources at a microscope: (a) the tungsten halogen bulb, TF = colour temperature at different power settings, (b) the mercury burner, (c) the xenon burner. done by eye and brain and result in the power setting should be kept at one level, But here a problem arises: If the light eventually recognised colour. e.g. 9V (TF= 3200 K; colour temperature intensity is to be kept fixed in the light But let us come back to the light gen- at +9V). This light intensity level is often microscope the light might be too bright eration. The overall spectrum of a light marked at the microscope frame by a for observation. In daily life if sunlight is source at a defined power setting is de- photo pictogram. too bright sunglasses help – but they scribed with the term colour tempera- ture. The term colour temperature is a help to describe the spectrum of light sources as if a black piece of ideal metal is heated up. If a temperature of about Box 1: Centring of a mercury burner 3500 K is reached, this metal will have a yellowish colour. This colour will change Depending on the type (inverted or upright) and manufacturer of the micro- into a bluish white when it reaches scope there will be some individual differences but the strategy remains the 6000K. same. For a 12V / 100W tungsten halogen Please also see the instruction manual of the microscope. lamp at +9 Volt the colour temperature is 1. Start the power supply for the burner, use a UV-protection shield and approximately 3200K (fig. 5a) whereas ensure that a -cube is in the light path. for a 75 W xenon burner it is 6000K. So the colour temperature gives us a good 2. Locate a white paper card on the stage of the microscope and open the hint about the overall colour shift. Yet it shutter. If the light is too bright insert optional available neutral density will not give an idea of the individual in- filters (e.g. ND25 – only 25 % of the light intensity will pass through). tensities at defined wavelengths. This 3. Bring the focus to the lowest position. knowledge is of high importance if we 4. Get a free objective position or on upright microscopes a 20x objective have to use a light source for example for in the light path. fluorescence microscopy. In this case the light source has to produce a suffi- 5. If available, open the aperture stop and close the field stop. cient intensity of light in a range of 6. Optimise brightness with the burner centring knobs (mostly located at ­wavelengths that match the excitation the side of the lamp house) range of the fluorochrome under obser- 7. From the lamp house, an image of the arc of the burner itself and the vation. mirrored image are projected on the card. To see them in focus, use a collector focussing screw (Figure A). Microscope Light Sources 8. If only one spot can be located or the second is not the same size, the mirror (Figure B) has to be re-centred as well. At most lamp houses Tungsten – halogen bulbs there are screws at the back for screwdrivers for this option. Rotate Nearly all light microscopes are equipped them until the images have the same size as shown in (Figure C). with a (10W–100W) either 9. Locate both images parallel to each other and overlay them by using the burner centring screws for general use, or as an addition to an- (Figure D). other light source. A wide range of opti- 10. Defocus the images with the collector focusing screw and open the field stop. 
cal contrast methods can be driven with this type of light source, covering all 11. Remove the white paper card and bring a homogenous fluorescent specimen into focus (e.g. try some wavelengths within the visible range but curry on the cover slip). with an increase in intensity from blue to 12. Fine adjustment of the homogenous illumination can only be performed under observation: if necessary red. Additionally, the spectral curve al- readjust the collector focusing screw so that the total field of view is equally illuminated. ters with the used power (fig. 5a). To 13. If digital acquisition is of prime interest, the fine adjustment can also be performed at the monitor, to achieve similar looking colours in the optimise the illumination for the size of the CCD. prevalent brightfield microscopy the

But here a problem arises: if the light intensity is to be kept fixed in the light microscope, the light might be too bright for observation. In daily life, if sunlight is too bright sunglasses help – but they might not only reduce the intensity but also let us see the world in other colours. In light microscopy there are light filters that only reduce the intensity. These filters are called neutral density filters (ND) or neutral grey filters. They are characterised by the light that they will transmit. Therefore, a ND50 will allow half the light intensity to pass through, whereas a ND25 will reduce the intensity to 25 % without changing the colours. If the spectrum is changed, we will have colour filters. There is a huge variety of colour filters available, but here we will only discuss the so-called light balancing daylight filter (LBD).
This filter is used together with the halogen light source to compensate for the over distribution of the long (red) wavelengths. This enables us to see the colours of a stained section, for example, on a neutral white background in brightfield microscopy (fig. 11).

Mercury Arc lamp

The mercury burner is characterised by peaks of intensity at 313, 334, 365, 406, 435, 546 and 578 nm and lower intensities at other wavelengths (see fig. 5b). This feature enables the mercury burner to be the most used light source for fluorescence applications. Whenever the peak emissions of the burner match the excitation needs of the fluorochromes, a good (depending on the specimen) signal can be achieved. However, these benefits are reduced by a relatively short lifetime of the burner of about 300 hours and a small change of the emission spectrum due to deposits of cathode material on the inner glass surface of the burner with ongoing lifetime.

Xenon Arc lamp (XBO)

Xenon burners are the first choice light sources when a very bright light is needed for reflected microscopy, such as differential interference contrast on dark objects, or quantitative analysis of fluorescence signals as, for example, in ion ratio measurement. They show an even intensity across the visible spectrum, brighter than the halogen bulb, but they do not reach the intensity peaks of the mercury burners. The xenon burner allows the direct analysis of intensities at different fluorescence excitation or emission wavelengths. The lifetime is about 500–3000 hours depending on use (frequent on/off switching reduces the lifetime) and the type of burner (75 or 150 W). For optimisation of illumination, especially with the mercury and xenon burners, the centring and alignment is very important.

Table 1: Comparison of different light sources and their colour temperature performance

Light source – Colour temperature
Vacuum lamp (220 W / 220 V): 2790 K
Nitraphot (tungsten filament) lamp B (500 W / 220 V): 3000 K
Photo and cinema lamps as well as colour control lamp (Fischer): 3200 K
Photo and cinema lamps (e.g., nitraphot (tungsten filament) lamp S): 3400 K
Yellow flash bulb: 3400 K
Clear flash bulb: 3800 K
Moonlight: 4120 K
Beck arc lamp: 5000 K
White arc lamp as well as blue flash bulb: 5500 K
Electronic flash unit: 5500–6500 K
Sun only (morning and afternoon): 5200–5400 K
Sun only (noontime): 5600 K
Sun and a cloudless sky: 6000 K
Overcast sky: 6700 K
Fog, very hazy: 7500–8500 K
Blue northern sky at a 45° vertical angle: 11000 K
International standard for average sunlight: 5500 K

Coming to the point

To ensure that the light source is able to emit the required spectrum is one part of the colour story; to ensure that the objective used can handle this light effectively is another. When “white” light is passing through the lens systems of an objective, refraction occurs. Due to the physics of light, blue (shorter) wavelengths are refracted to a greater extent than green or red (longer) wavelengths. This helps to create the blue sky but is not the best for good colour reproduction at a microscope. If objectives did not compensate this aberration, then as outlined in fig. 6 there would be focus points

Fig. 6: Schematic of axial chromatic aberration.
Fig. 7: The principle behind how CCD cameras function.

for all colours along the optical axes. This would create colour fringes surrounding the image. To reduce this effect, achromatic lens combinations have been used since the 18th century. These types of lenses combine the blue and red wavelengths to one focus point. Further colour correction in the visible range can be achieved by adding different sorts of glass systems together to reach the so-called Fluorite objectives (the name originally comes from the fluorspar introduced into the glass formulation). They are also known as Neofluar, Fluotar, or Semi-apochromat. To correct the complete range from near infrared (IR) to ultraviolet (UV), the finest class of objectives, the apochromate, has been designed. Beside colour correction, the transmission power of an objective can also be of prime interest. This is especially the case if high power near UV is needed, e.g. at 340 nm (excitation of the calcium sensitive dye Fura-2), or IR is needed for 900 nm IR-DIC.

The look of colour

Let us now go one step further and see how the light with its colours – after having passed the microscope – is detected in modern camera systems. Here, digital cameras have become standard for acquisition of colour or monochrome images in the microscopy field. But how do these cameras work, and how can it be guaranteed that the colours in the images they provide are accurate? This is of critical importance because it is a primary prerequisite for all activities such as image documentation, archiving or analysis.

Detecting colour

Light detection functions in the same way for both colour and monochrome cameras. Fig. 7 illustrates how CCD (Charge Coupled Device) digital cameras function. In colour cameras, mosaic filters are mounted over the CCD elements (fig. 8). These filters ensure that only green, red or blue light is permitted to come into contact with the light-sensitive part of the single sensors or pixels (picture elements). The proportion of colours is generally two green pixels to one red and one blue pixel. The colour image is actually generated initially via complex algorithms. This involves assessing and processing the respective signals of the green, red and blue pixels accordingly. For example, a single pixel with a bit depth of 8 becomes a 24-bit colour pixel (fig. 9).
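A minimal sketch can make the mosaic principle more concrete: interpolating full RGB values from a single-channel Bayer readout. It assumes an RGGB tile layout and plain neighbourhood averaging; real cameras use considerably more sophisticated demosaicing algorithms, so treat this purely as an illustration of the idea.

```python
# Toy demosaicing sketch, assuming an RGGB Bayer layout and simple averaging.
import numpy as np

def demosaic_rggb(raw):
    """Interpolate a single-channel Bayer image (RGGB tiles) to an RGB image."""
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=float)
    # Masks marking which colour each sensor pixel actually measured.
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)          # two green pixels per 2x2 tile
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        chan = np.where(mask, raw, 0.0)
        weight = mask.astype(float)
        # Fill each channel by averaging the measured values in a 3x3 neighbourhood.
        value_sum = sum(np.roll(np.roll(chan, dy, 0), dx, 1)
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        count_sum = sum(np.roll(np.roll(weight, dy, 0), dx, 1)
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        rgb[..., c] = value_sum / np.maximum(count_sum, 1e-6)
    return rgb

raw = np.random.randint(0, 256, (8, 8)).astype(float)   # pretend 8-bit sensor data
print(demosaic_rggb(raw).shape)   # (8, 8, 3): three 8-bit values per pixel -> 24-bit colour
```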

Fig. 8: Bayer colour filter mosaic array and underlying photodiodes.
Fig. 9: Bit depth and grey levels in digital images.

Depending on the imaging method, the reflected or transmitted light of the object is focused onto a light-sensitive CCD sensor. So what is a CCD (Charge Coupled Device) element? This is a semiconductor architecture designed to read out electric charges from defined storage areas (figs. 7, 10). Due to the special readout method, the light-sensitive area of a pixel is restricted to just 20 % of the actual pixel surface. This is why special lenses above the sensor focus all incoming light onto the light-sensitive area of the pixel surface. The light generates electron-hole pairs via the photoelectric effect. These electrical charges are collected, combined into charge packets and subsequently transported through the entire CCD sensor. The charge is converted into a voltage first, because processing voltage is significantly easier than processing current. This analogue output signal is then amplified, firstly on the chip itself and then again outside the chip. An analogue/digital converter converts the voltage (the signal) into a binary format.
There are various bit depths, with the standard being 8 bit. This means that 256 combinations (intensity values) are available for each pixel (figs. 9, 10). Currently, many new cameras appear on the market with bit depths of 12 bit (4096 intensity levels). There are even cameras offering bit depths up to 16 bit (65536 intensity levels). The number of pixels a camera has depends on the CCD chip used in it and the technology to create the image. At the moment cameras are on offer with a resolution of up to 12.5 million pixels.

Colour temperature and white balance

Let us now continue on the colour temperature subject of the microscope light sources and review its impact on the colours in digital imaging. As we know, light is essential for both microscopes and cameras. There are many types of light, as was explained above for microscope light sources. The sun may yield different colours depending on the time of day, and in the microscope there are variations depending on the source used as well as on the acquisition environment. The reason for this is what is referred to as colour temperature. As we have seen, this term describes a specific physical effect – i.e., the spectral compositions of differing light sources are not identical and furthermore are determined by temperature (see Table 1). This means that colour temperature is of tremendous significance with regard to digital image acquisition and display, as it influences both colour perception and colour display to such a critical degree. A person’s eyes automatically correct against this effect subconsciously by adapting to changing lighting conditions. This means that he/she will see those things that are known to be white as white.
Digital or video cameras are unfortunately not intelligent enough to register changing light conditions on their own and to correct against the resulting colour shifts. This is why these kinds of cameras often yield images whose colours are tinged. Correcting colour tinge(s) in true-colour images is known as white balance (see fig. 11). Many cameras tend to adapt image colours at acquisition; in addition, this kind of colour displacement can be corrected retroactively. To do so requires an image with an area where the user knows there should be no colour. In fact, this particular image area should be black, white or grey.

Automated white balance adjustments

Most imaging software programmes offer embedded algorithms to correct for colour tinges. To do this, the user has to define a section interactively within the image area where it is certain that the pixels should be white, black or grey (but at present are tinged). First of all, three correction factors will be calculated based on the pixels within the section – one for each of the three colour components Red (R), Green (G), Blue (B). These correction factors are defined such that the pixels within the section will be grey on average – the section will have no colour at all. The whole image is then corrected automatically using these correction factors (fig. 11c, d).
The following is a more detailed description of what takes place. The average intensity I for each pixel (n) within the section is calculated: In = (R+G+B)/3. The colour factor (F) for each colour component of each pixel within the section is then determined based on this calculation, e.g. for the red colour factor: Fn(R) = In/R. Take a look at this example: a pixel has the colour components (R,G,B) = (100,245,255), thus an average intensity of In = 200 and a red colour factor of Fn(R) = 200/100 = 2.0. The three colour factors are averaged over all pixels within the section, meaning that a correction factor (F(R), F(G) and F(B)) is determined for each colour component. And now, the colour components of all the image’s pixels are multiplied by the corresponding correction factor(s).
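The procedure described here translates almost directly into code. The following is a minimal NumPy sketch of it (the function name and array layout are ad hoc): a neutral reference region yields averaged per-channel correction factors, which are then applied to the whole image.

```python
# Sketch of the white-balance correction described above: a reference region that
# should be neutral (white/grey) yields one averaged correction factor per channel.
import numpy as np

def white_balance(image, roi):
    """image: float array (H, W, 3); roi: pair of slices marking a neutral region."""
    section = image[roi].reshape(-1, 3).astype(float)
    intensity = section.mean(axis=1, keepdims=True)    # I_n = (R+G+B)/3 per pixel
    factors = intensity / np.maximum(section, 1e-6)    # F_n(R) = I_n / R, etc.
    correction = factors.mean(axis=0)                  # averaged F(R), F(G), F(B)
    return np.clip(image * correction, 0, 255), correction

# Example pixel from the text: (R,G,B) = (100,245,255) -> I = 200, F(R) = 2.0
img = np.full((4, 4, 3), (100, 245, 255), dtype=float)
balanced, corr = white_balance(img, (slice(0, 4), slice(0, 4)))
print(corr)            # approx. [2.0, 0.82, 0.78]
print(balanced[0, 0])  # the tinged pixel becomes neutral grey (~200, 200, 200)
```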

Fig. 10: Creation of a digital image.

Fig. 11: White balance adjustment using microscope and software options on a histological specimen. a: Image with wrong microscope settings (low power 3 V, no LBD filter); note that areas that are not stained (background) show a yellowish colour. b: Image using optimised microscope settings (9 V + LBD filter + neutral density filter ND6); note that the background becomes white. c: Image after digital white balancing of image (a); note that with the help of a software white balancing on the yellowish image (a) the colours can be recalculated to some extent to a well balanced image (compare with b or d). d: Image after digital white balancing of image (b); note that an already well balanced image (b) can still be enhanced in quality. Areas that were used to define (neutral background) regions of interest (ROI) are marked with (x).

Why do colours not match easily?

It would be ideal to be able to pick any scanner, monitor or printer at random and have the resulting colours on-screen and in the printed output be acceptably close to the original. But unfortunately, getting colours to match is hard to achieve.
For one thing, colour is subjective: colours are not so much an intrinsic feature of an object as they are interpreted by the visual system and the brain. Another critical aspect is that lighting affects colour. So a printout will vary depending on the lighting. It will look different under incandescent light, fluorescent light and in daylight. Furthermore, colours affect other colours. This means your perception of a colour will change depending on the colours surrounding that colour. In addition, monitors can display colours that printers cannot print. Printers can print colours that monitors cannot display. Cameras can record colours that neither monitors nor printers can produce.
A colour model is at the very least simply a way of representing colours mathematically. When different devices use different colour models, they have to translate colours from one model to another. This often results in error. Given all the limitations inherent in trying to match colours, it is important to understand the difference between correctable colour errors and non-correctable errors. Correctable errors are ones where you can do something about them via software – such as with a colour management system or white balance. Non-correctable errors are the ones that you can’t do anything about because the information you need to correct them simply does not exist. Better lenses, better optical coatings and improved CCD arrays can all minimise the non-correctable errors. Correctable errors, however, require colour management through software.
If every program used the same approach to colour management and every printer, monitor, scanner and camera were designed to work with that colour management scheme, colour management would obviously be a snap. If the hardware devices all used the same standard colour model, for example, or came with profiles that would translate colour information to and from a standard as needed, you’d be able to move colour information around – from program to program or scanned image to printer – without ever introducing errors.

Conclusion

The light that reaches our eye consists of many different colours, and different light sources produce a different mix of these. Different objects in the world absorb and reflect different wavelengths – that is what gives them their colour. There are many colour models to map this multi-dimensional world, each describing the same physical reality but having different applications. Our vision systems give us only a partial view of it. For effective imaging, the microscopist must be aware of the complexity of the world of light. There is a variety of light sources, objectives and filters to enable the assembly of a system with the appropriate properties for a particular application. The photographer in turn works with an imaging system which takes partial information and processes it to achieve colour balance. Printers and monitors in turn repeat the process – and remove us one step further from the original data.
Several hundred years in the development of microscopy have managed to hide some of the complexity from the casual user. But microscopy is a good deal more complicated than it might seem. An awareness of what is really going on – in our eyes, in our brains, in the optics, on the CCD and on the screen – enables us to appreciate just what a wonderful achievement that final image is, and how much care is required to ensure the best results.

The Resolving Power

Human vision

For most of us, seeing something is normal and we are accustomed to evaluating things by looking at them. But the seeing act uses many different and complex structures that enable us not only to catch images but to process and interpret them at the same time. Every image that is projected on the retina of our eye is transformed into neuronal impulses, creating and influencing behaviour: this is what we call seeing. Therefore, what we see is not necessarily what another person sees.
Compared to these processes within our brain, the first steps of vision seem to be much simpler. To create a projection onto the layer of the retina, where millions of light sensitive sensory cells are located, the light passes through an optical system that consists of cornea, aqueous humour, iris, pupil, focus lens and the vitreous humour, see fig. 12. All these elements together create what the raster framework of sensory cells can translate within their capability into neuronal activity.

Fig. 12: Anatomy of the human eye and cross section of the retina.

Eagles and mice

Those two different set-ups – the optical system and the sensor system – restrict the options of human vision at the very first step. Details can be found at www.mic-d.com/curriculum/lightandcolor/humanvision.html. Having eagle-eyed optical systems alone will not enable us to see mice from far away. To resolve a small object (the mouse) against a large background needs special physical properties of the optics and also requires a certain sensitivity and number of sensors used to translate the information.
If this text is getting too small, you have to come closer to resolve it. This means your eyes can get light from the small letters in a wider angle than before. You are now focusing on a smaller area, spread over the total amount of sensory cells. More individual sensors can catch differences and translate them into a recognised image. Unfortunately, we can move closer only up to a certain distance, after which our flexible optical system is no longer able to project the image clearly onto the retina. If the words are typed too small, like this [“you can not resolve it without optical help”], our resolution limit is reached.
The wavelength that transports the information of the image also influences what we can resolve. Shorter blue wavelengths carry finer details than longer ones from the same object. Using a microscope it would be easy to read the text and to document it via the sensors of a camera. But these tools are also restricted by the resolution of their optical system and the number and sensitivity of their sensors – the pixels.

What is resolution?

Resolution or resolving power of a microscope can be defined as the smallest distance apart at which two points on a specimen can still be seen separately. In the transmitted light microscope, the xy-resolution R is determined essentially by three parameters: the wavelength λ of the illuminating light, and the numerical apertures (NA) of the objective (NAobj) and of the condenser (NAcond):

R = 1.22 * λ / (NAobj + NAcond)   (1)

When the aperture of the condenser is adjusted to that of the objective, i.e. the aperture of the condenser is essentially the same as the objective aperture, equation (1) simplifies to:

R = 0.61 * λ / NAobj   (2)

This equation is used both for transmitted light microscopy and for reflected light microscopy. Note here that resolution is NOT directly dependent on the magnification. Furthermore, the end magnification should not be higher than 1000x the NA of the objective, because then the image will only be enlarged but no further resolution will be visible. This is called empty magnification.
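Equations (1) and (2) are easy to evaluate directly. The short sketch below does so for example values (550 nm light and NA values as listed in Table 2 further below); the helper names are ad hoc.

```python
# Quick check of equations (1) and (2) for a given wavelength and numerical apertures.
def resolution_full(wavelength_nm, na_obj, na_cond):
    return 1.22 * wavelength_nm / (na_obj + na_cond) / 1000.0   # result in micrometres

def resolution_matched(wavelength_nm, na_obj):
    # Condenser aperture matched to the objective: eq. (1) simplifies to eq. (2).
    return 0.61 * wavelength_nm / na_obj / 1000.0               # result in micrometres

print(resolution_matched(550, 0.65))  # ~0.52 um for a 40x/0.65 Plan Achromat
print(resolution_matched(550, 0.90))  # ~0.37 um for a 40x/0.90 Plan Apochromat
```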

What is the numerical aperture of an objective?

The numerical aperture of a microscope objective is a measure of its ability to gather light and thus resolve fine specimen detail at a fixed objective distance. The numerical aperture is calculated with the following formula:

NA = n * sin(µ)   (3)

n is the refractive index of the medium between the front lens of the objective and the specimen cover glass. It is n = 1.00 for air and n = 1.51 for oil. µ is one half of the angular aperture, as can be seen in fig. 13. The bigger µ, the higher is the numerical aperture. Working in air, the theoretical maximum value of the numerical aperture is NA = 1 (µ = 90°). The practical limit is NA = 0.95.

Fig. 13: Angular aperture of an objective.

Table 2: Numerical apertures (NA) for different types of objectives and magnifications

Magnification – Plan Achromat – Plan Fluorite – Plan Apochromat
4x: 0.1 – 0.13 – 0.16
10x: 0.25 – 0.3 – 0.4
20x: 0.4 – 0.5 – 0.75
40x: 0.65 – 0.75 – 0.9
60x: 0.8 (dry) – 0.9 (dry) – 1.42 (oil)
100x: 1.25 (oil) – 1.3 (oil) – 1.4 (oil)

Numerical aperture in practice

For all microscope objectives the resolving power is indicated by the number for the numerical aperture shown directly following the index for magnification (fig. 14), e.g. a UPlanFLN 60x/0.9 objective produces a 60x magnification with a numerical aperture of 0.9. The numerical aperture strongly differs depending on the type of objective or condenser, i.e. the optical aberration correction. Table 2 lists some typical numerical apertures for different objective magnifications and corrections.
To achieve a higher NA than 0.95, objectives have to be used with immersion media between the front lens and the specimen. Oil or water is mostly used for that purpose. Because objectives have to be specially designed for this, the type of immersion medium is always mentioned on the objective, as on the UPlanFLN 60x/1.25 Oil Iris objective. This objective needs oil as an immersion medium and cannot produce good image quality without it. For special techniques like Total Internal Reflection Fluorescence Microscopy (TIRFM), objectives are produced with very high NA, led by the Olympus APO100x OHR with a NA of 1.65. The resolution that can be achieved, particularly for transmitted light microscopy, depends on the correct light alignment of the microscope. Köhler illumination is recommended to produce equally distributed transmitted light and ensure the microscope reaches its full resolving potential (see box 2).

Box 2: How to set Köhler illumination

To align a microscope that can change the distance of the condenser to the area of the specimen for Köhler illumination you should:
1. Focus with a 10x or higher magnifying objective on a specimen, so that you can see at least something of interest in focus. (If your condenser has a front lens, please swing it in when using more than 10x magnifying objectives.)
2. Close the field stop at the light exit.
3. Move the condenser up or down to visualise the closed field stop. Now only a round central part of your field of view is illuminated.
4. If the illumination is not in the centre, the condenser has to be moved in the XY direction to centre it.
5. Finely adjust the height of the condenser so that the edges of the field stop are in focus and the diffraction colour at this edge of the field stop is blue-green.
6. Open the field stop to such an amount that the total field of view is illuminated (re-centring in the xy direction may be necessary).
7. For best viewing contrast at brightfield, close the aperture stop to an amount of 80 % of the objective numerical aperture.
In practice, you can slowly close the aperture stop of your condenser while looking at a specimen. At the moment when the first change in contrast occurs, the NA of the condenser is getting smaller than the NA of the objective and this setting can be used. For documentation, the condenser NA should be set according to that of the objective. To visualise the aperture stop directly, you can remove one eyepiece and see the aperture stop working as a normal diaphragm.

What resolution can be reached with a light microscope?

To make the subject more applicable, some resolution numbers shall be given here. Using a middle wavelength of 550 nm, the Plan Achromat 4x provides a resolution of about 3.3 µm, whereas the Plan Apochromat reaches about 2.1 µm. The Plan Achromat 40x provides a resolution of 0.51 µm and the Plan Apochromat of 0.37 µm. The real-world limit for the resolution which can be reached with a Plan Apochromat 100x is often not higher than R = 0.24 µm.
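A quick numerical illustration of equation (3): with air as the medium the NA cannot exceed 1, whereas oil immersion (n = 1.51) makes values around 1.4 possible. The half-angles used below are illustrative assumptions chosen to give typical NA values, not manufacturer data.

```python
# Minimal illustration of NA = n * sin(mu); the half-angles are example values.
import math

def numerical_aperture(n_medium, half_angle_deg):
    return n_medium * math.sin(math.radians(half_angle_deg))

print(numerical_aperture(1.00, 72))   # ~0.95, the practical limit in air
print(numerical_aperture(1.51, 68))   # ~1.40, possible only with oil immersion
```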

To give a comparison to other microscopes that do not work with visible light, the resolution reached nowadays in a scanning electron microscope is about R = 2.3 nm. In a transmission electron microscope, structures down to a size of 0.2 nm can be resolved. Scanning probe microscopes even open the gates to the atomic, sub-Ångström dimensions and allow the detection of single atoms.

Table 3: Optical resolution and number of needed pixels. The number of pixels a 1/2 inch chip should have to meet the Nyquist criterion (2 pixels per feature) and the optimum resolution (3 pixels per feature). The number for lp/mm is given for the projection of the image on the CCD.

Objective – Magnification – NA – Resolution (µm) – lp/mm (on CCD) – CCD resolution 1/2″, Nyquist limit (2 pixels/lp) – CCD resolution 1/2″, necessary resolution (3 pixels/lp)
PlanApoN – 2x – 0.08 – 4.19 – 119 – 1526 x 1145 – 2289 x 1717
UPlanSApo – 4x – 0.16 – 2.10 – 119 – 1526 x 1145 – 2289 x 1717
UPlanSApo – 10x – 0.4 – 0.84 – 119 – 1526 x 1145 – 2289 x 1717
UPlanSApo – 20x – 0.75 – 0.45 – 111 – 1420 x 1065 – 2131 x 1598
UPlanSApo – 40x – 0.9 – 0.37 – 67 – 858 x 644 – 1288 x 966
UPlanSApo – 100x – 1.4 – 0.24 – 42 – 534 x 401 – 801 x 601

What are Airy disks?

Every specimen detail that is illuminated within a microscope creates a so-called diffraction pattern or Airy disk pattern. This is a distribution of a bright central spot (the Airy disk or primary maximum) and the so-called secondary maxima separated by dark regions (minima or rings of the Airy disk pattern; see figs. 15, 16) created by interference (more details: www.mic-d.com/curriculum/lightandcolor/diffraction.html). When two details within the specimen are close together, we can only see them separated if the two central spots are not too close to each other and the Airy disks themselves are not overlapping. This is what the Rayleigh criterion describes. Good distinction is still possible when one Airy disk just falls into the first minimum of the other (fig. 15).
The smaller the Airy disks, the higher the resolution in an image. Objectives which have a higher numerical aperture produce smaller Airy disks (fig. 16) from the same specimen detail than low NA objectives. But the better the resolution in the xy direction, the smaller is the specimen layer that is in sharp focus at the same time (depth of field), because the resolution also gets better in the z direction. Like higher magnification, higher resolution always creates less depth of field. Also, in most cases an increase of NA means that the objective gets closer to the specimen (less working distance) compared to an objective of lower NA but with the same magnification. Therefore, choosing the best objective for your application may not only depend on the resolving power.

Fig. 14: What is what on an objective?
Olympus: manufacturer.
PlanApo: Plan: flat field correction; Apo: apochromatic.
60x: linear magnification.
1.42 Oil: numerical aperture (needs oil immersion).
∞: infinity corrected optics (cannot be mixed with finite optics that belong to the 160 mm optics).
0.17: cover slip correction; needs a cover slip of 0.17 mm thickness. Note: take care of this parameter. There can also be a “0” for no coverslip, a “1” for 1 mm thickness, or a range in mm. Wrong usage of objectives will create “foggy” images (spherical aberration).
FN 26.5: field number 26.5. (When using an ocular and tube that can provide a FN of 26.5 you may divide this number by the magnification to obtain the diameter of your field of view in mm: 26.5/60 = 0.44 mm.)

Resolution in digital images – is it important?

The next step is to go from the optical image to the digital image. What happens here? The “real” world conversion from an optical to a digital image works via the light sensitive elements of the CCD chips in digital cameras, for example. Or, a video camera sensor may provide voltage signals that are read out and digitised in special frame grabber cards. But what is of more interest here is the principle which lies behind the actual realisations. The optical image is continuous-tone, i.e. it has continuously varying areas of shades and colour tones. The continuous image has to be digitised and quantified; otherwise it cannot be dealt with in a computer. To do so, the original image is first divided into small separate blocks, usually square shaped, which are called pixels. Next, each pixel is assigned a discrete brightness value. The first step is called digital sampling, the second pixel quantisation. Both convert the continuous-tone optical image into a two-dimensional pixel array: a digital image.

Digital resolution – what is it for?

The pixel quantisation of the image intensities depends on the bit depth or dynamic range of the converting system. The bit depth defines the number of grey levels or the range of colour values a pixel can have, and thus determines a kind of brightness or colour resolution. Yet it is the digital sampling which defines the spatial resolution in a digital image. Both spatial and brightness resolution give the image the capability to reproduce fine details that were present in the original image. The spatial resolution depends on the number of pixels in the digital image. At first glance, the following rule makes sense: the higher the number of pixels within the same physical dimensions, the higher the spatial resolution becomes. See the effect of different numbers of pixels on the actual specimen structure in fig. 17. The first image (175 x 175) provides the image information as reasonably expected, whereas specimen details will be lost with fewer pixels (44 x 44). With even fewer, the specimen features are masked and not visible any more. This effect is called pixel blocking.
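The pixel blocking effect, and the sampling question taken up in the next section, can be mimicked with a toy calculation: sample an ideal black/white line-pair pattern with different numbers of pixels per line pair and look at the contrast that survives. The sketch below simply averages the ideal pattern over each simulated pixel; the numbers chosen are illustrative.

```python
# Toy sampling demonstration: an ideal line-pair pattern digitised with a given
# number of pixels per line pair and a given offset (phase) of the pixel grid.
import numpy as np

def sample_line_pairs(pixels_per_line_pair, phase=0.0, n_pairs=8, oversample=100):
    # Ideal pattern: one line pair = one black plus one white line.
    fine = np.tile(np.r_[np.zeros(oversample), np.ones(oversample)], n_pairs)
    fine = np.roll(fine, int(phase * 2 * oversample))      # shift pattern vs. pixel grid
    n_pixels = int(pixels_per_line_pair * n_pairs)
    usable = (len(fine) // n_pixels) * n_pixels
    # Each simulated camera pixel averages the light falling on it.
    return fine[:usable].reshape(n_pixels, -1).mean(axis=1)

for ppl, phase in ((2, 0.0), (2, 0.25), (3, 0.25)):
    digital = sample_line_pairs(ppl, phase)
    print(ppl, "px/lp, offset", phase, "-> contrast", round(digital.max() - digital.min(), 2))
# 2 px/lp: full contrast when pattern and pixel grid are aligned, zero contrast at a
# quarter-period offset; 3 px/lp always retains contrast (cf. figs. 18 and 19).
```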

Is there an optimum digital resolution?

Thus, the number of pixels per optical image area must not be too small. But what exactly is the limit? There shouldn't be any information loss during the conversion from optical to digital. To guarantee this, the digital spatial resolution should be equal to or higher than the optical resolution, i.e. the resolving power of the microscope. This requirement is formulated in the Nyquist theorem: the sampling interval (i.e. the number of pixels) must be equal to twice the highest spatial frequency present in the optical image. To say it in different words: to capture the smallest degree of detail, two pixels are collected for each feature. For high resolution images, the Nyquist criterion is extended to 3 pixels per feature.

To understand what the Nyquist criterion states, look at the representations in fig. 18 and 19. The most critical feature to reproduce is the ideal periodic pattern of a pair of black and white lines (lower figures). With a sampling interval of two pixels (fig. 18), the digital image (upper figure) might or might not be able to resolve the line pair pattern, depending on the geometric alignment of specimen and camera. Yet a sampling interval with three pixels (fig. 19) resolves the line pair pattern under any given geometric alignment; the digital image (upper figure) is always able to display the line pair structure. With real specimens, 2 pixels per feature should be sufficient to resolve most details. So now we can answer some of the questions above: yes, there is an optimum spatial digital resolution of two or three pixels per specimen feature. The resolution should definitely not be smaller than this, otherwise information will be lost.

Calculating an example

A practical example will illustrate which digital resolution is desirable under which circumstances. The Nyquist criterion is expressed in the following equation:

R * M = 2 * pixel size (4)

R is the optical resolution of the objective; M is the resulting magnification at the camera sensor. It is calculated by the objective magnification multiplied by the magnification of the camera adapter.

Assume we work with a 10x Plan Apochromat having a numerical aperture (NA) = 0.4. The central wavelength of the illuminating light is λ = 550 nm. So the optical resolution of the objective is R = 0.61 * λ/NA = 0.839 µm. Assuming further that the camera adapter magnification is 1x, the resulting magnification of objective and camera adaptor is M = 10x. Now, the resolution of the objective has to be multiplied by a factor of 10 to calculate the resolution at the camera: R * M = 0.839 µm * 10 = 8.39 µm. Thus, in this setup, we have a minimum distance of 8.39 µm at which the line pairs can still be resolved. These are 1/8.39 = 119 line pairs per millimetre. The pixel size is the size of the CCD chip divided by the number of pixels. A 1/2 inch chip has a size of 6.4 mm * 4.8 mm.

Box 3: Multiple Image Alignment (MIA)

Multiple image alignment is a software approach to combine several images into one panorama view having high resolution at the same time. Here, the software takes control of the microscope, camera, motor stage, etc. All parameters are transferred to the imaging system through a remote interface. Using this data, the entire microscope and camera setups can be controlled and calibrated by the software. After defining the required image size and resolution, the user can execute the following steps automatically with a single mouse click:

1. Calculation of the required number of image sections and their relative positions

2. Acquisition of the image sections including stage movement, image acquisition and computing the optimum overlap

3. Seamless "stitching" of the image sections with sub-pixel accuracy by intelligent pattern recognition within the overlap areas.

4. The overlap areas are adjusted automatically for differences in intensity

5. Visualisation of the full view image.
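Step 3 of Box 3 relies on finding the true offset between neighbouring tiles from their overlap region. A minimal sketch of that one sub-step, assuming NumPy arrays and scikit-image's phase correlation are available (function and parameter names such as align_two_tiles and nominal_overlap_px are illustrative, not part of any microscope software):

from skimage.registration import phase_cross_correlation

def align_two_tiles(left_tile, right_tile, nominal_overlap_px=200):
    # Cut the strips that should overlap according to the nominal stage positions.
    strip_a = left_tile[:, -nominal_overlap_px:]
    strip_b = right_tile[:, :nominal_overlap_px]
    # Phase correlation returns the (row, col) translation with sub-pixel accuracy.
    shift, error, _ = phase_cross_correlation(strip_a, strip_b, upsample_factor=10)
    return shift, error

A full stitching system would additionally blend intensities in the overlap (step 4) and assemble the panorama (step 5); this sketch only illustrates how a sub-pixel offset can be estimated.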

Fig. 15: Intensity profiles of the Airy disk patterns of one specimen detail and of two details at different distances.

Fig. 16: Airy disk patterns of different size as an example of the resolving power for low NA (left) and high NA (right) objectives.

Fig. 17: Four representations of the same image, with different numbers of pixels used. The number of pixels is written below each image.

So the number of pixels a 1/2 inch chip needs to meet the Nyquist criterion with 2 pixels per feature is 1/(R * M) * chip size * 2 = 119 line pairs/mm * 6.4 mm * 2 = 1526 pixels in horizontal direction. If you want 3 pixels per line pair, the result is 2289 pixels. This calculation can be followed through for different kinds of objectives; please check which numbers of pixels we need for a 1/2 inch chip in table 3. What might be astonishing here is the fact that the higher the magnification, the fewer pixels the chip of a CCD camera needs! Working with a 100x Plan Apochromat objective combined with a 1/2 inch chip, we need just 800 x 600 pixels to resolve digitally even the finest optically distinguished structure. The higher number of pixels of about 2300 x 1700 is necessary only at lower magnifications up to 10x.

Which camera to buy?

The resolution of a CCD camera is clearly one important criterion for its selection. The resolution should be optimally adjusted to your main imaging objective. Optimum resolution depends on the objective and the microscope magnification you usually work with. It should have a minimum number of pixels to not lose any optically achieved resolution as described above. But the number of pixels also should not be much higher, because the number of pixels is directly correlated with the image acquisition time. The gain in resolution is paid for by a slow acquisition process. The fastest frame rates of digital cameras working at high resolution can go up to 100 milliseconds or even reach the second range, which can become a practical disadvantage in daily work. Furthermore, unnecessary pixels obviously need the same amount of storage capacity as necessary pixels. For example, a 24 bit true colour image consisting of 2289 x 1717 pixels has a file size of almost 12 MB if there is no compression method applied. The slow frame rate and the file size are just two aspects which demonstrate that the handling of high resolution images becomes increasingly elaborate.

Box 4: Using hardware to increase the resolution in fluorescence microscopy

Several hardware components are available to enable the acquisition of images that are not disturbed by out of focus blur, for example grid projection, confocal pinhole detection and TIRFM. Since out of focus parts of a specimen produce blur in an image, there is the simple option of eliminating this stray light from the image. With most confocal microscopes, a small pinhole is located in front of the sensor and only those light beams that originate from the focus area can pass through; others are simply absorbed. The resulting point image only contains information from the focus area. To create an image of more than just one point of the focus area, a scanning process is needed. This scanning process can be performed with the help of spinning disks for an ordinary fluorescence microscope or by a scanning process with a laser beam in a confocal laser scanning microscope (cLSM) set-up. Each system produces different levels of improvement to the resolution. However, all of them need a professional digital sensor system to display the images.

TIRFM (Total Internal Reflection Fluorescence Microscopy) uses a completely different concept. With this method, a very thin layer of the specimen (around 200 nm) is used to create the image. Therefore, TIRFM is ideal to analyse e.g. single molecule interactions or membrane processes. To achieve this, a laser beam is directed within a critical angle towards the cover slip. Because of the higher refractive index of the cover slip compared to the specimen, total internal reflection occurs. This means that almost no direct light enters the specimen – but due to the physics of light a so-called evanescent wave travels in the specimen direction. This wave is only strong enough to excite fluorochromes within the first few hundred nanometres close to the cover slip. The fluorescent image is restricted to this small depth and cannot be driven into deeper areas of the specimen, and also does not contain out of focus blur from deeper areas. (More information about TIRF can be found at www.olympusmicro.com/primer/techniques/fluorescence/tirf/tirfhome.html.)

High resolution over a wide field of view

The xy-resolution desired is one aspect which makes an image "high quality". Another, partially contradicting, aspect is that we usually want to see the largest possible fraction of the sample – in the best case the whole object under investigation. Here, the microscope's field of view becomes important. The field of view is given by a number – the so-called field number. For example, if the microscope is equipped for the field number 22 and a 10x magnifying objective is in use, the diagonal of the field of view (via eyepieces and a tube that support the given FN) is 22/10 = 2.2 mm. Using lower magnifying objectives will enable a larger field of view, and using large field tubes and oculars can enlarge the field number to 26.5 (e.g. field number 26.5 and a 4x objective allows a diagonal of 26.5/4 = 6.625 mm).
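The arithmetic used in this section – optical resolution, resolution at the camera, pixels needed for a 1/2 inch chip and field-of-view diameter from the field number – is easy to reproduce. A short Python sketch, using only the numbers quoted in the text (everything else is generic):

# Worked example from the text: equation (4) plus the field number rule of thumb.
wavelength_um = 0.550            # central wavelength, 550 nm
na = 0.4                         # NA of the 10x Plan Apochromat
mag_objective, mag_adapter = 10.0, 1.0

r_um = 0.61 * wavelength_um / na                 # optical resolution R ~ 0.839 um
m = mag_objective * mag_adapter                  # magnification at the camera sensor
r_at_camera_um = r_um * m                        # ~ 8.39 um, i.e. ~119 line pairs per mm

chip_width_mm = 6.4                              # horizontal size of a 1/2 inch chip
line_pairs_per_mm = 1000.0 / r_at_camera_um
pixels_2_per_lp = line_pairs_per_mm * chip_width_mm * 2   # ~1526 pixels (Nyquist)
pixels_3_per_lp = line_pairs_per_mm * chip_width_mm * 3   # ~2289 pixels

field_number = 26.5
fov_diameter_mm = field_number / mag_objective             # ~2.65 mm field-of-view diagonal

print(round(r_um, 3), round(pixels_2_per_lp), round(pixels_3_per_lp), fov_diameter_mm)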

Fig. 18: Line pair pattern achieved with 2 pixels per line pair. Please refer to the text for the description.

Fig. 19: Line pair pattern resolved with 3 pixels per line pair. Please refer to the text for the description.

Box 5: Changing objectives but keeping the digital live image in focus – parfocal alignment

High class objectives are designed to be parfocal. That means even when you change from a 4x magnifying objective to a 40x objective, the structure under observation remains in focus. This is especially a need for imaging with automated microscopy. To use this feature at the microscope the following short guideline will help. We assume that the microscope in use is in proper Köhler illumination setting, and the digital camera is connected via a camera adapter (c-mount) that allows focus alignment. (Some adapters are designed with a focusing screw; some can be fixed with screws in different distance positions.)

1. Use a high magnifying objective (40x or more) to get a well recognisable detail of a specimen into focus (camera live image), with the help of the coarse and fine focus of the microscope frame.

2. Change to a low magnifying objective (e.g. 4x), and do not change the coarse or fine focus at the microscope, but align the focus of the camera adapter until the camera live image shows a clear, in-focus image.

3. Changing back to high magnification – the image is still in focus. You do not believe it? Have a try.

Unfortunately, low magnifying objectives can not reach the same maximum NA as high magnifying objectives. Even when we use the best resolving lens for a 4x objective, the NA of this Apochromat (0.16) is still lower than that of the lowest resolving 20x Achromat with an NA of 0.35. Having a lower NA produces a lower resolution. In a number of applications, a field of view of several millimetres and a resolution on the micrometre or nanometre scale is required simultaneously (fig. 20). How can we overcome this problem, especially when we recognise that the CCD sensor of the camera reduces the field of view (monitor image) once more? Some image processing can offer an alternative. In the first step, these systems automatically acquire individual images at the predefined high resolution. In the next step the software performs intelligent pattern recognition together with a plausibility check on the overlapping parts of the individual image sections to align them all in one image with excellent accuracy (better than a pixel). The computed result shows one combined image, maintaining the original resolution. So you get an image which has both high resolution and a large field of view (fig. 21). Please check box 3 for the description of the image processing procedure. Additionally, box 7 describes its sophisticated and extended follower called Digital Virtual Microscopy.

Physical limits and methods to overcome them

As described before, the resolution of a microscope objective is defined as the smallest distance between two points on a specimen that can still be distinguished as two separate entities. But several limitations influence the resolution. In the following we cover the influence of image blurring and the depth capacity of resolution.

Convolution

Stray light from out of focus areas above or below the focal plane (e.g. in fluorescence microscopy) causes glare, distortion and blurriness within the acquisition (fig. 22a). These image artefacts are known as convolution and they limit one's ability to assess images quickly, as well as to make more extensive evaluations. There are several, sometimes sophisticated, hardware approaches in use which allow reducing or even avoiding out of focus blur in the first place when acquiring the images; please see the description in box 4. A different approach is the so-called deconvolution, which is a recognised mathematical method for eliminating these image artefacts after image acquisition.

Fig. 20: Mr. Guido Lüönd, Rieter AG, Department Werkstoff-Technik DTTAM, Winterthur, Switzerland, works on fibres using a microscope. For their investigations they often have to glue the single images into an overview image. Previously this was a considerable challenge; now a piece of software takes over this job. (© Guido Lüönd)

It is called deconvolution. If the point spread function (PSF) is known, it is possible to deconvolute the image. This means the convolution is mathematically reversed, resulting in a reconstruction of the original object. The resulting image is much sharper, with less noise and at higher resolution (fig. 22b).
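As a rough illustration of the principle only – not the algorithm used by any particular microscope software – a 2D Richardson–Lucy deconvolution with a known PSF can be run with scikit-image. The Gaussian PSF below is merely a stand-in for a measured or calculated point spread function:

import numpy as np
from skimage import restoration

def gaussian_psf(size=15, sigma=2.0):
    # Synthetic PSF standing in for the real, measured point spread function.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return psf / psf.sum()

def deconvolve(blurred, psf, iterations=30):
    # Richardson-Lucy iteratively reverses the convolution with the PSF.
    # 'blurred' is expected as a float image scaled to the range 0..1.
    return restoration.richardson_lucy(blurred, psf, iterations)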

What exactly is the point spread function?

The point spread function is the image of a point source of light from the specimen projected by the microscope objective onto the intermediate image plane, i.e. the point spread function is represented by the Airy disk pattern (fig. 15, 16). Mathematically, the point spread function is the Fourier transform of the optical transfer function (OTF), which is in general a measurement of the microscope's ability to transfer contrast from the specimen to the intermediate image plane at a specific resolution. The PSF (or OTF) of an individual objective or a lens system depends on numerical aperture, objective design, illumination wavelength, and the contrast mode (e.g. brightfield, phase, DIC).

Fig. 22: Via deconvolution, artefacts can be computed out of fluorescence images. a) These artefacts are caused by the stray light from non-focused areas above and below the focus level. These phenomena, referred to as convolution, result in glare, distortion and blurriness. b) Deconvolution is a recognised mathematical procedure for eliminating such artefacts. The resulting image displayed is sharper, with less noise and thus at higher resolution. This is also advantageous for more extensive analyses.

The three-dimensional point spread function

The microscope imaging system spreads the image of a point source of light from the specimen not only in two dimensions; the point appears widened into a three-dimensional contour. Thus, more generally speaking, the PSF of a system is the three-dimensional diffraction pattern generated by an ideal point source of light. The three-dimensional shapes of

the PSF create the so-called out-of-focus blur, which reduces the resolution and contrast in images, e.g. in fluorescence microscopy. This blurring or haze comes from sections within the specimen which are outside of the focal plane of the actual image. So, an image from any focal plane of the specimen contains blurred light from points located in that plane mixed together with blurred light from points originating in other focal planes (fig. 23).

What is deconvolution used for?

When the PSF of a system is known, it can be used to remove the blurring present in the images. This is what the so-called deconvolution does: it is an image processing technique for removing out-of-focus blur from images. The deconvolution algorithm works on a stack of images, which are optical sections through the specimen and are recorded along the z-axis of the microscope. The algorithm calculates the three-dimensional PSF of the system and reverses the blurring present in the image. In this respect, deconvolution attempts to reconstruct the specimen from the blurred image (fig. 23).

Fig. 21: The analysis of non-metallic inclusions according to DIN, ASTM and JIS requires the processing of areas up to 1000 mm2, while the sulphide and oxide inclusions to be analysed are themselves less than micrometres wide. No camera is available offering such a large CCD sensor. The image processing solution stitches single overlapped images together to give one high resolution image.

Depth of focus versus depth of field

Two terms – depth of focus and depth of field – are often used to describe the same optical performance, the amount of specimen depth structure that can be seen in focus at the same time. However,

Fig. 23: Stray light originating from areas above and below the focal plane results in glare, distortion and blurriness (convolution), especially in fluorescence microscopy. Deconvolution is a recognised mathematical method for correcting these artefacts. The degree to which an image is distorted is described by what is known as the point spread function (PSF). Once this is known, it becomes possible to "deconvolute" the image. This means that the convolution of the image is mathematically reversed and the original contours of the specimen are reconstructed. The greater the precision with which the degree of distortion – i.e., the PSF – is known, the better the result. What is this result? A sharper and noise-free version of the image at higher resolution with enhanced image quality.

only the term "depth of field" should be used for this feature. As we will point out later, the term "depth of focus" is needed for a different optical feature.

An optical system, such as the eye, which focuses light, will generally produce a clear image at a particular distance from the optical components. In a camera, the ideal situation is where a clear image is formed on the chip, and in the eye it is where a clear image is formed on the retina. For the eye, this happens when the length of the eye matches its optical power, and a distant object is in focus when the eye is relaxed. If there is a difference between the power and length in such a situation, then the image that is formed on the retina will be very slightly out of focus. However, such a discrepancy may be small enough that it is not noticed, and thus there is a small amount of "slop" in the system such that a range of focus is considered to be acceptable. This range is termed the "depth of focus" of the eye. Looking at it the other way round, the eye might be precisely in focus for a particular distance – for example an object one metre away. However, because of the slop in the system, other objects 90 cm and 110 cm away may also be seen clearly. In front of, and behind, the precise focal distance there is a range where vision is clear, and this is termed the "depth of field".

Now look at the physical understanding of depth of focus in microscopy and the depth of field, respectively. Depth of field Δfi in a microscope is the area in front of and behind the specimen that will be in acceptable focus. It can be defined by the distance from the nearest object plane in focus to that of the farthest plane also simultaneously in focus. This value describes the range of distance along the optical axis in which the specimen can move without the image appearing to lose sharpness. Mathematically, Δfi is directly proportional to

Δfi ~ λ/(2*NA²) (5)

Δfi = depth of field
λ = wavelength of light (emission)
NA = numerical aperture

Δfi obviously depends on the resolution of the microscope. Large lenses with short focal length and high magnifications will have a very short depth of field. Small lenses with long focal length and low magnifications will be much better. Depth of focus Δfo in a microscope is the distance above and below the image plane over which the image appears in focus. It is the extent of the region around the image plane in which the image will appear to be sharp. Δfo is directly proportional to

Δfo ~ M²/NA (6)

Δfo = depth of focus
M = magnification
NA = numerical aperture

Δfo refers to the image space and depends strongly on the magnification M, but also on changes in numerical aperture NA. As a take-home message: high resolution will create a relatively low depth of field, while high magnifications will create a higher depth of focus – this is also why the procedure for parfocal alignment works.
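Treating the proportionalities (5) and (6) purely as rules of thumb (constant factors and the refractive index are deliberately ignored), a small numerical comparison of two objectives might look like this; the chosen objective values are only illustrative:

wavelength_um = 0.550

def depth_of_field_um(na, wavelength=wavelength_um):
    # Proportional to lambda / (2*NA^2), equation (5); no constant factors.
    return wavelength / (2 * na**2)

def depth_of_focus_rel(magnification, na):
    # Proportional to M^2 / NA, equation (6); relative units only.
    return magnification**2 / na

for mag, na in [(4, 0.16), (100, 1.4)]:
    print(mag, round(depth_of_field_um(na), 2), round(depth_of_focus_rel(mag, na)))
# The high-NA 100x objective has a depth of field well below 1 um,
# while its (image-side) depth of focus is orders of magnitude larger.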

Parfocality

Fig. 24: The software extracts the focused areas from the component images of an image series and reassembles them into one infinitely sharp image. The example shown here is the resulting image computed automatically using 25 images of a Bembidion tetracolum sample.

There is a well-known difficulty in conventional light microscopy which refers to the limited depth of focus: if you focus

an image with one objective, e.g. 4x, it might be out of focus when you change to another magnification, e.g. 40x. The term parfocality describes the situation when the structures remain in focus. Parfocality is a characteristic of objectives. Please read box 5 on how to assure parfocality with high class objectives.

Automated sharp images

Let us come back to restrictions arising from the limited depth of field. The better the lateral resolution, the smaller your depth of field will be. The problem is in fact physical, and it cannot be circumvented by any adjustments to the optical system. Today's standard light microscopes allow objects to be viewed with a maximum magnification of about 1000x. The depth of field is then reduced to about 1 µm. Only in this area is the specimen perfectly imaged. This physical limitation of inadequate depth of field is a familiar problem in microscope acquisitions. In the metal-processing industry, when analysing or evaluating two-dimensional metallographical objects such as, e.g., a section sample, a section press is generally used in order to obtain an exact orthogonal alignment in relation to the optical axis of the microscope. The sample to be analysed can then be acquired totally focused in a single acquisition. However, for objects which have a distinctly three-dimensional structure, or for investigations of, e.g., 3-D wear at greater magnifications, no satisfactory overview image is obtainable via a stereo or reflected-light microscope. The microscope can only be focused onto limited areas of the object. Due to the limited depth of field, it is actually impossible to obtain a sharp acquisition of the entire image field. These physical restrictions can only be transcended via digital image analysis; the normally wholly-binding laws of physics are "side-stepped". What happens is that an image series is acquired at varying focus levels. Then special software algorithms are applied to all the images of the image series and distinguish between sharply-focused and unfocused image segments in each image. The sharp image segments of the images are then pieced back together to form a single totally-focused image of the entire sample (fig. 24). Furthermore, measuring height differences and generating three-dimensional images becomes feasible (fig. 25). For further detail please see box 6, and see the focus-stacking sketch after it.

Box 6: Extended Focal Imaging (EFI)

A lack of depth of field in microscope images is an old and familiar problem. The microscope's own depth of field is only capable of focusing a limited height range at the same time; the remaining parts of the image are then blurry. Electronic image processing points the way out of this dead-end street. For a motorised microscope equipped with a motorised stage the whole process can be automated. The software takes control of the microscope, camera, motor stage, etc. All parameters are transferred to the imaging system through a remote interface. Using these data, the entire microscope and camera set-up can be controlled and calibrated by the software. After having defined the total number of images, and the maximum and minimum height of the stage, the user can execute the following steps automatically with a single mouse click:

1. In the first step the user defines the number of images in the focus series. Using a microscope with a motor stage the user has to define the maximum and the minimum lift of the microscope stage as well as the total number of individual images he intends to acquire.

2. The defined image series at varying focus levels is acquired.

3. Next, the composite image of pixel-by-pixel precision is generated from these images. This is done by extracting the respective focused image areas of each separate image and assembling these into a focused composite image. Most of these solutions take into consideration the typical (construction-related) shift of the optical axis that occurs when focusing stereoscopic microscopes.

4. The "extended focal image" with practically limitless depth of field is computed.

5. Now the user is able to generate a height map which permits the reconstruction of three-dimensional views, e.g. to measure height differences.

Box 7: Virtual microscopy

Light microscopy is one of the classic imaging techniques used in medical education and for routine procedures of pathology, histology and physiology. Pathology makes use of microscopes for diagnostic investigation of tissue samples to determine abnormal changes. Digitisation has resulted in significant progress in the field of microscopy. However, digital technology up until now has presented some decisive limitations. One problem is that the field of view of any camera is limited for any given magnification. It is usually not possible to have a complete overview of the tissue specimen with just one image at a resolution that allows further analysis. Digital virtual microscopy moves beyond this barrier. Virtual microscopy is the digital equivalent to conventional microscopy: instead of viewing a specimen through the eyepiece of the microscope and evaluating it, a virtual image of the entire slide (a "virtual slide") with perfect image quality is displayed on the monitor. The individual system components (microscope, motor stage, PC, software) are all optimally inter-coordinated and offer speed, precision and reliability of use. The Olympus solution .slide scans the entire slide at the resolution required. Integrated focus routines make sure the image is always in sharp focus. The single images acquired are automatically stitched together into a large montage (the "virtual slide"). The entire virtual slide can be viewed onscreen; detail image segments can be selected and zoomed in or out, the equivalent of working with an actual glass slide under the microscope with the same efficiency. With an internet connection, this procedure can be done from anywhere in the world. In addition, users have all the advantages of digital image processing at their fingertips, including structured web archiving of images, analysis results and documentation.

Fig. 25: This image shows how far money can go: this Singaporean coin detail was captured using a ColorView digital CCD camera and processed using the realignment, extended focus and 3-D imaging software modules. Capture and automatically align multiple component images into a high-resolution composite using the MIA module. Extract the sharpest details within the component images and reassemble them into one single image having infinite depth of focus with the module EFI. Finally, create incredibly realistic views using height and texture information attained through the perspective functions of the 3D module.

Contrast and Microscopy

All cats are grey in the dark

Why are cats grey in the dark? Here, the term contrast comes into play. Contrast refers to the difference of intensities or colours within an image. Details within an image need intensity or colour differences to be recognised from the adjacent surroundings and overall background. Let us first restrict our view to a greyscale image, like fig. 26. While watching the image, try to estimate the number of grey scales that you can distinguish – and keep that number in mind.

Let us now compare this number with another example: you enter an office shop in order to buy a selection of card samples in different shades of grey. All of the cards fall down by accident and are scattered over the floor. Now you have to replace them in the correct grey level order – how much can you differentiate now? Surprisingly, we are only able to differentiate approximately 50–60 grey levels – that means that already the 8 bit image on your monitor with 256 grey scales offers greater possible differentiation than we can discriminate with our own eyes. The card samples in various shades of grey need to have an approximate difference in contrast level of about 2 % in order for us to recognise them as different. However, if we look at the number of image areas (pixels) that represent a discrete intensity (grey level or pixel intensity) within an intensity distribution of the image, we can understand and handle contrast and brightness variations more easily. Intensity or grey level distributions are referred to as histograms; see box 8 for further explanation. These histograms allow us to optimise camera and microscope settings so that all the intensity values that are available within the specimen are acquired (fig. 27). If we do not study the available intensity values at the initial stage before image acquisition, they are archived and can not be visualised in additional processing steps.

Fig. 26: Vitamin C crystals observed in polarised light.

The familiar view – brightfield contrast

In brightfield transmitted microscopy the contrast of the specimen is mainly produced by the different absorption levels of light, either due to staining or by pigments that are specimen inherent (amplitude objects). With a histological specimen for example, the staining procedure itself can vary the contrast levels that are available for imaging (fig. 27). Nevertheless, the choice of appropriate optical equipment and correct illumination settings is vital for the best contrast.

During our explanation about Köhler alignment (see box 2) we described that at the end of this procedure the aperture stop should be closed to approximately 80 % of the numerical aperture (NA) of the objective. This setting is to achieve the best contrast setting for our eyes. Further reduction of the aperture stop will introduce an artificial appearance and low resolution to the image. For documentation purposes, however, the aperture stop can be set to the same level as the NA of the objective, because the camera sensors in use are capable of handling far more contrast levels than our eyes. For specimens lacking natural differences in internal absorption of light, like living cells (phase objects; fig. 28) or

Fig. 27: Histological staining. Cells obtained after a transbronchial needle application. Image and histogram show that the overall intensity and contrast have been optimised.

reflected microscopy specimens without significant three-dimensional relief structures, the acquired image has flat contrast. To better visualise the existing image features (fig. 28, original image), subsequent digital contrast optimisation procedures may be applied (fig. 28, improved image). But to visualise more details in those specimens, optical contrast methods must be used.

Like stars in the sky – darkfield contrast

Dust in the air is easily visible when a light beam is travelling through the air in a darkened room. The visibility is only achieved because the dust particles diffract and/or reflect the light, and this light is now travelling in all directions. Therefore we can see light originating from the particle in front of a dark background even when the particle itself is too small to be resolved or does not show an appropriate contrast under daylight conditions. This phenomenon is also used in darkfield (or dark ground) microscopy. Light is directed to the specimen in a way that no direct light enters the objective. If there is no light-scattering particle the image is dark; if there is something that diffracts or reflects the light, those scattered beams can enter the objective and are visible as bright white structures on a black background (fig. 29). (See also: http://www.olympusmicro.com/primer/techniques/darkfieldindex.html)

Transmitted darkfield

For transmitted microscopy including stereo microscopy, this contrast method is especially used to visualise scattering objects like small fresh water micro-organisms or diatoms and fibres (fig. 30). Almost all upright microscopes can be easily equipped for darkfield illumination. Here, more detailed image structures become visible. The easiest way to achieve simple darkfield is a central light stop insert for the condenser. For better illumination or even high resolving darkfield, special darkfield condensers are required (fig. 29).

To ensure that no direct light enters the objective, the numerical aperture (NA) of the condenser has to be about 15 % higher than the NA of the objective. This is in contradiction to all other contrast methods, where the objective has a higher or equal NA than the condenser. Remember, the NA is a number that describes the angle of the light cone a condenser or objective is using. An objective with high NA like the apochromate is characterised by a high angle of its light cone, and therefore it may be possible that some of the direct illumination will also enter the objective. This would destroy the darkfield contrast immediately.

Box 8: Histogram optimisation during acquisition

An intensity histogram depicts the intensity distribution of the pixels in an image. The intensity values are plotted along the x axis and range from 0–255 in an 8-bit greyscale image or 24-bit (3 x 8 bit) colour image. The number of pixels per intensity value is displayed along the y axis. The intensity histogram provides the means to monitor general characteristics of a digital image like its overall intensity, its contrast, the dynamic range used, possible saturation, the sample's phases etc. In this respect the histogram gives a more objective criterion to estimate the quality of an image than just viewing it (which is somewhat subjective). When displayed in the live mode during image acquisition, the histogram allows optimising and fine tuning of the acquisition modes and parameters on the microscope as well as the camera. These include microscope alignment, contrast method, microscope settings, or current camera exposure time. So the histogram helps to acquire a better image containing more image information.

Figure in box 8: Epithelial cells viewed with phase contrast; left side with low contrast setting, right side with contrast optimisation of microscope and camera setting. The left figure is obviously lit correctly but has poor contrast. The corresponding histogram mirrors this: the peak is roughly located in the middle of the intensity range (x axis), so the camera's exposure time is correctly set. But all pixels are squashed in the middle range between about 75 and 200. Thus, only about half of the dynamic range of the camera system is used. Aligning the microscope better (e.g., light intensity) and adapting the camera exposure time increases the image contrast (right figure). It also spreads the intensity distribution over the whole dynamic range without reducing information by saturation (see the corresponding histogram).

General rule: The overall contrast is usually best when the intensity histogram covers the whole dynamic range of the system, but one should usually avoid creating overflow or saturation at the right side of the histogram, i.e. white pixels. It is also possible to use the histogram to improve the image contrast afterwards. However, you can not increase the image content; you can only improve the image display.
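The check described in box 8 can also be expressed numerically: compute the histogram of an 8-bit image and report how much of the dynamic range is actually used and whether the highlights saturate. A minimal sketch with NumPy (function name and return values are illustrative only):

import numpy as np

def histogram_report(img8bit):
    # img8bit: 2D uint8 array (one channel of the camera image)
    hist, _ = np.histogram(img8bit, bins=256, range=(0, 256))
    used = np.nonzero(hist)[0]
    dynamic_range_used = (int(used.min()), int(used.max()))   # darkest / brightest value present
    saturated_fraction = hist[255] / img8bit.size             # pixels piled up at the right edge
    return hist, dynamic_range_used, saturated_fraction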

Fig. 28: Brightfield image of living mouth epithelial cells on a slide before and after digital contrast optimisation of the archived image, and corresponding histograms. See the section "Making it look better" for further explanation.

Fig. 29: Light path for darkfield compared to brightfield set-up in transmitted and reflected illumination.

For that reason, objectives with high NA are available that are designed with an internal iris diaphragm to reduce the NA to the appropriate amount for darkfield observation.

Reflected darkfield

Within reflected microscopy applications, darkfield illumination is a very common contrast technique. It allows the visualisation of the smallest scratches and changes in height because of the circular oblique illumination (fig. 31). To achieve a reflected darkfield, the amount of special adaptation at the microscope is more sophisticated. The central light stop is located within a cube of the reflected light attachment, and the special brightfield/darkfield objective guides the illumination light in an outer ring to the specimen. Only the scattered light from the specimen runs in the normal central part of the objective as image forming light rays (fig. 29). Those objectives can also be used for normal brightfield observation or other techniques like differential interference contrast (DIC) and/or polarisation.

Creating destructive interference – phase contrast

Light that is travelling through part of a specimen and is not absorbed by amplitude objects will not produce a clearly visible image. The intensity remains the same, but the phase is changed compared to the light just travelling in the surrounding areas. This phase shift of about a quarter wavelength for a cultured cell is not visible to our eyes. Therefore, additional optical elements are needed to convert this difference into an intensity shift. These optical elements create a contrast where un-deviated and deviated light are ½ wavelength out of phase, which results in destructive interference. This means that details of the cell appear dark against a lighter background in positive phase contrast (see figures in box 8). (See also: www.olympusmicro.com/primer/techniques/phasecontrast/phaseindex.html)

For phase contrast microscopy two elements are needed. One is a ring slit insert for the condenser; the other is a special objective that contains a phase plate. Objectives for phase contrast are characterised by green lettering and an indication of the size of the phase ring like Ph1, Ph2, Ph3 or PhC, PhL, PhP. Corresponding to these objective phase rings, the appropriate inserts for the condenser have to be used.

Box 9: Alignment of a transmitted darkfield condenser

1. Engage the 10x objective and bring the specimen into focus.

2. While looking through the eyepieces and using the condenser height adjustment knob, carefully adjust the height of the condenser until a dark circular spot becomes visible (Figure A).

3. Turn the condenser centring screws to move the dark spot to the centre of the field of view (Figure B). This completes the centration.

4. Engage the desired objective. Using the condenser height adjustment knob, adjust until the darkfield spot is eliminated and a good darkfield image is obtained.

Box 10: Alignment of phase contrast

1. A microscope condenser with Köhler illumination has to be in Köhler positioning.

2. Remove one of the eyepieces and look into the empty eyepiece sleeve. When using a centring telescope at the eyepiece sleeve (highly recommended, and in some inverted microscopes already built in at the tube, e.g. Olympus U-BI90CT binocular tube), bring the bright ring (condenser ring slit) and dark ring (objective phase plate) into focus by turning the eye lens focus.

3. Use the centring screws for the condenser inserts to centre the phase contrast ring, so that the bright ring overlaps the dark ring within the field of view (see figure).

4. Repeat these steps for each phase and contrast ring set.

5. Remove the centring telescope and replace it with the eyepiece.

6. Widen the field iris diaphragm opening until the diaphragm image circumscribes the field of view.

Contrast and Microscopy Basics of light Microscopy & Imaging • 25 Fig. 30: Epithelial cells with transmitted dark- Fig. 31: Wafer at reflected darkfield contrast. field contrast.

Both elements are located at the so-called back focal planes. They are visible when an eyepiece is removed and can then be aligned under optical control (for better viewing a centring telescope should be used).

Due to the optical principles of phase contrast, it allows good contrast in transmitted light when the living specimens are unstained and thin. A specimen should not be more than 10 µm thick. For these specimens the dark contrast is valid, whereas thicker details and overlaying structures produce a bright halo ring. This halo effect can be so strong and superimposed that a clear analysis of the underlying morphology becomes critical. Nevertheless, the artificial looking image of thicker structures can be used by expert eyes to determine how many cells within an adherent cell culture are undergoing or have entered cell death pathways. Fig. 32 shows bright halo rings, visible round the relatively thick astrocyte cell bodies, whereas the fine details show dark phase contrast.

When observing cells in a chamber, e.g. those of a 24 well plate, the cells which are best for imaging may lie at the border of the well. It is possible to see the cells, but because the light path is changed by the border of the well and the adhesion effect of the medium, the phase contrast is totally misaligned at this position. We can realign the phase rings for this specific specimen position as described below, but will have to realign them again when the stage is moved to other positions. Furthermore, the use of the special Olympus PHC phase contrast inserts instead of the PH1 helps to ensure better contrast in multi well plates where meniscus problems are apparent.

Making it look better – for vision only

It is essential to select the optical contrast method appropriate to the sample investigated and depending on the features which are to be made visible. You can only build upon something which is already there. Having done the best possible job here, you can try to better visualise these features. It often makes sense to adjust the image contrast digitally. Here you can either elevate the overall image contrast or you can emphasise special structures. Whatever you want to do, you should again utilise the histogram to define exactly the steps for image improvement. Fig. 28 shows a drastic example (left side). The optical contrast is so poor that the cells' relief is hardly visible. The corresponding histogram shows that the

Fig. 32: Astrocytes with phase contrast; note that the cell bodies are surrounded by halo rings.

Fig. 33: Epithelial cells scratched with a spoon from the tongue and transferred to a cover slip. A, B: DIC illumination at different focus planes; C: oblique contrast; D: Olympus Relief Contrast imaged with lower numerical aperture.

pixels are grouped in the centre of the histogram, indicating that only a small fraction of the camera's dynamic range has been used (intensity values 165–186 out of 256). The image has only 21 different intensity values. The present features can be made visible by stretching the histogram over the whole dynamic range from 0–255. This operation does not change the information content of the image but lets us at least see what is there (right side).

This stretching comes to its limits as soon as there is a single black and a single white pixel in the image. In this case a moderate cut of up to 3 % on the left side (dark pixels) and on the right side (bright pixels) doesn't cause much information loss but increases the contrast of the image's main features in the central intensity range. To selectively accentuate features of a distinct intensity range (a so-called phase), an individual transfer function has to be defined which increases the phase's contrast against the background or other phases. The intensity and contrast image operations are usually performed after image acquisition. Often, the camera control of the image analysis software makes it possible to have the contrast-stretched image calculated and displayed in the live mode. Alternatively, the contrast can be set manually on the live histogram while the direct results are monitored in the live image. Whichever way the digital histogram is used, it is a powerful tool to maintain image fidelity as well as to create a clear picture of physical nature.

Box 11: How to set up the transmitted DIC microscope (e.g. Olympus type)

– Use proper Köhler positioning of the microscope with a specimen in focus (best with a 10x objective).
– Insert the polariser (condenser side) and analyser (objective side) into the light path.
– Remove the specimen from the light path.
– Turn the polariser until the image gets to its darkest view (Cross-Nicol position).
– Insert the DIC prism at the condenser corresponding to the objective and slider type in use; immediately the image is bright again.
– Insert the DIC slider at the objective side. By rotating the fine adjustment knob at this slider the amount of DIC contrast can be varied.
– The best contrast is achieved when the background shows a grey colour and the specimen is clearly pseudo three dimensional.

Another way to achieve this setting is the following:

– Use proper Köhler positioning of the microscope with a specimen in focus (best with a 10x objective).
– Insert the polariser (condenser side) and analyser (objective side) and the prism slider into the light path.
– Remove the specimen from the light path.
– Remove one eyepiece and, if available, look through a centring telescope; rotate the slider adjustment knob until a black interference line is visible (box 11 figure).
– Rotate the polariser until the black line becomes darkest.
– Insert the DIC prism at the condenser and insert the eyepiece for normal observation.
– Fine adjustment of the DIC slider can be done for selecting the amount of interference contrast.
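The histogram stretching with a moderate cut on both sides, as described above, is simple to write down; a sketch with NumPy follows, where the 3 % clip value is taken from the text and everything else (names, rescaling convention) is generic:

import numpy as np

def stretch_contrast(img8bit, clip_percent=3.0):
    # Ignore the darkest/brightest clip_percent of pixels, then stretch to 0-255.
    low, high = np.percentile(img8bit, [clip_percent, 100.0 - clip_percent])
    stretched = (img8bit.astype(float) - low) / max(high - low, 1e-6) * 255.0
    return np.clip(stretched, 0, 255).astype(np.uint8)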

Light alone is not enough – Differential Interference Contrast (DIC)

If you have ever been skiing on a foggy day, you will know that diffuse light strongly reduces the ability to see differences in the height of the snow – and that can cause major problems. An equally distributed light source does not produce clear shadows, and this causes reduced visibility of three-dimensional structures. Our human vision is geared to see three dimensions and is well trained to interpret structures if they are illuminated more or less from one point. The resulting dark and bright areas at the surface of a structure allow us to easily recognise and identify them. Using our experience, we get information on height and distance. Therefore, a contrast method that displays differences in a structure as a pattern of bright and dark areas is something that looks very familiar to us and seems to be easy to interpret. Structures within a specimen can be identified, and even though they are only two dimensionally displayed they look three dimensional. Real three dimensional images can only be observed at a stereo microscope, where two light paths of two combined microscopes are used, sending the image to our eyes at a slightly different angle. But this will be a topic to be described later.

Simple from one side

The simplest way to achieve a contrast method that results in a relief-like image is by using oblique illumination; however, the theory of oblique illumination is more complex. Details can be found at www.olympusmicro.com/primer/techniques/oblique/obliquehome.html. Special condensers are available to handle this technique on upright microscopes with a lot of comfort. Oblique illumination is also often used on stereo microscopes to enhance the contrast of a 3D surface simply by illuminating the specimen from one side. For reflected light, this can be done with flexible cold light fibres, or with LED ring light systems that offer reproducible illumination of segments that can even be rotated.

Fig. 34: Simple principle of Nomarski DIC microscopy.

Contrast and Microscopy Basics of light Microscopy & Imaging • 27 Fig. 36: Alignment of Olympus Relief Contrast or Hoffmann Modulation condenser insert, grey Fig. 35: Specimens imaged with different shearing values. Left side shows thin NG108 cells imaged and dark areas are located within the objective with the high contrast prism set, the middle image shows a thick specimen imaged with the and areas A and B are located as an insert within high resolution prism set and the right image shows PtK2 cells imaged with the general prism set. the condenser and can be aligned accordingly.

In transmitted microscopy the oblique condenser (e.g. the Olympus oblique condenser WI-OBCD) has an adjustable slit that can be rotated. After Köhler alignment of the microscope this slit is inserted in the light path and results in illumination from one side, so that specimens that vary in thickness and density are contrasted. Rotating the slit enables us to highlight structures from every side. The contrast itself is produced by the complete thickness of the specimen, and the resolution of the image is limited due to the oblique illumination (fig. 33c).

To overcome the limitations of oblique contrast, Nomarski Differential Interference Contrast (DIC) is commonly used for high resolving images. The benefit of this method is that the relief-like image is only contrasted at the focus area (depth of field). The user can optically section a thicker specimen by changing the focus level. As shown in fig. 33, the contrasted focus layer can be restricted to the layer of the somata (fig. 33a) or the superficial part of these cells (fig. 33b). In addition, using infrared light (mostly around 700 or 900 nm) instead of white light, this technique allows a very deep look of more than 100 µm into thick sections, which is often used in neurobiological research. Nomarski DIC creates an amplified contrast of phase differences which occur when light passes through material with different refractive indices. Detailed information about the theory and use of the DIC method can be found at www.olympusmicro.com/primer/techniques/dic/dichome.html. Here we will concentrate on the basic introduction and the practical use.

Fig. 37: Transmitted polarising microscopy; variation of melted chemicals viewed with crossed polarisers. Images courtesy of Norbert Junker, Olympus Europa GmbH, Hamburg, Germany.

To achieve transmitted Nomarski DIC images, four optical elements are needed: a polariser, two prisms and an analyser. For the reflected DIC setup only one DIC prism (the slider) is required. Let us have a closer look at transmitted DIC. The wave vibration direction of light is unified by a polariser located between the light source and condenser

(fig. 34). In the condenser, a special insert – a Wollaston prism (matching the magnification of the objective) – divides every light ray into two, called the ordinary and the extraordinary ray, which vibrate at a 90 degree angle to each other. Both light rays then travel a small distance apart, the so-called shearing distance. At the specimen, the ray that is passing through e.g. a cell part is delayed compared to the one passing through the surrounding medium. This results in a phase shift of both rays, which are recombined with the help of a second prism located at the objective revolver. Only those combined rays with a phase shift interfere in a way that they contain vibration planes that pass through the analyser. To create an easily observable pseudo 3D image, a prism can be moved in and out to enhance the phase shift between ordinary and extraordinary ray. At a mid position the background should be dark and the specimens should be visible as if illuminated from the equivalent North-West and South-West simultaneously. When the slider is screwed more to one direction, the typical three dimensional view of the specimen will come up, either having the sun in a North-West or East-South location. This will only work if no depolarising material (e.g. plastic) is used within the light path. Therefore, a real high resolving DIC method can not be used with plastic Petri dishes or multi well plates. In those cases, other methods like Hoffman Modulation Contrast or the analogous Olympus Relief Contrast are commonly used.

The DIC method does not require special objectives, but the prism has to match the objective used. Suitable prisms are available for most fluorite and apochromat objectives. DIC contrast can be offered with shearing values for every optimisation: a high contrast set-up that is best for very thin specimens (higher shearing value, fig. 35 left), a standard prism slider for general use (fig. 35 right), and a high resolution slider for thick specimens (lower shearing value, fig. 35 mid) like C. elegans or zebra fish embryos.

Fig. 38: Brightfield image of a wafer. Left side: the ring shaped background pattern is clearly visible. Right side: the same image with real-time shading correction.

Box 12: How to describe image processing operations mathematically

From the mathematical point of view, an image processing function is a transfer function which is applied to each pixel of the original image individually and generates a new pixel value (intensity/colour) in the resulting image. The transfer functions can be roughly divided into point operations, local operations and global operations.

– Point operations: The resulting pixel is only dependent on the intensity or colour of the original pixel; it is independent of the pixel position (x/y) and the pixel intensities/colours of the neighbouring pixels. Point operations can be monitored, defined and executed via the histogram. All intensity and contrast operations are point operations. To be exact, point operations are not filters in the more specific sense of the word.

– Local operations: These look not only at the pixel itself but also at its neighbouring pixels to calculate the new pixel value (intensity/colour). For example, convolution filters are local operations. Many well known noise reduction, sharpening and edge enhancing filters are convolution filters. Here, the filter's transfer function can be described as a matrix consisting of whole positive or negative numbers, known as weight factors. The original pixel is situated at the centre of the matrix. The matrix is applied to the original pixel and its neighbours to calculate the resulting pixel intensity; see box 12 – Figure 1a as illustration. The 3x3 matrix has nine weight factors W1 – W9. Each weight factor is multiplied with the respective intensities I1 – I9 of the original image to calculate the intensity of the central pixel I5 after filtering; see a concrete numerical example in box 12 – Figure 1b. In addition to this calculation, the resulting intensity value can be divided by a normalisation factor and added to an offset value in order to ensure that the filter operation does not alter the average image brightness. The matrix weight factors determine the function and the strength of the filter. For example, a 3x3 matrix with all weight factors being one makes a simple average filter used for noise reduction. Sharpen filters consist of matrices having a positive central weight factor surrounded by negative weight factors. The unsymmetrical matrix in fig. 43b creates a pseudo topographical contrast in the resulting image.

– Global operations: All pixels of the original image, with regard to their intensity/colour and position (x/y), are used to calculate the resulting pixel. Lowpass and bandpass noise reduction filters are examples. In general, all Fourier Transform filters are global operations.

Different names but the same principle: Hoffman Modulation Contrast – Olympus Relief Contrast

Whenever high resolution is needed and plastic dishes are used, the Hoffman Modulation or Olympus Relief Contrast methods are common techniques for inverted transmitted light microscopes, e.g. for in vitro techniques (fig. 33d). In principle, they combine the oblique illumination of a specimen with the refraction differences at various parts of the specimen shape, which are used for contrast enhancement. This allows semi-transparent specimen structures to be analysed in a manner difficult to achieve using brightfield microscopy. (See also www.olympusmicro.com/primer/techniques/hoffmanindex.html.) For this contrast technique a special condenser and objective are needed.

In advanced systems the condenser is equipped with a polariser and a condenser slit plate (according to the objective in use; fig. 36, areas A and B). The achromatic or fluorite objective contains a modulator insert at one side of the back focal plane (fig. 36, dark and grey areas). The slit plate at the condenser has to be aligned according to the modulator within the objective (fig. 36). This can be done by visualising these elements via a focused centring telescope (remove one eyepiece and insert the telescope). The resulting contrast can be varied by rotating the polariser at the condenser.

The interpretation of DIC and relief contrast images is not intuitive. These techniques contrast different refractive indices within a specimen into a pseudo-three-dimensional image. This means that specimen details which look like holes or hills on the surface of a structure (see fig. 35, left and right side) may simply be areas of different refractive index, but not necessarily different in height.

Fig. 39: Fluorescence image of stem cells. The detail zoom better reveals the noise level.

Get more than expected – polarisation

DIC equipment on a microscope allows the user to employ a totally different microscopic method: simple polarisation. In this case no prisms are used, and only the first settings of aligning polariser and analyser in crossed positioning (see box 11) are needed. This setting will generate a dark image, because the vibration direction of light that is travelling through the specimen is exactly the vibration direction that is totally blocked by the analyser. However, if the specimen contains material that is able to turn the light, some light can pass the analyser and is observed as a bright detail on a dark background. Examples of such anisotropic materials are crystalline Vitamin C, emulsions of butter, skeletal muscles, urate crystals (gout inspection), amyloid, rocks and minerals, as well as metal surfaces or the DIC prism itself. The list is very long, and often the combination of simple polarising microscopy and DIC can offer a more detailed analysis of the specimen, simply by using equipment that is already available. Beside all the analytical advantages, polarising microscopy also offers images of high aesthetical value (fig. 37).

Revealing structures with imaging filter techniques

We are now at the point where the suitable microscope contrast method has been selected and the optimum camera settings have been made. So the field has been tilled, and digital filter and image processing techniques come into play. This is explained in more detail below.

Along a thin line

Image processing can turn the photograph of an ordinary face into an unbelievable beauty or create images that have no counterpart in the real world. Knowing this, digital image processing is often compared with manipulation of results. It might be true for many microscopists that they enjoy the amazing beauty of hidden nature as mirrored in the images they acquire, but the focus of the daily work lies upon the actual information within the image. Yet this information can be superimposed by artefacts which might be created by specimen preparation, illumination conditions, camera settings or display parameters. At this point, filter techniques come into their own right.

Depending on the reason for imaging, a wide variety of processing steps can be applied to reveal new information or to enhance image clarity for the details that are under observation. What kind of artefacts may appear in a digital image? Imperfect digital images may look weak in contrast, unevenly illuminated, wrongly coloured, diffuse, blurred, noisy, dirty etc. These distortions might make the image look unprofessional. But what is more, they might make any automatic measurement routine based on threshold values or edge detection difficult and sometimes even impossible to apply.

Many of these artefacts can be reduced by using a suitable digital filtering technique. Often several different filters exist to improve the image quality. It is not always easy to find the best method and parameter settings to suppress the image defects without disturbing the “real” image information. This is the highest standard which the optimum filter should meet: removing the image artefacts while keeping the specimen structures in the image. This goal cannot always be accomplished. When doing image processing, there is often a thin line between what needs to be done and what would be good to accomplish. In addition, there is a thin line between scientific virtue and creative composition!

Now or later – both are possible

Digital image processing is usually applied after image acquisition and image storage. But the latest image processing systems support “Live Digital Image Processing”, which means that numerous real-time functions can be executed during image acquisition. These include the online histogram, which is used to monitor the image brightness and contrast. Additionally, the histogram function itself can be used to have the image contrast improved automatically, or to set it by hand in the live mode. (For how to use the histogram to improve image contrast, see box 8.) Another real-time function is the white balance, which corrects colour shifts at the moment of acquisition. Non-uniform specimen illumination can also be corrected immediately in the live image using the online shading correction. This is described below. The operations of contrast enhancement, white balance and shading correction can be applied during or after image acquisition. They are point operations, mathematically speaking (refer to box 12). This makes them suitable for reducing image artefacts without distorting the image content itself.
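The automatic contrast improvement based on the histogram that is mentioned above can be as simple as a percentile stretch, which maps the darkest and brightest relevant grey values onto the full display range. The following Python sketch is only a generic illustration of this idea (the function name and percentile choices are ours), not the algorithm of any particular acquisition software:

```python
import numpy as np

def stretch_contrast(img, low_percent=1.0, high_percent=99.0):
    """Linear contrast stretch driven by the image histogram: the chosen
    low/high percentiles are mapped onto the full 8-bit display range."""
    lo, hi = np.percentile(img, (low_percent, high_percent))
    stretched = (img.astype(np.float64) - lo) / max(hi - lo, 1e-6)
    return (np.clip(stretched, 0.0, 1.0) * 255).astype(np.uint8)
```

Clipping a small percentage at both ends makes the stretch robust against isolated hot or dead pixels.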
Where there is light, there is also shade

One major source of distortion in a light microscopic image is brightness fluctuation, which may arise from off-axis illumination conditions or when the microscope is not aligned optimally. Uneven illumination artefacts may also appear under regular observation conditions using low objective magnification and/or in combination with low-magnification camera adapters. The shading artefacts are then manifested as a ring-shaped shade structure which becomes even darker near the edges of the image (fig. 38, left side). The minor shading within the background needs to be corrected. The operation applied here is called background correction, flat-field correction or shading correction. (A simple tool for background correction can be downloaded at www.mic-d.com.)

The best method is to record a background image and a dark image in addition to the “raw” specimen image. The background illumination profile is captured in the background image, whereas the noise level of the camera system is apparent in the dark image. The advantage here is that these images are gathered independently of the specimen image. The background image is usually obtained by removing the specimen and leaving an area of mounting medium and cover slip in place. If this is not possible, the same result can also be achieved by leaving the specimen in place but defocusing the microscope. It is important that the background image does not show any debris. The dark image is acquired by closing the camera shutter. It should use the same exposure time as the specimen image.

The shading correction algorithm first subtracts the dark image (d) from the specimen image (s) as well as from the background image (b). Then it divides the corrected specimen image by the corrected background image: (s–d)/(b–d). This procedure can also be performed automatically with the live image. Here, the background and dark images have to be acquired only once for each microscope objective, using a standard sample and standard acquisition parameters. This is how the image in fig. 38 (right side) was taken. If the background and dark images are not available, the shading correction can also be derived from the specimen image itself. For example, a very strong smoothing filter averages out all structures and reveals the background. Or a multi-dimensional surface function can be applied to fit the background illumination profile.

Fig. 40: The same image as in fig. 39 after application of the Sigma noise reduction filter. The detail zoom makes the improvement better visible.
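The (s–d)/(b–d) recipe above translates directly into a few lines of array arithmetic. The sketch below is a minimal NumPy illustration; it assumes the three frames come from an integer camera (e.g. 8 or 16 bit) and have identical size, and the function name and rescaling convention are our own choices rather than those of any specific software package.

```python
import numpy as np

def shading_correction(specimen, background, dark):
    """Flat-field correction: (s - d) / (b - d), rescaled so the result
    stays within the displayable range of the original camera image."""
    s = specimen.astype(np.float64)
    b = background.astype(np.float64)
    d = dark.astype(np.float64)
    corrected = (s - d) / np.maximum(b - d, 1e-6)   # avoid division by zero
    # Multiplying by the mean corrected background roughly preserves the
    # original brightness level.
    corrected *= np.mean(b - d)
    limit = np.iinfo(specimen.dtype).max            # assumes an integer dtype
    return np.clip(corrected, 0, limit).astype(specimen.dtype)
```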

Just too much noise

Random noise can be an annoying phenomenon which is encountered when the specimen signal is low and/or has been enhanced with a high gain factor. An example from fluorescence microscopy is shown in fig. 39. In regular brightfield light microscopy images random noise is usually not visible, but it is often unsheathed when a sharpen filter is applied to the image (see fig. 42, first and second image, as an example). There are different smoothing filters which can be used to reduce the noise. Smoothing filters are also applied to suppress artefacts deriving from small structures like dirt, dust, debris or scratches. Yet the challenge is to find a filter which eliminates the noise and the artefacts without smoothing the object edges too much.

Fig. 41 shows an artificially created image having only one real structure: a simple rectangle of light shading upon a dark background. This structure is superimposed by two different kinds of distortion: strong statistical noise interference on the one hand, and so-called hot pixels or shot noise on the other. The shot noise consists of individual bright and dark pixels and can come from a defective camera. Such pixels can also mirror artefacts from dust particles, dirt, debris or small scratches.

The noise reduction filters applied to the image are local or neighbourhood operations, i.e. the neighbouring pixels are taken into account to calculate the new pixel value in the resulting image (see also box 12). The Mean and NxN filters just average everything and thus also broaden the structure of the rectangle (fig. 41). The more powerful the smoothing effect, the more noticeably the morphology of the object within the image will be altered. Shot noise will not be removed. Whereas the Mean filter applies fixed settings, the extent to which the NxN filter averages depends on the parameters set. The Sigma or Gaussian blur filter is a special averaging filter which does not affect any pixels deviating greatly from their surrounding area (fig. 41). So “real” structures and edges like the rectangle are not touched, whereas the random noise disappears. The Sigma filter is the method of choice for selectively reducing statistical noise without greatly broadening the specimen structures (see also the example in fig. 40). The Median and Rank filters largely eliminate the shot noise (fig. 41). They are especially suited to suppressing dust and dirt and, in general, all small structures which stand out from the background.

Comparing the effects the different smoothing filters have on the artificial sample image (fig. 41), the image's two different distortions are best eliminated by applying two filters successively: first the Sigma filter against the statistical noise, and then the Rank filter against the shot noise. This leads to the somewhat philosophical rule that it is best not to tackle everything at once, but to cut the problem into pieces and find the appropriate measure to deal with each piece separately. NxN, Sigma and Rank filters have user-definable controls which make it possible to adjust the smoothing effect optimally to the image. There is another option to reduce statistical noise, usually with fixed parameters, which gives quite good results: the lowpass filter. This filter is a global operation (refer to box 12) which filters out high-frequency and periodic distortions. Yet here the edges become somewhat broadened. An even better global operation is the bandpass filter, which reduces the noise and preserves the steepness of edges.

Fig. 41: Comparison of different smoothing filters. See text for further explanation.
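For readers who want to reproduce the comparison of fig. 41, the sketch below builds a comparable synthetic test image (a bright rectangle plus statistical noise and hot pixels) and applies standard neighbourhood filters from SciPy. The edge-preserving Sigma filter discussed in the text is not part of SciPy, so a Gaussian blur and a median filter stand in here for the Sigma and Rank filters:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

# Synthetic test image in the spirit of fig. 41.
img = np.zeros((128, 128))
img[40:90, 30:100] = 100.0                     # the "real" structure
img += rng.normal(0.0, 15.0, img.shape)        # statistical noise
img[rng.random(img.shape) < 0.01] = 255.0      # hot pixels / shot noise

mean3   = ndimage.uniform_filter(img, size=3)  # Mean / NxN-type averaging
gauss   = ndimage.gaussian_filter(img, sigma=1.0)
median3 = ndimage.median_filter(img, size=3)   # removes the hot pixels

# "Cut the problem into pieces": smooth the statistical noise first,
# then remove the remaining hot pixels with a rank-type filter.
combined = ndimage.median_filter(ndimage.gaussian_filter(img, 1.0), size=3)
```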
Revealing the details

Another class of filters, seen from the application point of view, are the sharpen filters. Sharpen filters can be used to enhance fine image details; after processing an image with a sharpen filter, the image appears clearer. But sharpen filters have to be applied somewhat carefully because they can create artefacts themselves if overdone. The established sharpen filters are local neighbourhood operations like the convolution filters (see box 12); the newer filters use so-called unsharp mask algorithms. Please have a look at fig. 42: sharpening the first, original image gives the second image. Unfortunately, this enhances not only the structure but also the noise present in the original image, so the sharpen operation makes the noise visible. Additionally, the filter parameters have been set too aggressively, so that the “lamellae” structures of the diatom look artificial. This context has to be kept in mind when applying a sharpen filter: before a sharpen filter is applied to an image, a noise reduction filter must often be applied first.

Fig. 42: Brightfield image of a diatom. (a): original image. (b): after application of a strong sharpen filter. This increases the noise. (c): after application of the Sigma noise reduction filter and a moderate sharpen filter. (d): after application of the Sigma noise reduction filter and the DCE sharpen filter.

Any sharpen filter with user controls should be applied conservatively; here, less is often more. The third image in fig. 42 shows the result after noise removal via a Sigma filter and subsequent moderate sharpening.

The DCE filter is a specially designed sharpen filter and is part of the Olympus image processing software solutions. The letters DCE stand for Differential Contrast Enhancement. The DCE filter enhances weak differences in contrast: it selectively takes the lesser intensity modulations between neighbouring pixels and enhances them, while greater intensity modulations remain as they are. So the DCE filter renders image structures visible which are barely distinguishable in the original image. The more fine structures the image contains, the better the filter works. Resulting images are more detailed and appear more focused. See fig. 42, last image, and both images in fig. 43 as examples.

Putting everything together

To integrate what has been said above, there are some image processing operations which are meaningful or even necessary to apply to an image, either in real time or after acquisition. These operations can reduce image distortions, prepare the image for automatic measurements or simply make the image look better. The following steps suggest a possible strategy for digital image operations when acquiring an image (see also: www.olympusmicro.com/primer/digitalimaging/imageprocessingsteps.html); the code sketch below illustrates the smoothing and sharpening steps.
1. Shading correction equalises uneven background illumination.
2. Contrast enhancement optimises brightness and contrast.
3. White balance adjusts colour shifts.
4. Smoothing reduces random noise and suppresses shot noise and small artefacts like dust and dirt.
5. Sharpening enhances fine edge detail.
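The unsharp mask principle named above, and the denoise-then-sharpen order of fig. 42c, can be sketched in a few lines. The DCE filter itself is part of the Olympus software and is not reproduced here; this is only the generic idea, with parameter names of our own choosing:

```python
import numpy as np
from scipy import ndimage

def unsharp_mask(img, radius=2.0, amount=0.8):
    """Generic unsharp masking: subtract a blurred copy to isolate the
    fine detail, then add a fraction of that detail back to the image."""
    img = img.astype(np.float64)
    blurred = ndimage.gaussian_filter(img, sigma=radius)
    return img + amount * (img - blurred)       # may exceed the input range

# Workflow of fig. 42c: reduce the noise first, then sharpen moderately.
def denoise_then_sharpen(noisy):
    denoised = ndimage.median_filter(noisy, size=3)
    return unsharp_mask(denoised, radius=1.5, amount=0.5)
```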

Here we have covered a few of the wide range of possible filter operations. There are many more which go beyond scientific applications and transcend the pure picturing of visible reality towards creative, funny, intelligent and beautiful compositions.

Fig. 43: Epithelial cells viewed with phase contrast (left side) and transmitted darkfield contrast (right side). The original images are acquired with optimised microscope settings; these have already been shown in box 8 and fig. 30. The lower right parts of the images are each filtered with the DCE filter.

Shining Fluorescence Details

34 • Basics of light Microscopy & Imaging Shining Fluorescence Details A first introduction specific questions regarding life science or materials science specimens and to Most of the newly developed microscopic visualise the result in a specific colour. techniques make use of fluorescence. For example, to identify the distribution Fluorescence microscopy is more than of a specific protein within a tissue, a ’just making colourful images in one, fluorochrome can be used to mark the Shining two, three or even more colours‘, it is an protein via an (immunohisto- enormously powerful tool for investiga- chemistry). tions in the biological field. Fluorescence Histological staining procedures for techniques place numerous benefits in transmission light microscopy have a long Fluorescence Details the hands of researchers wishing to ex- history in microscopy. One essential ad- ploit the upper limits of sensitivity and vantage of fluorescence microscopy, how- resolution in microscopy. Beyond the sci- ever, is the presence of fluorescent mole- entific benefits, just studying the fluores- cules themselves. Even if a structure is cence images can sometimes offer a new too small to be resolved by a light micro- insight into a reality which is usually hid- scope, the emission light remains visible. den from the view of the world. Fluorescent molecules act like light sources that are located within specific Fundamentals – a development both areas of a specimen, indicating their lo- remarkable and ongoing cation with light of a specific colour. These indicators require energy to emit Within the last few decades numerous light and this is given to the fluorochrome new techniques such as confocal, decon- by the excitation light, provided by the volution, ratio-imaging, total internal re- microscope light source. A specific range flection and applications such as the use of wavelengths is needed to excite a spe- of fluorescent proteins (e.g. GFP) have cific fluorochrome. For example, a range initiated a real renaissance in the micro- of blue wavelengths around 480 nm can scopy field. All of these techniques make excite the FITC fluorochrome. This in- use of fluorescence, a phenomenon first volves using two different light beams observed by Sir George Gabriel Stokes in and having to separate them. On the one 1852 and physically described by Alex- hand, we need to direct the light of the ander Jablonski in 1935 (see box 13). microscope light source onto the speci- Compared with today, the number of men and on the other hand we have to

Fig. 44a: Spectral diagram of a typical fluorochrome that can be excited using blue light and then emits green fluorescence. This example shows that both curves lie close to each other (small Stokes shift) and also show an area of spectral overlap.

Fig. 44 b: Cracks within the resin of an electrical circuit board.

widely used fluorochromes was restricted observe the light that is originating from to just a few in the 1990‘s. For example, the fluorochromes. This separation is nowadays the fluorochrome FITC filter possible due to the “Stokes shift”, which set for fluorescence microscopy can also describes the fact that the wavelength of be used for a wide range of different fluor- fluorescent light (emission) is always ochromes with green emission spectra. longer than that of the excitation. Using a blue excitation light will thus result in a Why use fluorescence? green emission for the FITC fluoro- chrome. Every fluorochrome has its own Using fluorescence can be compared to excitation and emission spectra. The mi- the situation where a teacher asks if the croscope must be perfectly equipped to students have done their homework. The visualise this fluorescence accordingly. rapidly changing facial colours of the “guilty” students provide conclusive “re- Fluorescent molecules sults”. However, fluorescence techniques are not really for answering questions There are two options for using fluores- such as the above. They help to address cent microscopy, depending on what is

Shining Fluorescence Details Basics of light Microscopy & Imaging • 35 Fig. 46: Characteristics of an Olympus HQ filter set optimised for GFP. By using up to 100 layers of an ion deposition tech- nique with new sub- strates and coating materials, filters can be created with high trans- mission and exception- ally sharp cut-off (toler- ances < 2 nm). Autofluorescence is significantly reduced and the filters are equipped with a stray light noise destructor to enhance Fig. 45: Light path on a microscope equipped for fluorescence. the signal-to-noise ratio. being investigated: either the specimen sold together with the specific target- that change their fluorescent properties itself already contains molecules that finding molecule (e.g. a goat anti-mouse when bound to different amounts of mol- show fluorescence; or specific fluoro- IgG antibody Cy5 labelled). Quantum dots ecules such as calcium (e.g. Fura-2). This chromes have to be added to the speci- are also partial members of this group means that these fluorochromes are used men. is widely found in but different in structure and theory. directly and do not necessarily require a materials such as sections or elec- They are nanometre-sized crystals of pu- transportation system such as an anti- trical circuits, for example. The resin on rified semiconductors and exhibit long- body. circuits is fluorescent and can easily be term photo stability, as well as bright The third group contains fluorescent inspected under blue excitation (fig. 44b). emission. The main difference feature- proteins produced by organisms them- The green emission of the resin enables wise is their capacity to be excited by selves such as GFP. This makes it possi- the detection of the tiniest cracks which wavelengths up to the blue range and ble to set up experiments in an entirely may influence material quality. having different emission colours de- different way. It is most often used for Fluorochromes themselves can be di- pending on their size. Due to their flexi- live cell imaging or developmental stud- vided into at least three groups. The first ble capabilities they can also be used for ies and molecular . All fluoro- are fluorochromes that require other direct staining of cells (e.g. cell viability). chromes show distinct spectral proper- molecules, such as or lectins, The second group contains fluoro- ties (fig. 44a) and can often be combined to bind to specific targets. This rapidly chromes that have inherent binding ca- for a multicolour specimen analysis. growing group of fluorochromes includes pacities, such as the DAPI nucleic acid longstanding ones such as FITC and stain or the DiI anterograde neuron stain. Requirements for a fluorescence TRITC. Most of these fluorochromes are This group also contains fluorochromes ­microscopy system

Box 13: What is fluorescence exactly? The Light Source To excite the fluorescence of a fluoro- Fluorescence activity can be schematically illustrated using the familiar Jablonski diagram, where absorbed chrome, an intense light source is needed light energy excites to a higher energetic level in the fluorochrome. These lose a proportion of their that provides the necessary excitation energy by vibration and rotation and the rest is then given off as fluorescent light as they return to their orig- wavelengths to excite the particular inal state after only about 10 ns. fluorochrome. In the first chapter we de- scribed the most frequently used light sources for light microscopy and their alignment. A correctly aligned burner plays an essential role in creating good fluorescent images. If the burner is not correctly aligned, the signal from the fluorochrome may be excited in a very weak way and the field of view will not be homogeneously illuminated, or can show a bad signal to noise ratio. Due to the very different specimens and applications that can be analysed us- ing fluorescent techniques, there is no one-size-fits-all strategy. All fluoro- chromes are subject to the process of photo-bleaching, which is the chemical destruction which takes place during ex- citation. Living cells, too, may be dam- aged by the intense light. This makes it of supreme importance to restrict the exci- tation brightness and duration to the ex-

36 • Basics of light Microscopy & Imaging Shining Fluorescence Details act amount needed. The amount of light The filter set could thus be chosen as fol- can be efficiently modified with neutral lows: an exciter with BP460-490 band- Box 14: How are filters described? density filters or a motorised attenuator. path characteristics, a dichromatic mir- Filters for fluorescence microscopy are described When the light is not needed for excita- ror with DM505 characteristics and an using letters and numbers: e.g. BA 460-480. In tion, the shutter is closed. These features emission filter with LP510 long path this case, BA stands for bandpass filter and the can be predefined in motorised micro- characteristics. This will result in a bright numbers indicate the 50 % cut on and the 50 % scopes and help to optimise experi- green image of all green fluorescent mol- cut off (fig. 46 and see boxfigure below). For a ments. ecules. So far, so good. The vital cells are longpass filter, just the number for the 50 % cut stained. To verify that non-labelled cells on is indicated. Some companies use a different The Filter Sets are in fact dead, propidumiodide (PI) dye description for the same overall filter characteris- In addition to the special light path within may be used. This dye cannot pass tics (e.g. 470/20), 470 indicating the central wave- a (fig. 45), an- through intact cell membranes. The DNA length and 20 indicating the range of the full other necessity is having a filter which of dead cells only will be labelled and ap- width at half maximum (FWHK). The correct trans- only permits the required range of exci- pear red. This means it can be used along mission characteristics of the filter can only be tation wavelengths to pass through. This with the FDA. When doing so, however, provided using a transmission/wavelength dia- is achieved using an exciter filter with the excitation of the FDA with the filter gram (see boxfigure below). what are referred to as bandpass filter mentioned will cause some problems. PI characteristics (box 14). After restricting will already be excited by the blue light the light to the particular colour that is and the red emission is also visible. This needed to excite the fluorochrome, the is caused by the emission filter because light is directed to the specimen via a di- in this set up, all wavelengths above 510 chromatic mirror (fig. 45). nm are allowed to pass through. Both As indicated by its name, the dichro- are thus excited and visible. A defi- matic mirror treats different colours dif- nite separation of both signals is not pos- ferently. It reflects light below a given sible and as we will see later on, this can wavelength and is able to let longer cause problems during imaging. wavelengths pass through. The excita- To separately identify both signals tion light travels through the objective to from the cell culture, the emission filter the specimen, acting like a condenser. required for FDA is a bandpass filter This is where the fluorescence phenom- enon takes place. Excited by the light, the fluorochromes emit the fluorescence light of longer wavelengths. This is cap- tured by the objective, moving on to the dichromatic mirror, now letting the longer wavelengths pass through. The last step of filtering is undertaken by the emission filter (also called a barrier fil- ter, see fig. 46). This filter restricts the light colour to best fit the fluorochrome emission and the question being investi- gated. 
It ensures that no unwanted wave- lengths are observed and analysed. The emission filter can be designed as a band- pass filter (precisely restricted to one spectrum) or as a longpass filter (brighter in effect, but less optimal due to a reduc- tion in restricted wavelengths). To help find the best filter combination for the fluorochromes in use and the analysis in mind, a variety of web pages is available (e.g. www.olympusmicro.com/primer/ java/fluorescence/matchingfilters/index. html). A straightforward example (be- low) will demonstrate how the combina- tion of filters may differ depending on the context. Fig. 47: Upper row: Images of a double-labelled specimen (DAPI for DNA staining and Alexa 488- labelled antibody for F-Actin staining). The first image shows the detection of the fluorescence using a filter set with the appropriate excitation filter for DAPI and with a long pass emission filter. Analysed Using the correct filter set – an example with a monochrome camera, even the Alexa-488-labelled structures are visible and prevent effective If you wish to make a vital analysis of a analysis. The blue and the green image show the same specimen with optimised emission filters, cell culture you may choose a Fluores- allowing the separation of each signal. cein-diacetate (FDA) to stain vital cells. The lower row shows an example of an image area analysed with an achromat (left), a fluorite (centre) and an apochromat objective (right) at the same magnification. This fluorochrome is excited by blue light Note that the signal intensity and colour transmission increases from achromat to apochromat due to and will have a green emission. The vital the increasing NA and quality of the objectives, whereas the background intensity in the upper left cells will be the ones appearing green. corner remains unchanged.
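Box 14 describes the two common ways of naming a bandpass filter: by its 50 % cut-on and cut-off wavelengths (e.g. BA 460-480) or by centre wavelength and full width at half maximum (e.g. 470/20). Converting between the two is simple arithmetic, as the small sketch below illustrates (the function names are ours):

```python
def bandpass_to_centre_fwhm(cut_on_nm, cut_off_nm):
    """Convert 50 % cut-on/cut-off values into centre/FWHM notation."""
    return (cut_on_nm + cut_off_nm) / 2.0, cut_off_nm - cut_on_nm

def centre_fwhm_to_bandpass(centre_nm, fwhm_nm):
    """The reverse conversion: centre/FWHM back to cut-on/cut-off."""
    return centre_nm - fwhm_nm / 2.0, centre_nm + fwhm_nm / 2.0

print(bandpass_to_centre_fwhm(460, 480))   # (470.0, 20.0), as in box 14
print(centre_fwhm_to_bandpass(470, 20))    # (460.0, 480.0)
```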

Shining Fluorescence Details Basics of light Microscopy & Imaging • 37 ised and recorded. Numerous properties Box 15: How to measure FRET. are required to use fluorescence micros- Ratio measurement copy effectively. These include: high res- olution, extreme sensitivity, cooling, vari- An epifluorescence microscope able exposure times and an external equipped with highly specific fluo- trigger function. Generally no single de- rescence filter sets, a digital camera tector will meet all these requirements in system and image analysis software fluorescence microscopy. Consequently, is required. The ratio between the in- the fluorescence microscopist frequently tensity of the sensitised acceptor has to compromise. However, the cam- emission (= FRET signal) and the in- eras used in fluorescence microscopy tensity of the donor emission is cal- should at the very least offer high signal culated (see figure). The results have sensitivity, low noise and the ability to to be corrected for spectral cross-talk quantify intensity of intensity distribu- by spectral unmixing software. tion. Acceptor photo-bleaching. Colour cameras are less sensitive than This effect occurs when the acceptor their monochrome counterparts because is irreversibly destroyed by extensive of the additional beam-splitting and illumination from excitation light. Af- wavelength selection components. There- ter bleaching of the acceptor fluoro- fore, monochrome cameras are prefera- phore, FRET is stopped. Excitation of the donor will result in donor fluorescence emission. After the acceptor ble. They image the fluorescence inten- photo-bleaching, the donor shows more intensity than before. sity of each fluorochrome separately and can handle the respective images later on Donor-bleaching. within a specific colour space using the The sample is irradiated with the specific excitation wavelength of the donor. Images of the donor are con- appropriate software. The resulting mul- tinuously acquired and the intensity decay is quantified. FRET decreases the mean lifetime of the donor and ticolour images can be displayed, printed protects the donor from photo damage. As a result, when using FRET, the rate of photo-bleaching is lower; out and analysed or further processed. without FRET, the rate of photo-bleaching is higher. Every channel of colour represents the result of one fluorochrome. This result (e.g. BP510-550). This filter will only al- sures a perfect signal-to-noise ratio (i.e. can be obtained if the fluorescence filters low the green emission of the FDA to a strong signal with low background in- are chosen correctly. Returning to our ex- pass through and the emission of the PI tensity (fig. 47 lower row)). Background ample of the FDA and PI double labelling: will be blocked. A second filter set can noise can also be introduced by the spec- when using the longpass emission filter to then be used to analyse the PI signal effi- imen itself due to , autofluores- detect the FDA signal, the red emission of ciently (e.g. BP530-550 excitation filter, cence of the specimen or non-optimised the PI will also contribute to the resulting LP575 emission filter, DM570 dichro- staining procedures. digital image. A monochrome camera matic mirror). This principle also applies would not differentiate between red and to other multicolour staining procedures. Type of camera green or between blue and green as Best results are achieved when the emis- The imaging device is one of the most shown in fig. 
47 (first image) – it will only sion filter for the detection of the fluoro- critical components in fluorescence mi- show intensity in grey values. Therefore, chrome with the shorter wavelength is a croscopy analysis. This is because the the image will represent the resulting dis- band pass filter (fig. 47 upper row). imaging device used determines at what tribution of both fluorochromes within level specimen fluorescence may be de- the same colour. The use of the bandpass The objectives tected, the relevant structures resolved emission filter as described above will The light gathering capacity of the objec- and/or the dynamics of a process visual- help in this respect. tive plays an essential role in fluores- cence microscopy. For optimal signal strength, a high-numerical aperture (high NA) and the lowest useable magni- fication should be employed. For exam- ple, using a 1.4 NA aperture lens instead of a 1.3 NA lens of the same magnifica- tion will result in a 35 % increase in light intensity, assuming that all other factors are the same. Furthermore, the type of glass that is used requires good trans- mission for the wavelengths used. Fluor- ite or apochromat objectives are used for that reason. Further enhancement can be achieved by selecting objectives with extremely low autofluorescence of the glass material used (Olympus UIS2 Ob- Fig. 48: Multi-colour image of a GFP/YFP double-labelled sample, before (left) and after spectral jectives). When all of these factors are unmixing (right). In this sample, GFP was fused with the H2B histone protein and YFP with tubulin. The provided for – high NA, good transmis- visible result is a pronounced chromatic resolution of both structures now displayed in green and red. sion and low autofluorescence – this en- The tubulin on top of the bright green nuclei is even detectable.

38 • Basics of light Microscopy & Imaging Shining Fluorescence Details How to use light as a tool toxymethylester of the dyes can be added to the medium, loaded passively into the After having laid the fundamental princi- cells and cleaved enzymatically to pro- ples and the microscopic requirements duce cell-impermeable compounds. we can now go deeper into different For the analysis of a typical two chan- state-of-the-art qualitative and quantita- nel FURA experiment it is necessary to tive imaging methods for fluorescence switch between the excitation of 340nm microscopy. These approaches require and 380 nm. When observing FURA advanced microscopic and imaging loaded cells without any stimulus, Ca2+ is equipment which varies from case to bound in cell compartments. The FURA case. So only general statements can be molecules show strong fluorescence at made here about the hardware and soft- an emission of 510 nm when excited with ware used, along with a description of 380 nm, and weak fluorescence when ex- Fig. 49: A climate chamber offers features like the applications and methods. cited with 340 nm. As the cell releases 2+ control of temperature, humidity and CO2 con- Ca from storage compartments due to a tent of the atmosphere. Life under the microscope reaction to a stimulus, FURA molecules form complexes with these released Ca2+ Wherever there is life there are dynamic ions. The fluorescence signal in the emis- Software-tools: Spectral unmixing processes – observation of living cells sion channel of 510 nm increases when and organisms is a tremendous challenge excited with 340 nm, and decreases when A further problem can occur when using in microscopy. Microscopy offers differ- excited with 380 nm. The ratio between different fluorochromes with overlapping ent techniques to reveal the dynamic the signals of the two excitation channels spectra within one sample. The consider- processes and movements of living cells. is used to quantify the change of inten- able overlap of excitation and emission The use of fluorescent proteins and live sity. spectra of different fluorochromes is ex- fluorescent stains enable the highly spe- Why use a ratio? Lamp fluctuations or emplified by the spectra of enhanced cific labelling of different or other artificial intensity changes can green fluorescence protein (eGFP) and molecules within a living cell. The inten- cause a false signal which can be misin- enhanced yellow fluorescent protein sity of the emission light of these mark- terpreted as a change in ion concentra- (eYFP), two commonly used variants of ers can be used to image the cells. In ad- tion when intensity is measured in one the green fluorescent protein (fig. 48 left dition to the application protocol and the channel only. The second drawback of a side). Even with the use of high quality fluorescent technique to be used there single channel analysis is that it displays optical filters it is not possible to sepa- are further general considerations to be the amount of the fluorescence only. rate the spectral information satisfacto- aware of. Therefore, thicker parts may look rily. This means that YFP-labelled struc- First of all, there is the definition of brighter than smaller parts of a cell; tures are visible with a GFP filter set and the needs for the specific environmental however, they simply contain more fluor- vice versa, affecting the resulting images conditions and controlling these with re- ochromes due to the larger volume. Phys- significantly and detrimentally. 
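Pixel by pixel, the two-channel FURA evaluation described above reduces to a background-subtracted division of the 340 nm excitation image by the 380 nm excitation image. The sketch below is a minimal illustration with variable names of our own choosing; turning the ratio into an absolute Ca2+ concentration additionally requires the calibration procedure mentioned in the text.

```python
import numpy as np

def fura_ratio(img_340, img_380, bg_340, bg_380, min_signal=10.0):
    """Pixel-wise 340/380 excitation ratio after background subtraction.
    Pixels with too little signal in the 380 nm channel are masked out."""
    num = img_340.astype(np.float64) - bg_340
    den = img_380.astype(np.float64) - bg_380
    ratio = np.full(num.shape, np.nan)
    valid = den > min_signal
    ratio[valid] = num[valid] / den[valid]
    return ratio        # increases as the intracellular Ca2+ level rises
```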
This phe- gard to specimen handling. Assuming iologically relevant changes in ion con- nomenon, known as “bleed-through”, you are using a cell line and would like to centration in small areas such as growth strongly reduces colour resolution and analyse processes over time, it may be cones of a neuron may then not be visible makes it difficult to draw accurate con- necessary to provide appropriate envi- over time because they are too dim in clusions. The solution to overcome this ronmental conditions for these cells. Dy- fluorescence compared to the bright cen- effect is called spectral imaging and lin- namic processes in single cells can occur tre of the cell. After background subtrac- ear unmixing. within the millisecond range – such as tion, the calculation of a ratio between Spectral imaging and linear unmixing shifts in ion concentrations. Or they may is a technique adapted from satellite im- take minutes – such as the active or pas- aging to wide-field fluorescence micros- sive transport of proteins or vesicles. Mi- copy. Using this highly effective method, croscopes can be equipped with heating it becomes possible to ascertain the spe- stages and/or minute chambers, or with cific emission of the different fluoro- complete cultivation chambers to ensure chromes to the total signal and to restore cultivation of living cells with all the ap- a clear signal for each colour channel, propriate parameters on the microscope unaffected by emission from the other while observation is conducted for hours fluorochrome. This is achieved by redis- or days (fig. 49). tribution of the intensity (fig. 48 right side). It is important to note that original Where do all the ions go? data is not lost during linear unmixing nor is any additional data added to the Fluorescent dyes such as FURA, INDO or image. The original image information is Fluo show a spectral response upon bind- all that is used in this procedure. The ing Ca2+ ions and are a well established overall intensity of pixels is maintained. tool to investigate changes in intracellu- Thus, the technique does not result in ar- lar Ca2+ concentration. The cells can be Fig. 50: Autofluorescence of an apple slice tificially embellished images. After un- loaded with a salt or dextran conjugate (Lieder). The image was taken with an Olympus FluoView FV1000 and the TCSPC by PicoQuant. mixing, quantification analysis not only form of the dye – e.g. by microinjection, The colour bar in the lower right corner is a key, remains possible, but also becomes more electroporation or ATP-induced permea- showing the distribution of the various lifetimes precise. bilisation. Furthermore, the ace- within the image.
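Linear unmixing as described here amounts to solving, for every pixel, a small least-squares problem built from the reference emission spectra of the fluorochromes. The following sketch shows only the core idea; the mixing matrix values are invented for illustration, and commercial implementations add spectral calibration, cross-talk measurement and constrained solvers.

```python
import numpy as np

def linear_unmixing(channels, mixing_matrix):
    """Least-squares unmixing.

    channels      : array (n_detection_channels, height, width)
    mixing_matrix : array (n_detection_channels, n_fluorochromes); column j
                    gives the fraction of fluorochrome j detected in each
                    channel, taken from reference spectra.
    Returns an array (n_fluorochromes, height, width) of unmixed signals.
    """
    n_ch, h, w = channels.shape
    measured = channels.reshape(n_ch, -1).astype(np.float64)
    unmixed, *_ = np.linalg.lstsq(mixing_matrix, measured, rcond=None)
    return np.clip(unmixed, 0.0, None).reshape(-1, h, w)

# Hypothetical two-channel GFP/YFP example: 90 % of the GFP signal falls
# into channel 1 and 10 % into channel 2; YFP contributes 20 % and 80 %.
M = np.array([[0.9, 0.2],
              [0.1, 0.8]])
```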

Shining Fluorescence Details Basics of light Microscopy & Imaging • 39 pole orientation, insufficient spectral overlap between the emission spectrum of the donor and the excitation spectrum of the acceptor. See box 15.
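For the acceptor photo-bleaching approach of box 15, an apparent FRET efficiency can be estimated per pixel from the donor images taken before and after bleaching, since the donor becomes brighter once the acceptor is destroyed. A minimal sketch under that assumption (the threshold and names are ours):

```python
import numpy as np

def fret_efficiency_acceptor_bleach(donor_pre, donor_post, min_signal=5.0):
    """Apparent FRET efficiency E = 1 - I_donor(pre) / I_donor(post)."""
    pre = donor_pre.astype(np.float64)
    post = donor_post.astype(np.float64)
    eff = np.full(pre.shape, np.nan)
    valid = post > min_signal            # ignore pixels with no real signal
    eff[valid] = 1.0 - pre[valid] / post[valid]
    return np.clip(eff, 0.0, 1.0)        # FRET quenches the donor, so E >= 0
```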

How long does a fluorochrome live? – count the photons!

When a fluorochrome is excited, it is shifted to a higher energy level. The life- time of a fluorophore is the average amount of time (in the nanosecond/pico- second range) that it remains at the higher energy level before it returns to the ground state. A fluorochrome‘s lifetime is a highly specific parameter for that par- ticular fluorochrome. It can be influenced easily by changes of environmental pa- rameters (e.g. pH, ion concentration, etc.), by the rate of energy transfer (FRET) or by interaction of the fluorochrome with quenching agents. Fluorochromes often Fig. 51: Photoconversion: In Miyawaki‘s lab, Ando et al [1] succeeded in cloning the gene that encodes have similar or identical spectral proper- the Kaede fluorescence protein of a coral. By irradiation with UV light, the green fluorescence can be ties, therefore, analysing a fluorochrome‘s converted to red fluorescence. By using excitation wavelengths in the blue and green spectral range, lifetime is critical to distinguishing the lo- the movement and distribution of the protein can be observed in a living cell. The figure shows the calisation of those fluorochromes in a cell protein diffusion in Kaede-expressing HeLa cells. Stimulation via 405 nm laser in the region of interest converts the Kaede protein from green to red emission. Confocal images were recorded every three (fig. 50). Here, the different experimental seconds using 488/543 nm laser excitation, showing the activated Kaede protein spreading through- setups are referred to as FLIM – Fluores- out the HeLa cell. By using two synchronised scanners (one for photo-conversion, one for image acqui- cence Lifetime Imaging Microscopy. sition), fluorescence changes that occur during stimulation can be observed. Data courtesy of: R. Ando, Dr A. Miyawaki, RIKEN Brain Science Institute for Cell Function Dynamics. Fluorescence Lifetime Imaging Microscopy – FLIM. There are different techniques for fluo- two channels corrects the result for over- between two different fluorophores. The rescence lifetime imaging microscopy all, artificial fluctuations and specimen first fluorophore (the donor) is excited by available on the market, for both wide- thickness. Following a calibration proce- light. The donor transfers its energy to the field and confocal microscopy. Here we dure, even quantitative results are ob- second fluorophore (the acceptor) without focus on Time-Correlated Single Photon tainable. There is a range of fluoro- radiation, meaning without any emission Counting (TCSPC). The fluorochrome is chromes, with different spectral of photons. As a result, the acceptor is ex- excited by a laser pulse emitted by a properties for numerous different ions, cited by the donor and shows fluorescence pulsed laser diode or femto-second available on the market. Microscopical (“sensitised emission”). The donor is pulsed Ti:Sa laser. A photon-counting systems with fast switching filter wheels quenched and does not show any fluores- photo- multiplier or single photon ava- and real time control permit investiga- cence. This radiation-free energy transfer lanche diode detects the photons emitted tions even in the millisecond range. occurs within the very limited range of 1– from the fluorophore. The time between 10 nm distances between the donor and the laser pulse and the detection of a Light as a ruler the acceptor. A positive FRET signal pro- photon is measured. 
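In TCSPC the recorded photon arrival times are binned into a histogram and the lifetime is obtained by fitting an exponential decay to it. The sketch below illustrates the principle with a simple log-linear fit on simulated photons; real FLIM software uses maximum-likelihood fitting, deconvolution of the instrument response and multi-exponential models.

```python
import numpy as np

def fit_lifetime(arrival_times_ns, bin_width_ns=0.05, window_ns=25.0):
    """Estimate a single lifetime (ns) from TCSPC photon arrival times by
    fitting a straight line to the logarithm of the decay histogram."""
    counts, edges = np.histogram(arrival_times_ns,
                                 bins=np.arange(0.0, window_ns, bin_width_ns))
    centres = edges[:-1] + bin_width_ns / 2.0
    start = np.argmax(counts)            # begin after the excitation peak
    t, c = centres[start:], counts[start:]
    keep = c > 10                        # drop sparsely populated tail bins
    slope, _ = np.polyfit(t[keep], np.log(c[keep]), 1)
    return -1.0 / slope

rng = np.random.default_rng(1)
photons = rng.exponential(2.5, size=200_000)   # simulated tau = 2.5 ns
print(fit_lifetime(photons))                   # prints a value close to 2.5
```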
A histogram accu- vides information about the distance be- mulates the photons corresponding to Many processes in a cell are controlled by tween the FRET partners and can be the relative time between laser pulse and inter- and intra-actions of molecules: e.g. quantified as FRET efficiency. When no detection signal. Every pixel of the FLIM receptor-ligand interactions, enzyme-sub- FRET signal is achieved, there may be image contains the information of a com- strate reactions, folding/unfolding of mol- many reasons for that: e.g. too much dis- plete fluorescence decay curve. If an im- ecules. Receptor-ligand interactions, for tance between the FRET partners, insuf- age is composed of three fluorochromes example, occur in the very close proxim- ficient steric orientation, insufficient di- with different lifetimes, distribution of all ity of two proteins in the Angstrøm range. Colocalisation studies do not reveal inter- Fig. 52: Photoactivation of PA- actions of molecules in the Angstrøm GFP expressing HeLa cells. The range because the spatial resolution of a PA-GFP was activated by 405 light microscope is limited to 200 nm. diode laser (ROI) and images of When using a light microscope, how can the PA-GFP distribution within the cell were acquired at 1 sec- the proximity of two molecules in the Ang- ond intervals using the 488nm strøm range be proven beyond the physi- laser. cal limits of light microscopy? Fluores- Data courtesy: A. Miyawaki, T. cence Resonance Energy Transfer (FRET) Nagai, T. Miyauchi, RIKEN Brain Science Institute helps to find an answer to this question. Laboratory for Cell Function FRET is a non-radiative energy transfer Dynamics.

40 • Basics of light Microscopy & Imaging Shining Fluorescence Details dyes can be shown as three different col- bleached, the other remains undamaged activated by irradiation using a 405 nm ours (fig. 50). and acts as a reference label. The popu- diode laser (fig. 52). The intense irradia- lation of bleached fluorochromes can be tion enhances the intensity of the fluores- Time-resolved FRET microscopy identified by subtraction of the image of cence signal of PA-GFP by 100 times. PA- With FRET, the lifetime of the donor de- the bleached dye from the image of the GFP is a powerful tool for tracking pends on the presence or absence of ra- unbleached dye. protein dynamics within a living cell diation-free energy transfer (see above). (fig. 52). Therefore, time-resolved FRET micros- FAUD = Fluorescence Application Under ­Development! copy is a technical approach for quantita- Imaging at the upper limits of tive measurement of the FRET efficiency. We are looking for inventors of new fluo- ­resolution rescence techniques – please keep in FRAP, FLIP and FLAP touch! The bleaching and photoactivation tech- niques described above can be under- FRAP, FLIP and FLAP are all photo- From black to green, from green taken easily using a suitable confocal la- bleaching techniques. By using the laser to red … ser scanning microscope. The laser is a scanner of a confocal microscope, fluoro- much more powerful light source than a chromes (which are bound to a specific If we sunbathe for too long, the colour of fluorescence burner based on mercury, protein, for example) in a selected region our skin changes from white to red and xenon or metal halide. The galvano mir- of a stained cell can be bleached (de- is referred to as sunburn. If we take a UV ror scanners, in combination with acou- stroyed). As a result, the fluorochrome laser and irradiate a cell with Kaede pro- sto-optically tuneable filters (AOTF), per- does not show any appropriate fluores- tein, the colour of the Kaede protein mit concentration of the laser onto one cence. Other proteins that are also la- or more selected regions of the cell with- belled, but where the fluorescence was out exciting any other areas in that cell. not bleached, can now be observed dur- This makes the laser scanning micro- ing movement into the previously scope the preferred tool for techniques bleached area. Dynamic processes, such which use light as a tool. Additionally, as active transport or passive diffusion in the confocal principle removes out-of-fo- a living cell cause this movement. There- cus blur from the image, which results in fore the intensity of the fluorochrome re- an optical section specifically for that fo- covers in the bleached area of the cell. cal plane. This means that high resolu- tion along the x, y and z axes can be ob- FRAP = Fluorescence Recovery After tained. A confocal microscope is an ideal ­Photobleaching system solution for researchers who After bleaching of the fluorochrome in a want to use light not only for imaging, selected region of a cell, a series of im- but also as a tool for manipulating fluor- ages of this cell is acquired over time. ochromes. New unbleached fluorochromes diffuse To achieve an even thinner optical sec- or are transported to the selected area. tion (for a single thin section only and not As a result, a recovery of the fluores- a 3-D stack), Total Internal Reflection Mi- cence signal can be observed and meas- croscopy (TIRFM) may be the technique of ured. 
After correction for the overall choice (for more information on TIRFM, bleaching by image acquisition, a curve see box 4 or at www.olympusmicro.com/ of the fluorescence recovery is obtained. primer/techniques/fluorescence/tirf/tirf- home.html). The used for FLIP = Fluorescence Loss In Photobleaching this technique excites only fluorochromes The fluorophore in a small region of a Fig. 53: Imaging at the Outer Limits of Resolution. which are located very closely to the cov- cell is continuously bleached. By move- (a) Dual emission widefield erslip (approx. 100–200 nm). Fluoro- ment of unbleached fluorophores from (b) TIRF images acquired with a 488nm laser chromes located more deeply within the green Fitc-Actin outside of the selected region into the red DyeMer 605-EPS8. cell are not excited (fig. 53). This means bleaching area, the concentration of the that images of labelled structures of mem- fluorophore decreases in other parts of Courtesy of M. Faretta, Eup. Inst. Oncology, Dept. branes, fusion of labelled vesicles with the the cell. Measuring the intensity loss, of Experimental Oncology, Flow and plasma-membrane or single molecule in- Imaging Core, Milan, Italy. outside the bleached area results in a de- teractions can be achieved with high z cay curve. A confocal laser scanning mi- resolution (200 nm or better, depending croscope which incorporates one scan- on the evanescent field). ner for observation and one scanner for changes from green to red – and we call light stimulation is especially appropri- this photo-conversion (fig. 51). A confo- References ate for this technique. A simultaneous cal microscope can be used to stimulate [1] R. Ando, H. Hama, M. Yamamoto-Hino, H. Mi- scanner system allows image acquisition fluorochromes in selected parts of the zuno, A. Miyawaki , PNAS Vol. 99, No. 20, during continuous bleaching. cell. The Olympus FluoView FV1000 even 12651–12656 (2002) offers stimulation of one area with one [2] G. H. Patterson, J. Lippincott-Schwartz, Sci- FLAP = Fluorescence Localization After laser whilst observing the result in a dif- ence, Vol 297, Issue 5588, 1873–1877 (2002) ­Photobleaching ferent channel with a second laser simul- The protein of interest is labelled with taneously. A recently developed photo- two different fluorochromes. One is activatable GFP mutant, PA-GFP, can be

3D Imaging

42 • Basics of light Microscopy & Imaging 3D Imaging Reality is multi-dimensional curved, two-dimensional surface ex- panded into three dimensional space. What do we see when we observe a sam- However, one general restriction is still ple through a light microscope? What in- the limited depth of field which makes it formation do we perceive? Most light mi- often impossible to obtain a completely croscopes give us a two-dimensional view sharp image. Additionally, stereo micro- 3D Imaging of physical reality. What we usually ob- scopes are restricted in magnification serve in the widefield microscope is the and resolving power due to their general projection of a three-dimensional physi- design when compared with upright or cal structure down to a two-dimensional inverted widefield microscopes. Further- image. This means one dimension is lost more, when using a stereo microscope which significantly restricts our percep- for documenting images, only one of the tion of the physical reality viewed. microscope pathways will be used for the When looking at two dimensional im- camera in most cases. This means the ages we often reverse the process and, resulting digital image is only two-di- based on experience, more or less un- mensional – there is no third dimension. consciously extend the two dimensional view into the third dimension. But how can we be sure that this interpretation is accurate? While watching the night sky we cannot tell whether two stars are close to each other or are in fact hun- dreds of light years away in space. Paint- ing techniques, however, offer numerous examples of how to create a distinct im- pression of a third dimension using dif- ferent lighting and shading of contrast, as well as well-placed usage of objects with familiar proportions. The fact that we are accustomed to recognising differ- ent lighting on a structure in a three di- mensional way is also how relief is shown – such as in contrast methods used in mi- Fig. 54: The stereomicroscopic (Galileo type) croscopy (e.g. Differential Interference light path – two microscopes in one – offers observation of specimens at the natural conver- Contrast (DIC)). sion angle of our eyes. This makes the 3D topog- raphy visible. However, when the specimen is documented the image will nevertheless remain The stereo view a two-dimensional view. Our eyes always view objects from a slight angle defined by the distance of Thus, when we want to have a sharp our eyes and the distance between the view of a topographic structure in light object and us. Together with the inter- or stereo microscopes – or in general, pretation that our brain provides us with, when we want to know more about the this creates a three-dimensional view three-dimensional internal set-up of with these two sources of perception. To structures, several sections or slices have see objects within a microscope in a sim- to be acquired. All at a different z posi- ilar way, both eyes have to each be pro- tion or on a different focal plane. Have a vided with a separate light path to enable look at an image stack containing several observation of the specimen at an angle such sections of the human head ac- similar to the one our eyes see from nat- quired via Magnetic Resonance Imaging urally. This is exactly what stereo micro- (MRI) (fig. 55). scopy is all about. These microscope types, often called “binos”, provide a Not too few – Not too many – just view of a specimen that looks natural, in- enough cluding a high depth of field and working distance. 
With a convergence angle of Using the light microscope, optical sec- 10–12°, the left eye and right eye receive tions are usually acquired using a motor- views of the same object from a slightly ised focus. First the maximum and the different perspective (fig. 54). minimum position along the z axis of the For viewing surfaces, the stereo mi- stage or the nosepiece are defined. Then croscope provides a reliable three-di- you either set the distance of two z-posi- mensional impression of the object. We tions (i.e. the focal change (Δz) or z spac- get an idea of how the surface is shaped ing), or you set the total number of sec- and which structures are higher or lower. tions to acquire. Depending on the What we are investigating here is a application, a minimum number of sec-

3D Imaging Basics of light Microscopy & Imaging • 43 cence images acquired in standard fluo- rescence widefield microscopes. There are two main techniques used to remove this out-of-focus blur and to restore the images: the deconvolution computational technique and confocal microscopy. One of these approaches is desirable to cre- ate a reasonable three-dimensional rep- resentation of the fluorescence signals. Deconvolution requires a three-di- mensional image stack of the observed sample, having a minimum focal change (Δz) for obtaining maximum effect. The Fig. 55: focal change (Δz) depends on the numer- Left: Stack of images taken via MRI (Magnetic Resonance Imaging). ical aperture (NA) of the objective, the Right: The head structures are extracted from the individual image sections. sample fluorochrome’s emission wave- length (λ) and the refractive index (n) of tions with the optimum z spacing (Δz) compose the resulting image. This infor- the immersion medium, and can be cal- will be necessary. mation is used to create an image where culated using the following equation: One application in light and stereo mi- each grey value represents a specific croscopy is the acquisition of z stacks for height. This height map contains as many Δz ~ (1.4*λ*n)/NA2 (7) overcoming the limited depths of field of grey value levels as sections have been a single frame. This is where the focal acquired. This means that the smaller To get a better idea of the actual num- change (Δz) between two sections should the depth of field of your objective and bers, an example will be calculated. A be the same or somewhat smaller than the more sections actually acquired, the Plan Fluorite 40x objective has an NA the depth of field. As described above, more exact the height information will value of 0.75. The refractive index of air the depth of field depends mainly on the be. This is significant when the height in- is 1. At 540 nm emission wavelength, this numerical aperture of the objective. In formation is represented in a three-di- yields a minimum required focal change, practice, a usable Δz value can be easily mensional view (fig. 57). When a suffi- i.e. z spacing (Δz) of about 1.3 µm. The derived by lifting the stage slowly. At the cient number of sections are acquired, numerical aperture mostly influences the same time, you observe at which focal the viewer receives a clear spatial im- z spacing. When using a higher quality change a sharp structure goes out of fo- pression of the surface. To smoothen the Plan Apochromat objective with a nu- cus. You apply this value to all the sec- height map, peaks and steps may be flat- merical aperture (NA) of 0.9, a z spacing tions over the whole height range of your tened somewhat. The three-dimensional (Δz) of about 0.9 µm is required. When sample (fig. 56). With digital imaging, the view becomes quite a realistic represen- observing cells, a height range of 5 to 10 focus parameters can be entered; the tation of the actual object structure when µm must be covered depending on cell software takes control of the motorised the sharp, composite image is placed type and preparation. So when using a focus and acquires the z stack. The fo- over the surface as a texture (fig. 57). Plan Apochromat 40x objective, up to 10 cused areas through the sections can sections or frames should be acquired. then be composed to generate an image Just the right number Fig. 58 shows an example. which has an unlimited depth of field in Software solutions available today of- principle (fig. 56). 
The method referred In life science applications and fluores- fer either definition of the numerical ap- to is called EFI (Extended Focal Imag- cence microscopy, the appropriate erture and the refractive index. With ing). (See also box 6) number of sections for three-dimensional fully motorised systems, the software In addition to the resulting image (fig. imaging can also be derived from the ex- also reads out the numerical aperture 56) with an unlimited depth of field, a perimental set-up. It is well known that and the refractive index of the current height map can be generated (fig. 57). out-of-focus haze can significantly dimin- objective. When the fluorochrome is se- The software algorithm knows which ish fluorescence images‘ scientific value, lected, the z spacing is automatically cal- section to obtain the sharp pixels from to as well as the apparent clarity in fluores- culated. You define the maximum and

Fig. 56: Left and middle: This is an eye of a beetle acquired at two different focal planes. The focal change (Δz) is chosen so that no structure lying between these focal planes falls out of focus in both images. Right: Seven optical sections acquired at different focal planes are combined to create one sharp image with unlimited depth of field.
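The EFI composition and its height map can be sketched as follows: for every x-y position, the section with the highest local sharpness supplies the pixel, and the index of that section forms the height map. The local-variance focus measure used here is an assumption for illustration; commercial implementations are more elaborate.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def efi_with_height_map(stack):
    """stack: z stack of shape (n_sections, height, width).
    Returns (composite, height_map): the sharpest pixel per x-y position
    and the index of the section it came from."""
    stack = np.asarray(stack, dtype=float)
    sharpness = []
    for frame in stack:
        mean = uniform_filter(frame, size=5)
        mean_sq = uniform_filter(frame * frame, size=5)
        sharpness.append(mean_sq - mean * mean)      # local variance as focus measure
    sharpness = np.array(sharpness)
    height_map = np.argmax(sharpness, axis=0)        # section index per pixel
    composite = np.take_along_axis(stack, height_map[None, ...], axis=0)[0]
    return composite, height_map
```

As the text notes, a real implementation would additionally smooth the height map (flattening peaks and steps) before rendering it as a textured surface.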

Fig. 57: Left: Height map. Middle and right: Three-dimensional view of the surface. The surface shape can be textured with the composite image.

The deconvolution filter can be directly applied to the z stack image object. The most powerful deconvolution filter is called blind deconvolution. It does not assume any prior knowledge of the system or sample. Instead, it uses elaborate and time-consuming mathematical iterative approaches. These include the constrained maximum likelihood estimation method to adapt to the actual three-dimensional point spread function (PSF) of the system (for a more detailed description see the paragraph "What exactly is a point spread function" in the chapter "The Resolving Power"). Knowing the three-dimensional PSF of a system, the blind deconvolution takes the blurred z stack and essentially provides a three-dimensional image restoration of the sample (fig. 58, 60, 62). Deconvolution algorithms are applied to a lesser degree in confocal microscopy image stacks and to images from transmitted light widefield microscopy as well.
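True blind deconvolution is an iterative, proprietary procedure, but the basic restoration step can be illustrated with the non-blind Richardson–Lucy algorithm from scikit-image as a stand-in, assuming a known (here, crudely modelled Gaussian) three-dimensional PSF and placeholder data.

```python
import numpy as np
from skimage.restoration import richardson_lucy

# Placeholder data: a blurred fluorescence z stack (z, y, x).
rng = np.random.default_rng(0)
blurred_stack = rng.random((10, 64, 64))

# Crude Gaussian stand-in for the real three-dimensional PSF of the system.
zz, yy, xx = np.mgrid[-3:4, -5:6, -5:6]
psf = np.exp(-(xx**2 + yy**2) / 4.0 - zz**2 / 2.0)
psf /= psf.sum()

# Iterative restoration of the whole stack (20 iterations);
# more iterations sharpen further but also amplify noise.
restored = richardson_lucy(blurred_stack, psf, 20, clip=False)
```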
One for all

Today's software generally handles the z stack as one object. All sections and frames are saved in a single file. The information about the single frames' x-y resolution and calibration, as well as the z spacing and the other meta-data such as the channel (fluorochrome and wavelength), point in time acquired, numerical aperture and refractive index, are saved in the same file as well. The single layers can be viewed separately or extracted. A quick way to display all the significant structures through the whole stack is the maximum intensity projection. This is where the brightest pixel value at each x-y position throughout all z layers is shown in the projection image (fig. 58, 60, 62). Other projection methods are mean intensity and minimum intensity projections.

Projection methods have to be used with care because the lack of out-of-focus blur shows all details from the various z-levels as if they were all located on the same horizontal plane – which is obviously not the case. For readers viewing such images in a scientific journal, a seemingly satisfactory multicolour confocal two-dimensional image will in fact reveal less valid information than a "normal" widefield fluorescent image. Another way of presenting stacks is by animating them – like watching a movie.

Image objects are truly multi-dimensional. In addition to the third axis in space, they may also contain different colour channels of multiply labelled fluorescent specimens – the fourth dimension (fig. 59, 60 and 62). Image objects may also be comprised of a fifth dimension – time: where slices are acquired at different points in time.

It's all spatial

Z stacks and multidimensional images can be visualised as three-dimensional objects via three-dimensional volume rendering, or voxel viewing. Voxel stands for Volume Element. Fig. 61 shows the correspondence between pixel and voxel. A voxel is a pixel which has a height value additionally assigned to it. The height assigned corresponds to the z spacing of the frames. In this way, voxels are reconstructed from the frames of the z stack.

The confocal view

The second way to create optical sections in a fast and high resolution manner is via confocal microscopy. Here, a light beam, usually a laser, is concentrated on a small area of the specimen. The light originating from this spot (emission light of the fluorescence or reflected light of opaque materials) is captured by a sensitive detector. Within the confocal optics, an aperture (usually a pinhole) is placed at a position which is optically conjugated with the focusing position. This conjugated focal position set-up is why the term "confocal" is used (see also Box 4). The system is meant to eliminate light from all areas other than that on the focal plane. Therefore, while scanning along the x, y axes, an optical section of the focused area can be obtained without requiring software algorithms.

Fig. 58: Human tissue labelled with a probe against the HER2-gene (Texas Red – not shown here, see fig. 62) and Chromosome 17 (FITC). Upper left: Maximum intensity projection of 25 frames, λ(FITC) = 518 nm, numerical aperture NA = 1, z spacing (Δz) = 0.18. Lower left: The same z stack after applying the blind deconvolution algorithm. Right: The three-dimensional slice view gives a good impression of the proportions and helps to spatially locate the fluorescence signals.
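Once the z stack is held as a single array object, the projections mentioned above – including the maximum intensity projection shown in fig. 58 – reduce to one reduction operation each. The array shape below is an arbitrary example.

```python
import numpy as np

# z stack as a single object: shape (n_layers, height, width)
stack = np.random.rand(25, 512, 512)

mip  = stack.max(axis=0)    # maximum intensity projection: brightest pixel per x-y position
mean = stack.mean(axis=0)   # mean intensity projection
mini = stack.min(axis=0)    # minimum intensity projection
```

As cautioned above, such projections collapse all z levels onto one plane, so they should be interpreted with care.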

Each voxel of a three-dimensional object has a colour. There are several well-known methods for volume rendering, such as projection and ray casting. These approaches project the three-dimensional object onto the two-dimensional viewing plane. The correct volume generating method guarantees that more than just the outer surface is shown and that the three-dimensional object is not displayed simply as a massive, contourless block. Inner structures within a three-dimensional object, such as fluorescence signals, can be visualised. The maximum intensity projection method looks for the brightest voxel along the projection line and displays only this voxel in the two-dimensional view. Fig. 62 shows the maximum intensity projection of two colour channels of 25 frames each when seen from two different angles. Changing the perspective helps to clearly locate the fluorescence signals spatially. The three-dimensional structure will appear more realistic when the structure rendered into three dimensions is rotated smoothly.
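A minimal way to mimic viewing the volume from different angles, as in fig. 62, is to resample the voxels to roughly isotropic spacing, rotate the volume and repeat the maximum intensity projection along the new viewing direction. Real volume-rendering engines use proper ray casting; the calibration values below are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate, zoom

stack = np.random.rand(25, 256, 256)        # placeholder z stack (z, y, x)
z_spacing_um, pixel_size_um = 0.18, 0.06    # assumed voxel calibration

# Resample z so that the voxels become roughly isotropic before rendering.
volume = zoom(stack, (z_spacing_um / pixel_size_um, 1, 1), order=1)

def view(volume, angle_deg):
    """Maximum intensity projection after tilting the volume about the y axis."""
    tilted = rotate(volume, angle_deg, axes=(0, 2), reshape=True, order=1)
    return tilted.max(axis=0)

front   = view(volume, 0)
oblique = view(volume, 45)                  # the same data seen from a different angle
```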

How many are there – One or Two?

Managing several colour channels (each having many z layers) within one image object can be deceptive. The question is whether two structures appearing to overlap are actually located at the same position spatially or whether one is in fact behind the other. The thing is, two fluorescence signals may overlap – because their mutual positions are close to each other within the specimen. This phenomenon is called colocalisation. It is encountered when multi-labelled molecules bind to targets that are found in very close or spatially identical locations.

Volume rendering makes it possible to locate colocalised structures in space visually. For better representation, and in order to measure colocalisation in digital image objects, the different fluorescence signals are first distinguished from the background. This may be done via threshold setting in each of the colour channels. A new image object can be created with only those structures that were highlighted by the thresholds. The colocalised fluorescence signals are displayed in a different colour. Rendering these image objects clearly reveals the colocalised signals without any background disturbance. To obtain quantitative results, the area fraction of the colocalised signals compared to the total area of each of the two fluorescence signals can be calculated throughout all image sections (see table 4).

Fig. 59: Confocal images of pollen. The upper rows show the first 12 images of a series of 114 that can be used to create either two-dimensional projections of parts of the pollen or a 3D view of the surface structure. The three-dimensional projection shown here is accompanied by two-dimensional projections as if the pollen were being viewed from different perspectives. Images were created using the Olympus Fluoview 1000.

Table 4:
Layer   Red Area [%]   Green Area [%]   Colocalisation Area [%]   Colocalisation/Red Ratio [%]   Colocalisation/Green Ratio [%]
1       0.11           0.20             0.04                      39.60                          21.22
2       0.15           0.26             0.07                      47.76                          27.92
3       0.20           0.36             0.10                      49.93                          28.08
4       0.26           0.53             0.13                      51.84                          25.32
5       0.35           1.34             0.20                      57.39                          14.87
6       0.43           2.07             0.26                      61.00                          12.81
7       0.38           1.96             0.24                      61.66                          12.07
Colocalisation sheet: a ratio value of about 52 % in the Colocalisation/Red column in layer 4 means that 52 % of the red fluorescence signal is colocalised with green fluorescent structures here.
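The per-layer values of table 4 can be reproduced in principle by thresholding both channels and comparing the overlap with each channel's own area. The thresholds and array names below are placeholders, not the settings used for the table.

```python
import numpy as np

def colocalisation_per_layer(red_stack, green_stack, red_thresh, green_thresh):
    """red_stack, green_stack: (n_layers, h, w) arrays of one channel each.
    Returns one result row per layer, analogous to table 4 (values in %)."""
    rows = []
    for red, green in zip(red_stack, green_stack):
        red_mask = red > red_thresh          # structures above background, red channel
        green_mask = green > green_thresh    # structures above background, green channel
        coloc = red_mask & green_mask        # pixels where both signals overlap
        total = red.size
        rows.append({
            "red_area_pct":         100 * red_mask.sum() / total,
            "green_area_pct":       100 * green_mask.sum() / total,
            "coloc_area_pct":       100 * coloc.sum() / total,
            "coloc_over_red_pct":   100 * coloc.sum() / max(red_mask.sum(), 1),
            "coloc_over_green_pct": 100 * coloc.sum() / max(green_mask.sum(), 1),
        })
    return rows
```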

Fig. 60: Ovariole from Drosophila melanogaster mutant bgcn. Multicolour channel and z stack images can be managed in one image object. This image object contains three colour channels, each consisting of a 41-frame z stack. This image is a maximum intensity projection. Red: Fasciclin III, Green: F-Actin, Blue: Nuclei. Above right: Single z layer after applying the blind deconvolution algorithm to all three channels of the image object. Left: The three-dimensional, volume-rendered structure. Image courtesy of Dr. Ralph Rübsam, Institute for Biology, University of Erlangen-Nürnberg, Erlangen, Germany.

3-D reconstructions

Now let us draw the net wider and move for a minute beyond the light microscopy approaches in imaging three-dimensional objects. In general, determining three-dimensional structures from the macro to the atomic level is a key focus for current biochemical research. Basic biological processes, such as DNA metabolism, photosynthesis or protein synthesis, require the coordinated actions of a large number of components. Understanding the three-dimensional organisation of these components, as well as their detailed atomic structures, is crucial to being able to interpret what their function is. In the materials science field, devices have to actually "see" to be able to measure three-dimensional features within materials, no matter how this is done: whether structurally, mechanically, electrically or via performance. However, as the relevant structural units have now moved to dimensions less than a few hundred nanometres, obtaining this three-dimensional data means new methods have had to be developed in order to be able to display and analyse three-dimensional structures.

Among these methods, different 3-D reconstruction approaches have become a useful tool in various applications in both the life and materials sciences. They result in three-dimensional displays which are particularly illustrative and instructive. Therefore, we now take a closer look at two of these 3-D reconstruction methods, one using an image series of specimen sections acquired in a light microscope, the second using electron tomography in a transmission electron microscope.

3-D Reconstruction using an image series of specimen sections

One commonly used method when dealing with 3-D reconstruction is to make a series of ultra-thin slices of the specimen being investigated. These slices are then dyed. The dye makes structures more easily recognisable and/or highlights specific parts. The slices are viewed one at a time beneath the microscope and drawn. These drawings are then enlarged to scale. In the past, a 3-D model was reconstructed layer by layer from pieces of cardboard cut according to the drawings. Alternatively, wax or plastic was used to make the model. Today, this is greatly simplified by the computer assembling the virtual model. Special 3-D programmes calculate the three-dimensional model based on digitised image slices. This model is referred to as a 3-D reconstruction.

Today's powerful software offers 3-D processing, display and evaluation of two-dimensional image information. Ergonomic functions and flexible project management provide the user with the ability to reconstruct even the most complex models, and allow easy access to spatial views of the interiors of such structures. Up until now, these interior views were extremely difficult to generate. Box 16 shows the necessary steps of 3-D reconstruction, such as those used for displaying embryonic growth processes in three dimensions. This approach is attractive for research conducted in other areas as well – such as the materials sciences.

3-D reconstruction by tilting the specimen

Tomography is another way of reconstructing the three-dimensional (3-D) structure of a specimen from a series of two-dimensional (2-D) micrographs. Some sample tomography applications include not only those for electron microscopy (called electron tomography) but also computerised axial tomography (CAT) – familiar to many in medical applications as Computerised Tomography (CT).

Fig. 61: This is how a two-dimensional pixel (picture element) is expanded to a three-dimensional voxel (volume element).

* eucentric is derived from the Latin for well-centered. It implies, e.g., in a goniometer and especially in a specimen stage, that the specimen can be tilted without moving the field of view. This capability is highly desirable for microscope stages.

Computer tomography has long been part of the standard repertoire for routine medical applications. Transmission electron tomography, however, has grown increasingly important as well in the last few years. Transmission electron tomography made possible what had seemed impossible in biology: 3-D imaging of interior cellular structures, just nanometres in size. This has become a reality due to a combination of new technologies – suitably equipped electron microscopes, high resolving and highly sensitive CCD cameras, increased computer processor performance and improved software.

The way in which electron tomography works is actually quite straightforward (see box 17). The specimen holder is tilted along a eucentric* axis (if possible) and in the process, images are acquired from a wide range of angles. This guarantees that the 3-D structure will have adequate resolution along the three axes. Usually the specimen is tilted along a single axis from at least –60° to +60°, with images (projections) recorded at equidistant increments of 1° or 2°. The images are acquired using a CCD camera and stored as a tilt stack. Due to effects such as sample shift, tilt of axis position or thermal expansion, the tilt stack has to be aligned before further processing can begin. Once the stack has been aligned, it is ready to be used for reconstruction. Reconstruction can be performed using different approaches.

Fig. 62: Human breast cancer tissue labelled with a probe against the HER2-gene (Texas Red) and Chromosome 17 (FITC). Upper left: Maximum intensity projection of 25 frames, λ(FITC) = 518 nm, numerical aperture (NA) = 1, z spacing (Δz) = 0.18. Upper right: Blind deconvolution applied to both channels. Left: Maximum intensity projection. When the volume-rendered structure is seen from two different angles, the spatial composition of the fluorescence signals is shown.

Box 16: The steps involved in 3-D reconstruction using an image series of specimen sections

In the past, pieces of cardboard cut out by hand had to be glued together to make a 3-D model. Today this is all taken care of by the computer. First, the software digitally acquires microscope images of the series of image slices (or drawings are scanned into the computer, as in this case). The software then stacks these digitised slices. Any displacements or misalignments can be corrected interactively or semi-automatically.

The contours of a tissue structure can be traced with the cursor or are automatically detected following a click of the 'magic wand' tool (a local threshold approach). The software then reconstructs the contours as polygons. This is a scanned drawing done at a stereo microscope. The procedure is exactly the same when using digital microscope images.

The software joins the two-dimensional polygons of the component layers to form spatial, three-dimensional structures. This is undertaken automatically. If necessary, manual corrections can be made. Using navigation tools, the object can be enlarged, reduced and rotated spatially in any direction.

This is a complete reconstruction of the skull of a quail embryo. To be able to investigate and measure precisely it is extremely important to be able to look inside the model. This is achieved by making individual parts transpar- ent or fading them from view entirely.

Virtual specimen preparation: the digitally reconstructed skull of a quail embryo can be reduced to its component parts at any time. Virtual slices can also be generated in any direction.

Distance, polygonal length, area, volume and absolute position can all be measured directly within the 3-D view.

Two approaches available are called |w| filtering and (un-)weighted back projection via FFT. The resulting reconstructed image is stored and visualised as a z stack. Box 17 explains the steps necessary for the electron tomography process.

The more dimensions – the more there is to see

Many image-analysis methods, especially two- and three-dimensional approaches, have become well established and are applied on a daily basis in both industrial and research applications. Innovative procedures involving even more dimensions with applications in the Life Sciences have also become available and are certain to become just as well established in the foreseeable future.

Wavelength is viewed as a fourth dimension in the field of fluorescence microscopy, as described above. Being able to differentiate wavelengths provides additional structural information. We have seen many new applications using this technique. How the object being investigated changes over time is also of decisive importance in the Life Sciences. One particular reason for this is that this information may be crucial for assessing cell culture growth or for assessing how a fluorescent-labelled molecule behaves over time (see the paragraph "Fluorescence Lifetime Imaging Microscopy" in the chapter "Shining Fluorescence Details"). In addition, there is further information which can be combined with x, y information efficiently. One example is energy loss.

Box 17: The steps involved in 3-D reconstruction via Electron Tomography (TEM)

Data collection for automated electron tomography involves a complex set-up of TEM, CCD camera and a data gathering and processing system. Usually, assembling the data would require a tilt of +/–90°. However, due to the geometry of the sample holder, or the thickness of the sample itself, the range of rotations is limited. This means that in practice, tilt angles ranging from –70° to +70° are recorded at equidistant increments. The image shows only 6 of at least 144 acquired images of the complete image stack. (Image courtesy of Oleg Abrosimov, University of Novosibirsk, Russia.)

A key feature offered by automated electron tomography is automatic alignment of the tilt series. Various algorithms can be used. The rigid axis is calculated using information such as magnification, tilt range and increment. The images are then stored in an aligned tilt stack.

Special reconstruction routines are used to generate a z stack based on the data contained in the tilt stack. Using visualisation tools, three-dimensional views inside the specimen become possible. The sequences can be animated and stored as files in .AVI format. The image shows 4 different views clipped from one .avi file.
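The reconstruction step sketched in box 17 can be approximated slice by slice with filtered back projection. The snippet below uses scikit-image's iradon as a stand-in for the weighted back projection routines mentioned in the text and operates on random placeholder data; the limited ±70° tilt range leaves a "missing wedge" that shows up as artefacts.

```python
import numpy as np
from skimage.transform import iradon

# Placeholder aligned tilt stack: shape (n_angles, n_y, n_x),
# single-axis tilt about y from -70° to +70° in 2° steps (arbitrary choice).
angles = np.arange(-70, 71, 2)
tilt_stack = np.random.rand(len(angles), 128, 128)

n_y, n_x = tilt_stack.shape[1:]
volume = np.zeros((n_y, n_x, n_x), dtype=np.float32)   # one reconstructed x-z slice per y row

for y in range(n_y):
    sinogram = tilt_stack[:, y, :].T                    # (n_x, n_angles): one row per detector pixel
    # Filtered back projection of this slice from its projections.
    volume[y] = iradon(sinogram, theta=angles, output_size=n_x, circle=False)
```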
Energy loss plays a key role in fluorescence applications such as FRET (see the paragraph "Light as a Ruler" in the chapter "Shining Fluorescence Details"). It is also used in several electron microscopy methods. Let's have a look at two of these methods: X-ray microanalysis and Energy-Filtering Transmission Electron Microscopy (EFTEM).

Combining morphological and chemical data

X-rays can be described as electromagnetic radiation with wavelengths ranging from 10⁻⁹ to 10⁻¹¹ m, or energies in the range of 10 eV – 100 keV, respectively. X-rays are employed for element analysis in the field of biology, as well as in materials research. Characteristic X-rays and radiative deceleration of electrons within matter are recorded using special detectors. In this way, an element distribution image can be recorded and visualised. Using image evaluation of these acquisitions, additional information becomes available which has previously been unattainable. This technique is in widespread use and referred to as EDX (Energy Dispersive X-ray Analysis) (fig. 63).

Combining digital image analysis with X-ray microanalysis makes it feasible to study and characterise the properties of new materials. This involves examining macroscopic material properties in relation to the microstructure. One application where this is essential is for improving and developing aluminium alloys, products and fabrication processes. To obtain an analysis that is statistically reliable and representative requires a large number of samples. A TEM (Transmission Electron Microscope) with an integrated scanning mode can be used for analysing particles within the 10–300 nm range. This ensures that the local resolution, along with sufficient electron intensity, is as required. The software controls and automates the entire acquisition and analysis process. This includes simultaneous TEM and EDX analyser control, as well as analysis of the images acquired. The morphological data obtained within the context of the analysis is saved and displayed using tables and graphs. Subsequently, the automatic acquisition of the EDX spectra – at the centre of the particles being investigated – takes place. Data is also saved and quantified automatically. The proportion of the exudation volume can then be determined in conjunction with a layer-thickness measurement. These results can provide researchers with important micro-structural information (fig. 64).

Combining local data with energy loss data

The capacity for analysing multidimensional images is becoming more and more relevant. This includes image series of energy-filtering electron microscopy. EFTEM (energy-filtering TEM) image contrast occurs primarily due to the electron scattering within the sample – as is the case with the conventional TEM (box 18).

Fig. 64: a) Acquisition of an aluminium exudation, b) image following detection and classification. Chemical analysis is conducted following morphological evaluation.

Box 18: How is an elemental map of iron generated?

EFTEM stands for Energy-Filtering Transmission Electron Microscope. The electron beam of the EFTEM is transmitted through the sample (here, an ultra-thin section of brain tissue) and generates a magnified image of the sample in the imaging plane. The built-in energy filter makes it possible to generate electron-spectroscopic images (ESI). If the energy filter is set to 725 eV, for example, only those electrons reach the imaging plane that have an energy loss of 725 eV (while travelling through the sample). All other electrons are blocked by the energy filter. 725 eV is the characteristic energy loss of an electron from the beam when it 'shoots past' an iron atom at close proximity and causes a specific kind of inner-shell transition. The energy filter allows such electrons to pass through. These electrons contribute to image generation in the imaging plane. Unfortunately, other electrons manage to make it through as well. These also contribute to the image. These electrons have, however, coincidentally lost 725 eV – due to getting 'knocked about', i.e., multiple impacts occurring while the electrons pass through the sample. Without these coincidental electrons, the ESI image at 725 eV would be a straightforward elemental map of iron. This means that the image would show how iron is distributed throughout the sample.

The elemental map can be calculated using multiple ESI images. Using what is known as the 3-window method, 3 ESI images are acquired. The first one is at 725 eV, the second and third ones at somewhat lower energy losses. The second and third images are used to compute a 'background image'. This is then subtracted from the first image. This is how any background interference is removed from image number one (made up of all the contrast components caused by the coincidental electrons referred to above). The result is the desired elemental map of iron. This can now be used for making quantitative measurements.

Iron apparently plays a significant role in Parkinson's disease. Using an electron microscope, it can be shown how iron concentration has increased tremendously in the diseased part of the midbrain. The image on the left is an electron-spectroscopic image of an ultra-thin section taken from the substantia nigra** of a Parkinson's patient (inverted display). The dark areas represent the neuromelanin. The image on the right is the elemental map of iron computed using the 3-window method. The light areas show where iron has accumulated.
The quantitative evaluation in the example shown above indicates a concentration of iron that is three times as high as what it would be in the brain of a healthy person. (A LEO 912 AB EFTEM was used (acceleration voltage of 120 kV, electron energy loss of 725 eV) along with analySIS TEM EsiVision image-analysis software; images by Prof. N. Nagai and Dr. N. Nagaoka.)

**substantia nigra: located in the midbrain, a layer of large pigmented nerve cells. These cells produce dopamine and their destruction is associated with Parkinson's disease.

Whereas conventional TEM contrast is caused by selecting the angles of the scattered electrons, the electrons within the EFTEM undergo an additional energy selection using a spectrometer. Only electrons of a certain energy loss are selected (from the spectrum of the transmitted electrons) – the contrast-reducing portions are not visible. This means that only those electrons with a specific energy loss contribute to the acquisition. Thus the result offers enhanced contrast for all types of acquisition. Because this method does not affect local resolution, thin samples and those with no contrast, as well as frozen or unconventionally thick samples, can be acquired with excellent contrast. Furthermore, this approach also enables one to select electrons with very specific scattering behaviour, thus yielding structural or element-specific contrast within the image. Images such as these provide new information on the sample that used to be inaccessible.

Control and remote control of the electron microscope for acquiring, displaying and analysing the images and spectra is undertaken by special software. Much of the information can only be obtained via computer-supported processing and analysis. This method is used in the biomedical field, for example to demonstrate the presence of particles in human lung tissue or when investigating diseases involving excessive accumulation of iron. This method has become greatly prevalent in the materials sciences as well: e.g., primary influences on the mechanical properties of steel – i.e., secondary phases, such as exudations or grain-boundary phases (carbide, nitride or also intermetallic phases). Determining the exact chemical composition, size, distribution and density is most important. This information can be obtained via the EFTEM method. Conventional methods are useless in this context.

Fig. 63: Typical EDX spectrum.
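The 3-window method of box 18 amounts to extrapolating a background from the two pre-edge images and subtracting it from the image at the characteristic energy loss. The per-pixel power-law background used below is one common choice, not necessarily the one implemented in the software mentioned above; the energy values and images are placeholders.

```python
import numpy as np

def three_window_map(pre1, pre2, post, e1, e2, e3):
    """Elemental map via the 3-window method.
    pre1, pre2: ESI images at two energy losses below the edge (e1 < e2 < e3),
    post: ESI image at the characteristic energy loss (e.g. 725 eV for iron).
    The background under the edge is extrapolated per pixel with a power law
    I(E) = A * E**(-r) (an assumed model) and subtracted from the post image."""
    eps = 1e-6                                        # avoid division by / log of zero
    r = np.log((pre1 + eps) / (pre2 + eps)) / np.log(e2 / e1)
    a = (pre1 + eps) * e1 ** r
    background = a * e3 ** (-r)
    return np.clip(post - background, 0, None)        # remaining signal = element distribution

# Placeholder images and hypothetical pre-edge energy losses; in practice these
# come from the energy-filtering TEM.
shape = (512, 512)
pre1, pre2, post = (np.random.rand(*shape) + 0.5 for _ in range(3))
iron_map = three_window_map(pre1, pre2, post, e1=672.0, e2=697.0, e3=725.0)
```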

Meet the Authors:

Dr. Rainer Wegerhoff
Born in 1964 in Hückeswagen near Cologne, Dr Wegerhoff studied Biology at Bonn University and completed his PhD dissertation in the field of developmental neurobiology in 1994. Working at the Biochemical Institute in Kiel from 1994 to 1999, he led a research group which focussed on the toxicological effects of insecticides on brain development. Since 1999, Dr Wegerhoff has worked for the European headquarters of Olympus in Hamburg, responsible for training (Olympus Akademie) and qualification in the field of microscopy. He is now Section Manager for Qualification – Olympus Life and Material Science Europa GmbH.

Dr. Manfred Kässens
Born in 1964 in Haren (Niedersachsen), Dr Kässens studied Physics from 1986 to 1996 at the University in Münster (Westfalen). The focus of his diploma in Physics in 1993 was electron microscopy. Also undertaking his PhD in Physics at the University of Münster, he completed his PhD dissertation in the field of electron microscopy in 1996. In 1996, Dr Kässens joined Soft Imaging System, now called Olympus Soft Imaging Solutions GmbH, as a Key Account Manager in Sales and Support. Since 1999, he has been Marketing Communications Manager at the company.

Dr. Olaf Weidlich
Born in 1964 in Hamm (Westfalen), Dr Weidlich studied Physics and Philosophy from 1983 to 1989 at the University of Münster (Westfalen); the focus of his diploma thesis was on electron microscopy. Studying for his PhD at the University of Freiburg (Breisgau) from 1991 to 1995, Dr Weidlich's thesis discussed FTIR spectroscopy on retinal proteins. Undertaking a postdoctoral position in 1996 at the University of Arizona in Tucson (USA), Dr Weidlich worked on photosensitive proteins. He returned to Germany in 1997 to take up an intern lectureship in Physics and Chemistry in Essen (Nordrhein-Westfalen). In 1998, Dr Weidlich joined Soft Imaging System, now called Olympus Soft Imaging Solutions GmbH, as a Technical Documentation Expert and Trainer. He is now responsible for the customer courses (trainingCENTER) at the company.

Acknowledgements:
We would like to thank the following people who have provided a significant contribution to this review:

Dr. Annett Burzlaff for co-authoring parts of the chapter on fluorescence.

Michael W. Davidson (National High Magnetic Field Laboratory, Florida State University) for contributing many of the images. His websites have provided a huge pool of knowledge for the authors when drafting this review.

Esther Ahrent (Section Manager Marketing Communication – Olympus Life and Material Science Europa) for her coordination during all steps of this project.

Pictures
Figures 1–10; 12; 17; 44A; Box 13; Box 14: Data courtesy of Michael W. Davidson (National High Magnetic Field Laboratory, Florida State University).
Figure 48: Data courtesy of Dr Paulo Magalhaes and Prof. Dr Tullio Pozzan, University of Padua, Italy.
Figure 51: Data courtesy of R. Ando and Dr A. Miyawaki, RIKEN Brain Science Institute, Laboratory for Cell Function Dynamics, Wako, Saitama, Japan.
Figure 52: Data courtesy of A. Miyawaki, T. Nagai and T. Miyauchi, RIKEN Brain Science Institute, Laboratory for Cell Function Dynamics, Wako, Saitama, Japan.
Figure 53: Data courtesy of M. Faretta, Eur. Inst. Oncology, Dept. of Experimental Oncology, Flow Cytometry and Imaging Core, Milan, Italy.
Figure 60: Data courtesy of Dr Ralph Rübsam, Institute for Biology, University of Erlangen-Nürnberg, Erlangen, Germany.
Box 15, Figure 1: Data courtesy of H. Mizuno and A. Miyawaki, RIKEN Brain Science Institute, Laboratory for Cell Function Dynamics, Wako, Saitama, Japan.

Conclusion:
As is evident from this review, the combined fields of microscopy and imaging are extremely broad and there is much more that could have been discussed. For example, the subject of storage media has not been considered in great depth due to the rapid technology changes that occur within this area. Consequently, we have drawn on our expertise in microscopy and imaging to select the key basic information necessary to provide the reader with a solid foundation of knowledge within these ever expanding fields.

With the extraordinary rate of advance in technology in recent times, many new possibilities and opportunities are opening up for the application of microscopy and imaging. For example, the current escalation in the use of digital microscopy has enabled the development of virtual microscopy, which now allows entire slides to be seen as one image from any location. As new techniques are developed, users are gaining an increasingly bigger picture which in turn delivers heightened understanding in many areas of application. Who knows exactly what the future will bring, but it's certainly looking bright for microscopy and imaging users…

INCREDIBLY SIMPLE. YOU JUST HAVE TO THINK OF IT: OLYMPUS analySIS IMAGING SOFTWARE.

At the end of the day, it’s results that count. Anyone measuring and testing on a daily basis wants results directly, simply and quickly. And this is exactly what Olympus now offers with its analySIS imaging software for microscopic applications. Whether recording and analysing data or archiving and reporting, analySIS simplifies your daily procedures reliably, accurately and objectively. And the software is so user-friendly and intuitive that you feel at home with it straight away. Once configured for your application, analySIS delivers the results you need with just one mouse click. In fact, it’s so simple that you will wonder why you didn’t think of using it before. Olympus helps you solve problems easily.

For more information, contact: Olympus Life and Material Science Europa GmbH Phone: +4940237735426 E-mail: [email protected] www.olympus-europa.com

