Book · March 2018
Mosab Nouraldein Mohammed Hamad, Elsheikh Abdallah Elbadri University

     

      !" # $"          %  &         '"  ()  *+,- *+,.        &  "  (     ( /  *+,. 

      

          

     

          

                                                                                                                                     

      

     !"#$        %  " ! &      ' &     (  )*!  & " " *)+,-!

       

 .! /  !  0  ! "     . 1,)* %   "  !  &       ' &    (     " " 1,)* ESSENTIAL EQUIPMENTS IN MEDICAL LABORATORY

Mosab Nouraldein Mohammed Hamad, B.Sc. (Honours), M.Sc. in Medical Parasitology; Lecturer of Medical Parasitology, Faculty of Health Sciences, Elsheikh Abdallah Elbadri University. Corresponding author: [email protected]

Ahmed Mustafa Basheir, Soba University Hospital

Table of Contents

Dedication
Acknowledgement
Microscope
Centrifuge
Water bath
Spectrophotometer
Distiller
Microtomes
References

Dedication

To my father, Nouraldein Mohammed Hamad, and to my great mother.

 ACKNOWLEDGEMENT:

To my friend Hesham Abdel Hameed ibn Edries and to my colleagues at the Faculty of Health Sciences, Elsheikh Abdallah Elbadri University.

Microscope:

A microscope (from the Ancient Greek μικρός (mikrós), "small", and σκοπεῖν (skopeîn), "to look" or "see") is an instrument used to see objects that are too small to be seen by the naked eye. Microscopy is the science of investigating small objects and structures using such an instrument. Microscopic means invisible to the eye unless aided by a microscope. (1)

From ancient times, man has wanted to see things far smaller than could be perceived with the naked eye. Although the first use of a lens is a bit of a mystery, it's now believed that use of lenses is more modern than previously thought.

However, it has been known for over 2,000 years that glass bends light. In the 2nd century AD, Claudius Ptolemy described a stick appearing to bend in a pool of water and accurately recorded the angles to within half a degree. He then calculated the refraction constant of water with considerable accuracy.

By the 1st century AD, the Romans had glass and were looking through it and testing it. They experimented with different shapes of clear glass, and one of their samples was thick in the middle and thin at the edges.

They discovered that if you held one of these "lenses" over an object, the object would look larger. These early lenses were called magnifiers or burning glasses. The word lens is derived from the Latin word for lentil, as these lenses were named for their resemblance to the shape of a lentil bean.

At the same time, Seneca described actual magnification by a globe of water. "Letters, however small and indistinct, are seen enlarged and more clearly through a globe of glass filled with water." The lenses were not used much until the end of the 13th century when spectacle makers were producing lenses to be worn as glasses. Then, around 1600, it was discovered that optical instruments could be made by combining lenses.

The early simple "microscopes", which were really only magnifying glasses, had a single power, usually about 6x to 10x. Fleas and other tiny insects were very common and interesting subjects, and these early magnifiers were hence called "flea glasses".


Sometime during the 1590s, two Dutch spectacle makers, Zacharias Janssen and his father Hans, started experimenting with these lenses. They put several lenses in a tube and made a very important discovery: the object near the end of the tube appeared to be greatly enlarged, much larger than any simple magnifying glass could achieve by itself.

Their first microscopes were more of a novelty than a scientific tool, since the maximum magnification was only around 9x and the images were somewhat blurry. Although no Janssen microscopes survive, an instrument made for Dutch royalty was described as being composed of "3 sliding tubes, measuring 18 inches long when fully extended, and two inches in diameter". The microscope was said to have a magnification of 3x when fully closed and 9x when fully extended.

It was Antony Van Leeuwenhoek (1632-1723), a Dutch draper and scientist, and one of the pioneers of microscopy who in the late 17th century became the first man to make and use a real microscope.

He made his own simple microscopes, which had a single lens and were hand-held. Van Leeuwenhoek achieved greater success than his contemporaries by developing ways to make superior lenses, grinding and polishing a small glass ball into a lens with a magnification of 270x, the finest known at that time (other microscopes of the time were lucky to achieve 50x magnification). He used this lens to make the world's first practical microscope.

Leeuwenhoek's microscope used a single convex glass lens attached to a metal holder and was focused using screws. Antony van Leeuwenhoek became more involved in science and, with his new improved microscope, was able to see things that no one had ever seen before. He saw bacteria, yeast, blood cells and many tiny animals swimming about in a drop of water. People did not realize that magnification might reveal structures that had never been seen before; the idea that all life might be made up of tiny components unseen by the unaided eye was simply not even considered.




Compound Microscopes

To increase the power of a single-lens microscope, the focal length has to be reduced. However, a reduction in focal length necessitates a reduction of the lens diameter, and after a point, the lens becomes difficult to see through.

To solve this problem, the compound microscope system was invented in the 17th century. This type of microscope incorporates more than one lens so that the image magnified by one lens can be further magnified by another.

Today, the term "microscope" is generally used to refer to this type of compound microscope. In the compound microscope, the lens closer to the object being viewed is referred to as the "objective", while the lens closer to the eye is called the "eyepiece".

The function of any microscope is to enhance resolution. The microscope is used to create an enlarged view of an object so that we can observe details not otherwise possible with the human eye. Because of the enlargement, resolution is often confused with magnification, which refers only to the size of an image. In general, the greater the magnification, the greater the resolution, but this is not always true: several practical limitations of lens design can result in increased magnification without increased resolution. The reason magnification and resolution can diverge is the limited ability of the human eye to distinguish two closely spaced objects.


Englishman Robert Hooke is credited with the microscopic milestone of discovering the basic unit of all life, the cell. In the mid-17th century, Hooke saw a structural mesh while studying a sample of cork that reminded him of the small monastic rooms called cells, an observation he recorded in his book Micrographia. Hooke is also credited with being the first to use the basic three-lens configuration that is still used in microscopes today.

All the early microscopists saw quite distorted images due to the low quality of the glass and imperfect shape of their lenses. Little was done to improve the microscope until the middle of the 19th century when great strides were made and quality instruments like today's microscope emerged. Companies in Germany like Zeiss and an American company founded by Charles Spencer began producing fine optical instruments. We can also mention Ernst Abbe, who carried out a theoretical study of optical principles, and Otto Schott, who conducted research on optical glass. (2)

Light microscope:

A light microscope (LM) is an instrument that uses visible light and magnifying lenses to examine small objects not visible to the naked eye, or in finer detail than the naked eye allows. Magnification, however, is not the most important issue in microscopy. Mere magnification without added detail is scientifically useless, just as endlessly enlarging a small photograph may not reveal any more detail, but only larger blurs. The usefulness of any microscope is that it produces better resolution than the eye. Resolution is the ability to distinguish two objects as separate entities, rather than seeing them blurred together as a single smudge. The history of microscopy has revolved largely around technological advances that have produced better resolution. (3)

The advancement of light microscopy also required methods for preserving plant and animal tissues and making their cellular details more visible, methods collectively called histotechnique (from histo, meaning "tissue"). In brief, classical histotechnique involves preserving a specimen in a fixative, such as formalin, to prevent decay; embedding it in a block of paraffin and slicing it very thinly with an instrument called a microtome; removing the paraffin with a solvent; and then staining the tissue, usually with two or more dyes. The slices of tissue, called histological sections, are typically thinner than a single cell. The colors of a prepared tissue are not natural colors, but they make the tissue's structural details more visible. A widely used stain combination called hematoxylin and eosin, for example, typically colors cell nuclei violet and the cytoplasm pink.
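The resolution limit discussed here is conventionally quantified by Abbe's diffraction formula; the numerical example below is the standard textbook illustration, not a figure from this book:

```latex
% Abbe diffraction limit: smallest resolvable separation d for light of
% wavelength \lambda through an objective of numerical aperture NA = n\sin\theta
d = \frac{\lambda}{2\,\mathrm{NA}}, \qquad \mathrm{NA} = n\sin\theta
% Example: \lambda \approx 550\,\mathrm{nm} (green light) with
% \mathrm{NA} \approx 1.4 (oil immersion) gives d \approx 200\,\mathrm{nm}.
```

This is why no amount of extra magnification beyond the diffraction limit adds detail: it only enlarges the blur.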


Other microscope variants

There are many variants of the compound optical microscope design for specialized purposes. Some of these are physical design differences allowing specialization for certain purposes:

- Stereo microscope, a low-powered microscope which provides a stereoscopic view of the sample, commonly used for dissection.
- Comparison microscope, which has two separate light paths allowing direct comparison of two samples via one image in each eye.
- Inverted microscope, for studying samples from below; useful for cell cultures in liquid, or for metallography.
- Fiber optic connector inspection microscope, designed for connector end-face inspection.

Other microscope variants are designed for different illumination techniques:

- Petrographic microscope, whose design usually includes a polarizing filter, rotating stage and gypsum plate to facilitate the study of minerals or other crystalline materials whose optical properties can vary with orientation.
- Polarizing microscope, similar to the petrographic microscope.
- Phase contrast microscope, which applies the phase contrast illumination method.
- Epifluorescence microscope, designed for analysis of samples which include fluorophores.
- Confocal microscope, a widely used variant of epifluorescent illumination which uses a scanning laser to illuminate a sample for fluorescence.
- Student microscope, an often low-power microscope with simplified controls and sometimes low-quality optics, designed for school use or as a starter instrument for children.
- Ultramicroscope, an adapted light microscope that uses light scattering to allow viewing of tiny particles whose diameter is below or near the wavelength of visible light (around 500 nanometers); mostly obsolete since the advent of electron microscopes. (4)


A general biological microscope mainly consists of an objective lens, ocular lens, lens tube, stage, and reflector. An object placed on the stage is magnified through the objective lens. When the target is focused, a magnified image can be observed through the ocular lens. (5)


Uses of Light Microscopy

Microscopes are essential tools for scientists. They are used in microbiology, materials science, mineralogy and medicine. A combination of staining and light microscopy can allow scientists to identify different kinds of bacteria. Staining involves adding special dyes to a smear of cells; these stains are diagnostic for different kinds of cell walls. Gram staining, for instance, uses crystal violet to stain Gram-positive bacteria and safranin to stain Gram-negative bacteria. These show up in the light microscope as purple Gram-positive cells and pink Gram-negative cells. Being able to identify bacteria in this way is helpful, as many Gram-negative cells are associated with infection and disease. (6)


A primitive microscope was invented in 1590 in Middelburg, Netherlands, by the eyeglass makers Hans Lippershey, Zacharias Janssen and his father Hans Janssen. Later, Galileo Galilei improved the instrument by using a set of aligned lenses and called it "occhiolino", which means "little eye". In 1625, Giovanni Faber named Galileo Galilei's "occhiolino" the compound microscope, and this name remains today.

The optical microscope, the most common type of microscope, contains several parts with specific functions:

1. Eyepiece: contains the ocular lens, which usually provides a magnification power of 10x to 15x. This is the lens you look through.

2. Nosepiece: holds the objective lenses and can be rotated easily to change magnification.

3. Objective lenses: usually there are three or four objective lenses on a microscope, with 4x, 10x, 40x and 100x magnification powers. To obtain the total magnification of an image, multiply the eyepiece lens power by the objective lens power. So, if you couple a 10x eyepiece lens with a 40x objective lens, the total magnification is 10 × 40 = 400 times.
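The rule above is a simple product; a minimal sketch in Python (the function name is ours, for illustration):

```python
# Total magnification of a compound microscope is the product of the
# eyepiece (ocular) power and the objective power.

def total_magnification(eyepiece_power: float, objective_power: float) -> float:
    """Return the overall magnification for a given lens pairing."""
    return eyepiece_power * objective_power

# A 10x eyepiece combined with each of the common objective powers:
for objective in (4, 10, 40, 100):
    print(f"10x eyepiece + {objective}x objective -> "
          f"{total_magnification(10, objective):.0f}x total")
```

With the standard 4x/10x/40x/100x objectives and a 10x eyepiece this yields 40x, 100x, 400x and 1000x total magnification.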

4. Stage clips: hold the slide in place.

5. Stage: it is a flat platform that supports the slide being analyzed.

6. Diaphragm: it controls the intensity and size of the cone of light projected onto the specimen. As a rule of thumb, the more transparent the specimen, the less light is required.

7. Light source: it projects light upwards through the diaphragm, slide and lenses.

8. Base: supports the microscope.

9. Condenser lens: it helps to focus the light onto the sample being analyzed. It is particularly helpful when coupled with the highest-power objective lens.

10. Arm: supports the microscope when carried.

11. Coarse adjustment knob: when the knob is turned, the stage moves up or down for coarse focusing.

12. Fine adjustment knob: used to fine-tune the focus. (7)


Dissecting microscope:

A dissecting microscope, also called a stereoscope, is a stereomicroscope often used to obtain a 3D view of a specimen. As the name suggests, it is usually used for dissecting specimens, but it has other functions as well.

It has two eyepieces linked to two lens arrays that are arranged to produce a stereoscopic image. This allows the researcher to see the object on the microscope's platform clearly in 3D. The stage of a dissecting microscope is usually large, with a hollow for securing the specimens being examined.

The magnification of this microscope is commonly less than a hundred times, lower than that of a compound microscope, and it is adjustable.

Other models incorporate a third lens path that can capture videos or photographs. These can also be used for projection in classroom demonstrations, helping students visualize the sample clearly.

For illumination, the dissecting microscope uses one of two types of light: transmitted light or direct illumination. Opaque objects placed on the microscope platform or stage can be lit directly by the illuminator. In this case, the illuminator can be mounted either in an opening on the arm of the dissecting microscope or in an adapter ring attached to a separate transformer.


Alternatively, light from a source such as a bulb can be reflected through a transparent object from underneath using a substage mirror. This method of illumination requires a clear glass insert in the microscope platform. In most cases, however, an opaque stage insert with black and white halves is used together with direct lighting.

Manufacturers produce different kinds of objectives, stands, ocular lenses, arms and light sources so that dissecting scopes can be used in many applications. In addition, the dissecting microscope can be used to study archaeological artifacts, geological samples and many other objects.

In all cases the microscope provides modest magnification so that objects can be viewed in detail without going into depth. Viewing comparatively large, solid surfaces or specimens is the primary application of a dissecting microscope.

This type of microscope is a helpful device for researchers because it allows the specimen being studied to be manipulated. It is also suited to detailed work such as watchmaking, microsurgery, coin inspection and circuit board inspection. (8)


Phase contrast microscope 





With a conventional biological microscope, it is difficult to observe colorless, transparent cells while they are alive. A phase contrast microscope makes it possible by utilizing two characteristics of light, diffraction and interference, to visualize specimens based on brightness differences (contrast).


Principle:

With regard to periodic movements, such as sinusoidal waves, the phase represents the portion of the wave that has elapsed relative to the origin. Light is also an oscillation, and when it passes through an object a phase difference arises between the light that has passed through the object (diffracted light) and the remaining light (direct light). Even if the object is colorless and transparent, there is still a change in phase when light passes through it. This phase difference is converted into brightness differences in order to observe specimens.

Features:

- Transparent cells can be observed without staining them, because the phase contrast can be converted into brightness differences.
- Because it is not necessary to stain cells, cell division and other processes can be observed in a living state.

Structure:

Because diffracted light is too weak to be normally observed by the eye, a phase plate is located at the focal point of light between the objective lens and the image surface so that only the phase of the direct light changes. This generates contrast on the image surface. Structural features include a ring aperture, instead of a pinhole, on the focal plane of the converging lens and a phase plate on the rear focal plane of the objective lens. (9)
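In symbols, this is the standard textbook sketch of Zernike phase contrast (the quarter-wave figure is the conventional value, not stated in the source): the specimen leaves the diffracted wave D roughly a quarter wavelength out of step with the surround (direct) wave S, and the phase plate shifts S by a further quarter wavelength so that the two interfere with a visible intensity difference:

```latex
% Surround (direct) wave S and diffracted wave D leave the specimen about
% \lambda/4 out of phase; the phase plate shifts S by a further \lambda/4,
% so their interference turns phase contrast into amplitude contrast:
I \;\propto\; \bigl|\, S\,e^{\pm i\pi/2} + D \,\bigr|^{2}
```

The sign of the plate's shift determines whether the specimen appears darker (positive phase contrast) or brighter than the background.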


Dark field microscope:

Most people who have survived a biology class know what a light field microscope is. This type of scope uses bright field illumination, meaning it floods the specimen with white light from the condenser without any interference. Thus the specimen shows up as a dark image on a light background (or white field if you will).

This type of unit works best with specimens that have natural color pigments. The samples need to be thick enough to absorb the incoming light, so staining is usually paired with this type of microscope.

Yet what if the specimen is light colored or translucent, like plankton? It certainly won't stand out against a strong white background. Additionally, some specimens are just too thin. They cannot absorb any of the light that passes through them, so they appear invisible to the user. This is where the concept of dark field illumination comes in.

Rather than using direct light from the condenser, one uses an opaque disk to block the light into just a few scattered beams. Now the background is dark, and the sample reflects the light of the beams only. This results in a light colored specimen against a dark background (dark field), perfect for viewing clear or translucent details.

On a grand scale, the same thing happens every day when you look up at the sky. Do the stars disappear when it's light out? Of course not! They're still there, their brilliance blotted out by the mid-day sun.

Dark field microscopes are used in a number of different ways to view a variety of specimens that are hard to see in a light field unit. Live bacteria, for example, are best viewed with this type of microscope, as these organisms are very transparent when unstained.

There are multitudes of other ways to use dark field illumination, often when the specimen is clear or translucent. Some examples:

- Living or lightly stained transparent specimens
- Single-celled organisms
- Live blood samples
- Aquatic environment samples (from seawater to pond water)
- Living bacteria
- Hay or soil samples
- Pollen samples
- Certain molecules, such as caffeine crystals

Dark field microscopy makes many invisible specimens appear visible. Most of the time the specimens invisible to bright field illumination are living, so you can see how important it is to bring them into view.

No one system is perfect, and dark field microscopy may or may not appeal to you depending on your needs.

Some advantages of using a dark field microscope are:

- Extremely simple to use
- Inexpensive to set up
- Very effective at showing the details of live and unstained samples

Some of the disadvantages are:

- Limited color (certain colors will appear, but they're less accurate and most images will be essentially black and white)
- Images can be difficult to interpret for those unfamiliar with dark field microscopy

Although surface details can be very apparent, the internal details of a specimen often don't stand out as much with a dark field setup.

How to Make a Dark Field Microscope

To create a dark field, an opaque circle called a patchstop is placed in the condenser of the microscope. The patchstop prevents direct light from reaching the objective lens, and the only light that does reach the lens is reflected or refracted by the specimen.

If you want to make a dark field microscope you'll first need a regular light microscope. Below is your full list of "ingredients":

- Microscope
- Hole punch
- Black construction paper
- Transparency film
- Glue
- Scissors
- Pen


Now use the following steps to make your patchstop:

- Set up your microscope and choose the lowest-power objective lens.
- Set the eyepiece aside somewhere safe.
- Open the diaphragm as wide as possible. Then slowly close it until it just encroaches on the circle of visible light.
- Now bend over and take a look at the diaphragm from below. See that opening? It's only slightly smaller than the finished patchstop you'll create.

- Punch a few circles in the black construction paper with the hole punch. Measure one against the diaphragm opening. If it's more than 10% larger, cut it down to about that size (10% larger than the diaphragm opening). If it's smaller, cut out a larger circle.
- Cut a 5 cm square of transparency film.
- Glue the black circle onto the transparency film, about 2 cm from the corner of the square.
- In that free 2 cm of film, write the correct magnification power of your objective to mark the patchstop.
- Repeat the above steps for all the objective powers except the oil immersion lenses.

Now use your patchstop to turn a light field unit into a dark field microscope:

- Select the correct patchstop for the objective power to be used.
- Slip the patchstop between the filter holder and condenser. If your microscope has no filter holder, hold it manually below the condenser.
- Remove the eyepiece.
- Open the diaphragm and move the patchstop until the light is blocked entirely. Use tape to secure it if there is no condenser on your microscope.
- Replace the eyepiece and examine the sample. (10)


Fluorescent Microscope

A fluorescence microscope is much the same as a conventional light microscope with added features to enhance its capabilities.

- The conventional microscope uses visible light (400-700 nanometers) to illuminate and produce a magnified image of a sample.

- A fluorescence microscope, on the other hand, uses a much higher-intensity light source which excites a fluorescent species in a sample of interest. This fluorescent species in turn emits lower-energy light of a longer wavelength, and it is this emitted light, rather than the original light source, that produces the magnified image.

Fluorescent microscopy is often used to image specific features of small specimens such as microbes. It is also used to visually enhance 3-D features at small scales. This can be accomplished by attaching fluorescent tags to antibodies that in turn attach to targeted features, or by staining in a less specific manner. When the reflected light and background fluorescence are filtered out in this type of microscopy, only the targeted parts of a given sample are imaged. This gives an investigator the ability to visualize desired organelles or unique surface features of a sample of interest. Confocal fluorescent microscopy is most often used to accentuate the 3-D nature of samples. This is achieved by using powerful light sources, such as lasers, that can be focused to a pinpoint. This focusing is done repeatedly throughout one level of a specimen after another. Most often an image reconstruction program pieces the multi-level image data together into a 3-D reconstruction of the targeted sample.


How does Fluorescent Microscopy Work?


In most cases the sample of interest is labeled with a fluorescent substance known as a fluorophore and then illuminated through the lens with the higher-energy source. The illumination light is absorbed by the fluorophores (now attached to the sample) and causes them to emit longer-wavelength, lower-energy light. This fluorescent light can be separated from the surrounding radiation with filters designed for that specific wavelength, allowing the viewer to see only that which is fluorescing.

The basic task of the fluorescence microscope is to let excitation light radiate the specimen and then sort out the much weaker emitted light from the image. First, the microscope has a filter that only lets through radiation with the specific wavelength that matches your fluorescing material. The radiation collides with the atoms in your specimen and electrons are excited to a higher energy level. When they relax to a lower level, they emit light. To become detectable (visible to the human eye) the fluorescence emitted from the sample is separated from the much brighter excitation light in a second filter. This works because the emitted light is of lower energy and has a longer wavelength than the light that is used for illumination.
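The filtering described above works because an emitted photon carries less energy than an excitation photon (E = hc/λ). A quick check in Python, using illustrative wavelengths of our choosing (488 nm excitation and 520 nm emission, typical of a green fluorophore, are not values from the source):

```python
# Photon energy E = h*c / wavelength: longer wavelength => lower energy.
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_energy_joules(wavelength_nm: float) -> float:
    """Energy of a single photon of the given wavelength, in joules."""
    return H * C / (wavelength_nm * 1e-9)

excitation = photon_energy_joules(488)  # blue excitation light
emission = photon_energy_joules(520)    # green emitted fluorescence

# The emitted light is at a longer wavelength, hence lower energy --
# this Stokes shift is what lets a filter separate it from the
# much brighter excitation light.
print(excitation > emission)  # True
```

The second filter in the microscope exploits exactly this wavelength gap between excitation and emission.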

Most of the fluorescence microscopes used in biology today are epi-fluorescence microscopes, meaning that both the excitation and the observation of the fluorescence occur above the sample. Most use a xenon or mercury arc-discharge lamp as the more intense light source.

Applications:

These microscopes are often used for:

- Imaging structural components of small specimens, such as cells
- Conducting viability studies on cell populations (are they alive or dead?)
- Imaging the genetic material within a cell (DNA and RNA)
- Viewing specific cells within a larger population with techniques such as FISH (11)

Electron microscope:


The electron microscope is a type of microscope that uses a beam of electrons to create an image of the specimen. It is capable of much higher magnifications and has a greater resolving power than a light microscope, allowing it to see much smaller objects in finer detail. They are large, expensive pieces of equipment, generally standing alone in a small, specially designed room and requiring trained personnel to operate them.

The history of the electron microscope:

By the middle of the 19th century, microscopists had accepted that it was simply not possible to resolve structures of less than half a micrometre with a light microscope because of Abbe's formula, but the development of the cathode ray tube was literally about to change the way they looked at things: by using electrons instead of light! Hertz (1857-94) suggested that cathode rays were a form of wave motion, and Wiechert, in 1899, found that these rays could be concentrated into a small spot by the use of an axial magnetic field produced by a long solenoid. But it was not until 1926, when Busch showed theoretically that a short solenoid converges a beam of electrons in the same way that glass can converge the light of the sun, that a direct comparison was made between light and electron beams. Busch should probably therefore be known as the father of electron optics.

In 1931 the German engineers Ernst Ruska and Maximillian Knoll succeeded in magnifying an electron image. This was, in retrospect, the moment of the invention of the electron microscope, but the first prototype was actually built by Ruska in 1933 and was capable of resolving to 50 nm. Although it was primitive and not really fit for practical use, Ruska was recognised some 50 years later by the award of a Nobel Prize. The first commercially available electron microscope was built in England by Metropolitan Vickers for Imperial College, London, and was called the EM1, though it never surpassed the resolution of a good optical microscope.

The early electron microscopes did not excite the optical microscopists because the electron beam, which had a very high current density, was concentrated into a very small area, became very hot, and therefore charred any non-metallic specimens that were examined. When it was found that biological specimens could be successfully examined in the electron microscope after treating them with osmium and cutting very thin slices of the sample, the electron microscope began to appear a viable proposition. At the University of Toronto, in 1938, Eli Franklin Burton and students Cecil Hall, James Hillier and Albert Prebus constructed the first electron microscope in the New World. This was an effective, high-resolution instrument, the design of which eventually led to what was to become known as the RCA (Radio Corporation of America) range of very successful microscopes.

Unfortunately, the outbreak of the Second World War in 1939 held back their further development somewhat, but within 20 years of the end of the war routine commercial electron microscopes were capable of 1 nm resolution. (12)

How electron microscopes work


If you've ever used an ordinary microscope, you'll know the basic idea is simple. There's a light at the bottom that shines upward through a thin slice of the specimen. You look through an eyepiece and a powerful lens to see a considerably magnified image of the specimen (typically 10-200 times bigger). So there are essentially four important parts to an ordinary microscope:

1. The source of light.
2. The specimen.
3. The lenses that make the specimen seem bigger.
4. The magnified image of the specimen that you see.

In an electron microscope, these four things are slightly different.

1. The light source is replaced by a beam of very fast-moving electrons.
2. The specimen usually has to be specially prepared and held inside a vacuum chamber from which the air has been pumped out (because electrons do not travel very far in air).

3. The lenses are replaced by a series of coil-shaped electromagnets through which the electron beam travels. In an ordinary microscope, the glass lenses bend (or refract) the light beams passing through them to produce magnification. In an electron microscope, the coils bend the electron beams the same way.

4. The image is formed as a photograph (called an electron micrograph) or as an image on a TV screen.

That's the basic, general idea of an electron microscope. But there are actually quite a few different types of electron microscopes and they all work in different ways. The three most familiar types are called transmission electron microscopes (TEMs), scanning electron microscopes (SEMs), and scanning tunneling microscopes (STMs).

Transmission electron microscopes (TEMs)

A TEM has a lot in common with an ordinary optical microscope. You have to prepare a thin slice of the specimen quite carefully (it's a fairly laborious process) and sit it in a vacuum chamber in the middle of the microscope. When you've done that, you fire an electron beam down through the specimen from a giant electron gun at the top. The gun uses electromagnetic coils and high voltages (typically from 50,000 to several million volts) to accelerate the electrons to very high speeds. Thanks to our old friend wave-particle duality, electrons (which we normally think of as particles) can behave like waves (just as waves of light can behave like particles). The faster they travel, the smaller the waves they form and the more detailed the images they show up. Having reached top speed, the electrons zoom through the specimen and out the other side, where more coils focus them to form an image on screen (for immediate viewing) or on a photographic plate (for making a permanent record of the image). TEMs are the most powerful electron microscopes: we can use them to see things just 1 nanometer in size, so they effectively magnify by a million times or more.
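The "faster electrons, smaller waves" point above can be made quantitative with the non-relativistic de Broglie relation. This is a simplified sketch: at the hundreds of kilovolts a real TEM uses, a relativistic correction of a few percent would also apply.

```python
import math

# Physical constants (SI units)
H = 6.626e-34    # Planck constant, J·s
M_E = 9.109e-31  # electron rest mass, kg
Q_E = 1.602e-19  # elementary charge, C

def electron_wavelength(volts):
    """Non-relativistic de Broglie wavelength (in metres) of an
    electron accelerated through the given potential difference."""
    return H / math.sqrt(2 * M_E * Q_E * volts)

# A 100 kV beam gives a wavelength of roughly 4 picometres, thousands
# of times shorter than visible light (~400-700 nm), which is why a
# TEM can resolve nanometre-scale detail.
wl_100kv = electron_wavelength(100_000)
```

Raising the accelerating voltage shortens the wavelength further, which is the sense in which faster electrons form "smaller waves".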

Scanning electron microscopes (SEMs)

Most of the funky electron microscope images you see in books (things like wasps holding microchips in their mouths) are not made by TEMs but by scanning electron microscopes (SEMs), which are designed to make images of the surfaces of tiny objects. Just as in a TEM, the top of a SEM is a powerful electron gun that shoots an electron beam down at the specimen. A series of electromagnetic coils pull the beam back and forth, scanning it slowly and systematically across the specimen's surface. Instead of traveling through the specimen, the electron beam effectively bounces straight off it. The electrons that are reflected off the specimen (known as secondary electrons) are directed at a screen, similar to a cathode-ray TV screen, where they create a TV-like picture. SEMs are generally about 10 times less powerful than TEMs (so we can use them to see things about 10 nanometers in size). On the plus side, they produce very sharp, 3D images (compared to the flat images produced by TEMs) and their specimens need less preparation.

Scanning tunneling microscopes (STMs)

Among the newest electron microscopes, STMs were invented by Gerd Binnig and Heinrich Rohrer in 1981. Unlike TEMs, which produce images of the insides of materials, and SEMs, which show up 3D surfaces, STMs are designed to make detailed images of the atoms or molecules on the surface of something like a crystal. They work differently to TEMs and SEMs too: they have an extremely sharp metallic probe that scans back and forth across the surface of the specimen. As it does so, electrons try to wriggle out of the specimen and jump across the gap, into the probe, by an unusual phenomenon called "tunneling". The closer the probe is to the surface, the easier it is for electrons to tunnel into it, the more electrons escape, and the greater the tunneling current. The microscope constantly moves the probe up or down by tiny amounts to keep the tunneling current constant. By recording how much the probe has to move, it effectively measures the peaks and troughs of the specimen's surface. A computer turns this information into a map of the specimen that shows up its detailed atomic structure. One big drawback of ordinary electron microscopes is that they produce amazing detail using high-energy beams of electrons, which tend to damage the objects they're imaging. STMs avoid this problem by using much lower energies.
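The constant-current feedback loop described above can be sketched as a toy numerical model. This is purely illustrative, not instrument code: the exponential current law, the decay constant, and the setpoint and gain values are all assumptions chosen for demonstration.

```python
import math

def tunneling_current(gap_nm, i0=1.0, kappa=10.0):
    # Tunneling current falls off exponentially with the tip-surface gap.
    return i0 * math.exp(-kappa * gap_nm)

def scan(surface_heights, setpoint=0.01, kappa=10.0, gain=0.05, steps=200):
    """Constant-current mode: at each point, nudge the tip height z
    until the current matches the setpoint, then record z."""
    z = surface_heights[0] + 1.0  # start well above the surface
    profile = []
    for h in surface_heights:
        for _ in range(steps):
            i = tunneling_current(z - h, kappa=kappa)
            # Current too high => tip too close => retract (and vice versa).
            z += gain * math.log(i / setpoint) / kappa
        profile.append(z)
    return profile
```

Subtracting the constant setpoint gap from the recorded tip heights recovers the surface profile, which is exactly how the instrument "measures the peaks and troughs" without ever touching the sample.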

Atomic force microscopes (AFMs)

If you think STMs are amazing, AFMs (atomic force microscopes), also invented by Gerd Binnig, are even better! One of the big drawbacks of STMs is that they rely on electrical currents (flows of electrons) passing through materials, so they can only make images of conductors. AFMs don't suffer from this problem because, although some still use tunneling, they don't rely on a current flowing between the specimen and a probe, so we can use them to make atomic-scale images of materials such as plastics, which don't conduct electricity.

An AFM is a microscope with a little arm called a cantilever with a tip on the end that scans across the surface of a specimen. As the tip sweeps across the surface, the force between the atoms from which it's made and the atoms on the surface constantly changes, causing the cantilever to bend by minute amounts. The amount by which the cantilever bends is detected by bouncing a laser beam off its surface. By measuring how far the laser beam travels, we can measure how much the cantilever bends and the forces acting on it from moment to moment, and that information can be used to figure out and plot the contours of the surface. Other versions of AFMs (like the one illustrated here) make an image by measuring a current that "tunnels" between the scanning tip and a tunneling probe mounted just behind it. AFMs can make images of things at the atomic level and they can also be used to manipulate individual atoms and molecules, one of the key ideas in nanotechnology. (13)

Centrifuge


Separations are a critical step in your workflow; thus it's important to consider the centrifuge requirements and technical specifications for your applications, from selecting the appropriate speed and g-force to exploring the latest trends in centrifugation. A wide array of Thermo Scientific™ centrifuges and their innovative rotors are available for all your processing needs, supporting labware from microtubes to large-capacity bottles, all designed to deliver outstanding performance spin after spin. (14) A centrifuge is an apparatus that rotates at high speed and by centrifugal force separates substances of different densities, such as milk and cream. (15)
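Speed and g-force are related by a standard laboratory rule of thumb, RCF (in multiples of g) ≈ 1.118 × 10⁻⁵ × r(cm) × rpm². A small sketch, using hypothetical rotor numbers for illustration:

```python
def relative_centrifugal_force(rpm, radius_cm):
    """RCF in multiples of g, from rotor speed (rpm) and the radius
    from the spindle axis to the sample (cm)."""
    return 1.118e-5 * radius_cm * rpm ** 2

def rpm_for_rcf(target_rcf, radius_cm):
    """Inverse: rotor speed needed to reach a target g-force."""
    return (target_rcf / (1.118e-5 * radius_cm)) ** 0.5

# A hypothetical 10 cm rotor at 3,000 rpm pulls roughly 1,000 x g:
rcf = relative_centrifugal_force(3000, 10)
```

Because protocols quote g-force while centrifuge dials read rpm, this conversion (or the nomogram printed in rotor manuals) is needed whenever a method is moved between rotors of different radii.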

Bench top centrifuges capable of speeds of about 3,000 rpm have been in use since the mid-1800s. Early instruments were hand powered, but in 1912, with the introduction of electric centrifuges, that changed. The early centrifuges were mostly used for non-biological applications, such as separating milk and collecting precipitates.

The process of centrifugation can be traced back to the mid-15th century, when hand-driven centrifuge systems were used to separate milk.

In 1864, this ad hoc system of milk separation was commercialized by Antonin Prandtl, who developed the first dairy centrifuge for the purpose of separating cream from milk.


The potential of the centrifuge in the laboratory setting was first exploited by Friedrich Miescher. In 1869, Miescher used a crude centrifuge system to isolate a cell organelle. This process led to the discovery of an important new class of biological constituents, later to be known as nucleic acids.

The work of Miescher was quickly recognized and developed further by others. In 1879, the first continuous centrifugal separator was demonstrated by Gustaf de Laval. This development made the commercialization of the centrifuge a possibility for the first time.

The next major step forward in the evolution of the centrifuge came during the 1920s and 1930s, when the ultracentrifuge, capable of achieving 900,000 g, was developed by the Swedish colloid chemist Theodor Svedberg. Models capable of reaching 900,000 g tended to have small rotors, so ultracentrifuges with larger rotors that could operate at around 260,000 g were more commonly used in routine work. Svedberg used his centrifuge to determine the molecular weight and subunit structure of highly complex proteins, such as hemoglobin. This information started a revolution in the understanding of the structures of proteins. In 1926 Svedberg received a Nobel Prize for the invention of the ultracentrifuge and for his work in colloid chemistry.

Svedberg's ultracentrifuge, however, was essentially an analytical instrument, specifically designed for the accurate recording of sedimentation boundaries. It would have been impossible to convert it for preparative processes for the simple reason that its rotor axis was horizontal. The transition from these analytical instruments to the modern preparative ultracentrifuges came through the efforts of the French physicist Emile Henriot, who was able to achieve very high rotational speeds by means of a bearingless top, driven and supported by compressed air.

Interest in the isolation of viruses brought Edward Pickels and Johannes Bauer together to build the first high-speed vacuum centrifuge suitable for the study of filterable viruses. Later, Pickels went on to develop the more convenient, electrically driven ultracentrifuge.


During the early 1930s Martin Behrens developed improved centrifugation techniques using density gradients of nonaqueous solvents for the separation of nuclei. His approach to tissue fractionation aimed to isolate one or more identifiable components from disrupted cells that could be physically and chemically characterized.

In 1942 Albert Claude and James Potter published a landmark paper, Isolation of Chromatin Threads from the Resting Nucleus of Leukemic Cells. This paper outlined a series of centrifugation steps in which either the supernate or the sediment was collected until chromatin threads were retrieved from the final sediment.

The early 1950s saw the introduction of density gradient centrifugation for tissue fractionation, a process developed by a plant virologist, Myron K. Brakke, working at the Brooklyn Botanic Garden.

In 1962 Netheler & Hinz Medizintechnik, a company based in Hamburg, Germany, and known today as Eppendorf, developed the first microcentrifuge for laboratory use. This microliter system (model 3200) was introduced for use in routine analytical labs on a microliter scale, and offered just one dial to control centrifuge time. The Microliter System was the basis for a broad range of tools for the molecular laboratory, which were subsequently developed by a variety of biotech and labware companies.

In 1971 HEINKEL developed the HF, the first inverting filter centrifuge.

In 1976 the world's first microprocessor-controlled centrifuge was launched at ACHEMA by Hettich. This innovation was considered ahead of its time, and arrived many years before this technology became standard.

During the 1980s, Beckman launched its first floor-model ultracentrifuges.

During the 1990s, Beckman launched the Avanti high-performance centrifuge, which went on to be one of the most popular centrifuge models in history. Also during this decade, the first centrifuge capable of robotic operation was developed by Hettich. This centrifuge also offered PC-control and adjustable rotor positioning. In 1991 HEINKEL introduced Hyper-centrifugation for high pressure dewatering. In 1992 the PAC (Pressure Added Centrifugation)-System was developed by HEINKEL.

In 2000 Eppendorf updated its line of classic microcentrifuges with the cooled 5415D centrifuge, which was the smallest and quietest cooled microcentrifuge on the market. Also in this year, Eppendorf launched the MiniSpin and the MiniSpin Plus, known as personal centrifuges, marking a new era for the centrifuge market. (15)


Featured centrifuge and accessories categories

Microcentrifuge:

Compact, safe and easy-to-use microcentrifuges combine power with versatility and convenience. Discover how our microcentrifuges can accelerate your sample preparation processes and support all your micro-volume protocols.

Benchtop centrifuge:

Thermo Scientific™ benchtop centrifuges deliver efficient sample processing in clinical protocols, cell culture applications, and a variety of separation needs.

General purpose centrifuge:

Thermo Scientific™ general purpose centrifuges feature innovative rotor technologies designed for improved benchtop performance and flexibility, greater sample capacity, and increased speed.

Superspeed centrifuge:

Thermo Scientific™ Sorvall™ superspeed centrifuges combine cutting-edge technology, high-speed performance, and versatile rotor capacities, allowing you to maximize productivity with impressive acceleration rates.

Ultracentrifuge:

Combining outstanding speed, safety and ergonomics in a compact design, our ultracentrifuges and micro-ultracentrifuges are designed to deliver exceptional performance and versatility for a variety of applications.

Large capacity centrifuge:

Designed to combine dependable performance and ease-of-use with advanced functionality, our large capacity centrifuges provide reproducible separations for high-throughput applications such as blood banking and bioprocessing.

Centrifuge rotor:

Our centrifuge rotors are designed for maximum application flexibility and quality separations, supporting applications across clinical and blood banking, microbiology, tissue culture, molecular biology and genomics, drug discovery and proteomics. (14)

Autoclave

Autoclaves (steam sterilizers) are used for laboratory sterilization of liquids, glassware, and biohazardous material. Autoclave sterilizers feature designs intended specifically for laboratory use, and are found in laboratory, microbiology, pharmaceutical, food and chemical industries. They are designed to provide high-quality, repeatable performance and documentation for laboratory applications and quality assurance processes. (16)

Many autoclaves are used to sterilize equipment and supplies by subjecting them to high-pressure saturated steam at 121 °C (249 °F) for around 15–20 minutes, depending on the size of the load and the contents. The autoclave was invented by Charles Chamberland in 1879, although a precursor known as the steam digester was created by Denis Papin in 1679. The name comes from Greek auto-, ultimately meaning self, and Latin clavis meaning key, thus a self-locking device.

Sterilization autoclaves are widely used in microbiology, medicine, podiatry, tattooing, body piercing, veterinary medicine, mycology, funeral homes, dentistry, and prosthetics fabrication. They vary in size and function depending on the media to be sterilized.

Typical loads include laboratory glassware, other equipment and waste, surgical instruments, and medical waste.

A notable recent and increasingly popular application of autoclaves is the pre-disposal treatment and sterilization of waste material, such as pathogenic hospital waste. Machines in this category largely operate under the same principles as conventional autoclaves in that they are able to neutralize potentially infectious agents by using pressurized steam and superheated water; a new generation of waste converters is capable of achieving the same effect without a pressure vessel. Autoclaving is used to sterilize culture media, rubber material, gowns, dressings, gloves, etc., and is particularly useful for materials which cannot withstand the higher temperature of a hot air oven. (17)

An autoclave chamber sterilizes medical or laboratory instruments by heating them above boiling point. Most clinics have tabletop autoclaves, similar in size to microwave ovens. Hospitals use large autoclaves, also called horizontal autoclaves. They're usually located in the Central Sterile Services Department (CSSD) and can process numerous surgical instruments in a single sterilization cycle, meeting the ongoing demand for sterile equipment in operating rooms and emergency wards. (18)

Autoclaving, sometimes called steam sterilization, is the use of pressurized steam to kill infectious agents and denature proteins. This kind of "wet heat" is considered the most dependable method of sterilizing laboratory equipment and decontaminating bio-hazardous waste.

Autoclave cycles

There are two basic autoclave cycles:

x Gravity or "fast exhaust"
x Liquid or "slow exhaust"

Both cycles and the materials appropriate for each cycle are described below.

Gravity or "fast exhaust" (dry goods, glassware, etc.): This cycle charges the chamber with steam and holds it at a set pressure and temperature for a set period of time. At the end of the cycle, a valve opens and the chamber rapidly returns to atmospheric pressure. Drying time may also be added to the cycle.

Liquid or "slow exhaust" (liquids): This cycle prevents sterilized liquids from boiling. Steam is exhausted slowly at the end of the cycle, allowing the liquids (which will be superheated) to cool.

Sterility monitoring

A chemical indicator (e.g., autoclave tape) must be used with each load placed in the autoclave. However, use of autoclave tape alone is not an adequate monitor of efficacy. Autoclave sterility monitoring must be conducted at least monthly using appropriate biological indicators (Bacillus stearothermophilus spore strips) placed at locations throughout the autoclave. The spores, which can survive 250°F for 5 minutes but are killed at 250°F in 13 minutes, are more resistant to heat than most organisms, thereby providing an adequate safety margin when validating decontamination procedures. Each type of container employed should be spore tested, because efficacy varies with the load, fluid volume, etc. (19)
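The spore-kill figures above follow first-order thermal death kinetics, usually expressed through a decimal reduction time (D-value): each D-value of exposure at a fixed temperature reduces the population tenfold. The D-value of 1.5 minutes used below is an illustrative assumption, not a measured figure for any particular indicator strip.

```python
def surviving_spores(n0, minutes, d_value):
    """First-order thermal death: population falls by a factor of 10
    for every D-value of exposure at the given temperature."""
    return n0 * 10 ** (-minutes / d_value)

# Starting from a million spores with an assumed D-value of 1.5 min:
after_5 = surviving_spores(1e6, 5, 1.5)    # hundreds still viable
after_13 = surviving_spores(1e6, 13, 1.5)  # expected survivors below one
```

This is consistent with the behaviour described in the text: a 5-minute hold leaves viable spores, while a 13-minute hold drives the expected survivor count below one.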

Incubators

In a medical laboratory, incubators play an important role, as they provide a controlled environment that regulates and adjusts the temperature, humidity and ventilation. These units are used for keeping premature babies, nurturing micro-organisms and hatching poultry eggs. Apart from these, nowadays, people are becoming interested in raising pets such as lizards and amphibians, rather than buying them. To hatch a reptile egg, certain temperature conditions are required, and these can only be provided by such units. (20)

Cell cultures and micro-organisms must be incubated in a controlled atmosphere. In the standard incubator and the cooled incubator, the temperature is controlled; in the CO2 incubator the carbon dioxide content, humidity, and in some cases the oxygen and nitrogen content are also controlled. Examples of applications in the incubator:

x Growing cell cultures
x Reproduction of germ colonies with subsequent germ count in the food industry
x Reproduction of germ colonies and subsequent determination of biochemical oxygen demand (wastewater monitoring)
x Reproduction of microorganisms such as bacteria, fungi, yeast or viruses
x Breeding of insects and hatching of eggs in zoology
x Controlled sample storage
x Growing of crystals/protein crystals

Requirements made of the incubator

Temperature stability and temperature homogeneity: Living organisms react extremely sensitively to fluctuations in temperature. In order to guarantee reproducible test results, temperature stability and temperature homogeneity are important quality criteria for an incubator, even without the operation of a fan.

The nutritional media on which the cultures are grown must not under any circumstances dry out. Otherwise, there is a risk that the test results will be corrupted or the cultures will completely die out. Appliances with natural convection are therefore optimal, since the drying-out process is not accelerated, in contrast to appliances with forced air circulation.

Avoiding contamination

Hygiene is of the highest priority when working with an incubator. Germs may end up in the samples through air movement in the chamber or impurities on the chamber surfaces. The chamber should therefore be designed with surfaces as smooth as possible and without any sharp corners or fittings. The interiors of the incubator are normally made of corrosion-free stainless steel 1.4301 (comparable to ASTM 304) to make cleaning easier, and they are smoothed by some manufacturers to prevent germs from settling. In order to observe the chamber load without having to open the door, incubators are usually equipped with an inner glass door. (22)

Hot air oven

The hot air oven is commonly used for dry heat sterilization. Dry heat sterilization is a method of controlling microorganisms. It employs higher temperatures, in the range of 160–180°C, and requires exposure times of up to 2 hours, depending upon the temperature employed.
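The inverse relationship between temperature and required exposure time can be captured in a small lookup of commonly cited dry-heat set points. The exact pairings vary between pharmacopoeias, so treat these figures as illustrative and follow a validated cycle in practice.

```python
# Commonly cited dry-heat sterilization set points (illustrative only;
# always follow the oven manufacturer's validated cycles).
DRY_HEAT_CYCLES_MIN = {
    160: 120,  # °C : minimum holding time in minutes
    170: 60,
    180: 30,
}

def holding_time(temp_c):
    """Minimum holding time for a standard dry-heat set point,
    or None if the temperature is not a standard cycle."""
    return DRY_HEAT_CYCLES_MIN.get(temp_c)
```

Note that the holding time excludes the oven's come-up time; the clock starts only once the load itself has reached the set temperature.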

The benefits of dry heat include good penetrability and a non-corrosive nature, which make it applicable for sterilizing glassware and metal surgical instruments. It is also used for sterilizing non-aqueous thermostable liquids and thermostable powders.

Dry heat destroys bacterial endotoxins (or pyrogens), which are difficult to eliminate by other means; this property makes it applicable for sterilizing glass bottles which are to be filled aseptically. Dry heat kills by oxidation, protein denaturation and the toxic effects of elevated levels of electrolytes.

The hot air oven, which is usually used for dry heat sterilization, consists of the following parts:

x An insulated chamber surrounded by an outer case containing electric heaters
x A fan
x Shelves
x Thermocouples
x A temperature sensor
x Door locking controls

Operation:

x Articles to be sterilized are first wrapped or enclosed in containers of cardboard, paper or aluminum.
x Then, the materials are arranged to ensure uninterrupted air flow.
x The oven may be pre-heated for materials with poor heat conductivity.
x The temperature is allowed to fall to 40°C prior to removal of sterilized material.

ϰϬ 

Advantages:

x This treatment kills bacterial endotoxins; not all treatments can do this.
x Protective of sharps or instruments with a cutting edge (fewer problems with dulling of cutting edges).
x Dry heat sterilization by hot air oven does not leave any chemical residue.
x Eliminates "wet pack" problems in humid climates.

Disadvantages:

x Plastic and rubber items cannot be dry-heat sterilized because the temperatures used (160–170°C) are too high for these materials.
x Dry heat penetrates materials slowly and unevenly.
x The oven requires a continuous source of electricity.

Safety Guidelines

x Before placing items in the hot air oven: (i) dry glassware completely, (ii) plug test tubes with cotton wool, (iii) wrap glassware in Kraft paper.
x Do not overload the oven. Overloading alters heat convection and increases the time required to sterilize.
x Allow free circulation of air between the materials.
x The material used for wrapping instruments and other items must be porous enough to let steam through but tightly woven enough to protect against dust particles and microorganisms. (23)

Analytical balance

Scientific and medical work that requires a more precise measuring tool may call for the use of analytical balances. When dealing with the potentially dangerous and life-threatening situations common in these fields, precise measurements can mean the difference between a resounding success and catastrophic failure, whether in the laboratory or the operating room. This is amplified even more when a lab experiment or medical operation is put to the test in an everyday environment.

An analytical balance can have its share of trials and uses, such as in the scientific and medical fields mentioned above, in addition to other fields of study such as culinary science or industrial work. Any form of work which requires the precise measurement of different kinds of chemicals, materials, substances and other sensitive items may require the use of analytical balances, along with the necessary maintenance and care of any precision machine.


Using an Analytical Balance

Analytical balances are used on different laboratory substances to help determine their mass with great precision and accuracy. Analytical balances are also capable of weighing lab samples all the way down to micro quantities. The lab samples placed in the analytical balance are enclosed in a transparent weighing chamber, which protects the substance from tampering when placed on the scale, as well as preventing air currents in the room from affecting the operation of the balance.

Precise readings are achieved through the maintenance of a constant load on the balance beam, where the mass is subtracted on the same side of the beam on which the sample substance is added. The final balance measurement is achieved through the use of a small spring force instead of the subtracted fixed masses.

A sample to be weighed inside any analytical balance should be handled with great care at all times. Both the balance and the sample should be at room temperature before the weighing process begins. A deviation from room temperature may cause air currents that disrupt the sample inside the analytical balance. Differences in temperature can also decrease the precision and accuracy of the analytical balance.

As a precision instrument, an analytical balance needs to be calibrated every time before a sample is weighed. Newer analytical balances may have an auto calibration feature which only requires a short cool down time in order to acclimatise to the environmental conditions present in the current lab setting when switched on. An analytical balance may have standard masses used in the automatic calibration process.

Once the analytical balance is calibrated, do not place additional weights or objects on the lab table on which the analytical balance itself resides. This may also harm the accuracy of the weighing process. The scale should also be operated at a comfortable distance. Take special care not to accidentally bump or tamper with the scale during weighing, as you will have to repeat the entire weighing process from the beginning.

Handling depends on the samples which are to be weighed. Powders, granules and liquids should not be transferred directly onto the weighing pan: granular or powder-like materials must be placed on the scale using a piece of paper, while liquids may have to be transferred onto the weighing platform in a suitable vessel. Hygroscopic substances, which absorb moisture the moment they are exposed to air, must be weighed as quickly as possible for accurate and precise results. It also goes without saying that toxic or flammable substances need to be handled very carefully when placed onto the scale.

A substance weighed on an analytical balance of any kind may have to be tared in order to disregard the added weight of the container or vessel in which the substance is held. Some analytical scales may simply have a tare button on the balance itself, while others may not, requiring manual calculation. This function allows the balance to subtract the extra weight of the container or vessel from the weighing result. The tare process requires that the doors of the weighing chamber be closed for correct and precise data.
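The manual tare calculation mentioned above is simple subtraction of the container weight from the gross reading. The figures below are made up purely for illustration.

```python
def net_weight(gross_g, tare_g):
    """Taring: subtract the container (tare) weight from the gross
    reading so only the sample mass is reported."""
    return gross_g - tare_g

# A weighing vessel reads 52.3010 g empty and 52.9874 g with sample:
sample_mass = net_weight(52.9874, 52.3010)  # about 0.6864 g
```

A balance with a tare button performs exactly this subtraction internally, zeroing the display with the empty vessel on the pan so that the subsequent reading is the net sample mass.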

Maintenance of an Analytical Balance

Every precision instrument needs periodic maintenance to operate at peak performance and efficiency, and analytical balances are no exception to this rule. Be sure to exercise proper care and maintenance on analytical balances to ensure a high degree of accuracy during every weighing session.

The weighing chamber must be free of dust particles at all times, even when not in use. The slightest hint of dirt may set your weighing results off by small quantities, as most analytical balances are particularly sensitive instruments. Be sure to always keep the doors of the weighing chamber closed at all times, so that dust and other particles may not enter the weighing chamber.

Take special care to clean the weighing pan or platform regularly in order to ensure accurate measurement results which are not affected by dust and dirt particles. Also be sure not to touch the weighing platform without sterilized gloves, in order not to leave oils or grime from your fingers on it. These seemingly insignificant deposits may also skew the weighing results by more than you would be comfortable with. A soft cloth or a sterilized brush can be used to clean typical analytical balances. (24)

Water bath

A water bath is laboratory equipment made from a container filled with heated water. It is used to incubate samples in water at a constant temperature over a long period of time. All water baths have a digital or an analogue interface to allow users to set a desired temperature. Uses include warming of reagents, melting of substrates and incubation of cell cultures. It is also used to enable certain chemical reactions to occur at high temperature. A water bath is the preferred heat source for heating flammable chemicals, instead of an open flame, to prevent ignition. Different types of water baths are used depending on the application. Water baths can be used at temperatures up to 99.9 °C; when temperatures above 100 °C are required, alternative methods such as an oil bath, silicone bath or sand bath may be used.

Precautions:

x It is not recommended to use a water bath with moisture-sensitive or pyrophoric reactions. Do not heat a bath fluid above its flash point.

x The water level should be regularly monitored, and topped up with distilled water only. This is required to prevent salts from depositing on the heater.

x Disinfectants can be added to prevent growth of organisms.

x Raise the temperature to 90 °C or higher once a week for half an hour for the purpose of decontamination.

x Markers tend to come off easily in water baths. Use water resistant ones.

x If the application involves liquids that give off fumes, it is recommended to operate the water bath in a fume hood or in a well-ventilated area.

x The cover is closed to prevent evaporation and to help reach high temperatures.

x Set up on a steady surface away from flammable materials.

Types of water bath

¾ Circulating water bath:

Circulating water baths (also called stirrers) are ideal for applications where temperature uniformity and consistency are critical, such as enzymatic and serologic experiments. Water is thoroughly circulated throughout the bath, resulting in a more uniform temperature.


¾ Non-circulating water bath:

This type of water bath relies primarily on convection instead of water being uniformly heated. Therefore, it is less accurate in terms of temperature control. In addition, there are add-ons that provide stirring to non-circulating water baths to create more uniform heat transfer.

¾ Shaking water bath:

This type of water bath has an extra control for shaking, which moves liquids around. This shaking feature can be turned on or off. In microbiological practice, constant shaking allows liquid-grown cell cultures to constantly mix with the air. (25)

Spectrophotometer

x A spectrophotometer is a photometer that can measure intensity as a function of the light source wavelength. Important features of spectrophotometers are spectral bandwidth and linear range of absorption or reflectance measurement.
x A spectrophotometer is commonly used for the measurement of transmittance or reflectance of solutions, transparent or opaque solids, such as polished glass, or gases. However, they can also be designed to measure the diffusivity over any of the listed light ranges, which usually cover around 200–2500 nm, using different controls and calibrations. Within these ranges of light, calibrations are needed on the machine using standards that vary in type depending on the wavelength of the photometric determination. (26)
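The transmittance an instrument measures and the absorbance it reports are related by a simple logarithm (the Beer-Lambert relation, A = -log10 T). A minimal sketch, with made-up intensity readings:

```python
import math

def transmittance(i_sample, i_reference):
    """Fraction of light transmitted, relative to the reference beam."""
    return i_sample / i_reference

def absorbance(i_sample, i_reference):
    """Absorbance A = -log10(T); one absorbance unit means only
    10% of the incident light passes through the sample."""
    return -math.log10(transmittance(i_sample, i_reference))

# If the sample passes 10 intensity units for every 100 in the reference:
a = absorbance(10.0, 100.0)  # A = 1.0
```

Because absorbance is linear in concentration (A = εlc) while transmittance is not, instruments convert the measured intensity ratio to absorbance before any quantitative work.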

History:

By 1940 several spectrophotometers were available on the market, but early models could not work in the ultraviolet. Arnold O. Beckman developed an improved version at National Technical Laboratories, later the Beckman Instrument Company and ultimately Beckman Coulter. Models A, B, and C were developed (three units of model C were produced), then the model D, which became known as the DU. All the electronics were contained within the instrument case, and it had a new hydrogen lamp with an ultraviolet continuum and a better monochromator. This instrument was produced from 1941 until 1976 with essentially the same design; over 30,000 were sold. The 1941 price was US$723 (far-UV accessories were an option at additional cost). Nobel chemistry laureate Bruce Merrifield said it was "probably the most important instrument ever developed towards the advancement of bioscience."


Design:

There are two major classes of devices: single-beam and double-beam. A double-beam spectrophotometer compares the light intensity between two light paths, one containing a reference sample and the other the test sample. A single-beam spectrophotometer measures the relative light intensity of the beam before and after a test sample is inserted. Although comparison measurements from double-beam instruments are easier and more stable, single-beam instruments can have a larger dynamic range and are optically simpler and more compact. Additionally, some specialized instruments, such as spectrophotometers built onto microscopes or telescopes, are single-beam instruments for practical reasons.

Historically, spectrophotometers have used a monochromator containing a diffraction grating to produce the analytical spectrum. The grating can be either movable or fixed. If a single detector, such as a photomultiplier tube or photodiode, is used, the grating can be scanned stepwise so that the detector measures the light intensity at each wavelength (each "step"). Arrays of detectors, such as charge-coupled devices (CCDs) or photodiode arrays (PDAs), can also be used; in such systems the grating is fixed, and the intensity of each wavelength of light is measured by a different detector in the array. Additionally, most modern mid-infrared spectrophotometers use a Fourier transform technique to acquire the spectral information; this technique is called Fourier transform infrared spectroscopy.

When making transmission measurements, the spectrophotometer quantitatively compares the fraction of light that passes through a reference solution and a test solution, then electronically compares the intensities of the two signals and computes the percentage transmission of the sample relative to the reference standard. For reflectance measurements, the spectrophotometer quantitatively compares the fraction of light that reflects from the reference and test samples. Light from the source lamp is passed through a monochromator, which diffracts the light into a "rainbow" of wavelengths through a rotating prism and outputs narrow bandwidths of this diffracted spectrum through a mechanical slit on the output side of the monochromator. These bandwidths are transmitted through the test sample. Then the photon flux density (usually watts per square metre) of the transmitted or reflected light is measured with a photodiode, charge-coupled device, or other light sensor. The transmittance or reflectance value for each wavelength of the test sample is then compared with the transmission or reflectance values from the reference sample. Most instruments will apply a logarithmic function to the linear transmittance ratio to calculate the 'absorbance' of the sample, a value which is proportional to the concentration of the chemical being measured.

In short, the sequence of events in a modern spectrophotometer is as follows:

1. The light source is shone into a monochromator, diffracted into a rainbow, and split into two beams, which are then scanned through the sample and the reference solutions.
2. Fractions of the incident wavelengths are transmitted through, or reflected from, the sample and the reference.
3. The resultant light strikes the photodetector device, which compares the relative intensity of the two beams.
4. Electronic circuits convert the relative currents into linear transmission percentages and/or absorbance/concentration values.
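The final conversion step, from an intensity ratio to percent transmittance and absorbance, can be sketched numerically. The detector readings below are hypothetical values in arbitrary units:

```python
import math

def percent_transmittance(i_sample: float, i_reference: float) -> float:
    """Percentage of light transmitted by the sample relative to the reference."""
    return 100.0 * i_sample / i_reference

def absorbance(i_sample: float, i_reference: float) -> float:
    """Absorbance A = -log10(T), where T = I_sample / I_reference."""
    return -math.log10(i_sample / i_reference)

# Hypothetical detector readings (arbitrary units)
i_ref, i_smp = 1000.0, 250.0
print(percent_transmittance(i_smp, i_ref))  # 25.0 (%T)
print(round(absorbance(i_smp, i_ref), 3))   # 0.602
```

Note that absorbance grows logarithmically: a sample passing 10 % of the light has A = 1 and one passing 1 % has A = 2, which is why absorbance, rather than raw transmittance, is proportional to concentration.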

Many older spectrophotometers must be calibrated by a procedure known as "zeroing", to balance the null current output of the two beams at the detector. The transmission of a reference substance is set as a baseline (datum) value, so the transmission of all other substances is recorded relative to the initial "zeroed" substance. The spectrophotometer then converts the transmission ratio into 'absorbance', from which the concentration of specific components of the test sample relative to the initial substance can be derived.
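The zeroing procedure can be modeled in a few lines. This is an illustrative sketch only (the class and method names are invented for the example): the blank reading is stored as the 100 %T datum, and every later sample is reported relative to it:

```python
import math

class SingleBeamPhotometer:
    """Toy model of 'zeroing': the blank's reading becomes the 100 %T baseline."""

    def __init__(self):
        self.baseline = None

    def zero(self, blank_intensity: float) -> None:
        # Store the blank reading as the reference (datum) value.
        self.baseline = blank_intensity

    def absorbance(self, sample_intensity: float) -> float:
        if self.baseline is None:
            raise RuntimeError("instrument must be zeroed against a blank first")
        return -math.log10(sample_intensity / self.baseline)

meter = SingleBeamPhotometer()
meter.zero(1000.0)                        # blank sets the baseline
print(round(meter.absorbance(100.0), 3))  # 1.0, i.e. 10 %T
```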

Application in biochemistry:

Spectrophotometry is an important technique used in many biochemical experiments that involve DNA, RNA, and protein isolation, enzyme kinetics, and biochemical analyses. Briefly, the procedure involves comparing the absorbance of a blank sample that does not contain a colored compound with that of a sample that does. The coloring can be produced either by a dye, such as Coomassie Brilliant Blue G-250, or by an enzymatic reaction, as seen between β-galactosidase and ONPG (which turns the sample yellow). The spectrophotometer measures colored compounds in the visible region of light (between 350 nm and 800 nm) and can thus be used to find more information about the substance being studied. In biochemical experiments, a chemical and/or physical property is chosen and the procedure used is specific to that property, in order to derive more information about the sample, such as its quantity, purity, or enzyme activity. Spectrophotometry can be used for a number of techniques, such as determining the optimal wavelength of absorbance of a sample, the optimal pH for absorbance, the concentration of an unknown sample, or the pKa of various samples. Spectrophotometry is also a helpful procedure for protein purification and can be used as a method to create optical assays of a compound. Spectrophotometric data can be used in conjunction with the Beer–Lambert equation, A = −log10 T = εcl (also written as OD, the optical density), to determine the relationships between transmittance and concentration and between absorbance and concentration. Because a spectrophotometer detects a compound through its color, a dye-binding substance can be added so that the compound undergoes a measurable color change. Spectrophotometers have been developed and improved over decades and are widely used; they are typically specialized to measure absorbance in either the UV or the visible range. The spectrophotometer is considered a highly accurate, sensitive, and therefore extremely precise instrument, especially in determining color change, and it is convenient for laboratory experiments because it is inexpensive and relatively simple to use. (27)

Distiller:

Equipment used for water purification and distillation includes deionized (DI) water systems, water distillers, reagent-grade water systems, and laboratory filters.

DI and distilled water are the most common types of purified water used in the lab, but other techniques used to produce high-purity water include:

- Carbon filtration
- Ion exchange
- Reverse osmosis (RO)
- Ultrafiltration
- UV oxidation

Purified water is a necessity in PCR, DNA sequencing, protein research, chromatography, and media preparation. (28)

Microtomes:

A microtome (from the Greek mikros, meaning "small", and temnein, meaning "to cut") is a tool used to cut extremely thin slices of material, known as sections. Important in science, microtomes are used in microscopy, allowing for the preparation of samples for observation under transmitted light or electron radiation. Microtomes use steel, glass, or diamond blades depending upon the specimen being sliced and the desired thickness of the sections being cut. Steel blades are used to prepare sections of animal or plant tissues for light microscopy histology. Glass knives are used to slice sections for light microscopy and to slice very thin sections for electron microscopy. Industrial grade diamond knives are used to slice hard materials such as bone, teeth and plant matter for both light microscopy and for electron microscopy. Gem quality diamond knives are used for slicing thin sections for electron microscopy.

Microtomy is a method for the preparation of thin sections for materials such as bones, minerals and teeth, and an alternative to electropolishing and ion milling. Microtome sections can be made thin enough to section a human hair across its breadth, with section thickness between 50 nm and 100 μm.

History:

In the early days of light microscope development, sections from plants and animals were manually prepared using razor blades. It was found that, to observe a specimen's structure, it was important to make clean, reproducible cuts on the order of 100 μm thick, through which light could be transmitted. This allowed for the observation of samples using light microscopes in a transmission mode.

One of the first devices for the preparation of such cuts was invented in 1770 by George Adams, Jr. (1750–1795) and further developed by Alexander Cummings. The device was hand operated: the sample was held in a cylinder, and sections were created from the top of the sample using a hand crank.

In 1835, Andrew Prichard developed a table-based model which allowed the vibration to be isolated by affixing the device to the table, separating the operator from the knife.


Occasionally, attribution for the invention of the microtome is given to the anatomist Wilhelm His, Sr. (1865). In his Beschreibung eines Mikrotoms (German for "Description of a Microtome"), His wrote:

The apparatus has enabled a precision in work by which I can achieve sections that by hand I cannot possibly create. Namely it has enabled the possibility of achieving unbroken sections of objects in the course of research.

Other sources attribute the development to the Czech physiologist Jan Evangelista Purkyně. Several sources describe the Purkyně model as the first in practical use.

The obscurities in the origins of the microtome are due to the fact that the first microtomes were simply cutting apparatuses, and the developmental phase of early devices is widely undocumented.

At the end of the 1800s, the production of very thin and consistently thin samples by microtomy, together with the selective staining of important cell components or molecules, allowed for the visualization of fine details under the microscope.

Today, the majority of microtomes are of a knife-block design with a changeable knife, a specimen holder, and an advancement mechanism. In most devices, cutting begins by moving the sample over the knife, after which the advancement mechanism automatically moves the sample forward so that the next cut of the chosen thickness can be made. The section thickness is controlled by an adjustment mechanism, allowing for precise control.

Applications:

The most common applications of microtomes are:

- Traditional histology technique: tissues are hardened by replacing water with paraffin. The tissue is then cut in the microtome at thicknesses varying from 2 to 50 μm. From there the tissue can be mounted on a microscope slide, stained with appropriate aqueous dye(s) after removal of the paraffin, and examined using a light microscope.
- Frozen section procedure: water-rich tissues are hardened by freezing and cut in the frozen state with a freezing microtome or microtome-cryostat; sections are stained and examined with a light microscope. This technique is much faster than traditional histology (5 minutes vs 16 hours) and is used in conjunction with medical procedures to achieve a quick diagnosis. Cryosections can also be used in immunohistochemistry, as freezing stops tissue degradation faster than using a fixative and does not alter or mask the chemical composition as much.
- Electron microscopy technique: after embedding tissues in epoxy resin, a microtome equipped with a glass or gem-grade diamond knife is used to cut very thin sections (typically 60 to 100 nanometers). Sections are stained with an aqueous solution of an appropriate heavy metal salt and examined with a transmission electron microscope (TEM). This instrument is often called an ultramicrotome. The ultramicrotome is also used, with a glass knife or an industrial-grade diamond knife, to cut survey sections prior to thin sectioning. These survey sections are generally 0.5 to 1 μm thick and are mounted on a glass slide and stained to locate areas of interest under a light microscope before thin sectioning for the TEM. Thin sectioning for the TEM is often done with a gem-quality diamond knife. Complementing traditional TEM techniques, ultramicrotomes are increasingly found mounted inside an SEM chamber, so that the surface of the block face can be imaged and then removed with the microtome to uncover the next surface for imaging. This technique is called serial block-face scanning electron microscopy (SBFSEM).
- Botanical microtomy technique: hard materials like wood, bone, and leather require a sledge microtome. These microtomes have heavier blades and cannot cut as thin as a regular microtome.

- Spectroscopy (especially FTIR or infrared spectroscopy) technique: thin polymer sections are needed so that the infrared beam can penetrate the sample under examination. It is normal to cut samples to between 20 and 100 μm in thickness. For more detailed analysis of much smaller areas in a thin section, FTIR microscopy can be used for sample inspection.
- Fluorescence microscopy: samples can be cut into thin slices to be viewed under a fluorescence microscope.

A recent development is the laser microtome, which cuts the target specimen with a femtosecond laser instead of a mechanical knife. This method is contact-free and does not require sample preparation techniques.

Types of microtomes:

Compresstome microtome:

A variation on the vibrating microtome is the Compresstome microtome, which is designed and made by Precisionary Instruments. The Compresstome uses a specimen syringe or "lipstick-like" tube to hold the tissue. The tissue specimen is completely embedded in agarose, and the tissue is slowly and gently pressed out of the tube for the vibrating blade to cut. The device operates in the following way: the end of the specimen tube where the tissue emerges is slightly narrower than the loading end, which allows gentle "compression" of the tissue as it comes out of the tube. The slight compression prevents shearing, uneven cutting, and vibration artifacts from forming. Note that the compression technology does not damage or affect the tissue being sectioned.

There are several advantages of the Compresstome microtome:

1) the agarose embedding provides stability to the entire specimen on all sides, which prevents uneven slicing or shearing of the tissue; 2) the compression technology gently compresses the tissue for even cutting, so that the blade doesn't push against the tissue; 3) it sections faster than most vibrating microtomes; and 4) it cuts tissue from older or more mature animals well, providing healthier tissues.


Sledge microtome:

A sledge microtome is a device where the sample is placed into a fixed holder (shuttle), which then moves backwards and forwards across a knife. Modern sledge microtomes have the sled placed upon a linear bearing, a design that allows the microtome to readily cut many coarse sections. By adjusting the angles between the sample and the microtome knife, the pressure applied to the sample during the cut can be reduced. Typical applications for this design of microtome are the preparation of large samples, such as those embedded in paraffin for biological preparations. The typical cut thickness achievable on a sledge microtome is between 1 and 60 μm.

Rotary microtome:

This instrument is a common microtome design. This device operates with a staged rotary action such that the actual cutting is part of the rotary motion. In a rotary microtome, the knife is typically fixed in a horizontal position.

Cryomicrotome:

For the cutting of frozen samples, many rotary microtomes can be adapted to cut in a liquid-nitrogen chamber, in a so-called cryomicrotome setup. The reduced temperature increases the hardness of the sample, such as by inducing a glass transition, which allows the preparation of semi-thin samples. However, the sample temperature and the knife temperature must be controlled in order to optimize the resultant sample thickness.

Ultramicrotome:

An ultramicrotome is a main tool of ultramicrotomy. It allows the preparation of extremely thin sections, with the device functioning in the same manner as a rotational microtome, but with very tight tolerances on the mechanical construction. As a result of the careful mechanical construction, the linear thermal expansion of the mounting is used to provide very fine control of the thickness.

These extremely thin cuts are important for use with transmission electron microscope (TEM) and serial block-face scanning electron microscopy (SBFSEM), and are sometimes also important for light-optical microscopy. The typical thickness of these cuts is between 40 and 100 nm for transmission electron microscopy and often between 30 and 50 nm for SBFSEM. Thicker sections up to 500 nm thick are also taken for specialized TEM applications or for light- microscopy survey sections to select an area for the final thin sections. Diamond knives (preferably) and glass knives are used with ultramicrotomes. To collect the sections, they are floated on top of a liquid as they are cut and are carefully picked up onto grids suitable for TEM specimen viewing. The thickness of the section can be estimated by the thin-film interference colors of reflected light that are seen as a result of the extremely low sample thickness.
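The thickness-from-color estimate mentioned above is usually read off a standard interference-color chart. The ranges below are the approximate values commonly quoted in ultramicrotomy guides and should be treated as indicative only:

```python
# Approximate interference colors of ultrathin resin sections floating on
# water, as commonly tabulated in ultramicrotomy guides (indicative ranges).
COLOR_CHART = [
    (60, "gray"),    # below ~60 nm
    (90, "silver"),  # ~60-90 nm: typical TEM sections
    (150, "gold"),
    (190, "purple"),
    (240, "blue"),
]

def interference_color(thickness_nm: float) -> str:
    """Estimate the reflected interference color for a section thickness in nm."""
    for upper_bound, color in COLOR_CHART:
        if thickness_nm < upper_bound:
            return color
    return "green or higher-order colors"

print(interference_color(70))   # silver
print(interference_color(120))  # gold
```

In practice the operator simply trims until the floating sections look silver to pale gold, the range usually sought for TEM work.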


Saw microtome:

The saw microtome is intended especially for hard materials such as teeth or bones. A microtome of this type has a recessed rotating saw, which slices through the sample. The minimal cut thickness is approximately 30 μm, and comparatively large samples can be sectioned.

Laser microtome:

The laser microtome is an instrument for contact-free slicing. Prior preparation of the sample through embedding, freezing, or chemical fixation is not required, thereby minimizing artifacts from preparation methods. Alternatively, this design of microtome can also be used for very hard materials, such as bones or teeth, as well as some ceramics. Depending upon the properties of the sample material, the achievable thickness is between 10 and 100 μm.

The device operates using the cutting action of an infrared laser. The laser emits radiation in the near infrared, a wavelength regime in which it can interact with biological materials. Through sharp focusing of the beam within the sample, a focal point of very high intensity, up to TW/cm2, can be achieved. The non-linear interaction in the focal region introduces material separation in a process known as photodisruption. By limiting the laser pulse durations to the femtosecond range, the energy expended at the target region is precisely controlled, limiting the interaction zone of the cut to under a micrometre. External to this zone, the ultra-short beam application time introduces minimal to no thermal damage to the remainder of the sample.

The laser radiation is directed onto a fast scanning mirror-based optical system, which allows three-dimensional positioning of the beam crossover whilst allowing beam traversal to the desired region of interest. The combination of high power with a high raster rate allows the scanner to cut large areas of sample in a short time. In the laser microtome, laser microdissection of internal areas in tissues, cellular structures, and other types of small features is also possible. (29)



References:

1. https://en.wikipedia.org/wiki/Microscope
2. http://www.visioneng.com/resources/history-of-the-microscope
3. http://www.biologyreference.com/La-Ma/Light-Microscopy.html
4. https://en.wikipedia.org/wiki/Optical_microscope
5. http://www.keyence.com/ss/products/microscope/bz-x/study/principle/index.jsp
6. http://study.com/academy/lesson/light-microscope-definition-uses-parts.html
7. http://microbiologyonlineblog.blogspot.com/.../microbiology-online-focus-on-parts-and.html
8. http://dissectingmicroscope.com
9. http://www.keyence.com/ss/products/microscope/bz-x/study/principle/a_isou.jsp
10. http://www.microscopedetective.com/darkfield-microscope.html
11. https://serc.carleton.edu/microbelife/research_methods/microscopy/fluromic.html
12. https://www.jic.ac.uk/microscopy/intro_EM.html
13. http://www.explainthatstuff.com/electronmicroscopes.html
14. https://www.thermofisher.com/us/en/home/life-science/lab-equipment/lab-centrifuges.html
15. http://www.labmanager.com/lab-product/evolution-of-the-lab-centrifuge
16. https://tuttnauer.com/laboratory-autoclaves
17. https://en.wikipedia.org/wiki/Autoclave
18. https://tuttnauer.com/autoclave
19. http://blink.ucsd.edu/safety/research-lab/biosafety/autoclave
20. http://www.selfgrowth.com/articles/different-types-of-incubators-and-their-applications
21. http://www.atmosafe.net/en/glossary/incubator.html
22. http://www.atmosafe.net/en/glossary/incubator.html
23. http://www.acmasindia.com/blog/hot-air-oven
24. https://www.scalesmart.com/analytical-balances
25. https://en.wikipedia.org/wiki/Laboratory_water_bath
26. http://www.medicallabs.net/spectrophotometer-overview
27. https://en.wikipedia.org/wiki/Spectrophotometry
28. http://www.labcompare.com/General-Laboratory-Equipment/Water-Distillation-Equipment-Water-Purification-Equipment
29. https://en.wikipedia.org/wiki/Microtome
