Introduction to Engineering Camera Lab #3 Pre-Lab Assignment

Purpose

The purpose of this assignment is to have you read the attached material to gain some understanding of optics and related issues before performing the Camera Lab 3 procedures.

What You Have To Do

1. Read the material for Camera Lab 3. If you like, you can even visit a library to find out more about these things.

2. Write a summary of the material, taking about a page. It would be nice if this was word-processed, but hand-written submissions are acceptable.

3. Submit the summary at the start of your Camera Lab 3 session.

Introduction to Engineering Camera Lab 3 Background Reading

The Camera as an Information Appliance: Design Issues – Optics

Introduction

The purpose of these notes is to present some of the basic theoretical points that underlie what we will be covering in Lab 3. In the lab session itself, we will be more concerned with the engineering design implications of this theory, and with how we can reverse engineer the camera, than with the theory itself. This set of notes therefore addresses mainly the theory behind what’s going on.

You don’t need to know a great deal of theory to understand this lab. However, you should have a general idea of what the background theory is about, i.e. what it means and what it is used for.

Choosing a Camera

When selecting a camera, you need to consider what it will be used for. Will you be taking very close-range photographs? Will you be photographing fast-moving objects? Will there be enough light, or will you need a flash (or faster film)?

Professional photographers generally choose a number of camera bodies (mostly for specific film formats or other features), then select lenses and film to suit their needs. But when we are designing a general-purpose camera, like the single-use camera we are studying, we cannot be so flexible. We need a camera that can take most of the photographs that most people want to take, with a minimum of fuss and hassle.

In the last 15 years or so, a range of largely automatic cameras has been developed and sold. These tended to cost over $100 and to require some thought as to film, zoom settings, use of a flash, etc., although most offer automatic focusing, flash control, film handling and other settings. If we want to manufacture a general-purpose series of cameras that cover most people’s needs, we will have to skip much of this complexity and concentrate on what most people’s photographs have to achieve. We want to avoid the issues of focusing, adjusting the aperture, deciding when to use the flash, choosing the film speed, and the like.

Most people take photographs of subjects ranging from about 2 meters from the camera (e.g. the child drinking) out to as far away as one can see (termed infinity, which in practice tends to mean more than about 20 meters). See the images on the next page, copied from a Kodak brochure. So we need to design a camera that can cover that range. We need a fairly wide field of view, to fit in group shots (e.g. the little ballerinas’ behinds) and panoramic landscapes (like the forest). We need a fairly quick shutter speed (to avoid blur from camera shake during a long exposure), and a flash only for shots in poor light (we don’t want the flash to be needed for landscapes on fairly bright days). All these needs place a lot of constraints on the optics of the camera we are designing.

Clearly we need a camera that can do a lot with a little, so we will have to accept some compromises in the design. But we can do a lot with what we have.

The lab we will undertake is designed to let you see a little of the optics considerations that went into the design of these cameras. It doesn’t cover everything, but it should give you a basic understanding of how cameras work optically, and how you can apply that knowledge to some issues in camera design.

The first important issue is, of course, the lens. The power that a lens has to focus light is expressed in units called dioptres. The larger the number of dioptres, the more tightly the lens focuses the light that comes into it; by tighter, we mean how close to the lens the image forms. For example, most common glasses that people wear have powers between about 2 and 10 dioptres. A 20 dioptre lens is starting to look like the bottom of the proverbial Coke bottle (which used to be made of very thick glass). It turns out that the focal length and the power in dioptres are related: each is the inverse of the other. So a 10 dioptre lens has a focal length of 0.1 meter; a 1 dioptre lens has a focal length of 1 meter. The classic 50 mm focal length camera lens is a 20 dioptre lens.
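As a quick numerical check of this inverse relationship, here is a minimal sketch in Python (the function name is ours, for illustration):

    # Lens power in dioptres is the reciprocal of the focal length in meters.
    def power_in_dioptres(focal_length_m):
        return 1.0 / focal_length_m

    print(power_in_dioptres(0.050))  # classic 50 mm lens -> 20.0 dioptres
    print(power_in_dioptres(0.1))    # 0.1 m focal length -> 10.0 dioptres
    print(power_in_dioptres(1.0))    # 1 m focal length   ->  1.0 dioptre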

The focal length is an important characteristic of the lens, so we will examine it first.

Focal Length

In one view, the focal length is the distance from the lens at which a bundle of originally parallel rays intersects (comes into focus). In reality, it is never so simple. Most complex cameras (though not the Kodak Max Flash) have lens systems made up of many individual lenses. Even the eye, a well-established camera-type system, is complicated by the fact that the refractive index of the material on one side of the lens system (it has two basic components) differs from that on the other side.

[Figure: parallel rays entering a lens converge at the focal plane; the distance from the lens to that plane is the focal length.]

One good way of thinking about focal length, especially in a camera designed for measurement work (a metric camera), is that the focal length is the distance from the lens system to a location from which you can see exactly what the camera sees. This means that we can treat the lens system as though it were the pinhole in a pinhole camera, with that pinhole at the lens node. This is an ideal definition, but most real cameras come fairly close to it. If we assume that there is no distortion in either the lens system or the camera (never strictly true), then the image will meet this ideal. We call this ‘ideal focal length’ the principal distance of the lens system, and it is usually given the symbol f.

[Figure: an object of size x at distance h in front of the lens images to size p on the film, at the principal distance f behind the lens node.]

If we think about the negative of the image, it and the lens system’s back node (the single point through which all the light rays effectively pass on their way to the focal plane, i.e. the film) form a duplicate of the relationship between the lens system’s front node and the object space being photographed, the only difference being one of scale. That is to say, the angle that a ray of light makes with the camera’s optical axis as it enters the lens system is the same as the angle it makes when it leaves the lens

system on its way to the film. (In reality we get some distortion, but the relationship is generally pretty good.)

So, if we make a contact positive print of the film negative, which is at the same scale and size as the negative, we can use similar triangles to relate measurements in the image to the scene and to the camera itself.

The determination of the focal length of the camera lens is important. The focal length is the most basic parameter of the camera’s design: it affects almost every other aspect of the design, so we have to know it. The lab exercise will address this.

Depth of Field

Another of the critical issues in camera design and use is the depth of field. This is the range of object distances from the camera over which the image is in focus, i.e. sharp. It depends upon two main factors: the focal length of the lens and the size of the aperture. In general, the smaller the aperture, the larger the depth of field; and the larger the aperture, the smaller the depth of field.

Why is this the case? Without going too far into the theory, consider a lens system with a large aperture, and then the same system with a small aperture. With the large aperture, light passing through the outer edges of the lens comes in at a steeper angle and has only a small range over which it stays acceptably focused. With a smaller aperture, the light never comes in from that far out, so there is a larger range of acceptable focus. See the diagram below.

[Diagram: the same object imaged through a large aperture and through a small aperture; the range of acceptable focus is short for the large aperture and much longer for the small one.]

The range of acceptable focus is determined by how large a blur we will accept on the film. This blur is often termed the circle of confusion. We can translate it directly back into the range of object distances, measured from the camera, that we can hold in focus on the film. So a smaller aperture makes the camera more versatile as far as the depth of field is concerned.

The images below show the effect of aperture size: the same series of objects at different distances, photographed with two different aperture settings.

Photo taken with a very large aperture. Notice how the near and far objects are out of focus (a bit blurred), while the central figure is in sharp focus. The side view shows the depth of field.

Photo taken with a very small aperture. Notice how there is a very large depth of field, and that objects throughout the picture are in sharp focus. The side view shows the depth of field.

For our general-purpose camera design, we should use as small an aperture as possible, to get the largest depth of field. But a small aperture means that we need either faster film or a longer exposure time. Faster film tends to appear a little ‘grainier’, while a longer exposure leads to more chance of shaking the camera and blurring the image that way. Clearly we need to compromise here, but how can we decide what aperture will cover all the camera’s potential uses?

It turns out that there is a formula covering the relationship between focal length, aperture setting and depth of field. (Are you surprised?) We can compute the closest and furthest distances that will be in focus for a given focal length and aperture setting; we simply have to decide what level of poor focus we will accept. This ‘level of poor focus’ is determined by the circle of confusion. If we accept a certain size of circle of confusion, we can determine either the closest and furthest distances or, more usefully in this case, the size of aperture needed to achieve the distances we want, given the constraints on the focal length of the lens (the camera cannot be too large).

We can determine the depth of field by inspection of images: basically, at what distances do objects appear out of focus? However, this is rather imprecise, and it doesn’t help us in designing the camera.

We know that a lot of photographs are taken of objects at infinity, such as those shots of the mountains, so this must be the upper limit of the depth of field. We can also expect to take photographs of objects two to three meters from the camera, especially when indoor shots with flash are contemplated, so we can set either of these distances (2 or 3 meters) as the near limit of the depth of field. Once we decide on the size of the circle of confusion we are prepared to accept, we can compute the size of the aperture we need.

We can also compute the setting for the lens itself, given that we know its focal length, so that the depth of field is correct. This affects the placement of the lens with respect to the plane of the film, and is a matter of the finer details of the design. We have to position the lens so that it focuses perfectly at the chosen distance and acceptably across the full depth of field; this is based on the lens equations that govern the sharpness of the image.

Some Formulae

Looking at the diagram on the next page, if we have a lens that is imaging an object (which is in the ‘object plane’, OP) onto the ‘image plane’, IP, then when everything is in perfect focus there will be a relationship between the distance from the front node of the lens (H) to the OP, termed s, and the distance from the back node of the lens (H') to the IP, termed s'. The relationship is called the lens equation:

    1/s + 1/s' = 1/f
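As a small sketch of how the lens equation is used (Python, with illustrative values of our own choosing), we can solve it for the image distance s' given s and f:

    # Lens equation: 1/s + 1/s' = 1/f, solved for the image distance s'.
    def image_distance(s, f):
        return 1.0 / (1.0 / f - 1.0 / s)

    # A 32 mm lens (an assumed, plausible single-use-camera value) imaging
    # an object 4 m away: the film must sit just beyond one focal length.
    print(image_distance(4.0, 0.032))  # ~0.0323 m, i.e. about 32.3 mm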

Now, for a given s, there are two other distances to objects where the circle of confusion reaches a specific size, which we can call u. One of these will be nearer the camera, termed s_n, while the other will be further from the camera, termed s_f.

If we have the aperture set to a particular size, generally given as a diameter, d, this allows us to form the ratio of the focal length to the aperture diameter, f/d, which is also called the F-number or F-stop setting. This ratio often gets the letter k, so k = f/d.

With some geometrical and algebraic shuffling and legerdemain, we can produce a pair of equations that will give us the near and far distances of the depth of field. These are:

    s_n = s f^2 / (f^2 + k u (s - f))

    s_f = s f^2 / (f^2 - k u (s - f))

You can see that the bigger the F-stop (and so the smaller the aperture), the bigger the range between s_n and s_f, and so the larger the depth of field.
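These equations translate directly into code. Here is a minimal Python sketch (the numbers are assumptions for illustration, not values from the lab):

    # Near and far limits of the depth of field, from the equations above.
    # s: focused distance, f: focal length, k: F-number, u: circle of
    # confusion diameter; all distances in meters.
    def dof_limits(s, f, k, u):
        s_n = s * f**2 / (f**2 + k * u * (s - f))
        denom = f**2 - k * u * (s - f)
        s_f = s * f**2 / denom if denom > 0 else float('inf')
        return s_n, s_f

    # Assumed values: f = 32 mm, focused at 4 m, f/8.6, u = 0.03 mm.
    print(dof_limits(4.0, 0.032, 8.6, 0.00003))  # ~(2.0 m, very far indeed)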

If we want s_f to be at infinity, we must set the denominator of its equation to zero, and with a little more shuffling we can get the (non-infinite) focusing distance for which the effect is the same, often called the hyperfocal distance:

    s = f^2 / (k u) + f
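Continuing the Python sketch with the same assumed values:

    # Focusing distance that pushes the far limit of the depth of field
    # out to infinity (the hyperfocal distance).
    def hyperfocal(f, k, u):
        return f**2 / (k * u) + f

    # Assumed values again: f = 32 mm, f/8.6, u = 0.03 mm.
    print(hyperfocal(0.032, 8.6, 0.00003))  # ~4.0 m; near limit is then ~2.0 m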

If we know the bounds of the depth of field, we can also compute the perfect focusing distance for this depth of field, s. (If we need to know k, we can estimate it.) This can also be obtained from the above equations:

    s = 2 s_n s_f / (s_n + s_f)

Working backwards, if we know f and u, together with s_f and s_n, we can compute the distance for perfect focus, s. The specifications for the job will generally dictate the size of u.

[Figure: lens with front node H and back node H'; object plane OP at distance s, image plane IP at distance s'; aperture diameter d and circle of confusion u; near and far object distances s_n and s_f, with corresponding image distances s'_n and s'_f.]

Knowing s, we can compute s', which will tell us exactly how far from the image plane we need to place the lens. This is an important design consideration.

Once we know s, f and u, as well as what we expect s_n and s_f to be, we can compute the size of the aperture, by computing k and, from that, d. This will give us the actual diameter of the aperture, which is the final parameter we need for building the camera.
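Pulling these steps together, here is an end-to-end sketch in Python. The requirement (everything from about 2 m to infinity acceptably sharp) follows the discussion above, but the focal length and circle of confusion are our own illustrative assumptions:

    # Assumed design inputs: f = 32 mm, u = 0.03 mm, near limit s_n = 2 m,
    # far limit s_f at infinity.
    f, u = 0.032, 0.00003
    s_n = 2.0

    # With s_f at infinity, s = 2*s_n*s_f / (s_n + s_f) tends to 2*s_n.
    s = 2.0 * s_n                               # perfect-focus distance: 4 m

    # Rearranging the s_n depth-of-field equation for the F-number k:
    k = f**2 * (s - s_n) / (u * s_n * (s - f))  # ~8.6
    d = f / k                                   # aperture diameter: ~3.7 mm

    # Lens-to-film distance from the lens equation 1/s + 1/s' = 1/f:
    s_prime = 1.0 / (1.0 / f - 1.0 / s)         # ~32.3 mm
    print(k, d, s_prime)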

Film Speed and Exposure Times

Given that we have now designed the lens system, the location of the lens with respect to the film plane, and the size of the aperture, we must now think about exposing the film.

A film needs a certain amount of light to form an image. We can get this light by having a big aperture open for just a short time, or a small aperture open for a long time, or something in between. Shorter exposures are better because there is less chance of movement in the object or the camera, so we tend to want something faster than about a fiftieth of a second to avoid camera shake.
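The trade-off here follows the usual reciprocity rule: the exposure delivered to the film is proportional to the aperture area times the shutter time, and the aperture area goes as (f/k)^2. A minimal Python sketch (the F-numbers are assumed, for illustration):

    # Equivalent exposure across F-numbers: t2 = t1 * (k2 / k1)**2.
    def equivalent_time(t1, k1, k2):
        return t1 * (k2 / k1) ** 2

    # 1/250 s at f/4 needs about 1/54 s at f/8.6 -- already close to the
    # fiftieth-of-a-second camera-shake limit mentioned above.
    print(equivalent_time(1.0 / 250.0, 4.0, 8.6))  # ~0.0185 s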

With some limits on the shutter speed, we now have to look at the film speed. The ASA number tells us how fast the film is, i.e. how much light it needs to produce an image. The bigger the number, the faster the film and the less light needed for an image. Common consumer film tends to range between 100 and 800 ASA, although much faster and much slower film is available.
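As a rough numerical illustration (assuming the standard linear ASA scale, on which doubling the number halves the light required):

    # ASA/ISO film speed is linear: at a fixed aperture, the required
    # exposure time scales as 1/ASA.
    def equivalent_time_for_film(t1, asa1, asa2):
        return t1 * asa1 / asa2

    # A scene needing 1/50 s on 100 ASA film needs only 1/200 s on 400 ASA.
    print(equivalent_time_for_film(1.0 / 50.0, 100, 400))  # 0.005 s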

We won’t go into this in great detail, but we need to be aware that the film speed is also a factor in the camera’s design.

Cameras as Measurement Systems: Cameras in Information Systems

Going back to the earlier discussion on focal length, remember this picture?

[Figure, repeated from earlier: an object of size x at distance h images to size p at the principal distance f behind the lens node.]

One of the important things about this diagram is that if we measure the image of an object on the film (or a print of that film), giving p, and we know how far away that object is, h, and the camera’s focal length, f, we can compute x, the size of the object. If all the objects in the image are at the same distance, our image will be an exact (scaled) copy of the objects, and we simply have to scale the image measurements to get the objects’ dimensions. If we take photographs from aircraft or satellites, the variation in terrain height is often very small compared to the overall distance from the camera, and we can make a lot of approximate measurements directly from the photograph. This makes aerial photographs a powerful mapping tool, and the camera a very useful appliance for collecting data and measurements about real-world objects. So we find that cameras are used a lot for recording and reproducing information where the height differences are small compared to the camera distance.
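By similar triangles, x / h = p / f, so x = p h / f. A one-line check in Python (the measurements are assumed, for illustration):

    # Object size from image size, by similar triangles: x = p * h / f.
    def object_size(p, h, f):
        return p * h / f

    # An image 4 mm across, of an object 10 m away, through a 32 mm lens:
    print(object_size(0.004, 10.0, 0.032))  # 1.25 m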

But the situation of having all the objects in the object plane is not always the case. We often have objects at different distances from the camera, and objects whose different parts are at different distances. How can we measure these types of objects?

Stereoscopic Imagery and Measurements

If we take photographs of an object from two different positions, so that the object appears on both photographs, we can arrange to see the object in stereo. This allows us to perceive the 3-D world in which the object is located. We can also measure the images in the stereo-pair, generally with respect to each other, and often with respect to the center of the image, and so compute 3-D co-ordinates for different parts of the object. The method of computation varies with the technology, but it is possible to take aerial photographs from several thousand feet and compute the positions and dimensions of objects on the ground to better than a centimeter. This requires us to measure the photograph extremely precisely, perhaps to as fine as a micron (a millionth of a meter, or micrometer). At this size, you can see the grains in the photograph quite clearly.

This whole discipline area is photogrammetry, the science of making measurements from photographs. Today, we can obtain stereoscopic imagery from satellites, which take images of the same point before, during and after the time they pass over it. This ‘in track’ stereo allows depth perception from space, and permits rapid mapping of vast areas.

In the lab, we will measure the negatives to get image co-ordinates, and use these to compute the real-world co-ordinates of some of the objects that we photographed on the test range. The procedure for doing this is pretty straightforward and simply builds on the similar-triangles approach discussed above. The difference here is that we don’t know the depth to the object, just as we don’t know its location, but that can be solved by having two different triangles, one from each photograph of a stereo-pair.

The figure overleaf illustrates this situation. Diagram (a) illustrates the overall situation, while (b) is a plan view. Images of object point A appear at a and a' on the two photos. We can deal easily with the angles φ and φ', as we set up the cameras to point at right angles to the base line and parallel to each other, so these angles are 90° (well, close enough for this lab). We also know the length of the base line, B (the line between L and L'), and we know that the camera elevations were the same (the camera support stations were at the same height). We set up an arbitrary X-Y object space (as it is termed), based on the left-hand camera (as origin) and the base line (as the X axis). The Y axis goes away from the left-hand camera, and we perceive this as depth; Z is up. What we want are the 3-D co-ordinates of the point A.

We can solve the problem graphically or analytically. In the analytical solution, we compute the angles α_a, α'_a, β_a and β'_a from the triangles formed by the measured co-ordinates on the negatives together with the focal length (which we computed earlier):

    α_a = arctan(x_a / f)        α'_a = arctan(x'_a / f)

where x_a and x'_a are the co-ordinates of a and a' measured on the two negatives. The angles α and α' are then calculated as follows:

    α = φ - α_a
    α' = φ' - α'_a

and the third angle of the triangle is

    α'' = 180° - α - α'

Applying the sine rule, distance LA may be calculated as

    LA = B sin α' / sin α''

and the co-ordinates can be calculated using

    X = (LA) cos α
    Y = (LA) sin α

We can determine the elevation of A, assuming we know the elevation of the camera at L, using the following:

    elev A = elev L + (LA) tan β_a

where LA is the horizontal distance between L and A.
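As a sketch of the whole computation in Python (the sign conventions and the photo-co-ordinate names are our assumptions about the measurement setup, so treat this as illustrative rather than as the lab’s prescribed procedure):

    import math

    # Plan-view stereo intersection for cameras at L (the origin) and L'
    # (a distance B along the X axis), both pointing along +Y, so that
    # phi = phi' = 90 degrees.
    def intersect(x_a, x_a2, y_a, f, B, elev_L=0.0):
        alpha_a = math.atan(x_a / f)       # ray deflection on the left photo
        alpha_a2 = math.atan(x_a2 / f)     # ray deflection on the right photo
        alpha = math.pi / 2 - alpha_a      # alpha  = phi  - alpha_a
        alpha2 = math.pi / 2 + alpha_a2    # alpha' = phi' - alpha'_a; the
                                           # sign is absorbed by our x signs
        alpha3 = math.pi - alpha - alpha2  # alpha'' = 180 - alpha - alpha'
        LA = B * math.sin(alpha2) / math.sin(alpha3)  # sine rule
        X = LA * math.cos(alpha)
        Y = LA * math.sin(alpha)
        beta_a = math.atan(y_a / math.hypot(x_a, f))  # vertical angle at L
        Z = elev_L + LA * math.tan(beta_a)
        return X, Y, Z

    # An object at (2, 10) m, 1 m above the cameras, base line 4 m, 32 mm
    # lens: the photo co-ordinates would be about (6.4, -6.4, 3.2) mm.
    print(intersect(0.0064, -0.0064, 0.0032, 0.032, 4.0))  # ~(2.0, 10.0, 1.0)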

[Figure: stereo-pair geometry. (a) Overall view; (b) plan view, showing cameras L and L', base line B, image points a and a', and object point A.]

The Information Society and Information Economy

Today, about 60% of the US work force can be classed as information workers. The proportion of the US and world economies that is based on information is already very large and growing. As a society, we are awash in information, often more than we can reasonably handle. As professional engineers, most of you will be at least as much concerned with information and related issues as with things like chemicals and materials.

Where does a camera fit into this information-oriented picture?

Cameras can be thought of as information appliances: a means of collecting spatially-referenced data and information, in the form of implicit measurements, quickly, cheaply and permanently. You can buy a camera that works for less than $10 (you already have!). You can take a photograph with it in a fraction of a second. You can measure that photograph very precisely and convert those measurements into measurements in the real world. And you can make those measurements years after the photograph was taken, even if no-one ever thought you might need them. Truly these little cameras are wonderful devices!

In an information economy, any device which collects large amounts of data and information in a quick and economical manner has a central place in the economy. Cameras are a major tool of the information economy and the information society. As designers of such information appliances, you will have to think about how your cameras fit into this bigger picture.

What is the camera’s next evolutionary step? Originally devised as a means of taking a single image, the camera evolved into the cine camera (or movie camera), which allowed the recording of things happening in both time and space. Video cameras added a re-usable recording format. Digital cameras let you connect what was until recently only a chemical reaction to light into computers and digital networks, and digital movie cameras are becoming a commonplace consumer item.

You can see the development: more information, better ways of distributing it. This is what the information economy is all about. As engineers, we must work with these larger issues before us.

And the Lab Itself...?

In the lab, we will explore some of the engineering issues involved in the camera design, and then we’ll look at the larger information issues.

References

KRAUS, K., 1993. Photogrammetry. Bonn, Germany: Ferd. Dümmler Verlag.

MOFFITT, F.H. and MIKHAIL, E.M., 1980. Photogrammetry. New York: Harper and Row.

WOLF, P.R., 1983. Elements of photogrammetry, with air photo interpretation and remote sensing. New York: McGraw-Hill.
