
AUTOFOCUS FOR A DIGITAL CAMERA USING SPECTRUM ANALYSIS

By AHMED FAIZ ELAMIN MOHAMMED
INDEX NO. 084012

Supervisor

Dr. Abdelrahman Ali Karrar

THESIS SUBMITTED TO THE UNIVERSITY OF KHARTOUM IN PARTIAL FULFILMENT FOR THE DEGREE OF B.Sc. (HON) IN ELECTRICAL AND ELECTRONICS ENGINEERING (CONTROL ENGINEERING)

FACULTY OF ENGINEERING
DEPARTMENT OF ELECTRICAL AND ELECTRONICS ENGINEERING

JULY 2013

DECLARATION OF ORIGINALITY

I declare that this report entitled "AUTOFOCUS FOR A DIGITAL CAMERA USING SPECTRUM ANALYSIS" is my own work except as cited in the references. The report has not been accepted for any degree and is not being submitted concurrently in candidature for any degree or other award.

Signature: ______

Name: ______

Date: ______


DEDICATION

To my Mother

To my Father

To all my great Family


ACKNOWLEDGEMENT

Thanks first and foremost to God Almighty who guided me in my career to seek knowledge.

I am heartily thankful to my parents, who helped me, encouraged me, and stood by me at all times.

All thanks, appreciation and respect to my supervisor Dr. Abdelrahman Karrar for his great supervision and his continued support and encouragement.

Many thanks to my colleague Mazin Abdelbadia for his continued diligence and patience in completing this project successfully.

Finally, all thanks to those who accompanied me and helped me during my career to seek knowledge.


ABSTRACT

The purpose of a camera system is to provide the observer with image information. A defocused image contains less information than a focused one; therefore, focusing is a central problem in such a system. When the scene changes frequently, it is desirable to perform focusing automatically, to liberate the operator from this duty. In this thesis I present a new method of obtaining an autofocus system. The method is passive: it is based on analyzing an image captured by the camera using frequency-domain analysis. The analysis is based on taking the Discrete Cosine Transform (DCT) of the image to calculate its energy. This energy is a direct indication of the degree of focus, or sharpness, of that image. After the energy is calculated, it is sent to the controller using a serial protocol. The controller drives a stepper motor which adjusts the camera lens according to the energy value. The project goals were met successfully, and the autofocus system was designed and implemented using spectrum analysis.


المستخلص (ABSTRACT IN ARABIC)

The purpose of a camera system is to provide the photographer with as much information about the image as possible. A defocused image contains less information than a focused one; focusing is therefore the central problem in such a system. When the scene changes frequently, it is preferable to perform focusing automatically to relieve the photographer of this duty. This thesis presents a new method of obtaining an automatic focusing system. The method is of the passive type: it is based on analyzing the image captured by the camera using frequency-domain analysis. The analysis takes the discrete cosine transform of the image in order to calculate its energy, and this energy is a direct indication of the degree of focus, or sharpness, of that image. After the energy is calculated, it is sent to the controller over a serial protocol; the controller drives a stepper motor that adjusts the camera lens according to the energy value.

The project goals were achieved successfully, and the autofocus system was designed and implemented using spectrum analysis.


TABLE OF CONTENTS

DECLARATION OF ORIGINALITY
DEDICATION
ACKNOWLEDGEMENT
ABSTRACT
المستخلص (ABSTRACT IN ARABIC)
TABLE OF CONTENTS
LIST OF FIGURES
LIST OF TABLES
LIST OF ABBREVIATIONS
1 INTRODUCTION
  1.1 Introduction
  1.2 Project Background
  1.3 Problem Statement
  1.4 Motivation
  1.5 Objectives
  1.6 Thesis Layout
2 LITERATURE REVIEW
  2.1 Introduction
  2.2 Methods of Focusing
    2.2.1 Manual Focus
    2.2.2 Automatic Focus (Autofocus)
      2.2.2.1 Active AF
      2.2.2.2 Passive AF
  2.3 Digital Images
    2.3.1 Types of Digital Images
      2.3.1.1 Black and White Images
      2.3.1.2 Color Images
    2.3.2 Color Terminology
    2.3.3 JPEG
  2.4 Image Frequency Analysis
    2.4.1 DFT
    2.4.2 DFT in Image Processing
  2.5 The Discrete Cosine Transform
    2.5.1 The One-Dimensional DCT
    2.5.2 The Two-Dimensional DCT
  2.6 Focusing Energy of an Image
3 METHODOLOGY
  3.1 Introduction
  3.2 Image Processing System
    3.2.1 Get Snapshot
    3.2.2 Frequency Analysis (The DCT)
    3.2.3 Energy Calculation
    3.2.4 Sending the Energy
  3.3 The Controller
    3.3.1 Serial Interface
    3.3.2 Motor Driver
      3.3.2.1 A Very Basic Stepper Motor
      3.3.2.2 Bipolar Stepper Motor
      3.3.2.3 Step Modes
      3.3.2.4 Driver Technology Overview
  3.4 Overall System Flow Chart
4 IMPLEMENTATION & RESULTS
  4.1 Introduction
  4.2 System Implementation
    4.2.1 Hardware Components
      4.2.1.1 The ATmega32 Microcontroller
      4.2.1.2 Stepper Motor
      4.2.1.3 Power MOSFETs
      4.2.1.4 Serial Interface
      4.2.1.5 MAX232
      4.2.1.6 Web Cam
      4.2.1.7 The Overall Hardware Circuit
    4.2.2 Software Components
      4.2.2.1 MATLAB
      4.2.2.2 CodeVision AVR
      4.2.2.3 Proteus
      4.2.2.4 Virtual Serial Port Emulator
      4.2.2.5 HyperTerminal
  4.3 The Results
    4.3.1 Frequency Analysis Results
    4.3.2 Brightness Effects
    4.3.3 Monochromatic Image
    4.3.4 Energy Calculation Results
    4.3.5 Final Result of the Project
  4.4 Discussion
    4.4.1 Energy Calculation Error
    4.4.2 Mechanical Issue
5 CONCLUSION AND FUTURE WORK
  5.1 Conclusion
  5.2 Future Work
REFERENCES
Appendix A: HARDWARE
Appendix B: CODES


LIST OF FIGURES

Figure 2-1 Methods of focusing
Figure 2-2 The flow chart of the active autofocus
Figure 2-3 The flow chart of the passive autofocus
Figure 3-1 Basic design of a stepper motor
Figure 3-2 Bipolar stepper motor windings
Figure 3-3 A flowchart of the system general algorithm
Figure 4-1 ATmega32 IC
Figure 4-2 Stepper motor
Figure 4-3 IRF3205 N-type MOSFET
Figure 4-4 9-pin RS-232 plug
Figure 4-5 MAX232 IC
Figure 4-6 Web cam
Figure 4-7 The overall hardware project circuit
Figure 4-8 The Proteus simulation of the motor driver
Figure 4-9 Virtual serial emulator
Figure 4-10 HyperTerminal
Figure 4-11 A focused photo and its frequency analysis
Figure 4-12 An out-of-focus photo and its frequency analysis
Figure 4-13 Bright photo and its frequency analysis
Figure 4-14 Dark photo and its frequency analysis
Figure 4-15 Monochromatic photo and its analysis
Figure 4-16 Focusing energies plot
Figure 4-17 The unfocused image
Figure 4-18 Autofocus system output
Figure 4-19 Energy vs lens position
Figure 4-20 Mechanical design

LIST OF TABLES

Table 3-1 Full step sequence
Table 3-2 Half step sequence


LIST OF ABBREVIATIONS

DCT Discrete Cosine Transform.

MF Manual Focus.

AF Auto Focus.

IC Integrated Circuit.

PC Personal Computer.

SIR Secondary Image Registration.

RGB Red, Green, and Blue.

JPEG Joint Photographic Experts Group.

DFT Discrete Fourier Transform.

RS-232 Recommended Standard 232.

DCE Data Communications Equipment.

DTE Data Terminal Equipment.


CHAPTER ONE
INTRODUCTION

1.1 Introduction:

This chapter is intended to give the reader an idea about the project's problem, background, objectives and scope. In addition, an overview of the report layout is presented.

1.2 Project Background:

Photography is important to people nowadays because it captures moments that will not last forever, and the need for clear images has become one of its most important concerns. These concerns are directly related to the degree of focus of the camera. Many techniques, methods and technologies have been applied to improve the focus of captured images, starting with manual techniques and later developing into automatic techniques, although both are still used in the photography world today.

Leica pioneered autofocus systems: between 1960 and 1973 the company patented a number of autofocus technologies. Generally, autofocus techniques are divided into two categories, active techniques and passive techniques. Active techniques have a higher cost because they require sensors to calculate the distance between the object and the lens, and cameras that use active techniques are more complex than those that use passive techniques.

Autofocus techniques differ from one manufacturer to another; each has its own advantages over the others, and anyone can develop their own method.


1.3 Problem statement:

Focusing cameras is a central problem in many applications, e.g., microscopy and ordinary video systems. Traditionally the needed adjustment of the camera is done manually by the operator. When the view does not change this may be acceptable for most applications, but when the scene is variable the operator must pay unreasonable attention to this task. The purpose of an autofocus camera is to relieve the operator of this duty. When the distance between camera and object is known, the autofocus process is easy and straightforward: the distance corresponds to a specific lens position, and that transformation can be established beforehand. Therefore many existing systems use an active sensor to measure the distance to the object. However, such an active system emits energy that can be noticed by others, which is not desirable in a military system; furthermore, measuring equipment that can handle longer distances can be very complicated. This thesis only considers a passive method that does not emit energy at all. Another important part of the autofocus problem is the search strategy: the focused position should be attained as fast as possible.

1.4 Motivation:

• Develop a new technique for focusing cameras.
• Accelerate the focusing process in digital cameras.
• Reduce the cost and the complexity of cameras that use active autofocus methods.

1.5 Objectives:

Project objectives can be summarized as follows:

• Interfacing the camera with the MATLAB program.
• Designing and implementing an algorithm, based on the discrete cosine transform, that gives an accurate value representing the degree of focus of the captured image.
• Designing a mechanical model that moves the camera lens to the right position according to the analysis output.


1.6 Thesis layout

In this section, we present brief information about the rest of this thesis.

The remainder of this thesis is organized as follows:

• Chapter 2: Literature Review: a technology chapter; it reviews the available systems and their capabilities and goes through the relevant image processing issues. Projects related to this one are also briefed with reference to their papers.
• Chapter 3: Methodology: describes the methodology of the project in terms of a state machine.
• Chapter 4: Implementation and Results: shows all the work carried out in the project at the software and hardware levels and the results of the implementation. A discussion of the results is also given in this chapter.
• Chapter 5: Conclusion and Future Work: gives a conclusion on the results obtained and the features and limitations of the implementation. Possible upgrades and bug removal are also covered.
• References: the citations used, indexed by numbers.
• Appendix A: datasheets and information about the ICs and devices used in the project.
• Appendix B: the code of the project.


CHAPTER TWO
LITERATURE REVIEW

2.1 Introduction

One of the first rules of photography is that the subject should be sharp. Most modern digital cameras offer a number of ways of achieving sharp photos.

This chapter overviews the methods of focusing a digital camera to get sharp images, and also gives a brief description of digital images and their types. The DCT and the focusing energy of an image are also introduced in this chapter.

2.2 Methods of focusing:

Due to the optical properties of photographic lenses, only objects within a limited range of distances from the camera will be reproduced clearly. The process of adjusting this range is known as changing the camera's focus. There are two main methods of focusing digital cameras: Manual focus (MF) and Autofocus (AF). Figure 2-1 shows the two methods of focusing in a digital camera.

Figure 2-1 Methods of focusing


2.2.1 Manual focus

Manual focusing means adjusting the camera lens by hand, guided by the photographer's eyes, to get a clear image. The process is simple: the photographer views the object to be captured through the camera and then adjusts the lens until the capture is sharp.

2.2.2 Automatic focus (Autofocus)

Autofocus refers to a camera lens' ability to adjust its configuration in order to focus properly on a subject regardless of whether it is near or far from the camera.

Autofocus works either by using contrast sensors within the camera (passive AF) or by emitting a signal to illuminate the subject or estimate its distance (active AF) [1].

2.2.2.1 Active AF

Active AF systems measure distance to the subject independently of the optical system, and subsequently adjust the optical system for correct focus. There are various ways to measure distance, including ultrasonic sound waves and infrared light [2].

a) Sonar - ultrasonic sound is emitted by the camera and reflected from the subject back to the camera. The delay between the emitted and reflected sound is measured, giving a distance estimate, and the lens is adjusted to that distance. Some Polaroid SX-70 cameras used this system.

b) Infra-red beam - a pulsed infra-red light beam is emitted by the camera and reflected by the subject; the camera has an infra-red receiver set apart from the emitter. Adjusting the angles of emitter and receiver (in concert with moving the lens focus mechanism) and finding a maximum in the amount of light received gives a measure of the distance and a focused image. This method is common on compact film cameras, including the Nikon 35TiQD and 28TiQD and the Canon AF35M.


The flow chart of the active autofocus method is shown in Figure 2-2. When the active autofocus method starts, the camera sends patterns, for instance visible light or infrared rays. After receiving the reflected patterns, the camera calculates the distance between the camera and the object and, according to the calculated distance, adjusts the position of its lens. This type needs sensors to send and receive patterns, so the camera needs space to house them, and its cost increases because of the sensors. When shooting through glass, some patterns may be reflected by the glass, and the active method then calculates a wrong distance; the solution is to choose a pattern that can pass through glass, so the choice of pattern is important. When there are many things in front of the main object, this method can also calculate a wrong distance [3].

Figure 2-2 The flow chart of the active autofocus.


2.2.2.2 Passive AF

Passive Autofocus analyses the image arriving at the camera, without transmitting anything towards the subject - except in some cases assistance light is used to illuminate the subject when it is too dark for there to be enough image for focusing [2]. As examples of passive AF:

• Image splitting - the image is divided into two parts, and these are analyzed by an autofocus sensor. This effectively creates a rangefinder, working by comparing light peaks and their phases in the two images. SIR (Secondary Image Registration) is one example of this method.
• Contrast analysis - the contrast of the image is measured whilst adjusting the focus of the lens; the highest contrast is achieved when the image is in focus. This is more easily implemented in digital cameras, which already have a sensor and a processing system, and in some video cameras.

The flow chart of the passive autofocus method is shown in Figure 2-3. This method needs information to decide whether the position of the lens is right. After the passive autofocus method starts, the camera first captures an image. Second, the camera uses some algorithm to calculate a sharpness value. According to this value, the camera uses another algorithm to judge whether the lens is focused on the object; if not, the camera goes back to the first step, repeating these steps until the lens is in focus. This passive method has trouble in low-light situations, where the sharpness value may stay too low to ever reach focus; the solution is to change the threshold in this situation. Some cameras use a light to assist focusing, and some use an active autofocus method to assist, for example the Sony DSC F707. The choice of algorithm is important: some algorithms are fast but focus poorly, while people want their cameras to work fast and focus well. The main problem of passive autofocus is to find an algorithm that needs little computation but achieves good focus [3].


Figure 2-3 The flow chart of the passive autofocus.
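To make the search loop concrete, the following MATLAB sketch shows a minimal hill-climbing passive-AF routine. It only illustrates the flow chart above, not the algorithm developed in this thesis; capture_image, sharpness and move_lens are hypothetical placeholders for the camera capture, the focus measure and the lens actuation.

% Illustrative hill-climbing passive-AF loop (sketch only).
% capture_image(), sharpness() and move_lens() are hypothetical placeholders.
step = 1;                           % lens movement direction (+1 or -1)
prev = sharpness(capture_image());  % focus measure of the current image
reversals = 0;
while reversals < 2                 % stop after bracketing the peak twice
    move_lens(step);                % move the lens one step
    curr = sharpness(capture_image());
    if curr < prev                  % sharpness dropped: we passed the peak
        step = -step;
        reversals = reversals + 1;
    end
    prev = curr;
end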

2.3 Digital Images

Digital images are composed of pixels (short for picture elements). Each pixel represents the color (or, for black and white photos, the gray level) at a single point in the image, so a pixel is like a tiny dot of a particular color. By measuring the color of an image at a large number of points, we can create a digital approximation of the image from which a copy of the original can be reconstructed. Pixels are a little like grain particles in a conventional photographic image, but they are arranged in a regular pattern of rows and columns and store information somewhat differently. A digital image is a rectangular array of pixels, sometimes called a bitmap [4].


2.3.1 Types of Digital Images:

For photographic purposes, there are two important types of digital images: color and black and white. Color images are made up of colored pixels while black and white images are made of pixels in different shades of gray [4].

2.3.1.1 Black and White Images:

A black and white image is made up of pixels, each of which holds a single number corresponding to the gray level of the image at a particular location. These gray levels span the full range from black to white in a series of very fine steps, normally 256 different grays. Since the eye can barely distinguish about 200 different gray levels, this is enough to give the illusion of a stepless tonal scale.

Assuming 256 gray levels, each black and white pixel can be stored in a single byte (8 bits) of memory.

2.3.1.2 Color Images:

A color image is made up of pixels, each of which holds three numbers corresponding to the red, green, and blue levels of the image at a particular location. Red, green, and blue (sometimes referred to as RGB) are the primary colors for mixing light. These so-called additive primary colors are different from the subtractive primary colors used for mixing paints (cyan, magenta, and yellow). Any color can be created by mixing the correct amounts of red, green, and blue light. Assuming 256 levels for each primary, each color pixel can be stored in three bytes (24 bits) of memory. This corresponds to roughly 16.7 million different possible colors [4].


2.3.2 Color Terminology:

While pixels are normally stored within the computer according to their red, green, and blue levels, this method of specifying colors (sometimes called the RGB color space) does not correspond to the way we normally perceive and categorize colors. There are many different ways to specify colors, but the most useful ones work by separating out the hue, saturation, and brightness components of a color. Primary colors are those that cannot be created by mixing other colors. Because of the way we perceive colors using three different sets of wavelengths, there are three primary colors. Any color can be represented as some mixture of these three primary colors [4].

Secondary colors, such as cyan, magenta, and yellow, are obtained by mixing two of the primary colors.

2.3.3 JPEG

JPEG stands for Joint Photographic Experts Group. It is a standard method of compressing photographic images. Also the file format which employs this compression is called JPEG.

The file extensions for this format are .JPEG, .JFIF, .JPG, or .JPE, although .JPG is the most common on all platforms.

The JPEG compression algorithm is at its best on photographs and paintings of realistic scenes with smooth variations of tone and color. For web usage, where the amount of data used for an image is important, JPEG is very popular. JPEG/Exif is also the most common format saved by digital cameras.

On the other hand, JPEG may not be as well suited for drawings and other textual or iconic graphics, where the sharp contrasts between adjacent pixels can cause noticeable artifacts. Such images may be better saved in a lossless graphics format such as TIFF, GIF, PNG, or a raw image format. The JPEG standard actually includes a lossless coding mode, but that mode is not supported in most products. While JPEG compression can greatly reduce the size of an image file, it can also compromise the quality of the image, and if you aren't careful there may be no recovery [5].

2.4 Image Frequency analysis

Fourier analysis is used in image processing in much the same way as with one-dimensional signals. However, images do not have their information encoded in the frequency domain, making the techniques much less useful. For example, when the Fourier transform is taken of an audio signal, the confusing time domain waveform is converted into an easy to understand frequency spectrum.

In comparison, taking the Fourier transform of an image converts the straightforward information in the spatial domain into a scrambled form in the frequency domain.

The Fourier Transform is an important image processing tool which is used to decompose an image into its sine and cosine components. The output of the transformation represents the image in the Fourier or frequency domain, while the input image is the spatial domain equivalent. In the Fourier domain image, each point represents a particular frequency contained in the spatial domain image.


2.4.1 DFT

In mathematics, the discrete Fourier transform (DFT) converts a finite list of equally spaced samples of a function into the list of coefficients of a finite combination of complex sinusoids, ordered by their frequencies, that has those same sample values. It can be said to convert the sampled function from its original domain (often time or position along a line) to the frequency domain.
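In standard notation, the DFT of N equally spaced samples x_0, x_1, …, x_{N−1} is

X_k = \sum_{n=0}^{N-1} x_n \, e^{-i 2\pi k n / N}, \qquad k = 0, 1, \ldots, N-1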

The DFT is the sampled Fourier Transform and therefore does not contain all frequencies forming an image, but only a set of samples which is large enough to fully describe the spatial domain image. The number of frequencies corresponds to the number of pixels in the spatial domain image, i.e. the image in the spatial and Fourier domain is of the same size.

2.4.2 DFT in image processing:

For a square image of size N×N, the two-dimensional DFT is given by

F(k,l) = \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} f(i,j) \, e^{-i 2\pi \left( \frac{k i}{N} + \frac{l j}{N} \right)}

where f(i,j) is the image in the spatial domain and the exponential term is the basis function corresponding to each point F(k,l) in the Fourier space. The equation can be interpreted as follows: the value of each point F(k,l) is obtained by multiplying the spatial image by the corresponding basis function and summing the result.

The basis functions are sine and cosine waves with increasing frequencies; F(0,0) represents the DC component of the image, which corresponds to the average brightness, and F(N-1,N-1) represents the highest frequency.
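As an illustration, the 2-D spectrum of an image can be inspected in MATLAB with fft2; the file name below is only a placeholder.

% View the log-magnitude spectrum of an image (file name is a placeholder).
img = rgb2gray(imread('sample.jpg'));  % grayscale version of the image
F   = fftshift(fft2(double(img)));     % 2-D DFT, DC component in the centre
imagesc(log(1 + abs(F)));              % log scale makes the spectrum visible
colormap gray; axis image;
title('Log-magnitude spectrum');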


2.5 The Discrete Cosine Transform

The discrete cosine transform (DCT) represents an image as a sum of sinusoids of varying magnitudes and frequencies. The two-dimensional version (MATLAB's dct2) computes the two-dimensional DCT of an image.

In particular, a DCT is a Fourier-related transform similar to the discrete Fourier transform (DFT), but using only real numbers. DCTs are equivalent to DFTs of roughly twice the length, operating on real data with even symmetry (since the Fourier transform of a real and even function is real and even).

The DCT has major properties for a typical image:

1. Most of the visually significant information about the image is concentrated in just a few coefficients of the DCT.
2. Any spatial or temporal signal has an equivalent frequency representation.
3. High frequencies correspond to pixel values that change rapidly across the image (e.g. text, texture, leaves, etc.).
4. Strong low-frequency components correspond to large-scale features in the image (e.g. a single, homogeneous object that dominates the image).
5. Like the discrete Fourier transform, the DCT transforms a signal or image from the spatial domain to the frequency domain, and can therefore be used to obtain a frequency representation of an image.


For these reasons, the DCT is often used in image compression applications; indeed, the DCT is at the heart of the international standard lossy image compression algorithm known as JPEG. Like other transforms, the Discrete Cosine Transform attempts to de-correlate the image data; after de-correlation, each transform coefficient can be encoded independently without losing compression efficiency [6].

2.5.1 The One-Dimensional DCT

The most common DCT definition of a 1-D sequence f(x) of length N is

C(u) = \alpha(u) \sum_{x=0}^{N-1} f(x) \cos\!\left[ \frac{\pi (2x+1) u}{2N} \right], \qquad u = 0, 1, \ldots, N-1

where \alpha(u) is defined as

\alpha(u) = \sqrt{1/N} \text{ for } u = 0, \qquad \alpha(u) = \sqrt{2/N} \text{ for } u \neq 0

For u = 0,

C(0) = \sqrt{1/N} \sum_{x=0}^{N-1} f(x)

Thus the first transform coefficient is (up to the scale factor \sqrt{N}) the average value of the sample sequence. In the literature, this value is referred to as the DC coefficient; all other transform coefficients are called the AC coefficients.
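The definition above can be evaluated directly. The following MATLAB sketch computes the 1-D DCT of a short test sequence from the formula and checks the result against MATLAB's dct function (Signal Processing Toolbox):

% Direct evaluation of the 1-D DCT definition for a short sequence.
f = [10 8 6 4];                 % sample sequence, N = 4
N = length(f);
C = zeros(1, N);
x = 0:N-1;
for u = 0:N-1
    if u == 0, a = sqrt(1/N); else a = sqrt(2/N); end
    C(u+1) = a * sum(f .* cos(pi*(2*x + 1)*u/(2*N)));
end
% C(1) is the DC coefficient: sqrt(1/N)*sum(f), i.e. the scaled average.
disp(C);
disp(dct(f));                   % should match C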


2.5.2 The Two-Dimensional DCT

The 2-D DCT is a direct extension of the 1-D case and is given by

C(u,v) = \alpha(u)\,\alpha(v) \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} f(x,y) \cos\!\left[ \frac{\pi (2x+1) u}{2N} \right] \cos\!\left[ \frac{\pi (2y+1) v}{2N} \right]

for u, v = 0, 1, 2, …, N − 1, where \alpha(u) and \alpha(v) are defined as in the one-dimensional case above.

The 2-D basis functions can be generated by multiplying the horizontally oriented 1-D basis functions with a vertically oriented set of the same functions [6].

2.6 Focusing Energy of an image:

In signal processing, "energy" corresponds to the mean squared value of the signal (typically measured with respect to the global mean value). This concept is usually associated with Parseval's theorem, which allows us to think of the total signal energy as distributed across frequencies (so one can say, for example, that an image has most of its energy concentrated in the low frequencies). Another, related, use is in image transforms: for example, the DCT transforms a block of pixels (an 8×8 image) into a block of transformed coefficients; for typical images, it turns out that, while the original 8×8 block has its energy evenly distributed among the 64 pixels, the transformed block has its energy concentrated in the upper-left coefficients [7].
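For reference, Parseval's relation in its DFT form states that the total energy of a signal can be computed in either domain:

\sum_{n=0}^{N-1} |x_n|^2 = \frac{1}{N} \sum_{k=0}^{N-1} |X_k|^2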

Since blurring acts like low-pass filtering, energy at the higher frequencies is suppressed in a defocused image. The total energy is therefore lower than that of a focused one, which can be used to measure the focus quality [8].
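This effect is easy to demonstrate. The following MATLAB sketch, assuming the Image Processing Toolbox and a placeholder file name, compares the DCT energy (excluding the DC term) of a sharp image and a blurred copy; the blurred copy always comes out lower:

% Blurring suppresses high-frequency energy (file name is a placeholder).
img  = double(rgb2gray(imread('scene.jpg')));
blur = imfilter(img, fspecial('gaussian', 15, 3));  % defocus-like blur
D1 = dct2(img);   D1(1,1) = 0;   % drop the DC term (average brightness)
D2 = dct2(blur);  D2(1,1) = 0;
fprintf('sharp energy  : %.4g\n', sum(D1(:).^2));
fprintf('blurred energy: %.4g\n', sum(D2(:).^2));   % noticeably smaller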


CHAPTER THREE
METHODOLOGY

3.1 Introduction:

In this chapter, the methodology of the overall autofocus system using frequency analysis is presented. The main objective of the system is to focus a digital camera using the frequency spectrum. First, a snapshot is taken by MATLAB as a sample image; image processing then transforms the image to the frequency domain. From the frequency analysis, the energy of the captured image is calculated. The energy is then sent to a microcontroller over the serial port to drive a stepper motor, which controls the position of the lens to obtain focused images.

3.2 image processing system:

All functions in the image processing system are done using MATLAB. These functions are: 3.2.1 Get snap shot:

To analyze the current focus of the camera, a snapshot is first taken, i.e. an image is captured. This image is the input to the image processing system.

3.2.2 Frequency analysis (the DCT):

This is one of the most important stages in the image processing system; it transfers the captured image to the frequency domain using the discrete cosine transform (DCT).

The discrete cosine transform (DCT) represents an image as a sum of sinusoids of varying magnitudes and frequencies.


3.2.3 Energy calculation:

After calculating the DCT of the image, the energy must be calculated, to obtain a value representing the degree of focus of that image.

3.2.4 Sending the energy:

The calculated energy value, which indicates the degree of focus, is sent to the controller serially over the RS-232 protocol through a MAX232 level converter.
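A condensed sketch of this pipeline is shown below (the full listings are in Appendix B). The adaptor name 'winvideo' and the port name 'COM1' are assumptions that depend on the actual hardware:

% Condensed focus-measure pipeline; full listings are in Appendix B.
% 'winvideo' and 'COM1' are hardware-dependent assumptions.
vid   = videoinput('winvideo', 1);
photo = getsnapshot(vid);               % 3.2.1: capture a frame
D     = dct2(rgb2gray(photo));          % 3.2.2: 2-D DCT of the image
mid   = round(size(D, 1)/2);
E     = sum(abs(D(mid, :)).^2);         % 3.2.3: mid-row focusing energy
s = serial('COM1', 'BaudRate', 9600);   % 3.2.4: send it to the controller
fopen(s);
fwrite(s, uint8(E/9));                  % scaled to one byte, as in Appendix B
fclose(s); delete(s);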

3.3 The controller:

A stepper motor is used to adjust the lens of the digital camera for optimum focus according to the information received from the image processing system.

Receiving this information and driving the stepper motor require a controller; the controller of the overall system is an ATmega32 microcontroller, and it has two functions:

3.3.1 Serial Interface

The first function of the controller is to work as a serial interface between the PC and the stepper motor: it receives the information, i.e. the energy value, from the PC after the DCT of the captured image has been determined. Energy values are transmitted to the controller via the RS-232 serial interface; the values pass from the PC to the MAX232 chip (a voltage-level converter that interfaces the RS-232 signals from the PC to the TTL levels the ATmega32 microcontroller requires) and from there to the controller.


3.3.2 Motor Driver

The motor used in the system to adjust the lens of the camera is a stepper motor. A stepper motor is a brushless, synchronous electric motor that converts digital pulses into mechanical shaft rotation. Every revolution of the stepper motor is divided into a discrete number of steps, in many cases 200 steps, and the motor must be sent a separate pulse for each step. The stepper motor can only take one step at a time and each step is the same size. Since each pulse causes the motor to rotate a precise angle, typically 1.8°, the motor's position can be controlled without any feedback mechanism. As the digital pulses increase in frequency, the step movement changes into continuous rotation, with the speed of rotation directly proportional to the frequency of the pulses. Stepper motors effectively have multiple "toothed" electromagnets arranged around a central gear-shaped piece of iron. The electromagnets are energized by an external control circuit; in this system the energizing circuit is a microcontroller [9].
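For a standard 200-step motor, the step angle quoted above follows directly:

\theta_{\text{step}} = \frac{360°}{200} = 1.8° \text{ per step}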

To make the motor shaft turn, one electromagnet is first given power, which magnetically attracts the gear's teeth. When the gear's teeth are aligned with the first electromagnet, they are slightly offset from the next electromagnet; so when the next electromagnet is turned on and the first is turned off, the gear rotates slightly to align with the next one, and from there the process is repeated. Each of those rotations is called a "step", with an integer number of steps making a full rotation. The main reasons for using this type of motor in the system are:

1. The rotation angle of the motor is proportional to the input pulses.
2. The motor has full torque at standstill (if the windings are energized).
3. Precise positioning and repeatability of movement, since good stepper motors have an accuracy of 3 to 5% of a step and this error is non-cumulative from one step to the next.
4. Excellent response to starting/stopping/reversing.
5. Very reliable, since there are no contact brushes in the motor; the life of the motor therefore depends simply on the life of the bearings.
6. The stepper motor's response to digital input pulses provides open-loop control, making the motor simpler and less costly to control.


3.3.2.1 A very basic stepper motor

Like all motors, a stepper motor consists of a stator and a rotor. The rotor carries a set of permanent magnets, and the stator holds the coils. The very basic design of a stepper motor is shown in Figure 3-1:

Figure 3-1 Basic design of a stepper motor.

There are four coils, at 90° to each other, fixed on the stator. The way the coils are interconnected ultimately characterizes the type of stepper motor connection. In the drawing above the coils are not connected together, and the motor has a 90° rotation step. The coils are activated in cyclic order, one by one; the rotation direction of the shaft is determined by the order in which the coils are activated.

Generally, stepper motors come in two main types, depending on how the coils are wound and connected: unipolar and bipolar.


3.3.2.2 Bipolar stepper motor

The bipolar stepper motor usually has four wires coming out of it. Unlike unipolar steppers, bipolar steppers have no common center connection; they have two independent sets of coils instead. To distinguish them from unipolar steppers, measure the resistance between the wires: you should find two pairs of wires with equal resistance. The windings of a bipolar stepper motor are shown in Figure 3-2.

Figure 3-2 Bipolar Stepper Motor windings

3.3.2.3 Step Modes

Stepper motor "step modes" include Full, Half and Micro step [9].

FULL STEP
Standard hybrid stepping motors have 200 rotor teeth, or 200 full steps per revolution of the motor shaft. Dividing the 200 steps into the 360° of rotation equals a 1.8° full step angle. Normally, full step mode is achieved by energizing both windings while reversing the current alternately. Essentially, one digital pulse from the driver is equivalent to one step.

HALF STEP
Half step simply means that the step motor is rotating at 400 steps per revolution. In this mode, one winding is energized and then two windings are energized alternately, causing the rotor to rotate half the distance, or 0.9°. Although it provides approximately 30% less torque, half-step mode produces a smoother motion than full-step mode.


MICROSTEP
Microstepping is a relatively new stepper motor technology that controls the current in the motor windings to a degree that further subdivides the number of positions between poles. Microstepping is typically used in applications that require accurate positioning and smooth motion over a wide range of speeds. Like half-step mode, microstepping provides approximately 30% less torque than full-step mode.

3.3.2.4 Driver Technology Overview

The stepper motor driver receives step and direction signals from the indexer or control system and converts them into electrical signals to run the step motor. One pulse is required for every step of the motor shaft; in full step mode, with a standard 200-step motor, 200 step pulses are required to complete one revolution. The speed of rotation is directly proportional to the pulse frequency. Some drivers have an on-board oscillator which allows an external analog signal to set the motor speed. Speed and torque performance of the step motor is based on the flow of current from the driver to the motor winding; the factor that inhibits this flow, or limits the time it takes for the current to energize the winding, is known as inductance. To overcome the effects of inductance, most types of driver circuits are designed to supply a greater voltage than the motor's rated voltage: generally, the driver output voltage (bus voltage) should be rated 5 to 20 times higher than the motor voltage rating. To protect the motor from damage, the step motor driver should be current-limited to the step motor's current rating.

Stepper motors can be driven in two different patterns or sequences, namely:

• Full Step Sequence
• Half Step Sequence

In the full step sequence, two coils are energized at the same time and the motor shaft rotates. The order in which the coils must be energized is given in Table 3-1.


Table 3-1 Full Step Sequence

In half step sequence the motor step angle is reduced to half the full-mode angle, so the angular resolution doubles compared with full mode; the number of steps per revolution is likewise doubled. Half mode is usually preferred over full mode. The table below shows the pattern of energizing the coils.

Table 3-2 Half Step Sequence
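Tables 3-1 and 3-2 list the energizing order; as an illustration, the standard textbook sequences for a bipolar motor can be written as MATLAB matrices. These are the conventional patterns and may differ in labeling from the original tables:

% Standard bipolar step sequences. Columns: [A+ A- B+ B-], one row per step.
full_step = [ 1 0 1 0;   % step 1: coil A forward,  coil B forward
              0 1 1 0;   % step 2: coil A reversed, coil B forward
              0 1 0 1;   % step 3: coil A reversed, coil B reversed
              1 0 0 1 ]; % step 4: coil A forward,  coil B reversed

half_step = [ 1 0 1 0;   % A+ B+
              0 0 1 0;   % B+ only (single winding energized)
              0 1 1 0;   % A- B+
              0 1 0 0;   % A- only
              0 1 0 1;   % A- B-
              0 0 0 1;   % B- only
              1 0 0 1;   % A+ B-
              1 0 0 0 ]; % A+ only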


3.4 Overall system flow chart:

The general algorithm of the system runs as follows: get a snapshot from the camera; analyze the captured image to determine the energy value; send the energy value to the controller over the serial interface; the controller then drives the motor clockwise, drives it anticlockwise, or stops it. This flow is summarized in Figure 3-3.

Figure 3-3 A Flowchart of the System General Algorithm


CHAPTER FOUR
IMPLEMENTATION & RESULTS

4.1 Introduction

This chapter shows the implementation details of the autofocus system, including a brief description of the software and hardware components used in the system: all the circuit components, the interface units and the processing unit. It also shows the results of the autofocus system. Datasheets and information about most of the components used in the implementation are available in Appendix A.

4.2 System implementation

In this section all the implementation details are covered, including the hardware and software components.

4.2.1 Hardware components

4.2.1.1 The Atmega32 Microcontroller

The Atmel AVR ATmega32 is a low-power CMOS 8-bit microcontroller based on the AVR enhanced RISC architecture. By executing powerful instructions in a single clock cycle, the ATmega32 achieves throughputs approaching 1 MIPS per MHz, allowing the system designer to optimize power consumption versus processing speed. Figure 4-1 shows the ATmega32 IC.

Figure 4-1 ATmega32 IC


4.2.1.2 Stepper motor

It is the most important hardware component used in the autofocus system; it converts electrical power (pulses from the controller) into mechanical power to adjust the focus by changing the camera lens position. Figure 4-2 shows a stepper motor.

Figure 4-2 Stepper motor

4.2.1.3 Power MOSFETs

Power MOSFETs were used in the driver of the stepper motor as a power interface. There are two types of power MOSFETs: P-type and N-type. In this project, N-type MOSFETs were used to drive the stepper motor. Figure 4-3 shows an N-type MOSFET.

Figure 4-3 IRF3205 N-type MOSFET


4.2.1.4 Serial interface

The interface between the PC and the microcontroller is the RS-232 serial protocol, a complete protocol which specifies signal voltages, signal timing, signal functions, pin wiring, and the mechanical connections. In RS-232 terminology, the terminal side of the link (the computer in our system) is called the Data Terminal Equipment (DTE), and the equipment it connects to is called the Data Communications Equipment (DCE). Figure 4-4 shows a 9-pin RS-232 plug.

Figure 4-4 9-pin RS-232 plug

4.2.1.5 MAX232

The MAX232 IC is used to convert TTL/CMOS logic levels to RS-232 logic levels during serial communication between microcontrollers and a PC. The controller operates at TTL logic levels (0-5 V), whereas serial communication on a PC works to the RS-232 standard (-25 V to +25 V), which makes it difficult to establish a direct link between them. The intermediate link is provided by the MAX232. Its transmitters take input from the controller's serial transmission pin and send the output to the RS-232 receiver; its receivers, on the other hand, take input from the transmission pin of the RS-232 serial port and give serial output to the microcontroller's receive pin. The MAX232 needs four external capacitors with values ranging from 1 µF to 22 µF. Figure 4-5 shows the MAX232 IC.

Figure 4-5 MAX232


4.2.1.6 Web Cam

The webcam is another essential hardware component. A simple webcam is shown in Figure 4-6.

Figure 4-6 Web Cam

4.2.1.7 The overall project hardware circuit

The overall hardware circuit of the project is shown in Figure 4-7.

Figure 4-7 The overall hardware project circuit


4.2.2 Software components

Many software components helped during the development of the system; some of them are listed here.

4.2.2.1 MATLAB

MATLAB (matrix laboratory) is a numerical computing environment and fourth-generation programming language. Developed by MathWorks, MATLAB allows matrix manipulations, plotting of functions and data, implementation of algorithms, creation of user interfaces, and interfacing with programs written in other languages, including C, C++, Java, and Fortran [10].

4.2.2.2 Codevision AVR

It is a C compiler, integrated development environment, automatic program generator and in-system programmer for the Atmel AVR family of microcontrollers; it was used to develop the controller code for the ATmega32 microcontroller.

4.2.2.3 Proteus

Proteus is software for simulation, schematic capture, and printed circuit board (PCB) design, developed by Labcenter Electronics [11]. It was used in the system to simulate the driver of the stepper motor. The overall simulation circuit of this project consists mainly of five elements: the controller (ATmega32), the stepper motor, the MOSFETs, the MAX232 and the RS-232 connection. Figure 4-8 shows the circuit of the motor driver in this simulator.


Figure 4-8 The Proteus Simulation of the motor Driver

4.2.2.4 Virtual serial port emulator

Virtual serial ports created with the Virtual Serial Port Driver appear in the system like usual hardware ports, and applications working with them never notice the difference. The driver gives the user the ability to use several pairs of ports simultaneously. This emulator was used because the laptop that ran the image processing system does not have a physical serial port, so the emulator provided a good way to test the system's reliability before the integration stage. The virtual serial port emulator is shown in Figure 4-9.


Figure 4-9 Virtual Serial Emulator

4.2.2.5 Hyper terminal

HyperTerminal is a program that can be used to connect to other computers, such as online services and host computers. HyperTerminal connections are made using a modem, a null-modem cable (used to emulate modem communication), or a network connection. In our case a null-modem (emulated modem) connection was used to test the image processing system on the PC before the integration of the whole system, in particular to check the values that should be sent to the controller via the RS-232 serial interface.

Figure 4-10 Hyper Terminal


4.3 The results

4.3.1 Frequency analysis results

The frequency analysis of all images used in this project consists of the frequency values, i.e. the information content, in one row of the 2-D DCT array of the image. One row is selected for several reasons: firstly, to simplify the mathematical and graphical analysis; secondly, the mid row of the image represents the image well; and finally, to avoid redundancy, because one row is enough to represent the frequency analysis of an image.

1- Analysis of a focused photo

Figure 4-11 shows a focused photo and its frequency analysis. The frequency analysis represents the frequencies of one row (the mid horizontal row) of the two-dimensional DCT array of the image.

Figure 4-11 A focused photo and its frequency analysis


2- Analysis of an out-of-focus photo

Figure 4-12 shows an out-of-focus (blurred) photo and its frequency analysis. Again, the frequency analysis represents the frequencies of one row (the mid horizontal row) of the two-dimensional DCT array of the image.

Figure 4-12 An out-of-focus photo and its frequency analysis

4.3.2 Brightness effects

Brightness has a significant impact on the frequency analysis of images. To study the brightness effects on the algorithm, the DCT analysis was applied to two photos, as shown in Figures 4-13 and 4-14: Figure 4-13 shows a bright photo and its frequency spectrum, while Figure 4-14 shows a dark photo and its frequency spectrum.


Figure 4-13 Bright photo and its frequency analysis

Figure 4-14 Dark photo and its frequency analysis


4.3.3 Monochromatic image

In this case the DCT analysis algorithm is applied to a monochromatic image, i.e. a single-color image with no details, for example the image in Figure 4-15.

Figure 4-15 Monochromatic photo and its analysis

4.3.4 Energy calculation results

Using the webcam, images at different focus settings were taken and the energy of each was calculated to study the autofocus algorithm. NOTE: the lens of the webcam is initially located at the maximum-focus position; the focus is then reduced gradually. The test images and the focusing energy of each are:

Image No. 1: Energy = 38408.24
Image No. 2: Energy = 3434.5
Image No. 3: Energy = 646.31
Image No. 4: Energy = 301.21
Image No. 5: Energy = 178.93
Image No. 6: Energy = 175.81
Image No. 7: Energy = 136.05


The plot for the energy of the seven images is shown in figure 4-16 .
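These values can be reproduced and plotted with a few lines of MATLAB:

% Plot the focusing energy of the seven test images (data from above).
E = [38408.24 3434.5 646.31 301.21 178.93 175.81 136.05];
plot(1:7, E, '-o');
xlabel('Image number'); ylabel('The Energy');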

Figure 4-16 Focusing energies plot

4.3.5 Final result of the project

After integrating all elements of the project, we tested the autofocus system using the webcam: the camera was first defocused, then the autofocus system was run, and the output was recorded after the system stopped.

Figure 4-17 shows the unfocused image and Figure 4-18 shows the autofocus system output.

Figure 4-17 The unfocused image


Figure 4-18 Autofocus system output

4.4 Discussion

Figures 4-11 and 4-12 show that the focused image has higher frequency content than the unfocused image, and the difference between the two images is very clear.

From the energy calculation results we can say that the calculated energy value is proportional to the degree of focus: the focused image has a higher energy than the unfocused one.

Figures 4-13 and 4-14 show that the brightness of an image has a significant effect on the spectrum analysis, because brightness and lighting can mask the image details.

The final result of the project is an image that is approximately 97% focused compared with the maximum focus of the webcam.

We observed that the general shape of the focusing energy, as the lens moves from maximum focus to maximum blur, follows the curve shown in Figure 4-19.


Figure 4-19 Energy vs lens position

Figure 4-19 shows that the optimum position of the lens coincides with the maximum focus point.

4.4.1 Energy calculation Error

The energy error was the first problem we faced in this project. It appears when two consecutive readings of the energy are taken for the same object without changing the lens position. The error is small, but it has an undesirable effect on the system: it significantly affects the decision of which way to drive the stepper motor (clockwise or anticlockwise), which controls the lens position.

To reduce the error we tried changing the formula used to calculate the focusing energy. The formula we used is:


E = \sum_{n=1}^{N} \left| \text{midrow}(n) \right|^2

where E is the focusing energy, midrow is the vector of mid-row elements from the two-dimensional DCT array of the image, and N is the number of columns.

Among the other formulas we tried were accumulating only a left or a right partition of the mid-row, and taking a row slightly above or below the mid-row of the 2-D DCT array. All of these gave a larger error in the focusing energy than the original formula.

In practice the energy reading errors do not affect the system, because the system needs only one value at a time, and this value is sent to the controller to apply the conditions.

4.4.2 Mechanical Issue

The mechanical part of this project transfers the stepper motor's rotation to the focus adjustment ring of the webcam. A plastic band was used for this purpose. The mechanical design of the project is shown in Figure 4-20.

Figure 4-20 Mechanical design


CHAPTER FIVE
CONCLUSION AND FUTURE WORK

5.1 Conclusion

The results presented in this thesis show that using the frequency spectrum to focus digital cameras is an excellent and simple technique, because it reduces the cost and the complexity of the camera.

The DCT of an image is a good indicator of the clarity of the image, and can be used as a clarity scale.

Another useful outcome is the use of MATLAB in the image processing field.

Finally, the main objective of this project was achieved successfully, and the project gave encouraging results.

5.2 Future work

• Deployment of this new technology around the world.
• Using a linear motor to adjust the camera lens instead of the stepper motor, to eliminate the mechanical design.
• Applying this technology to a good-quality or high-resolution digital camera rather than a webcam.


REFERENCES

[1] http://www.mobileburn.com.

[2] http://www.camerapedia.wiki.com/wiki.autofocus.

[3] Wei-Sheng Liao and Chiou-Shann Fuh, "Autofocus", Images & Recognition, Vol. 9, No. 4.

[4] Jonathan Sachs, "Digital Image Basics", Digital Light & Color, 1996-1999.

[5] http://www.coolutils.com/Formats/JPEG.

[6] Syed Ali Khayam, "The Discrete Cosine Transform (DCT): Theory and Application", March 10, 2003.

[7] http://www.wikipedia.com.

[8] Fredrik Svahn, "Tools and Methods to Obtain a Passive Autofocus System", 1996.

[9] www.omega.com/prodinfo/stepper-motors.

[10] http://www.mathworks.com/products/matlab.

[11] http://en.wikipedia.org/wiki/Proteus (design_software).


Appendix A: HARDWARE

This appendix shows the pins configuration and a data sheet for devices used in the system.

Microcontroller:

Figure_apx A-1 Atmega32 MCU


Figure_apx A-2 Atmega32 block diagram

N-type Power MOSFET :-

Figure_apx A-3 IRF3205 N-type power MOSFET


Max232

Figure_apx A-4 MAX 232

Pin Description:

Table A-1 MAX232 pin description


RS-232

Figure_apx A-5 RS232 9 pin connector


Bipolar stepper motor :-

Figure_apx A-6 Bipolar stepper Motor terminal description


Appendix B: Codes

1) MATLAB Code :-

function [E] = operate()
% Capture a frame from the webcam, take its 2-D DCT and return the
% focusing energy of the mid row.
vid       = videoinput('winvideo', 1);          % open the webcam
photo     = getsnapshot(vid);                   % capture one frame
fg_photo  = rgb2gray(photo);                    % convert to grayscale
photo_dct = dct2(fg_photo);                     % 2-D discrete cosine transform
midrow    = (size(photo,1))/2;                  % index of the mid row
E         = sum((abs(photo_dct(midrow,:))).^2); % focusing energy of the mid row
E_scaled  = uint8(E/9);                         % scaled to one byte for the serial link

2) Code-Vision Code :-

void clokwise()
{
    /* Step the motor one position clockwise (wraps the sequence index). */
    if (i == 0) { i = 4; }
    PORTA = clock[--i];
}

void unti_clokwise()
{
    /* Step the motor one position anticlockwise. */
    if (i == 3) { i = -1; }
    PORTA = clock[++i];
}

while (1)
{
    while (UCSRA.7 == 1)   /* data is coming on the UART */
    {
        if (count <= 2)    /* first readings: establish the search direction */
        {
            if (count == 2)
            {
                if (energy > energy_old) left = 1;
                else left = 0;
            }
            energy_old = energy;
            if (count == 1) { clokwise(); clokwise(); clokwise(); }
            count++;
        }
        if (count > 3)     /* subsequent readings: track the energy peak */
        {
            if ((energy > energy_old) && (left == 1))   /* keep moving left */
            { clokwise(); clokwise(); clokwise(); }
            if ((energy > energy_old) && (left == 0))   /* keep moving right */
            { unti_clokwise(); unti_clokwise(); unti_clokwise(); }
            else if (energy
