IMPLEMENTATION OF FPGA-BASED OBJECT TRACKING ALGORITHM

A PROJECT REPORT

Submitted by

G. SHRIKANTH (21904106079)
KAUSHIK SUBRAMANIAN (21904106043)

in partial fulfillment for the award of the degree of

BACHELOR OF ENGINEERING

in

ELECTRONICS AND COMMUNICATION ENGINEERING

SRI VENKATESWARA COLLEGE OF ENGINEERING, SRIPERUMBUDUR

ANNA UNIVERSITY: CHENNAI 600 025

APRIL 2008


ANNA UNIVERSITY: CHENNAI 600 025

BONAFIDE CERTIFICATE

Certified that this project report "IMPLEMENTATION OF FPGA-BASED OBJECT TRACKING ALGORITHM" is the bonafide work of "KAUSHIK SUBRAMANIAN (21904106043) AND G. SHRIKANTH (21904106079)", who carried out the project work under my supervision.

SIGNATURE
Prof. R. Narayanan
HEAD OF THE DEPARTMENT
Department of Electronics and Communication Engineering
Sri Venkateswara College of Engineering,
Pennalur, Sriperumbudur - 602105

SIGNATURE
Mr. N. Venkateswaran
SUPERVISOR
Assistant Professor
Department of Electronics and Communication Engineering
Sri Venkateswara College of Engineering,
Pennalur, Sriperumbudur - 602105

EXTERNAL EXAMINER                              INTERNAL EXAMINER


ACKNOWLEDGEMENT

We are personally indebted to a number of people who gave us their useful insights to aid our overall progress on this project; a complete acknowledgement would therefore be encyclopedic.

First of all, we would like to express our deepest gratitude to our parents for permitting us to take up this course.

Our sincere thanks and heartfelt gratitude go to our respected Principal, Dr. R. Ramachandran, for all his efforts and administration in educating us in his premier institution.

We take this opportunity to thank our Head of the Department, Prof. R. Narayanan, for his encouragement throughout the project.

We would like to express our gratitude to our Internal Coordinator, Prof. Ganesh Vaidyanathan, for his commendable support and encouragement towards the completion of our project.

We also convey our sincere thanks to our internal guide, Prof. N. Venkateswaran, for his invaluable suggestions and for the technical support he rendered during the course of our project.


ABSTRACT

In this project we propose to use image processing algorithms for object recognition and tracking and to implement them on an FPGA. Most sensing applications today require some form of digital signal processing, which is implemented primarily on serial processors. While the required output is achievable on such processors, it can be beneficial to exploit the parallelism, low cost and low power consumption offered by FPGAs such as the Spartan-3E. A Field Programmable Gate Array (FPGA) contains logic components that can be programmed to perform complex mathematical functions, making it highly suitable for the implementation of matrix algorithms. The individual frames acquired from the target video are fed into the FPGA, where they are subjected to segmentation, thresholding and filtering stages. The object is then tracked by comparing the background frame with the processed, updated frame containing the new location of the target. The results of the FPGA implementation were found to be positive and suitable for tracking a moving object.


TABLE OF CONTENTS

CHAPTER NO.  TITLE                                                  PAGE NO.

             ABSTRACT                                                     iv
             LIST OF FIGURES                                             vii
1.           INTRODUCTION TO OBJECT TRACKING AND SYSTEM DESIGN             1
             1.1  OVERVIEW
                  1.1.1  Basic Object Tracking                             1
                  1.1.2  Methods of Implementation                         2
             1.2  IMAGE PROCESSING SYSTEM                                  3
                  1.2.1  System Environment                                3
                  1.2.2  Image Acquisition                                 4
                         1.2.2.1  Frame Generation                         4
                         1.2.2.2  Background and Object Identification     5
             1.3  ALGORITHM DESIGN FOR OBJECT RECOGNITION                  5
                  1.3.1  Grayscale Conversion                              6
                  1.3.2  Delta Frame Generation                            6
                  1.3.3  Thresholding                                      6
                  1.3.4  Noise Filtering                                   7
                  1.3.5  Image Enhancement                                 9
             1.4  OBJECT TRACKING                                          9
                  1.4.1  Optimal Frame Rate                                9
                  1.4.2  Determining the Object's Position                10
                  1.4.3  Comparative Tracking                             10
             1.5  MATLAB SIMULATION                                       11

2.           FPGA IMPLEMENTATION OF OBJECT TRACKING ALGORITHM             14
             2.1  THE ADVANTAGE OF USING FPGAs                            14
                  2.1.1  Hardware Constraints                             16
                         2.1.1.1  Timing Constraints                      16
                         2.1.1.2  Bandwidth Constraints                   17
             2.2  SPARTAN 3E STARTER BOARD                                17
                  2.2.1  Overview of Features and Layout                  17
             2.3  DEVELOPMENT TOOLS                                       20
                  2.3.1  Xilinx Embedded Development Kit 8.1i             20
                  2.3.2  Microblaze - Virtual Microprocessor              20
             2.4  ALGORITHM MAPPING                                       21
                  2.4.1  Memory Interfacing and C Compiler                24
                         2.4.1.1  C Compiler                              24
                         2.4.1.2  Program Files                           25
             2.5  SIMULATION                                              28
                  2.5.1  Hyper Terminal                                   28

3.           CONCLUSIONS                                                  30

             APPENDIX 1                                                   31
             APPENDIX 2                                                   37
             REFERENCES                                                   42


LIST OF FIGURES

CHAPTER NO.  TITLE                                                  PAGE NO.

1.  INTRODUCTION
    1.1  Layout of the Image Processing System                             3
    1.2  Object Recognition Algorithm Flow                                 5
    1.3  Gray Level Thresholding                                           6
    1.4  Example of Median Filter                                          8
    1.5  Frame Generation using Matlab                                    11
    1.6  Step-wise Generation of Enhanced Image                           12
    1.7  Object Path Obtained                                             13

2.  FPGA IMPLEMENTATION
    2.1  Benchmarking Test Conducted by BDTI                              14
    2.2  Programmable Logic Blocks of an FPGA                             15
    2.3  Spartan-3E Layout                                                18
    2.4  Spartan-3E Starter Kit                                           19
    2.5  Mapping the Sliding Window Operation                             22
    2.6  Window Operation using Buffers                                   23
    2.7  System Block Diagram                                             27
    2.8  Text File Converted to Image in Matlab                           28
    2.9  Pixel Values Obtained from FPGA Plotted using Matlab             29


CHAPTER 1

INTRODUCTION TO OBJECT TRACKING AND SYSTEM DESIGN

1.1 OVERVIEW

1.1.1 Basic Object Tracking

Object tracking is the process of locating a moving object over time using a camera. The algorithm analyzes the video frames and outputs the location of the moving target within each frame. A few examples of established motion models are:

• To track objects in a plane, the motion model is a 2D transformation of an image of the object (the initial frame).
• When the target is a 3D object, the motion model defines its aspect depending on its 3D position and orientation.
• The image of a deformable object can be covered with a bounding box, and the motion of the object is defined by the positions of the nodes of the bounding box.

The role of the tracking algorithm is to analyze the video frames in order to estimate the motion parameters. These parameters characterize the location of the target, and they help identify several other factors such as average speed, number of direction changes, total time in motion, and information about the shape and size of the target.

1.1.2 Methods of Implementation

The two major components of a visual tracking system are:

• Target Representation and Localization
• Filtering and Data Association

Target Representation and Localization is mostly a bottom-up process, and the computational complexity of these algorithms is typically low. The following are common Target Representation and Localization algorithms:

• Blob tracking: Segmentation of the object interior (for example, blob detection or block-based correlation).
• Mean-shift tracking: An iterative localization procedure based on the maximization of a similarity measure.
• Contour tracking: Detection of the object boundary (e.g. active contours, the watershed algorithm).
• Visual feature matching: Registration.

Filtering and Data Association is mostly a top-down process, which involves incorporating prior information about the scene or object and dealing with object dynamics. The computational complexity of these algorithms is usually much higher. The following are some common Filtering and Data Association algorithms:

• Kalman filter: An optimal recursive Bayesian filter for linear functions and Gaussian noise.
• Particle filter: Useful for sampling the underlying state-space distribution of non-linear and non-Gaussian processes.
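As a brief illustration of the Filtering and Data Association category, the sketch below implements a minimal one-dimensional constant-velocity Kalman filter in Matlab. It is a generic example rather than part of this project (the tracking approach implemented in the later chapters is based on frame differencing), and the frame interval, noise covariances and measurement sequence are assumed values chosen purely for demonstration.

    % Minimal 1-D constant-velocity Kalman filter (illustrative sketch only).
    % State x = [position; velocity]; the measurement z is a noisy position.
    dt = 0.1;                        % frame interval for 10 frames per second
    F  = [1 dt; 0 1];                % state transition model
    H  = [1 0];                      % measurement model
    Q  = 0.01 * eye(2);              % process noise covariance (assumed)
    R  = 4;                          % measurement noise variance (assumed)
    x  = [0; 0];                     % initial state estimate
    P  = eye(2);                     % initial estimate covariance
    z  = [1.2 2.9 4.1 6.2 7.8];      % example position measurements (assumed)

    for k = 1:numel(z)
        x = F * x;                       % predict the next state
        P = F * P * F' + Q;              % predict the error covariance
        K = P * H' / (H * P * H' + R);   % Kalman gain
        x = x + K * (z(k) - H * x);      % correct using the new measurement
        P = (eye(2) - K * H) * P;        % update the error covariance
        fprintf('frame %d: estimated position %.2f\n', k, x(1));
    end

Each iteration alternates a prediction from the motion model with a correction from the latest measurement, which is what allows such a filter to smooth noisy position estimates over time.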
1.2 IMAGE PROCESSING SYSTEM

Figure 1.1: Layout of the Image Processing System

1.2.1 System Environment

Our aim is to work in an unstructured environment, i.e. one without an artificial blue/green screen. This provides greater system flexibility and portability but can make reliable segmentation more difficult, since the objects of interest must be distinguished from any other objects that may be present within the frame. This limitation may be overcome by restricting the target objects to saturated and distinctive colors that stand out against the unstructured background. Augmenting the unstructured environment with structured color in this way is a compromise that enables a much simpler segmentation algorithm to be used. Another way to maintain the color distribution is to keep the background constant. In that case only the target is in motion, and the system tracks its motion within a 2D frame.

1.2.2 Image Acquisition

Image capture is performed using a color video camera which produces a stream of RGB pixels. The Casio camera is mounted on a tripod stand in front of a fixed background that contains the object to be tracked, and a brief 5-10 second video is recorded in .avi format.

The temporal resolution requirements of the application have to be considered. We use a lower spatial resolution, since it permits a significantly higher acquisition rate for observing faster events; the size of the video frame is set to 640x480 pixels. The video is read into the computer using Matlab, which processes the entire video and converts it into image frames at a rate of 10 frames per second. Depending on the accuracy required and the computational capability of the system, the frames can be interlaced.

1.2.2.1 Frame Generation

The video is fed into the Matlab program, which reads the .avi file and converts it to frames. The frames are produced at a rate of 10 frames per second, so a 10 second video yields a total of 100 frames in RGB format. These frames are then stored as individual bitmap files (a total of 100 files).
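A minimal Matlab sketch of this frame-generation step is shown below. The file name 'tracking.avi' is an assumed placeholder, and the code uses the avi-reading functions available in the Matlab releases of that period (newer releases provide the VideoReader object instead, though the structure of the loop is the same). The bitmap files produced here are the input to the grayscale conversion, delta frame generation and thresholding stages described in Section 1.3.

    % Sketch of frame generation: read the recorded .avi file and store
    % roughly 10 frames per second as individual bitmap files.
    info = aviinfo('tracking.avi');                    % video metadata
    step = max(1, round(info.FramesPerSecond / 10));   % keep about 10 frames/s
    idx  = 1:step:info.NumFrames;                      % frames to extract

    for k = 1:numel(idx)
        mov = aviread('tracking.avi', idx(k));      % read a single movie frame
        rgb = frame2im(mov);                        % movie frame -> RGB image
        imwrite(rgb, sprintf('frame%03d.bmp', k));  % store as a bitmap file
    end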