
END-TO-END TEXT RECOGNITION WITH CONVOLUTIONAL NEURAL NETWORKS

AN HONORS THESIS SUBMITTED TO THE DEPARTMENT OF COMPUTER SCIENCE OF STANFORD UNIVERSITY

David J. Wu
Principal Adviser: Andrew Y. Ng
May 2012

Abstract

Full end-to-end text recognition in natural images is a challenging problem that has recently received much attention in computer vision and machine learning. Traditional systems in this area have relied on elaborate models that incorporate carefully hand-engineered features or large amounts of prior knowledge. In this thesis, I describe an alternative approach that combines the representational power of large, multilayer neural networks with recent developments in unsupervised feature learning. This particular approach enables us to train highly accurate text detection and character recognition modules. Because of the high degree of accuracy and robustness of these detection and recognition modules, it becomes possible to integrate them into a full end-to-end, lexicon-driven, scene text recognition system using only simple off-the-shelf techniques. In doing so, we demonstrate state-of-the-art performance on standard benchmarks in both cropped-word recognition and full end-to-end text recognition.

Acknowledgements

First and foremost, I would like to thank my adviser, Andrew Ng, for his advice and mentorship throughout the past two years. His class on machine learning first sparked my interest in the field of artificial intelligence and deep learning; his willingness to let me work in his lab has helped me refine my own research interests and convinced me to pursue graduate studies. Special thanks also to Adam Coates for the tremendous support and guidance he has provided me over these past two years. From him, I have learned an immense amount about the practical side of machine learning and how to get algorithms to work well in practice.
I also want to thank him for giving me the opportunity to work on so many projects, even when I was just a freshman with absolutely no background in machine learning.

I would also like to thank Tao Wang for the invaluable advice he provided through the course of this project. The work presented in this thesis is the joint work of our collaboration; none of it would have been possible without his input and ideas. Both Adam and Tao have contributed countless hours to bring this project to a successful completion. It has truly been both a pleasure and a privilege for me to have worked with them. For that, I thank them both.

I would like to thank Hoon Cho for being a great source of insights and lively discussion, and most of all, for being a great friend. Without his input, I would probably not have pursued this thesis in the first place. I would also like to thank Will Zou for allowing me the opportunity to continue working in the lab after the completion of this project. Thanks also to Mary McDevitt for her insightful comments and proofreading of an early draft of this thesis.

Finally, I would like to thank my family for their unconditional support both before and during my undergraduate career. To my grandparents, I thank you for sparking my curiosity and for instilling within me a passion for learning so early on in my childhood. To my parents, I thank you for your ever-present advice and encouragement. Without your support, this work would not have been possible. This thesis is for you.

Contents

Abstract
Acknowledgements
1 Introduction
2 Background and Related Work
  2.1 Scene Text Recognition
    2.1.1 Text Detection
    2.1.2 Text Segmentation and Recognition
    2.1.3 Lexicon-Driven Recognition
  2.2 Unsupervised Feature Learning
  2.3 Convolutional Neural Networks
    2.3.1 Feed-Forward Neural Networks
    2.3.2 Convolutional Neural Networks
3 Methodology
  3.1 Detection and Recognition Modules
    3.1.1 Unsupervised Pretraining
    3.1.2 Convolutional Neural Network Architecture
    3.1.3 Datasets
  3.2 Text Line Detection
    3.2.1 Multiscale Sliding Window
    3.2.2 Text Line Formation
  3.3 End-to-End Integration
    3.3.1 Space Estimation
    3.3.2 Cropped Word Recognition
    3.3.3 Full End-to-End Integration
    3.3.4 Recognition without a Specialized Lexicon
4 Experiments
  4.1 Text Detection
  4.2 Character and Word Recognition
    4.2.1 Cropped Character Recognition
    4.2.2 Cropped Word Recognition
  4.3 Full End-to-End Text Recognition
    4.3.1 Recognition without a Specialized Lexicon
5 Conclusion
  5.1 Summary
  5.2 Limitations of the Current System and Future Directions
Bibliography

List of Tables

4.1 Text detector performance on the ICDAR 2003 dataset.
4.2 Character recognition accuracy on the ICDAR 2003 character test set.
4.3 Word recognition accuracy on the ICDAR 2003 dataset.
4.4 F-scores from end-to-end evaluation on the ICDAR and SVT datasets.
4.5 Results from end-to-end evaluation on the ICDAR dataset in the general lexicon setting.

List of Figures

2.1 Illustration of the end-to-end text recognition problem.
2.2 A simple feed-forward neural network.
2.3 Average pooling in a convolutional neural network.
3.1 Operation of the detection and recognition modules.
3.2 Visualization of dictionary elements learned from whitened grayscale image patches.
3.3 Convolutional neural network architecture used for detection.
3.4 Comparison of real and synthetic training examples.
3.5 Detector response maps at different scales.
3.6 Detector and NMS responses across lines in the image.
3.7 Estimated bounding boxes from the text detector.
3.8 Negative responses across a line of text.
3.9 Character classifier responses.
4.1 Sample image from the Street View Text dataset.
4.2 Visualization of ICDAR ground truth bounding boxes and coalesced ground truth bounding boxes.
4.3 Sample images from the ICDAR 2003 Robust Word Recognition dataset.
4.4 Precision and recall curves for end-to-end evaluation on the ICDAR and SVT datasets.
4.5 Sample outputs from the full end-to-end system on the ICDAR and SVT datasets.

Chapter 1

Introduction

A system that can automatically locate and recognize text in natural images has many practical applications. For instance, such a system can be instrumental in helping visually impaired users navigate in different environments, such as grocery stores [28] or city landscapes [3], or in providing an additional source of information to an autonomous navigation system. More generally, text in natural images provides a rich source of information about the underlying image or scene.

At the same time, however, text recognition in natural images has its own set of difficulties. While state-of-the-art methods generally achieve nearly perfect performance on optical character recognition (OCR) for scanned documents, the more general problem of recognizing text in unconstrained images is far from solved. Recognizing text in scene images is much more challenging due to the many possible variations in backgrounds, textures, fonts, and lighting conditions that are present in such images. Consequently, building a full end-to-end text recognition system requires us to develop models and representations that are robust to these variations. Not surprisingly, current high-performing text detection and character recognition systems have employed cleverly hand-engineered features [10, 11] to both capture the details of and represent the underlying data. In many cases, sophisticated models such as conditional random fields (CRFs) [32] or pictorial-structure models [38] are also necessary to combine the raw detection or recognition responses into a complete system.
In this thesis, I describe an alternative approach to this problem of text recognition based upon recent advances in machine learning, and more precisely, unsupervised feature learning. These feature-learning algorithms are designed to automatically learn low-level representations from the underlying data [8, 15, 16, 19, 23, 33] and thus present one alternative to hand-engineering the features used for representation. Such algorithms have already enjoyed numerous successes in many related fields, such as visual recognition [42] and action classification [20]. In the case of text recognition, the system in [7] has achieved solid results in text detection and character recognition using a simple and scalable feature-learning architecture that relies very little on feature-engineering or prior knowledge.

By leveraging these feature-learning algorithms, we were able to derive a set of specialized features tuned particularly for the text recognition problem. These learned features were then integrated into a larger, discriminatively-trained convolutional neural network (CNN). CNNs are hierarchical neural networks that have immense representational capacity and have been successfully applied to many problems such as handwriting recognition [21], visual object recognition [4], and character recognition [35]. By tapping into the representational power of these architectures, we trained highly accurate text detection and character recognition modules. Despite the inherent differences between the text detection and character recognition tasks, we were able to use structurally identical network architectures for both the text detector and character classifier. Then, as a direct consequence of the increased accuracy and robustness of these models, it was possible to construct a full end-to-end system using very simple and standard post-processing techniques such as non-maximal suppression (NMS) [29] and beam search [34].
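To give a flavor of what such post-processing looks like, the sketch below shows non-maximal suppression over a one-dimensional row of detector responses: a response survives only if it exceeds a confidence threshold and is a local maximum within a small window. This is an illustrative sketch only; the window size, threshold, and function name are placeholder assumptions, not the parameters of the system described in later chapters.

```python
def nms_1d(scores, window=1, threshold=0.5):
    """Keep indices of responses that exceed `threshold` and are
    local maxima within +/- `window` positions.

    Illustrative sketch only: `window` and `threshold` are
    placeholder assumptions, not values used by the actual system.
    """
    keep = []
    for i, s in enumerate(scores):
        if s < threshold:
            continue  # discard low-confidence responses outright
        lo = max(0, i - window)
        hi = min(len(scores), i + window + 1)
        if s >= max(scores[lo:hi]):
            keep.append(i)  # i is a local maximum in its window
    return keep

# Two peaks survive; their weaker neighbors are suppressed.
print(nms_1d([0.1, 0.9, 0.4, 0.2, 0.8, 0.3]))  # [1, 4]
```

The same idea extends to two-dimensional detector response maps (and, with overlap tests instead of index windows, to bounding boxes), which is the setting in which the system described here applies it.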