Hindawi Wireless Communications and Mobile Computing, Volume 2021, Article ID 9922017, 13 pages. https://doi.org/10.1155/2021/9922017

Research Article

Separating Chinese Character from Noisy Background Using GAN

Bin Huang,1 Jiaqi Lin,1 Jinming Liu,1 Jie Chen,1 Jiemin Zhang,1 Yendo Hu,1 Erkang Chen,1 and Jingwen Yan2

1Computing Engineering College, Jimei University, Xiamen 361021, China
2College of Engineering, Shantou University, Shantou 515063, China

Correspondence should be addressed to Erkang Chen; [email protected]

Received 17 March 2021; Revised 7 April 2021; Accepted 20 April 2021; Published 3 May 2021

Academic Editor: Philippe Fournier-Viger

Copyright © 2021 Bin Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Separating printed or handwritten characters from a noisy background is valuable for many applications, including test paper autoscoring. The complex structure of Chinese characters makes this goal difficult to achieve because fine details and overall structure are easily lost in the reconstructed characters. This paper proposes a method for separating Chinese characters based on a generative adversarial network (GAN). We used ESRGAN as the basic network structure and applied dilated convolution and a novel loss function to improve the quality of the reconstructed characters. Four popular Chinese fonts (Hei, Song, Kai, and Imitation Song) were tested on a real data collection, and the proposed design was compared with other semantic segmentation approaches. The experimental results show that the proposed method effectively separates Chinese characters from noisy backgrounds. In particular, our method achieves better results in terms of Intersection over Union (IoU) and optical character recognition (OCR) accuracy.

1. Introduction

Converting paper documents into electronic documents and then recognizing them using optical character recognition (OCR) technology is widely used in daily life. In recent years, with the development of machine learning technology, the recognition accuracy of OCR has greatly improved [1–3]. We can now process a document containing both machine-printed text and handwritten text and then recognize the two types separately [4, 5]. Similar applications can be found in the archiving and processing of historical documents [6, 7]. In the field of education, related technologies for examination paper autoscoring have emerged, which greatly reduce the burden on teachers and students. Taking Figure 1 as an example, an examination paper with students' answers can first be processed by OCR, and then the recognized answers can be evaluated and scored automatically by the machine. Under certain circumstances, since the test paper template cannot be easily obtained, it is also necessary to identify the printed test paper template directly.

In order to achieve examination paper autoscoring, one of the technical challenges to be solved is handling overlapping characters. This may happen when an elementary school student has not yet mastered writing well or has put annotations on the test paper. Current OCR technology cannot handle the mixed situation of printed text and handwritten text in the same image; generally, only a single type of text can be recognized [8]. Our early experiments showed that when recognizing printed text, the OCR accuracy was greatly reduced if there were handwritten strokes or handwritten characters around the printed text. Even worse, the machine was sometimes unable to find the text area that needed to be recognized. Therefore, it is desirable to separate the handwritten characters from the printed characters on the examination paper and then process the different text types accordingly. Furthermore, for Chinese characters, the separation of handwriting and printing becomes more difficult because the font structure is far more complicated than that of Western fonts [9, 10]. A slight loss or addition of strokes may change the meaning of a character completely, which makes effective separation difficult when handwritten and printed fonts are highly overlapped.

Separating Chinese characters from a noisy background (particularly with overlapping) can be considered an image semantic segmentation problem. Previous deep learning methods [11–13] have shown success in other applications; however, they perform poorly here due to the complex structure of Chinese characters. To distinguish Chinese characters from similar fonts, we adopted a GAN-based approach [14–19]. A network, called DESRGAN, was developed to denoise the background and reconstruct both the stroke structure and the fine details of the targeted Chinese characters. Our method uses ESRGAN [19] as the basic network structure and applies dilated convolution to the residual-in-residual dense blocks. A new loss function that measures the integrity of the font skeleton was imposed. The generator of the trained GAN model was then used to separate the targeted characters.

Our main contributions include the following: (a) we propose a new network structure and a loss function that achieve Chinese character separation from a noisy background, especially when characters are highly overlapped; (b) the proposed method achieves the best results in both IoU and OCR accuracy; and (c) our dataset is provided (upon request) for further research.

Figure 1: Basic process of examination paper autoscoring. (a) A sample examination paper which consists of both handwriting and printed text. (b) A subquestion with answers to be scored. (c) Handwriting touches or even overlaps with printed text. The red circle shows an example of overlapping characters. (d) The proposed method targets separation of overlapping Chinese characters into printed text (left rectangle) and handwriting (right rectangle). (e, f) After successful separation is made, postprocessing (OCR, NLP, etc.) and autoscoring become feasible.
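Since the contributions above are evaluated in terms of IoU, the following minimal sketch illustrates how IoU can be computed between a separated character mask and its ground-truth mask. It is an illustrative Python/NumPy implementation, not the authors' evaluation code; the function name and the toy example are assumptions.

```python
import numpy as np

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection over Union between two binary character masks.

    Both arrays contain {0, 1} values, where 1 marks pixels belonging
    to the separated (or ground-truth) character strokes.
    """
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    if union == 0:          # both masks empty: define IoU as 1.0
        return 1.0
    return intersection / union

# Example: a reconstructed character that loses one stroke pixel.
gt = np.zeros((8, 8), dtype=np.uint8)
gt[2:6, 3] = 1              # a short vertical stroke (4 pixels)
pred = gt.copy()
pred[5, 3] = 0              # one stroke pixel lost in reconstruction
print(f"IoU = {iou(pred, gt):.3f}")   # IoU = 0.750
```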
2. Related Work

Many applications in document processing need to solve the problem of separating handwriting from printed text. The Maurdor project created a realistic corpus of annotated documents in French, English, and Arabic to support the efficient development and evaluation of extraction methods [20]. DeepErase [21] uses neural networks to erase ink artifacts on scanned documents and extract only the text written by a user; the ink artifacts that DeepErase targets mainly include tabular structures, fill-in-the-blank boxes, and underlines. Guo and Ma [22] used a machine-printed and handwritten annotation discrimination algorithm based on the Hidden Markov Model. Focusing solely on English and other Latin languages, their algorithm can locate the position of the handwritten part of a document in the form of a bounding box. Zagoris et al. [23] proposed a method for recognizing and separating handwritten content from document images containing both handwritten and printed characters using the bag-of-visual-words model. Their method first computes a descriptor for each block of interest and then classifies the descriptor as handwritten text, machine-printed text, or noise. However, little research has focused on highly overlapped text, especially Chinese characters, which are structurally more complex than English or other Latin languages.

Recent deep learning methods provide new ways to solve the separation of handwriting and printed text. Li et al. [5] handle printed/handwritten text separation within a single framework using conditional random fields. Their algorithm only performs extraction at the connected component (CC) level: each CC is classified as printed or handwritten regardless of whether it overlaps. U-Net [11], which performs well in many segmentation tasks, builds upon only convolution layers and the idea of propagating context information to higher-resolution layers during upsampling.
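The sketch below illustrates the U-Net idea just described: decoder features are upsampled and concatenated with encoder features of the same resolution so that context reaches the higher-resolution layers. It is a generic PyTorch illustration (module names and channel sizes are assumptions), not part of the method proposed in this paper.

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """One U-Net-style decoder step: upsample, concatenate skip, convolve."""

    def __init__(self, in_ch: int, skip_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, in_ch // 2, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch // 2 + skip_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.up(x)                      # double the spatial resolution
        x = torch.cat([x, skip], dim=1)     # propagate encoder context
        return self.conv(x)

# Example: decoder features at half resolution plus an encoder skip tensor.
decoder = torch.randn(1, 128, 32, 32)
skip = torch.randn(1, 64, 64, 64)
out = UpBlock(128, 64, 64)(decoder, skip)   # -> shape (1, 64, 64, 64)
```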
Pix2Pix [17] translates an input image into a corresponding output image; with a paired training dataset, it can produce sharp and realistic images. These features make it attractive for our character separation problem. However, a paired training dataset may not be easy to obtain in real-world applications. CycleGAN [16] is an approach for learning to translate an image from a source domain to a target domain without paired examples. CycleGAN's coding scheme is to hide part of the information about the input image in a low-amplitude, high-frequency signal added to the output image [14].

Another way to approach the separation problem is to treat an image in which handwriting and printed text overlap as a low-resolution picture and let a neural network determine which parts need to be enhanced, as in single-image super-resolution. SRGAN [18] takes advantage of a perceptual loss function consisting of an adversarial loss and a content loss. Building on SRGAN, ESRGAN [19] improves the network structure by introducing the residual-in-residual dense block and computes the perceptual loss.

(Figure 2, generator architecture diagram: the generator stacks residual-in-residual blocks, each composed of three dense blocks; every dense block chains five dilated convolutions with Leaky ReLU activations and dense connections, and branch outputs are scaled by a factor β before the residual additions.)
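A minimal sketch of the dilated dense block suggested by the architecture diagram above: five densely connected convolutions with Leaky ReLU, dilation to enlarge the receptive field, and residual scaling by β, nested inside a residual-in-residual block. This follows the publicly known ESRGAN block structure with dilation added for illustration; the layer names, dilation rate, and the value of β are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class DilatedDenseBlock(nn.Module):
    """Five densely connected 3x3 dilated convolutions with Leaky ReLU."""

    def __init__(self, channels: int = 64, growth: int = 32,
                 dilation: int = 2, beta: float = 0.2):
        super().__init__()
        self.beta = beta
        self.convs = nn.ModuleList()
        for i in range(5):
            in_ch = channels + i * growth            # dense connections
            out_ch = growth if i < 4 else channels   # last conv restores width
            self.convs.append(
                nn.Conv2d(in_ch, out_ch, kernel_size=3,
                          padding=dilation, dilation=dilation)
            )
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for i, conv in enumerate(self.convs):
            out = conv(torch.cat(features, dim=1))
            if i < 4:                     # no activation after the last conv
                out = self.act(out)
                features.append(out)
        return x + self.beta * out        # residual scaling by beta

class RRDB(nn.Module):
    """Residual-in-residual: three dense blocks plus an outer scaled residual."""

    def __init__(self, channels: int = 64, beta: float = 0.2):
        super().__init__()
        self.blocks = nn.Sequential(*[DilatedDenseBlock(channels) for _ in range(3)])
        self.beta = beta

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.beta * self.blocks(x)

# Example: one residual-in-residual block applied to a 64-channel feature map.
features = torch.randn(1, 64, 48, 48)
print(RRDB()(features).shape)   # torch.Size([1, 64, 48, 48])
```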