Broad Colorization

Yuxi Jin, Bin Sheng, Member, IEEE, Ping Li, Member, IEEE, and C. L. Philip Chen, Fellow, IEEE

Abstract— The scribble- and example-based colorization methods have fastidious requirements for users, and the training process of deep neural networks for colorization is quite time-consuming. We instead propose an automatic colorization approach with no dependence on user input and no need to endure a long training time, which combines local features and global features of the input gray-scale image. Low-, mid-, and high-level features are united as local features representing cues present in the gray-scale image. The global feature is regarded as a data prior that guides the colorization process. The local broad learning system is trained to obtain the chrominance value of each pixel from the local features, which can be expressed as a chrominance map according to the positions of the pixels. Then, the global broad learning system is trained to refine the chrominance map. There are no requirements for users in our approach, and the training time of our framework is an order of magnitude shorter than that of traditional methods based on deep neural networks. To increase the user's subjective initiative, our system allows users to add training data without retraining the system. Substantial experimental results have shown that our approach outperforms state-of-the-art methods.

Index Terms— Colorization, global broad learning system (GBLS), global features, local broad learning system (LBLS), local features.

I. INTRODUCTION

Colorization was first introduced at the end of the 19th century [1], [2]. The main task of colorization is to assign reasonable colors to the pixels of a given gray-scale image. For colorizing a particular gray-scale image, traditional methods can be roughly divided into scribble-based colorization [3]–[5] and example-based colorization [6]–[8]. Both of them require considerable user interaction, which is challenging for users, and the interaction process can be time-consuming before a preferable colorization result is reached. The scribble-based methods give users great autonomy by requiring them to provide substantial color scribbles on the input gray-scale image and extending these scribbles to the whole image, which means that the result of scribble-based colorization depends largely on the scribbles. Thus, obtaining a good colorization result is a challenging process for users, and the practicability of scribble-based colorization is limited. The example-based methods have to look for related images with a good color design, which has a great impact on the colorization result. However, finding an appropriate reference image requires users to have a high esthetic standard for image color; thus, the example-based methods often fail to achieve outstanding colorization results.

Recently, deep learning has achieved outstanding results in the field of image and data processing [10]–[16]. Much work has been done on colorization with deep neural networks [17]–[27]. It has been proven that deep learning-based methods can provide an end-to-end framework and obtain satisfactory results: neither preprocessing nor postprocessing by the user to design colors is required. However, the architecture of a deep neural network is complex and requires training a large number of weights, leading to a long training procedure. Although a deep neural network can greatly reduce the burden on users, the training process is intolerable for them. In particular, when the architecture of the framework is redesigned, the whole network has to be retrained, which is unbearable.

The broad learning system is an efficient and effective learning system [28]–[31]. Given input data, the mapped features and enhancement features of the broad learning system are extracted and then placed in the input layer. The enhancement nodes are obtained by enhancing the feature nodes to improve the learning ability, and the input nodes and the output nodes are connected directly, so that the input data itself has a certain influence on the output. In this article, we propose a new broad learning system-based automatic colorization algorithm, which learns the category of pixel blocks and the colors of pixels through training. Compared with existing traditional methods, the proposed method does not require users to scribble on the gray-scale image and does not have to find a related color image for the colorization. It is well known that the hyperparameters set by users greatly affect the performance of deep neural networks. Besides, deep neural networks converge slowly and easily fall into local minima. In contrast, the weights of a broad learning system can be calculated based on its flat framework, and new weights can be calculated without a retraining process. Thus, the proposed broad learning system-based method remains effective and efficient even when the number of nodes is increased.
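To make the flat structure concrete, the following is a minimal sketch of a broad learning system in the spirit of [28]–[31]: random mapped feature nodes, random enhancement nodes, and a closed-form ridge-regression output layer. It is an illustrative approximation rather than the system used in this article; the node counts, the tanh activations, and the function names are assumptions.

```python
import numpy as np

def train_bls(X, Y, n_map=20, n_enh=200, reg=1e-3, seed=0):
    """Train a minimal broad learning system (illustrative sketch).

    X: (n_samples, n_inputs) input data; Y: (n_samples, n_outputs) targets.
    The mapped feature nodes and enhancement nodes use random weights;
    only the output weights are learned, in closed form.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]

    # Mapped feature nodes: Z = tanh(X * We + be)
    We, be = rng.standard_normal((d, n_map)), rng.standard_normal(n_map)
    Z = np.tanh(X @ We + be)

    # Enhancement nodes: H = tanh(Z * Wh + bh), enhancing the feature nodes
    Wh, bh = rng.standard_normal((n_map, n_enh)), rng.standard_normal(n_enh)
    H = np.tanh(Z @ Wh + bh)

    # Flat input layer: feature and enhancement nodes connect directly to
    # the output layer, with no deep hidden hierarchy in between.
    A = np.hstack([Z, H])

    # Closed-form, ridge-regularized output weights (no back-propagation).
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return We, be, Wh, bh, W

def predict_bls(model, X):
    """Apply a trained broad learning system to new inputs."""
    We, be, Wh, bh, W = model
    Z = np.tanh(X @ We + be)
    H = np.tanh(Z @ Wh + bh)
    return np.hstack([Z, H]) @ W
```

Because the output weights come from a closed-form solution over the flat node matrix, appending new feature or enhancement nodes (or new training samples) amounts to updating that solution incrementally rather than retraining from scratch, which is the property the proposed method exploits.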
We combine the global features and the local features extracted from the gray-scale image to colorize each pixel. All the extracted local features of the image are used as the input data of the local broad learning system (LBLS, see Section IV-B). Since global information is useful for colorizing a gray-scale image, we use the global feature to guide the colorization via a global broad learning system (GBLS, see Section IV-C). The colorization result of our approach is outstanding because it combines the results of the LBLS and the GBLS. In the special case where users already know the colors of the gray-scale image, they can add training data with the specific colors they want without waiting for hundreds of minutes (see Section V). In other words, our colorization procedure can be user-guided through the addition of input data, taking advantage of the special framework of the broad learning system. Our work makes the following three main contributions.

1) Efficient and Effective Learning System: The special structure of the broad learning system makes the training time significantly shorter than that of models based on deep neural networks, which helps our approach avoid an unbearably long training time. Furthermore, the broad learning system can be restructured without retraining when the number of input nodes is changed.

2) Global-and-Local Semantic-Guided Colorization: The local features cannot completely determine the corresponding color values of the pixels, since images with the same structure may be given different colors for different scenes, seasons, or times.

scribbles, which could help users to get the desired results quickly. Based on the weighted geodesic distance, this method is fast, and it can change the colors of an existing color image or change the relevant luminance. Luan et al. [5] reduced the requirement for user-specified scribbles by utilizing texture similarity, and Qu et al. [33] employed feature classification for a cartoon colorization technique, which propagates color over regions while emphasizing continuity in the target image.

B. Example-Based Colorization via User-Supplied Example(s)

Unlike scribble-based methods, example-based methods can colorize gray-scale images without user-specified scribbles. The example-based methods colorize the gray-scale image by exploiting the colors of reference images. For an accurate colorization result, the reference images should be similar to the target gray-scale image. Welsh et al. [34] colorized a gray-scale image by matching the pixel intensity and the statistical information of its neighboring pixels between the target image and the reference image, which is inspired by the color transfer technique [35]. After transferring the gray-scale image and its reference image to the CIELAB color space, the color transfer technique computes the luminance of a particular pixel and the standard deviation over its neighborhood and then maps the color of the best matching point to that pixel to colorize the gray-scale image. Irony et al. [36] improved the colorization by analyzing the texture of the input image. Compared with the methods that colorize a gray-scale image only depending on
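The neighborhood-statistics matching that Welsh et al. [34] rely on can be summarized in a short sketch. The following is an illustrative approximation, not the authors' implementation: the window size, the number of reference samples, the luminance weighting, and all function names are assumptions, and the luminance remapping step of the original method is omitted.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from skimage.color import rgb2lab, lab2rgb

def local_stats(L, size=5):
    """Per-pixel luminance and its standard deviation over a size x size window."""
    mean = uniform_filter(L, size)
    sq_mean = uniform_filter(L * L, size)
    return L, np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))

def welsh_style_transfer(gray, reference_rgb, n_samples=200, lum_weight=0.5, seed=0):
    """Colorize `gray` (float in [0, 1]) from `reference_rgb` (float RGB in [0, 1])
    by matching luminance and neighborhood std in CIELAB and copying (a, b)."""
    rng = np.random.default_rng(seed)

    ref_lab = rgb2lab(reference_rgb)
    L_ref, std_ref = local_stats(ref_lab[..., 0])
    L_tgt, std_tgt = local_stats(gray * 100.0)   # map [0, 1] gray onto L in [0, 100]

    # Sample candidate pixels from the reference (a jittered grid in the
    # original method; plain random sampling here for brevity).
    h, w = L_ref.shape
    ys, xs = rng.integers(0, h, n_samples), rng.integers(0, w, n_samples)
    cand_L, cand_std, cand_ab = L_ref[ys, xs], std_ref[ys, xs], ref_lab[ys, xs, 1:]

    # For every target pixel, pick the candidate with the closest weighted
    # (luminance, neighborhood std) signature; chunked to bound memory.
    flat_L, flat_std = L_tgt.ravel(), std_tgt.ravel()
    best = np.empty(flat_L.shape[0], dtype=np.int64)
    chunk = 65536
    for i in range(0, flat_L.shape[0], chunk):
        sl = slice(i, i + chunk)
        dist = (lum_weight * (flat_L[sl, None] - cand_L[None, :]) ** 2
                + (1.0 - lum_weight) * (flat_std[sl, None] - cand_std[None, :]) ** 2)
        best[sl] = dist.argmin(axis=1)

    out = np.zeros(gray.shape + (3,))
    out[..., 0] = L_tgt                           # keep the target's own luminance
    out[..., 1:] = cand_ab[best].reshape(gray.shape + (2,))
    return lab2rgb(out)
```

Because only the chrominance channels are copied while the target keeps its own luminance, the output stays faithful to the gray-scale structure; the quality, however, depends entirely on how well the reference statistics match the target, which is exactly the limitation of example-based methods discussed above.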