Visual and Textual Deep Feature Fusion for Document Image Classification

Souhail Bakkali¹  Zuheng Ming¹  Mickaël Coustaty¹  Marçal Rusiñol²
¹L3i, University of La Rochelle, France   ²CVC, Universitat Autònoma de Barcelona, Spain
{souhail.bakkali, zuheng.ming, mickael.coustaty}@univ-lr.fr, [email protected]

Abstract

The topic of text document image classification has been explored extensively over the past few years. Most recent approaches handle this task by jointly learning the visual features of document images and their corresponding textual contents. Due to the varied structures of document images, the extraction of semantic information from their textual content is beneficial for document image processing tasks such as document retrieval, information extraction, and text classification. In this work, a two-stream neural architecture is proposed to perform the document image classification task. We conduct an exhaustive investigation of widely used neural networks as well as word embedding procedures used as backbones, in order to extract both visual and textual features from document images. Moreover, a joint feature learning approach that combines image features and text embeddings is introduced as a late fusion methodology. Both the theoretical analysis and the experimental results demonstrate the superiority of our proposed joint feature learning method over the single modalities. This joint learning approach outperforms the state-of-the-art results with a classification accuracy of 97.05% on the large-scale RVL-CDIP dataset.

1. Introduction

For many public and private organizations, understanding and analyzing data from documents manually is time consuming and expensive. Unlike general images, document images may be presented in a variety of forms due to the different manners of organizing each document. However, extracting accurate and structured information from the wide variety of documents is very challenging considering their visual structural properties and their heterogeneous textual content. From a computer vision perspective, earlier studies that used deep neural networks for document analysis tasks focused on structural similarity constraints and visual features [5, 29, 20, 21]. As most recent deep learning methods do not require extracting features manually, the state-of-the-art approaches based on the visual information of document images treat the problem as conventional image classification. Additionally, from a natural language processing perspective, Yang et al. [37] presented a neural network to extract semantic information based on word embeddings from pretrained natural language models. Nevertheless, classifying documents with only visual information may encounter the problem of low inter-class discrimination and high intra-class structural variations of highly overlapped document images [1], as shown in Fig. 1. As such, jointly learning visual cues and textual semantic relationships is an inevitable step to mitigate the issue of highly correlated classes. Recent methods have used multimodal techniques that leverage both the image and the text modality extracted by an OCR engine to perform fine-grained document image classification [4, 9, 36, 3].

Figure 1. Samples of different document classes in the RVL-CDIP dataset which illustrate the low inter-class discrimination and high intra-class variations of document images. From left to right: Advertisement, Budget, Email, File folder, Form, Handwritten, Invoice, Letter, Memo, News article, Presentation, Questionnaire, Resume, Scientific publication, Scientific report, Specification.

Therefore, we study the capability of static and dynamic word embeddings to extract meaningful information from a text corpus. While static word embeddings fail to capture polysemy by generating the same embedding for a word regardless of its context, dynamic word embeddings are able to capture word semantics in different contexts, addressing the polysemous, context-dependent nature of words. We explored and evaluated both static and dynamic word embeddings on the large RVL-CDIP¹ [15] dataset. Furthermore, we propose a cross-modal network to learn simultaneously from the visual structural properties and the textual information of document images based on two different models. The learned cross-modal features are combined as the final representation of our proposed network to boost the classification capacity on document images.

¹ https://www.cs.cmu.edu/~aharley/rvl-cdip/

To perform text classification, optical character recognition (OCR) is employed to extract the textual content of each document image, followed by a latent semantic analysis. We utilize the pretrained GloVe and FastText models [27, 26] as static word embeddings, followed by a gated recurrent unit (GRU) mechanism introduced by Chung et al. and Cho et al. [6]. GRU is a simplified variant of the LSTM architecture introduced by Hochreiter and Schmidhuber [13] to overcome the vanishing gradient problem. Moreover, based on both left and right context, the deep bidirectional pretrained BERT model [11] is utilized as a contextualized dynamic word embedding to learn the textual semantic features.
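To make this textual stream concrete, the following is a minimal sketch of extracting a contextualized document embedding with a pretrained BERT model. The OCR engine (pytesseract here) and the bert-base-uncased checkpoint are illustrative assumptions, not the paper's exact pipeline; the static alternative would instead look up GloVe or FastText vectors for each token and feed the resulting sequence to a GRU.

```python
# Minimal sketch of the textual stream: OCR the page, then embed the
# extracted text with a pretrained BERT model. pytesseract stands in
# for the unspecified OCR engine; bert-base-uncased is an assumption,
# consistent with the 768-dimensional features described in the paper.
import pytesseract
import torch
from PIL import Image
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased").eval()

def text_features(image_path: str) -> torch.Tensor:
    """Return a 768-d [CLS] embedding for the OCRed content of one page."""
    words = pytesseract.image_to_string(Image.open(image_path))
    inputs = tokenizer(words, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        outputs = bert(**inputs)
    # The [CLS] token summarizes the whole sequence: shape (1, 768).
    return outputs.last_hidden_state[:, 0, :]
```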
To conduct the image classification task, we investigate the impact of both heavyweight (i.e., with a large number of parameters) and lightweight (i.e., with a much smaller number of parameters) deep neural network architectures on learning deep structural properties from document images. Heavyweight models with large parameter counts, such as NASNetLarge [39] and Inception-ResNet-v2 [32], can achieve state-of-the-art classification accuracy on the widely used ImageNet [10] dataset, at the cost of computational complexity and training time. In contrast, lightweight models with fewer parameters, designed for constrained environments (e.g., real-time settings or mobile applications with limited hardware resources), focus on the trade-off between efficiency and model accuracy. Among the classes of the RVL-CDIP dataset, some categories present particular layout properties and document structures. Most classes are mainly composed of textual information, while classes like Advertisement and File folder contain mostly images with very little text; some samples contain no text data at all. Another class, Handwritten, which is composed of handwritten characters, produces noisy output text when processed by the OCR engine. The idea behind this work is to determine whether combining learned visual features with textual features can effectively benefit the document image classification task and achieve accurate results for such categories (Advertisement, File folder, Handwritten). To obtain the fusion embeddings, we adopt a late fusion scheme (a sketch follows the contributions list below). The main contributions of this paper are as follows:

• We propose a cross-modal deep network that leverages textual contents and visual features to classify document images. We show that the joint learning methodology boosts the overall accuracy compared to the single-modal networks.

• We evaluate the performance of static and contextualized dynamic word embeddings for classifying the textual content of document images.

• We review the impact of training heavyweight and lightweight deep neural networks on learning relevant structural information from document images.
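As an illustration of the late fusion scheme, the sketch below pairs a pretrained lightweight CNN backbone with the 768-dimensional text feature from the previous sketch and classifies the concatenated representation over the 16 RVL-CDIP classes. The MobileNetV2 backbone and the layer sizes of the fusion head are assumptions chosen for illustration; the paper's actual architecture is the one depicted in Fig. 2.

```python
# Illustrative two-stream late-fusion head (not the paper's exact
# architecture): a pretrained CNN yields a pooled visual feature, a
# linear layer projects it to 768-d to match the BERT text feature,
# and the concatenated vector is classified over the 16 classes.
import torch
import torch.nn as nn
from torchvision import models

class LateFusionClassifier(nn.Module):
    def __init__(self, num_classes: int = 16, text_dim: int = 768):
        super().__init__()
        # MobileNetV2 stands in for a "lightweight" backbone.
        backbone = models.mobilenet_v2(weights="IMAGENET1K_V1")
        self.visual = backbone.features           # conv feature maps
        self.pool = nn.AdaptiveAvgPool2d(1)        # global average pooling
        self.proj = nn.Linear(1280, text_dim)      # 1280 = MobileNetV2 width
        self.head = nn.Sequential(
            nn.Linear(2 * text_dim, 512), nn.ReLU(),
            nn.Linear(512, num_classes),
        )

    def forward(self, image: torch.Tensor, text_feat: torch.Tensor):
        v = self.pool(self.visual(image)).flatten(1)   # (B, 1280)
        v = self.proj(v)                               # (B, 768)
        fused = torch.cat([v, text_feat], dim=1)       # late fusion
        return self.head(fused)                        # class logits
```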
2. Related work

In the recent past, several studies have addressed the automatic classification of document images. Earlier attempts focused on region-based analysis by detecting and analyzing certain parts of a document. Hao et al. [14] proposed a novel method for table detection in PDF documents based on convolutional neural networks. An alternative strategy for region-based approaches is to learn the shared visual spatial configuration of a given document image [15]. In [8], a combination of holistic and region-based modelling for document image classification was conducted with intra-domain transfer learning. Many researchers have favored deep learning methods over hand-crafted feature techniques and representations, as they have shown notable performance on the text document image classification task. In [35], transfer learning was used to improve the classification accuracy on the standard RVL-CDIP dataset, using the AlexNet [19] architecture pre-trained on the ImageNet dataset. A large empirical study was conducted to find which aspects of CNNs most affect the performance on document images; the results showed that CNNs trained on the RVL-CDIP dataset learn region-specific layout features. Also, [2] investigated widely used deep learning architectures such as AlexNet, VGG-16 [31], GoogLeNet [34] and ResNet-50 [17], using transfer learning on the RVL-CDIP and Tobacco3482 datasets.

Figure 2. The proposed cross-modal deep network.

In comparison, [22] concentrated on speed by replacing the fully connected portion of the VGG-16 architecture with extreme learning machines (ELM). [...] feature-ranking algorithm. A modular multimodal architecture is presented in [9] for document classification, followed [...]
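To clarify why the ELM substitution in [22] speeds up training, here is a toy sketch of an extreme learning machine readout over fixed CNN features: the hidden projection is random and frozen, so only the output weights need to be learned, and they can be solved in closed form by least squares. The hidden size and activation are illustrative choices, not the configuration used in [22].

```python
# Toy sketch of an extreme learning machine (ELM) readout, in the
# spirit of [22], which replaced the fully connected part of VGG-16:
# the hidden weights are random and fixed, and only the output weights
# are fit, in closed form, which is what makes training fast.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(features, labels_onehot, hidden: int = 1024):
    """features: (N, D) CNN features; labels_onehot: (N, C) targets."""
    W = rng.standard_normal((features.shape[1], hidden))  # fixed, random
    H = np.tanh(features @ W)                             # hidden activations
    beta, *_ = np.linalg.lstsq(H, labels_onehot, rcond=None)
    return W, beta

def elm_predict(features, W, beta):
    return np.argmax(np.tanh(features @ W) @ beta, axis=1)
```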
