Hardware Architecture of Bidirectional Long Short-Term Memory Neural Network for Optical Character Recognition

Vladimir Rybalkin and Norbert Wehn
Microelectronic Systems Design Research Group
University of Kaiserslautern, Germany
{rybalkin, wehn}@eit.uni-kl.de

Mohammad Reza Yousefi and Didier Stricker
Augmented Vision Department
German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
{yousefi, didier.stricker}@dfki.de

Abstract—Optical Character Recognition is the conversion of printed or handwritten text images into machine-encoded text. It is a building block of many processes such as machine translation, text-to-speech conversion and text mining. Bidirectional Long Short-Term Memory Neural Networks have shown superior performance in character recognition with respect to other types of neural networks. In this paper, to the best of our knowledge, we propose the first hardware architecture of a Bidirectional Long Short-Term Memory Neural Network with Connectionist Temporal Classification for Optical Character Recognition. Based on the new architecture, we present an FPGA hardware accelerator that achieves 459 times higher throughput than the state-of-the-art. Visual recognition is a typical task on mobile platforms and is usually handled in one of two ways: the task either runs locally on an embedded processor or is offloaded to a cloud to run on a high-performance machine. We show that this computationally intensive visual recognition task benefits from being migrated to our dedicated hardware accelerator, which outperforms a high-performance CPU in terms of runtime while consuming less energy than low-power systems, with negligible loss of recognition accuracy.

I. INTRODUCTION

Many variants of Artificial Neural Networks (ANNs) have been proposed throughout the years. A major source of distinction among ANN architectures is between those having purely acyclic connections, also known as feed-forward networks, and those which include cyclic connections, referred to as Recurrent Neural Networks (RNNs). The presence of cyclic connections allows for a memory-like functionality that can preserve the impact of previous inputs to the network within its internal state, such that they can later influence the network output.

Bidirectional Recurrent Neural Networks (BRNNs) were proposed to take into account the impact of both the past and the future context of the input signal by presenting the input signal forward and backward to two separate hidden layers, both of which are connected to a common output layer. This way, the final network has access to both past and future context within the input signal, which is helpful in applications such as character recognition, where knowing the coming letters as well as the preceding ones improves accuracy. There are, however, limitations in the standard architecture of RNNs, the major one being the limited range of accessible context.

One of the most effective solutions to overcome this shortcoming was introduced by the Long Short-Term Memory (LSTM) architecture [1], [2], which replaces plain nodes with memory blocks. LSTM memory blocks are composed of a number of gates that control the store-access functions of the memory state and hence preserve the internal state over longer periods of time. LSTMs have proved successful over a range of sequence learning tasks. One task for which LSTM networks have previously shown superior performance is character recognition [3], [4], [5], [6], among many others. Unlike conventional Optical Character Recognition (OCR) methods that rely on pre-segmenting the input image down to single characters, LSTM networks with the addition of a forward-backward algorithm, known as Connectionist Temporal Classification (CTC) [7], are capable of recognizing textlines without requiring any pre-segmentation.

A direct mapping of Bidirectional Long Short-Term Memory (BLSTM) onto hardware gives very suboptimal results. The main challenge for an efficient hardware implementation of RNNs is the high memory bandwidth required to transfer the network weights from memory to the computational units. RNNs have even more difficulty with memory bandwidth scalability than feed-forward Neural Networks (NNs) due to their recurrent connections: each neuron has to process an additional N inputs, where N is the number of neurons in a layer. In the case of LSTM memory cells, the requirements are even higher because of the four gate inputs, in contrast to the single input of a plain neuron in a standard RNN.

The BLSTM doubles the required bandwidth and requires a duplicate hidden layer that processes the inputs in the opposite direction. Further, the output layer processes outputs from the hidden layers only for corresponding columns of an image, which introduces additional latency and requires extra memory to store intermediate results. All of the above requires custom solutions. A high memory capacity is required to accommodate the network weights, resulting in a trade-off between different types of memory and bandwidth. One of the most efficient ways to mitigate the memory bandwidth requirement is to exploit the inherent capability of NNs to tolerate precision loss without diminishing recognition accuracy. The flexibility provided by FPGAs allows for architectures that can fully benefit from custom precision.

The elaborate structure of LSTM memory cells, including 5 activation functions and multiple multiplicative units implementing the gates, yields high implementation complexity, resulting in high energy consumption and long runtime for software implementations running on standard computing platforms like CPUs. Conventional spatial parallelization is very resource demanding. More efficient functional parallelization, i.e. pipelining, can largely increase throughput at a lower area and energy cost. However, RNNs have constrained potential for functional parallelization due to their recurrent nature, in contrast to feed-forward NNs: because of data dependencies, the next input cannot be processed until the processing of the preceding input is complete. Consequently, the pipeline is partly idle. FPGAs allow for customized pipelined architectures that can overcome this problem. Moreover, NNs rely on trained network parameters and may require the weights to be reprogrammed; as a result, machine learning benefits from the reconfigurability of FPGAs.

In summary, the multiple trade-offs between different types of memories and bandwidth, between custom precision and accuracy, and between different parallelization schemes, area and energy consumption make a custom hardware architecture the top choice, especially for embedded systems with a constrained energy budget.

We extend the previous research in the area by proposing the first hardware architecture of BLSTM with CTC. In the new architecture, we rearrange the memory access patterns, which allows us to avoid duplicating a hidden layer and also reduces the memory required for intermediate results as well as the processing time. Additionally, the proposed architecture allows for the implementation of uni-directional LSTM, which extends the application area. Based on the new architecture, we present an FPGA hardware accelerator designed for highly accurate character recognition of old German text (Fraktur) [5]. The network is composed of one hidden layer with 100 hidden nodes in each of the forward and backward directions of our BLSTM network (200 hidden nodes altogether). We show the feasibility of implementing a medium-size LSTM network using only on-chip memory, which minimizes power consumption and brings higher performance, as we are not limited by off-chip memory bandwidth. The new hardware architecture uses 5-bit fixed-point numbers to represent the weights, versus a minimum of 16 bits in previous works, without impairing accuracy. The reference recognition accuracy is 98.2337% using single-precision floating-point format. The hardware accelerator achieves 97.5120% recognition accuracy, while outperforming a high-performance CPU in terms of runtime and consuming less energy than low-power systems.
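To illustrate what such a reduced weight representation involves, the sketch below quantizes single-precision weights to 5-bit signed fixed-point codes in plain C++. The concrete format (a Q1.4 layout with four fractional bits), the rounding mode and the helper names are assumptions made for illustration only; the paper states the 5-bit width but not the exact fixed-point format used by the accelerator.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Hypothetical 5-bit signed fixed-point format with 4 fractional bits (Q1.4);
// representable range is [-16/16, 15/16] = [-1.0, 0.9375].
constexpr int kFracBits = 4;
constexpr int kQMin = -16;  // -(2^4)
constexpr int kQMax = 15;   //  2^4 - 1

// Quantize one single-precision weight to a 5-bit integer code.
int8_t quantize_weight(float w) {
    int q = static_cast<int>(std::lround(w * (1 << kFracBits)));
    return static_cast<int8_t>(std::clamp(q, kQMin, kQMax));
}

// Reconstruct the real value a 5-bit code represents (for software emulation).
float dequantize_weight(int8_t q) {
    return static_cast<float>(q) / (1 << kFracBits);
}

// Quantize a whole weight matrix stored row-major.
std::vector<int8_t> quantize_weights(const std::vector<float>& w) {
    std::vector<int8_t> q(w.size());
    std::transform(w.begin(), w.end(), q.begin(), quantize_weight);
    return q;
}
```

On the FPGA the quantized codes would be stored and consumed directly in on-chip memory; the dequantization helper is only needed when emulating the reduced precision in software to check the impact on recognition accuracy.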
The gate units in a memory cell facilitate the preservation and access of the cell internal state over long periods of time. The peephole connections (p) are intended to inform the gates of the cell about its internal state. There are recurrent connections from the cell output (y) to the cell input (I) and to the three gates. Eq. 1 summarizes the formulas of the LSTM network forward pass. Rectangular input weight matrices are denoted by W and square recurrent weight matrices by R; x is the input vector, b refers to the bias vectors, and t denotes the timestep (so t − 1 refers to the previous timestep). The activation functions are point-wise non-linear functions, namely the logistic sigmoid $1/(1+e^{-x})$ for the gates (σ) and the hyperbolic tangent for the input to and output from the node (l and h). Point-wise vector multiplication is denoted by ⊙ (equations adapted from [9]):

$$
\begin{aligned}
I^t &= l(W_I x^t + R_I y^{t-1} + b_I)\\
i^t &= \sigma(W_i x^t + R_i y^{t-1} + p_i \odot c^{t-1} + b_i)\\
f^t &= \sigma(W_f x^t + R_f y^{t-1} + p_f \odot c^{t-1} + b_f)\\
c^t &= i^t \odot I^t + f^t \odot c^{t-1}\\
o^t &= \sigma(W_o x^t + R_o y^{t-1} + p_o \odot c^{t} + b_o)\\
y^t &= o^t \odot h(c^t)
\end{aligned}
\tag{1}
$$
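As a reference for Eq. 1, the sketch below evaluates one forward-pass timestep of an LSTM layer with peephole connections in plain floating-point C++. The data layout (row-major weight matrices, per-gate parameter structs) and the function names are illustrative assumptions; they do not reflect the fixed-point datapath of the proposed accelerator.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<float>;
using Mat = std::vector<Vec>;   // row-major: Mat[row][col]

static float sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

// y = W * x (dense matrix-vector product)
static Vec matvec(const Mat& W, const Vec& x) {
    Vec y(W.size(), 0.0f);
    for (std::size_t r = 0; r < W.size(); ++r)
        for (std::size_t c = 0; c < x.size(); ++c)
            y[r] += W[r][c] * x[c];
    return y;
}

// Per-gate parameters of Eq. 1: input weights W, recurrent weights R,
// peephole weights p and bias b (the node input I has no peephole).
struct Gate { Mat W, R; Vec p, b; };
struct LstmLayer { Gate I, i, f, o; };

// One forward-pass timestep: updates cell state c and output y in place.
void lstm_step(const LstmLayer& L, const Vec& x, Vec& c, Vec& y) {
    const std::size_t N = c.size();
    // Input and recurrent contributions for the node input and the three gates;
    // y still holds y^{t-1} at this point.
    Vec zI = matvec(L.I.W, x), rI = matvec(L.I.R, y);
    Vec zi = matvec(L.i.W, x), ri = matvec(L.i.R, y);
    Vec zf = matvec(L.f.W, x), rf = matvec(L.f.R, y);
    Vec zo = matvec(L.o.W, x), ro = matvec(L.o.R, y);
    Vec c_new(N), y_new(N);
    for (std::size_t n = 0; n < N; ++n) {
        float In = std::tanh(zI[n] + rI[n] + L.I.b[n]);                      // node input, l = tanh
        float in = sigmoid(zi[n] + ri[n] + L.i.p[n] * c[n] + L.i.b[n]);      // input gate, peeks at c^{t-1}
        float fn = sigmoid(zf[n] + rf[n] + L.f.p[n] * c[n] + L.f.b[n]);      // forget gate, peeks at c^{t-1}
        c_new[n] = in * In + fn * c[n];                                      // new cell state c^t
        float on = sigmoid(zo[n] + ro[n] + L.o.p[n] * c_new[n] + L.o.b[n]);  // output gate, peeks at c^t
        y_new[n] = on * std::tanh(c_new[n]);                                 // cell output, h = tanh
    }
    c = c_new;
    y = y_new;
}
```

The recurrent dependence is explicit here: `lstm_step` cannot be called for column t+1 before the call for column t has produced y and c, which is exactly the data dependency that limits functional parallelization discussed above.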
CTC is the output layer used with LSTM networks designed for sequence labelling tasks. Using the CTC layer, LSTM networks can perform the transcription task without requiring pre-segmented input, as the network is trained to predict the conditional probabilities of the possible output labellings given the input sequences. The CTC layer has K units, K − 1 being the number of elements in the alphabet of labels. The outputs are normalized with a softmax activation function at each timestep. The softmax ensures that the network outputs all lie between zero and one and sum to one at each timestep. This means that the outputs can be interpreted as the probabilities of the characters at a given timestep (a column of the image in our case). The first K − 1 outputs provide the probabilities of their corresponding labels for the input timestep, and the remaining unit estimates the probability of a blank, i.e. no label. Summation over the probabilities of all output paths that correspond to the same labelling then yields the probability of that labelling.
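The per-column softmax described above can be written down directly. The sketch below also includes greedy best-path decoding (taking the most probable unit per column, collapsing repeats and dropping blanks), which is shown only as the simplest way to turn the per-timestep probabilities into a character string; it is an illustrative assumption, not necessarily the decoding scheme used with the CTC layer in this work.

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <string>
#include <vector>

// Softmax over the K CTC output units of one timestep (image column):
// the results lie in (0, 1) and sum to one, so they act as label probabilities.
std::vector<float> softmax(const std::vector<float>& z) {
    float zmax = *std::max_element(z.begin(), z.end());   // subtract max for numerical stability
    std::vector<float> p(z.size());
    float sum = 0.0f;
    for (std::size_t k = 0; k < z.size(); ++k) { p[k] = std::exp(z[k] - zmax); sum += p[k]; }
    for (float& v : p) v /= sum;
    return p;
}

// Greedy best-path decoding: take the argmax unit per column, collapse
// repeated labels and drop the blank (the last of the K units).
std::string best_path_decode(const std::vector<std::vector<float>>& outputs,
                             const std::string& alphabet) {          // alphabet holds the K-1 labels
    const std::size_t blank = alphabet.size();                        // unit index K-1 is the blank
    std::string text;
    std::size_t prev = blank;
    for (const auto& column : outputs) {
        std::vector<float> p = softmax(column);
        std::size_t best = static_cast<std::size_t>(
            std::distance(p.begin(), std::max_element(p.begin(), p.end())));
        if (best != blank && best != prev) text += alphabet[best];
        prev = best;
    }
    return text;
}
```

Decoding schemes that use the full per-column distribution, such as the forward-backward recursion employed for CTC training, are beyond this sketch.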
