Hardware Architecture of Bidirectional Long Short-Term Memory Neural Network for Optical Character Recognition

Total pages: 16
File type: PDF, size: 1020 KB

Vladimir Rybalkin and Norbert Wehn
Microelectronic Systems Design Research Group, University of Kaiserslautern, Germany
{rybalkin, wehn}@eit.uni-kl.de

Mohammad Reza Yousefi and Didier Stricker
Augmented Vision Department, German Research Center for Artificial Intelligence (DFKI), Kaiserslautern, Germany
{yousefi, didier.stricker}@dfki.de

Abstract—Optical Character Recognition is the conversion of printed or handwritten text images into machine-encoded text. It is a building block of many processes such as machine translation, text-to-speech conversion and text mining. Bidirectional Long Short-Term Memory Neural Networks have shown superior performance in character recognition with respect to other types of neural networks. In this paper, to the best of our knowledge, we propose the first hardware architecture of a Bidirectional Long Short-Term Memory Neural Network with Connectionist Temporal Classification for Optical Character Recognition. Based on the new architecture, we present an FPGA hardware accelerator that achieves 459 times higher throughput than the state of the art. Visual recognition is a typical task on mobile platforms and is usually handled in one of two ways: either the task runs locally on an embedded processor, or it is offloaded to a cloud and run on a high-performance machine. We show that this computationally intensive visual recognition task benefits from being migrated to our dedicated hardware accelerator: it outperforms a high-performance CPU in terms of runtime, while consuming less energy than low-power systems, with negligible loss of recognition accuracy.

I. INTRODUCTION

Many variants of Artificial Neural Networks (ANNs) have been proposed throughout the years. A major source of distinction among ANN architectures is between those having purely acyclic connections, also known as feed-forward networks, and those which include cyclic connections, referred to as Recurrent Neural Networks (RNNs). The presence of cyclic connections allows for a memory-like functionality that can preserve impacts from previous inputs to the network within its internal state, such that they can later influence the network output.

Bidirectional Recurrent Neural Networks (BRNNs) were proposed to take into account the impact of both the past and the future status of the input signal, by presenting the input signal forward and backward to two separate hidden layers, both of which are connected to a common output layer. This way, the final network has access to both past and future context within the input signal, which is helpful in applications such as character recognition, where knowing the coming letters as well as those before improves the accuracy. There are, however, limitations in the standard architecture of RNNs, the major one being the limited range of accessible context.

One of the most effective solutions to overcome this shortcoming was introduced by the Long Short-Term Memory (LSTM) architecture [1], [2], which replaces plain nodes with memory blocks. LSTM memory blocks are composed of a number of gates that can control the store-access functions of the memory state and hence preserve the internal state over longer periods of time. LSTMs have proved successful over a range of sequence learning tasks. One such task in which LSTM networks have previously shown superior performance is character recognition [3], [4], [5], [6], among many others. Unlike conventional Optical Character Recognition (OCR) methods that rely on pre-segmenting the input image down to single characters, LSTM networks with the addition of a forward-backward algorithm, known as Connectionist Temporal Classification (CTC) [7], are capable of recognizing textlines without requiring any pre-segmentation.

A direct mapping of Bidirectional Long Short-Term Memory (BLSTM) onto hardware gives very suboptimal results. The main challenge for efficient hardware implementation of RNNs is the high memory bandwidth required to transfer network weights from memory to computational units. RNNs have even more difficulty with memory bandwidth scalability than feed-forward Neural Networks (NNs) due to recurrent connections: each neuron has to process an additional N inputs, where N is the number of neurons in a layer. In the case of LSTM memory cells, the requirements are even higher because of four input gates, in contrast to the single one in a plain neuron of a standard RNN. The BLSTM doubles the required bandwidth and requires a duplicate of the hidden layer that processes inputs in the opposite direction. Further, the output layer processes outputs from the hidden layers only from corresponding columns of an image, which introduces additional latency and requires extra memory to store intermediate results. All of the above requires custom solutions.
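To make the bandwidth argument concrete, the short Python sketch below counts the weights of a peephole (B)LSTM layer of the size described later in this paper (100 hidden nodes per direction). The input-column height n_x = 25 is an assumed placeholder rather than a value taken from this text, and the per-column traffic estimate simply assumes every weight is re-read for each input column.

# Rough parameter/bandwidth estimate for a peephole (B)LSTM layer.
# n_x (input size) is a hypothetical placeholder; only n_h = 100 hidden
# nodes per direction is fixed by the text.

def lstm_param_count(n_x: int, n_h: int) -> int:
    """Weights of one LSTM direction: node input I plus gates i, f, o."""
    per_unit = n_x * n_h + n_h * n_h + n_h   # W (rectangular), R (square), bias
    return 4 * per_unit + 3 * n_h            # 4 units, 3 peephole vectors (i, f, o)

if __name__ == "__main__":
    n_x, n_h = 25, 100                        # n_x = 25 is an assumed example value
    uni = lstm_param_count(n_x, n_h)
    bi = 2 * uni                              # BLSTM: forward + backward hidden layer
    print(f"uni-directional LSTM weights: {uni}")
    print(f"bidirectional LSTM weights:   {bi}")
    # Weight traffic per image column if all weights are streamed per timestep:
    for bits in (5, 16):
        print(f"{bits}-bit weights: {bi * bits / 8 / 1024:.1f} KiB per column")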
A high memory capacity is required to accommodate the network weights, resulting in a trade-off between different types of memory and bandwidth. One of the most efficient ways to mitigate the memory bandwidth requirement is to exploit the inherent capability of NNs to tolerate precision loss without diminishing recognition accuracy. The flexibility provided by FPGAs allows for architectures that can fully benefit from custom precision.

The elaborate structure of LSTM memory cells, including 5 activation functions and multiple multiplicative units implementing gates, yields high implementation complexity, resulting in high energy consumption and long runtime for software implementations running on standard computing platforms like CPUs. Conventional spatial parallelization is very resource demanding. More efficient functional parallelization, i.e. pipelining, can largely increase throughput at lower area and energy cost. However, RNNs have constrained potential for functional parallelization, in contrast to feed-forward NNs, due to their recurrent nature: because of data dependencies, the next input cannot be processed until the processing of the preceding input is complete. Consequently, the pipeline is partly idle. FPGAs allow for customized pipelined architectures that can overcome this problem. Moreover, NNs rely on trained network parameters and may require the weights to be reprogrammed; as a result, machine learning benefits from the reconfigurability of FPGAs.

In summary, the multiple trade-offs between different types of memory and bandwidth, between custom precision and accuracy, and between different parallelization schemes, area and energy consumption make a custom hardware architecture the top choice, especially for embedded systems that have a constrained energy budget.

We extend the previous research in the area by proposing the first hardware architecture of BLSTM with CTC. In the new architecture, we rearrange the memory access patterns, which allows us to avoid duplicating a hidden layer and also to reduce the memory required for intermediate results and the processing time. Additionally, the proposed architecture allows for the implementation of uni-directional LSTM, which extends the application area. Based on the new architecture, we present an FPGA hardware accelerator designed for highly accurate character recognition of old German text (Fraktur) [5]. The network is composed of one hidden layer with 100 hidden nodes in each of the forward and backward directions of our BLSTM network (200 hidden nodes altogether). We show the feasibility of implementing a medium-size LSTM network using only on-chip memory, which minimizes power consumption and brings higher performance, as we are not limited by off-chip memory bandwidth. The new hardware architecture uses 5-bit fixed-point numbers to represent weights, versus a minimum of 16 bits in previous works, without impairing accuracy. The reference recognition accuracy is 98.2337% using single-precision floating-point format. The hardware accelerator achieves 97.5120% recognition accuracy, while outperforming a high-performance CPU in terms of runtime and consuming less energy than low-power systems.

The gate units in a memory cell facilitate the preservation and access of the cell internal state over long periods of time. The peephole connections (p) are supposed to inform the gates of the cell about its internal state. There are recurrent connections from the cell output (y) to the cell input (I) and to the three gates. Eq. 1 summarizes the formulas for the LSTM network forward pass. Rectangular input weight matrices are denoted by W, and square recurrent weight matrices by R; x is the input vector, b refers to bias vectors, and t denotes the time step (so t − 1 refers to the previous timestep). Activation functions are point-wise non-linear functions, namely the logistic sigmoid 1/(1 + e^(−x)) for the gates (σ) and the hyperbolic tangent for the input to and output from the node (l and h). Point-wise vector multiplication is shown by ⊙ (equations adapted from [9]):

I^t = l(W_I x^t + R_I y^(t−1) + b_I)
i^t = σ(W_i x^t + R_i y^(t−1) + p_i ⊙ c^(t−1) + b_i)
f^t = σ(W_f x^t + R_f y^(t−1) + p_f ⊙ c^(t−1) + b_f)        (1)
c^t = i^t ⊙ I^t + f^t ⊙ c^(t−1)
o^t = σ(W_o x^t + R_o y^(t−1) + p_o ⊙ c^t + b_o)
y^t = o^t ⊙ h(c^t)
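For readers who prefer code, the following minimal NumPy sketch mirrors Eq. 1 one-to-one and then runs both directions over a sequence of image columns. It is a floating-point reference model only (the function names, parameter dictionary and random initialization are mine), not the fixed-point hardware datapath proposed in the paper.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def make_params(n_x, n_h, rng):
    """Random parameters for one LSTM direction: W_* rectangular, R_* square,
    p_* peephole vectors, b_* biases (names follow Eq. 1)."""
    p = {}
    for g in ("I", "i", "f", "o"):
        p["W_" + g] = rng.standard_normal((n_h, n_x)) * 0.1
        p["R_" + g] = rng.standard_normal((n_h, n_h)) * 0.1
        p["b_" + g] = np.zeros(n_h)
    for g in ("i", "f", "o"):
        p["p_" + g] = np.zeros(n_h)
    return p

def lstm_step(x_t, y_prev, c_prev, p):
    """One timestep of the peephole LSTM forward pass of Eq. 1."""
    I = np.tanh(p["W_I"] @ x_t + p["R_I"] @ y_prev + p["b_I"])                      # node input, l(.)
    i = sigmoid(p["W_i"] @ x_t + p["R_i"] @ y_prev + p["p_i"] * c_prev + p["b_i"])  # input gate
    f = sigmoid(p["W_f"] @ x_t + p["R_f"] @ y_prev + p["p_f"] * c_prev + p["b_f"])  # forget gate
    c = i * I + f * c_prev                                                          # new cell state
    o = sigmoid(p["W_o"] @ x_t + p["R_o"] @ y_prev + p["p_o"] * c + p["b_o"])       # output gate peeks at new c
    y = o * np.tanh(c)                                                              # cell output, h(.)
    return y, c

def blstm_forward(columns, p_fwd, p_bwd, n_h):
    """Run both directions over the image columns and pair the outputs of
    corresponding columns, as the BLSTM output layer expects."""
    T = len(columns)
    y, c = np.zeros(n_h), np.zeros(n_h)
    out_f = []
    for t in range(T):                      # forward (left-to-right) direction
        y, c = lstm_step(columns[t], y, c, p_fwd)
        out_f.append(y)
    y, c = np.zeros(n_h), np.zeros(n_h)
    out_b = [None] * T
    for t in reversed(range(T)):            # backward (right-to-left) direction
        y, c = lstm_step(columns[t], y, c, p_bwd)
        out_b[t] = y
    return [np.concatenate(pair) for pair in zip(out_f, out_b)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_x, n_h, T = 25, 100, 8                # n_x and T are example values
    cols = [rng.standard_normal(n_x) for _ in range(T)]
    outs = blstm_forward(cols, make_params(n_x, n_h, rng), make_params(n_x, n_h, rng), n_h)
    print(len(outs), outs[0].shape)         # 8 columns, 200 features each

Note how y and c from step t − 1 feed step t: this is exactly the data dependency that limits pipelining, as discussed in the Introduction.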
CTC is the output layer used with LSTM networks designed for sequence labelling tasks. Using the CTC layer, LSTM networks can perform the transcription task without requiring pre-segmented input, as the network is trained to predict the conditional probabilities of the possible output labellings given the input sequences. The CTC layer has K units, K − 1 being the number of elements in the alphabet of labels. The outputs are normalized using the softmax activation function at each timestep. The softmax activation function ensures that the network outputs all lie in the range between zero and one and sum to one at each timestep. This means that the outputs can be interpreted as the probabilities of the characters at a given timestep (a column, in our case). The first K − 1 outputs provide predictions over the probabilities of their corresponding labels for the input timestep, and the remaining unit estimates the probability of a blank, or no label. Summation over the …
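As a quick illustration of the per-timestep normalization just described (only the normalization; the CTC forward-backward training procedure is not shown), a numerically stable softmax over the K output units can be written as follows. The pre-activation values are made up for the example.

import numpy as np

def softmax(z):
    z = z - z.max()            # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# K = 4 output units for the example: 3 label units plus 1 blank unit (last entry).
a_t = np.array([2.0, 0.5, -1.0, 0.1])   # hypothetical pre-activations at one timestep
p_t = softmax(a_t)
print(p_t, p_t.sum())                    # each value in (0, 1); the sum is 1.0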
Recommended publications
  • Hardware Architecture
    Hardware Architecture. Components: computing infrastructure components include servers, clients, LAN & WLAN, Internet connectivity, computation, software, storage and backup; integration is the key, along with security, data and network management. Today's computer follows the Von Neumann architecture (see the Von Neumann architecture Wiki article and the 15 min YouTube video): input (keyboard, mouse, scanner, punch cards), processing (the CPU executes the computer program), output (monitor, printer, fax machine) and storage (hard drive, optical media, diskettes, magnetic tape). Computer components: CPU, memory, hard disk, motherboard, CD/DVD drives, adaptors, power supply, display, keyboard, mouse, network interface and I/O ports. The CPU (Central Processing Unit, or microprocessor) consists of three parts: the control unit (executes programs/instructions in machine language, moves data from one memory location to another, communicates between other parts of a PC), the arithmetic logic unit (arithmetic operations: add, subtract, multiply, divide; logic operations: and, or, xor; floating-point operations: real-number manipulation) and the registers (see How the CPU Works in One Lesson, a 20 min YouTube video). CPU speed is influenced by several factors: chip manufacturing technology in nm (2002: 130 nm, 2004: 90 nm, 2006: 65 nm, 2008: 45 nm, 2010: 32 nm; the latest is 22 nm); clock speed in gigahertz (typically 2–3 GHz, maximum 5.5 GHz); front-side bus in MHz (typically 1333 MHz or 1666 MHz); word size (32-bit or 64-bit); cache (Level 1: 64 KB per core and Level 2: 256 KB per core on die; now Level 3: 2 MB to 8 MB shared, also on die); instruction set (x86 CISC vs. RISC); microarchitecture, i.e. the CPU internal architecture (Ivy Bridge, Haswell); single core vs. multi core; multi-threading and hyper-threading …
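To connect the Von Neumann description above to something executable, here is a toy fetch-decode-execute loop in Python; the four-instruction "ISA" (LOAD/ADD/PRINT/HALT) and the two-register machine are invented purely for illustration and do not correspond to any real processor.

# Toy illustration of the Von Neumann cycle: one memory holds both the program
# and its data, and the CPU repeatedly fetches, decodes and executes.
memory = [
    ("LOAD", 0, 100),    # R0 <- 100
    ("LOAD", 1, 23),     # R1 <- 23
    ("ADD",  0, 1),      # R0 <- R0 + R1  (ALU operation)
    ("PRINT", 0, None),  # output R0
    ("HALT", None, None),
]
registers = [0, 0]
pc = 0                                   # program counter (a CPU register)

while True:
    op, a, b = memory[pc]                # fetch
    pc += 1
    if op == "LOAD":                     # decode + execute
        registers[a] = b
    elif op == "ADD":
        registers[a] += registers[b]
    elif op == "PRINT":
        print(registers[a])
    elif op == "HALT":
        break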
  • Efficient Processing of Deep Neural Networks
    Efficient Processing of Deep Neural Networks. Vivienne Sze, Yu-Hsin Chen, Tien-Ju Yang, Joel Emer, Massachusetts Institute of Technology. Reference: V. Sze, Y.-H. Chen, T.-J. Yang, J. S. Emer, "Efficient Processing of Deep Neural Networks," Synthesis Lectures on Computer Architecture, Morgan & Claypool Publishers, 2020. For book updates, sign up for the mailing list at http://mailman.mit.edu/mailman/listinfo/eems-news. June 15, 2020. Abstract: This book provides a structured treatment of the key principles and techniques for enabling efficient processing of deep neural networks (DNNs). DNNs are currently widely used for many artificial intelligence (AI) applications, including computer vision, speech recognition, and robotics. While DNNs deliver state-of-the-art accuracy on many AI tasks, it comes at the cost of high computational complexity. Therefore, techniques that enable efficient processing of deep neural networks to improve key metrics—such as energy efficiency, throughput, and latency—without sacrificing accuracy or increasing hardware costs are critical to enabling the wide deployment of DNNs in AI systems. The book includes background on DNN processing; a description and taxonomy of hardware architectural approaches for designing DNN accelerators; key metrics for evaluating and comparing different designs; features of DNN processing that are amenable to hardware/algorithm co-design to improve energy efficiency and throughput; and opportunities for applying new technologies. Readers will find a structured introduction to the field as well as formalization and organization of key concepts from contemporary work that provide insights that may spark new ideas. Contents: Preface; I. Understanding Deep Neural Networks; 1. Introduction; 1.1 Background on Deep Neural Networks …
  • Architectural Adaptation for Application-Specific Locality
    Architectural Adaptation for Application-Specific Locality Optimizations. Xingbin Zhang, Ali Dasdan, Martin Schulz†, Rajesh K. Gupta‡ and Andrew A. Chien. Department of Computer Science, University of Illinois at Urbana-Champaign, {zhang, dasdan, achien}@cs.uiuc.edu; †Institut für Informatik, Technische Universität München, [email protected]; ‡Information and Computer Science, University of California at Irvine, [email protected]. Abstract: We propose a machine architecture that integrates programmable logic into key components of the system with the goal of customizing architectural mechanisms and policies to match an application. This approach presents an improvement over the traditional approach of exploiting programmable logic as a separate co-processor, by preserving machine usability through software, and over traditional computer architecture, by providing application-specific hardware assists. We present two case studies of architectural customization to enhance latency tolerance and efficiently utilize network bisection on multiprocessors for sparse matrix computations. We demonstrate that application-specific hardware assists and policies can provide substantial improvements in performance on a per-application basis.
    … without repartitioning hardware and software functionality and reimplementing the co-processing hardware. This retargetability problem is an obstacle toward exploiting programmable logic for general-purpose computing. We propose a machine architecture that integrates programmable logic into key components of the system with the goal of customizing architectural mechanisms and policies to match an application. We base our design on the premise that communication is already critical and getting increasingly so [17], and that flexible interconnects can be used to replace static wires at competitive performance [6, 9, 20]. Our approach presents an improvement over co-processing by preserving machine usability through software and over traditional computer architecture by providing application-specific hardware assists.
  • Development of a Predictable Hardware Architecture Template and Integration Into an Automated System Design Flow
    Development of a Predictable Hardware Architecture Template and Integration into an Automated System Design Flow. Master of Science Thesis in System-on-Chip Design, KTH Royal Institute of Technology, School of Information and Communication Technology, Electronic Systems, Stockholm, June 2013, TRITA-ICT-EX-2013:138. Author: Marcus Mikulcak. Examiner: Assoc. Prof. Ingo Sander. Supervisor: Seyed Hosein Attarzadeh Niaki. Abstract: The requirements of safety-critical real-time embedded systems pose unique challenges on their design process which cannot be fulfilled with traditional development methods. To ensure their correct timing and functionality, it has been suggested to move the design process to a higher abstraction level, which opens the possibility to utilize automated correct-by-design development flows from a functional specification of the system down to the level of Multiprocessor Systems-on-Chip. ForSyDe, an embedded system design methodology, presents a flow of this kind by basing system development on the theory of Models of Computation and side-effect-free processes, making it possible to separate the timing analysis of computation and communication of process networks.
  • Computer Architectures an Overview
    Computer Architectures: An Overview. Contents: Microarchitecture; x86; PowerPC; IBM POWER; MIPS architecture; SPARC; ARM architecture; DEC Alpha; AlphaStation; AlphaServer; Very long instruction word; Instruction-level parallelism; Explicitly parallel instruction computing; Article Sources and Contributors; Image Sources, Licenses and Contributors; Article Licenses. Microarchitecture: In computer engineering, microarchitecture (sometimes abbreviated to µarch or uarch), also called computer organization, is the way a given instruction set architecture (ISA) is implemented on a processor. A given ISA may be implemented with different microarchitectures.[1] Implementations might vary due to different goals of a given design or due to shifts in technology.[2] Computer architecture is the combination of microarchitecture and instruction set design. Relation to instruction set architecture: The ISA is roughly the same as the programming model of a processor as seen by an assembly language programmer or compiler writer. The ISA includes the execution model, processor registers, and address and data formats, among other things. (Figure: the Intel Core microarchitecture.) The microarchitecture includes the constituent parts of the processor and how these interconnect and interoperate to implement the ISA. The microarchitecture of a machine is usually represented as (more or less detailed) diagrams that describe the interconnections of the various microarchitectural elements of the machine, which may be everything from single gates and registers to complete arithmetic logic units (ALUs) and even larger elements.
  • Integrated Circuit Technology for Wireless Communications: An Overview
    Integrated Circuit Technology for Wireless Communications: An Overview. Babak Daneshrad, Integrated Circuits and Systems Laboratory, UCLA Electrical Engineering Dept., email: [email protected], http://www.ee.ucla.edu/~babak. Abstract: This paper provides a brief overview of present trends in the development of integrated circuit technology for applications in the wireless communications industry. Through advanced circuit, architectural and processing technologies, ICs have helped bring about the wireless revolution by enabling highly sophisticated, low cost and portable end user terminals. In this paper two broad categories of circuits are highlighted. The first is RF integrated circuits and the second is digital baseband processing circuits. In both areas, the paper presents the circuit design challenges and options presented to the designer. It also highlights the manner in which these technologies have helped advance wireless communications.
    I. Introduction. Figure 1 shows a block diagram of a typical wireless communication system. Moreover, it shows the partitioning of the receiver into RF, IF, baseband analog and digital components. In this paper we will focus on two main classes of integrated circuit technologies. The first is ICs for analog and, more importantly, RF communications. In general, the technologist's as well as the circuit designer's challenge here is to first make the transistors operate at higher carrier frequencies, and second, to integrate as many of the components required in the receiver onto the IC. The second class of circuits that we will focus on are digital baseband circuits, where increased density and reduced power consumption are the key factors for optimization.
  • Hybrid Pipeline Hardware Architecture Based on Error Detection and Correction for AES
    Hybrid Pipeline Hardware Architecture Based on Error Detection and Correction for AES. Sensors article. Ignacio Algredo-Badillo 1,†, Kelsey A. Ramírez-Gutiérrez 1,†, Luis Alberto Morales-Rosales 2,*, Daniel Pacheco Bautista 3,† and Claudia Feregrino-Uribe 4,†. 1 CONACYT-Instituto Nacional de Astrofísica, Óptica y Electrónica, Puebla 72840, Mexico; [email protected] (I.A.-B.); [email protected] (K.A.R.-G.). 2 Faculty of Civil Engineering, CONACYT-Universidad Michoacana de San Nicolás de Hidalgo, Morelia 58000, Mexico. 3 Departamento de Ingeniería en Computación, Universidad del Istmo, Campus Tehuantepec, Oaxaca 70760, Mexico; [email protected]. 4 Instituto Nacional de Astrofísica, Óptica y Electrónica, Puebla 72840, Mexico; [email protected]. * Correspondence: [email protected]. † These authors contributed equally to this work. Abstract: Currently, cryptographic algorithms are widely applied to communications systems to guarantee data security. For instance, in an emerging automotive environment where connectivity is a core part of autonomous and connected cars, it is essential to guarantee secure communications both inside and outside the vehicle. The AES algorithm has been widely applied to protect communications in onboard networks and outside the vehicle. Hardware implementations use techniques such as iterative, parallel, unrolled, and pipeline architectures. Nevertheless, the use of AES does not guarantee secure communication, because previous works have proved that implementations of secret key cryptosystems, such as AES, in hardware are sensitive to differential fault analysis. Moreover, it has been demonstrated that even a single fault during encryption or decryption could cause … (Citation: Algredo-Badillo, I.; Ramírez-Gutiérrez, K.A.; Morales-Rosales, L.A.; Pacheco Bautista, D.; Feregrino-Uribe, C.)
  • Extensible Hardware Architecture for Mobile Robots
    Extensible Hardware Architecture for Mobile Robots. Eric Park*, Linda Kobayashi, and Susan Y. Lee*, Intelligent Robotics Group, NASA Ames Research Center, Moffett Field, CA 94035-1000, USA, {epark, lkobayashi, sylee}@arc.nasa.gov. Abstract: The Intelligent Robotics Group at NASA Ames Research Center has developed a new mobile robot hardware architecture designed for extensibility and reconfigurability. Currently implemented on the K9 rover, and soon to be integrated onto the K10 series of human-robot collaboration research robots, this architecture allows for rapid changes in instrumentation configuration and provides a high degree of modularity through a synergistic mix of off-the-shelf and custom designed components, allowing eased transplantation into a wide variety of mobile robot platforms. A component level overview of this architecture is presented along with a description of the changes required for implementation on K10, followed by plans for future work. Index Terms: modular, extensible, hardware architecture, mobile robot, K9.
    Introduction. Mobile robots used for research and development are in increasing demand as the application space for robotics widens. With this demand comes the need for a smarter way to produce reliable mobile robots with a proven set of avionics. Time and money lost in developing custom architectures for each new robot makes it increasingly important to develop extensible hardware architectures that can expand and accommodate rapid changes in configuration, including transfer to an entirely different mobile platform [1]. (Fig. 1: K9 rover.)
    K9 Overview. The K9 rover is a 6-wheel steer, 6-wheel drive rocker-bogey chassis outfitted with electronics and instruments appropriate for supporting research relevant to remote exploration [2] [3] [4]. K9's overall dimensions are 1.05 m long by 0.85 m wide by 1.6 m high, with a mass of approximately 65 kg and a top speed of approximately 6 …
  • Ultra-Low-Power Design and Implementation of Application-Specific Instruction-Set Processors for Ubiquitous Sensing and Computing
    Ultra-low-power Design and Implementation of Application-specific Instruction-set Processors for Ubiquitous Sensing and Computing. Ning Ma. Doctoral Thesis in Electronic and Computer Systems, Stockholm, Sweden, 2015. TRITA-ICT 2015:11, ISSN 1653-6363, ISRN KTH/ICT-2015/11-SE, ISBN 978-91-7595-692-3. KTH School of Information and Communication Technology, SE-164 40 Kista, Stockholm, Sweden. Academic dissertation which, with the permission of KTH Royal Institute of Technology, is submitted for public examination for the degree of Doctor of Technology in Electronic and Computer Systems on Wednesday, 4 November 2015, at 10:00 in Sal B, Electrum, Kungl Tekniska högskolan, Kista 164 40, Stockholm. © Ning Ma, September 2015. Printed by Universitetsservice US AB. Abstract: The feature size of transistors keeps shrinking with the development of technology, which enables ubiquitous sensing and computing. However, with the breakdown of Dennard scaling caused by the difficulties of further lowering the supply voltage, the power density increases significantly. The consequence is that, for a given power budget, the energy efficiency of hardware resources must be improved to maximize performance. Application-specific integrated circuits (ASICs) obtain high energy efficiency at the cost of low flexibility for various applications, while general-purpose processors (GPPs) gain generality at the expense of efficiency. To provide both high energy efficiency and flexibility, this dissertation explores the ultra-low-power design of application-specific instruction-set processors (ASIPs) for ubiquitous sensing and computing. Two application scenarios, i.e. high-throughput compute-intensive processing for multimedia and low-throughput low-cost processing for the Internet of Things (IoT), are implemented in the proposed ASIPs.
  • Foundations for the Study of Software Architecture
    Foundations for the Study of Software Architecture. ACM SIGSOFT Software Engineering Notes, vol. 17, no. 4, Oct 1992, page 40. Alexander L. Wolf, Department of Computer Science, University of Colorado, Boulder, Colorado 80309, [email protected]; Dewayne E. Perry, AT&T Bell Laboratories, 600 Mountain Avenue, Murray Hill, New Jersey 07974, [email protected]. © 1989, 1991, 1992 Dewayne E. Perry and Alexander L. Wolf. Abstract: The purpose of this paper is to build the foundation for software architecture. We first develop an intuition for software architecture by appealing to several well-established architectural disciplines. On the basis of this intuition, we present a model of software architecture that consists of three components: elements, form, and rationale. Elements are either processing, data, or connecting elements. Form is defined in terms of the properties of, and the relationships among, the elements, that is, the constraints on the elements. The rationale provides the underlying basis for the architecture in terms of the system constraints, which most often derive …
    … more toward integrating designs and the design process into the broader context of the software process and its management. One result of this integration was that many of the notations and techniques developed for software design have been absorbed by implementation languages. Consider, for example, the concept of supporting "programming-in-the-large". This integration has tended to blur, if not confuse, the distinction between design and implementation. The 1980s also saw great advances in our ability to describe and analyze software systems. We refer here to such things as formal descriptive techniques and sophisticated notions of typing that enable us to reason more effectively about software systems.
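The elements/form/rationale model summarized in this excerpt maps naturally onto a small data structure. The Python sketch below is my own illustrative encoding (all class and field names are invented), not notation taken from the paper.

from dataclasses import dataclass, field
from typing import List

# Minimal encoding of the model above: an architecture is a set of elements
# (processing, data, or connecting), a form (constraints/relationships among
# the elements) and a rationale (why these elements and constraints).

@dataclass
class Element:
    name: str
    kind: str            # "processing", "data", or "connecting"

@dataclass
class Architecture:
    elements: List[Element] = field(default_factory=list)
    form: List[str] = field(default_factory=list)        # constraints on the elements
    rationale: List[str] = field(default_factory=list)   # underlying basis for the choices

arch = Architecture(
    elements=[Element("parser", "processing"),
              Element("symbol table", "data"),
              Element("pipe", "connecting")],
    form=["the parser writes only to the symbol table"],
    rationale=["isolate parsing from later compiler phases"],
)
print(len(arch.elements), arch.form[0])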
  • 1. SYSTEM ARCHITECTURE 1.1 System Hardware Architecture
    1. SYSTEM ARCHITECTURE. 1.1 System Hardware Architecture. This part presents a potential hardware architecture as a series of views, here a deployment view. These views are based on the overall architecture and organization of our hardware system. The system requirements are also mapped to hardware components.
    1.1.1 List of hardware components:
    ● Ultrasonic sensor HC-SR04
    ● Arduino board MCU ESP8266
    ● Monitor
    ● Keyboard/Mouse
    ● Wireless router
    ● Computer CPU
    ● GPS/GPRS module
    1.1.2 Diagram showing the connectivity between the components. We use the deployment view of the architecture to show the connectivity between the components: it describes the physical components (hardware), their distribution (e.g., a web server, a database server) and how the different pieces are connected, so we choose a deployment diagram to represent the deployment view. We choose this approach because it is the simplest model and is suited to a proof of concept, as shown in Figure 1. (Figure 1: Deployment diagram.) The ultrasonic sensor is installed inside the waste container, on its cover. The ultrasonic sensor measures the level of garbage inside the waste container and sends the digital signal to the centralized server over a wireless cellular network, as sketched below. The centralized server controls the whole process from a central location. The system administrator can manage all of it from the desktops at the waste station. Garbage truck drivers can also check the status of each waste container from their devices or a personal computer at the waste station.
    1.2 System Software Architecture. This part presents a potential software architecture as a series of views, here a use-case view. These views are based on the overall architecture and organization of our software system.
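A back-of-the-envelope sketch of the fill-level computation implied above: the HC-SR04 reports an ultrasonic echo round-trip time, which gives the distance from the container cover to the garbage surface. This is an illustrative Python model only; the container depth, speed-of-sound constant and notification threshold are assumed example values, and a real deployment would run firmware on the ESP8266 rather than desktop Python.

# Fill-level estimate from an HC-SR04 echo time (all constants are examples).
SPEED_OF_SOUND_CM_PER_US = 0.0343
CONTAINER_DEPTH_CM = 120.0
FULL_THRESHOLD = 0.8                    # notify the server above 80% fill

def distance_cm(echo_time_us: float) -> float:
    """Convert the round-trip echo time to a one-way distance in cm."""
    return echo_time_us * SPEED_OF_SOUND_CM_PER_US / 2.0

def fill_level(echo_time_us: float) -> float:
    """Fraction of the container that is filled (0.0 empty, 1.0 full)."""
    free_space = min(distance_cm(echo_time_us), CONTAINER_DEPTH_CM)
    return 1.0 - free_space / CONTAINER_DEPTH_CM

if __name__ == "__main__":
    echo = 1200.0                       # example reading: about 20.6 cm of free space
    level = fill_level(echo)
    print(f"fill level: {level:.0%}")
    if level >= FULL_THRESHOLD:
        print("would send a 'container full' message to the centralized server")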
  • Method of Top-Level Design for Automated Test Systems
    Method of Top-level Design for Automated Test Systems. Advances in Engineering Research, volume 127, 3rd International Conference on Electrical, Automation and Mechanical Engineering (EAME 2018). Zhenjie Zeng 1, Xiaofei Zhu 1,*, Shiju Qi 1, Kai Wu 2 and Xiaowei Shen 1. 1 Rocket Force University of Engineering, Xi'an, China; 2 Troops No. 96604, Beijing, China; * Corresponding author. Abstract—When designing an automatic test system, it is necessary to make each electronic test device conform to different test requirements. The most important issue is the top-level design of the system. The article starts with the three steps of the top-level design: system requirements analysis, architecture selection and analysis, and test equipment configuration. It describes in detail how to develop the top-level system design efficiently and reasonably when developing automated test systems. The principles, available methods and techniques, and precautions have some guiding significance for the top-level design of automated …
    The top-level design of automatic test system integration is based on a sufficient requirements analysis and comprehensively considers the optimal matching of technical and economic performance. It comprehensively analyzes aspects such as advancement, practicality, openness, real-time behaviour, universality (compatibility), reliability and maintainability, determines the test system architecture (including the hardware and software platforms), and develops a corresponding test program. As shown in Figure 1, it is usually divided into three steps: requirements analysis, …