Functional Equivalence Verification Tools in High-Level Synthesis Flows


Anmol Mathur, Calypto Design Systems
Edmund Clarke, Carnegie Mellon University
Masahiro Fujita, University of Tokyo
Pascal Urard, STMicroelectronics

Editor's note:
High-level synthesis facilitates the use of formal verification methodologies that check the equivalence of the generated RTL model against the original source specification. The article provides an overview of sequential equivalence checking techniques, its challenges, and successes in real-world designs.
--Andres Takach, Mentor Graphics

MOST MODERN DESIGNS start with an algorithmic description of the target design that is used to identify high-level system, or hardware and software, performance trade-offs. The designers' goal is to transform these algorithmic descriptions into ones that are hardware-friendly, such that the final transformed descriptions are accepted by high-level synthesis tools to generate high-quality RTL designs [1]. These design refinements consist of many transformation steps, including changes of data types (for example, floating point to fixed point, refinements of bit widths), removal of certain types of pointer manipulations, and partitioning of memories. Design models at levels higher than RTL, such as algorithmic and high-level models, are called system-level models (SLMs). Commonly, C/C++ or SystemC is used to represent these SLMs. Once a high-level synthesizable description is obtained in this manner, it can automatically be transformed into an RTL description through appropriate high-level synthesis tools.
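As a sketch of the kind of data-type refinement described above (an illustration only; the function names, the Q1.15 format, and the bit widths are assumptions, not taken from the article), the following C++ fragment shows a floating-point multiply-accumulate from an SLM alongside a hardware-friendly fixed-point refinement built from plain integer arithmetic:

    #include <cassert>
    #include <cmath>
    #include <cstdint>

    // Algorithmic (SLM) version: floating-point multiply-accumulate.
    float mac_float(float acc, float coeff, float sample) {
        return acc + coeff * sample;
    }

    // Hardware-friendly refinement: signed Q1.15 fixed point held in integer
    // types, with a wide accumulator. The Q-format and widths are illustrative.
    using q15_t = std::int16_t;   // 1 sign bit, 15 fractional bits
    using acc_t = std::int32_t;   // wide accumulator for Q2.30 products

    q15_t to_q15(float x)   { return static_cast<q15_t>(std::lround(x * 32768.0f)); }
    float from_q30(acc_t x) { return static_cast<float>(x) / (32768.0f * 32768.0f); }

    acc_t mac_fixed(acc_t acc, q15_t coeff, q15_t sample) {
        // The product of two Q1.15 values is Q2.30; accumulate at full precision.
        return acc + static_cast<acc_t>(coeff) * static_cast<acc_t>(sample);
    }

    int main() {
        float ref   = mac_float(0.0f, 0.5f, 0.25f);
        acc_t fixed = mac_fixed(0, to_q15(0.5f), to_q15(0.25f));
        // The refined model should track the floating-point reference closely;
        // establishing this for all inputs is an equivalence problem.
        assert(std::fabs(ref - from_q30(fixed)) < 1e-3f);
        return 0;
    }

Showing that the refined description still matches the original algorithmic one, for all inputs rather than for a few test values, is precisely the verification problem the rest of this article addresses.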
Figure 1 illustrates a high-level design flow that starts from an algorithmic design description. To enforce the correctness of various descriptions in such a design flow, we must resolve two issues:

- Eliminate bugs insofar as possible from the given description levels. This is necessary at the highest level of abstraction, which cannot be verified via equivalence checking, but it is also necessary when equivalence checking cannot verify all the behaviors of a model by comparing it against a higher-level model.
- Guarantee the equivalence of the two descriptions. We must guarantee the equivalence of a validated higher-level model and of a lower-level model obtained automatically or through manual refinements of the higher-level model.

These two issues complement each other to assure the correctness of the descriptions as a whole.

To eliminate bugs as much as possible from a given design description, simulation is not sufficient, especially for large and complicated designs. Therefore, we also use various formal methods to verify the design descriptions.

Figure 1. High-level design flow: an algorithmic design description is refined, through many steps of manual refinement and design-description optimization, into a high-level synthesizable description and then, by high-level synthesis, into a register-transfer-level description; static/model checking and sequential equivalence checking connect the levels.

Model checking for SLM

Validating the SLM's functional correctness can be done via simulation (a dynamic technique). However, simulation can never be exhaustive, due to the large input and state space of real models, and can miss functional issues in corner cases. Consequently, we also use formal (static) techniques to validate the SLM. These techniques do not require the user to specify test vectors.

Static analysis methods

Static analysis does not execute and follow design behaviors globally; instead, it checks behaviors of the design locally. The most basic static methods are those used in lint-type tools, and there have been significant extensions for more detailed and accurate analysis [2]. Designers generate control and data flow graphs from given design descriptions, and examine dependencies on control and data that result in various dependency graphs. With appropriate sets of rules, we can check various items. For example, we can detect uninitialized variables by traversing control/data dependencies to check if a variable is used before it is assigned a value. Note that this search process is local and does not need to start from the beginning of the description. Whereas model-checking methods basically examine the design descriptions exhaustively, starting from initial or reset states, static checking analyzes design descriptions only in small and local areas. Instead of starting from initial states, static checking first picks up target statements and then examines only small portions of the design description that are very close to those target statements, as shown in the left part of Figure 2.

Because the analysis is local, it can deal with very large design descriptions, although such analysis has less accuracy. This means that static checking methods can produce false errors or warnings. Null pointer references can be checked with backward traversals of control and data dependencies, and relative execution orders among concurrent statements can be checked with backward and forward traversals starting from the target concurrent statements to be checked. Static checking methods, which have been applied to various software verification tasks, have proven highly effective even for very large descriptions having millions of lines of code. For example, Unix kernels and utilities have been automatically analyzed by static methods, which have revealed several design bugs [2]. Also, researchers have been working to combine static and model-checking methods within the same framework for more efficient and accurate analysis [3].

Figure 2. Static checking and model-checking approaches: static checking analyzes only locally around a target statement, whereas model checking starts from the initial states and checks all possible paths under the given constraints.

Model-checking methods

After designers apply static checking methods to the design descriptions, they can use model-checking methods to detect more complicated bugs that can be found only through systematic traversals starting from initial states or user-specified states. Given a property, model-checking methods determine whether all possible execution paths from the initial states satisfy that property. Properties are conditions on possible values of variables among multiple time frames, such as "if a request comes, a response must be returned within two cycles" and "even if an error happens, the system eventually returns to its reset state."

In one approach, the C bounded model checking tool, CBMC [4], verifies that a given ANSI-C program satisfies given properties by converting them into bit-vector equations and solving satisfiability (SAT) using a SAT solver, such as Chaff [5]. All statements in C descriptions are automatically translated into Boolean formulas, and they are analyzed up to given specific cycles.
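As an illustration of what such a bounded analysis means (a sketch under assumptions: the saturating counter, the property, and the bound are invented for this example, not taken from the article), the fragment below states a safety property with an ordinary assertion. A tool in the CBMC style would unroll the loop to the chosen depth, translate the statements into a bit-vector/Boolean formula, and ask a SAT solver whether any input sequence can violate the assertion; the plain C++ version here simply enumerates every input sequence up to the bound to show the same idea executably:

    #include <cassert>

    // A small saturating up/down counter, written in the C-like style that a
    // bounded model checker analyzes after unrolling its loops.
    int step(int count, bool up) {
        if (up) {
            if (count < 15) ++count;   // saturate at 15
        } else {
            if (count > 0) --count;    // saturate at 0
        }
        return count;
    }

    int main() {
        const int kBound = 8;  // analysis depth: "up to given specific cycles"
        // A bounded model checker would treat the inputs as free symbolic values
        // and discharge the assertion with a SAT/bit-vector query; this plain
        // C++ sketch instead enumerates every up/down sequence of length kBound.
        for (unsigned seq = 0; seq < (1u << kBound); ++seq) {
            int count = 0;
            for (int i = 0; i < kBound; ++i) {
                bool up = (seq >> i) & 1u;
                count = step(count, up);
                assert(count >= 0 && count <= 15);  // safety property being checked
            }
        }
        return 0;
    }

If any command sequence within the bound could drive the counter out of range, the assertion (and, in the symbolic setting, the corresponding SAT query) would expose it.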
Although CBMC and similar approaches can generally verify C descriptions, they cannot deal with large descriptions, because the SAT processing time grows exponentially with design size.

Model-checking methods for RTL or gate-level designs are mostly based on Boolean functions. These methods make each variable a single bit and use decision procedures for Boolean formulas, such as SAT procedures. In SLMs, however, there can be many word-level variables that have multiple bit widths, such as integer variables. If we expand all of them into Boolean variables, the number of variables can easily exceed the capacity of SAT solvers. Therefore, when reasoning about high-level descriptions, we commonly use word-level decision procedures. Extended SAT solvers, called satisfiability modulo theories (SMT) solvers, such as CVC [6], can deal with multibit variables as they are. They are similar to theorem provers and consist of several decision procedures.

Although reasoning about the SLM can be much more efficient with SMT solvers, realistic design sizes are still simply too large to be processed. Moreover, the types of formulas that word-level decision procedures can deal with are limited, and sometimes the formulas must be expanded into Boolean ones if they fall outside those types. Coping with these capacity limits requires abstracting the design, and abstraction can report spurious failures. Abstraction must be refined to avoid such false negative cases. A considerable amount of ongoing research concerns automatic abstraction refinement for model checking [7]. State-of-the-art model checkers for SLMs can process tens of thousands of lines of code within several hours.

Sequential equivalence checking

Sequential equivalence checking (SEC) is a formal technique that checks two designs for equivalence even when there is no one-to-one correspondence between the two designs' state elements. In contrast, traditional combinational equivalence checkers need a one-to-one correspondence between the flip-flops and latches in the two designs. SLMs can be untimed C/C++ functions and have very little internal state. RTL models, on the other hand, implement the full microarchitecture, with the computation scheduled over multiple cycles. Accordingly, significant state differences exist between the SLM and the RTL model, and SLM-to-RTL equivalence checking clearly needs SEC. Researchers have investigated SEC techniques [8-12], and commercial SEC tools are now available, such as the one from Calypto Design Systems. SEC can check models for functional equivalence even when they have differences along the following axes:

- Temporal differences
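To make the temporal-differences axis concrete, here is a minimal sketch (the dot-product example, the DotRtlModel micro-architecture, and all names are assumptions for illustration, not from the article): an untimed C++ SLM function computes its result in a single call, while a cycle-based model in the spirit of scheduled RTL needs several ticks and internal registers to produce the same value, so there is no flip-flop-level correspondence for a combinational checker to exploit:

    #include <array>
    #include <cassert>
    #include <cstdint>

    // Untimed SLM: the whole computation happens in one function call.
    std::int32_t dot_slm(const std::array<std::int16_t, 4>& a,
                         const std::array<std::int16_t, 4>& b) {
        std::int32_t acc = 0;
        for (int i = 0; i < 4; ++i) acc += std::int32_t(a[i]) * std::int32_t(b[i]);
        return acc;
    }

    // Cycle-based model in the spirit of scheduled RTL: one multiply-accumulate
    // per tick, internal registers (acc, idx), and a done flag after four cycles.
    struct DotRtlModel {
        std::int32_t acc = 0;
        int idx = 0;
        bool done = false;
        void tick(const std::array<std::int16_t, 4>& a,
                  const std::array<std::int16_t, 4>& b) {
            if (done) return;
            acc += std::int32_t(a[idx]) * std::int32_t(b[idx]);
            if (++idx == 4) done = true;
        }
    };

    int main() {
        std::array<std::int16_t, 4> a{1, -2, 3, 4}, b{5, 6, -7, 8};
        DotRtlModel m;
        for (int cycle = 0; cycle < 4; ++cycle) m.tick(a, b);  // four cycles of work
        // Same input/output behavior, but no state-element mapping and a different
        // notion of time: the situation sequential equivalence checking must handle.
        assert(m.done && m.acc == dot_slm(a, b));
        return 0;
    }

A sequential equivalence checker must prove that, for all inputs, the two models produce the same results despite this difference in timing and internal state.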