Research Collection — Doctoral Thesis

Programmable intellectual property modules for system design by reuse

Author(s): Röwer, Thomas
Publication Date: 2000
Permanent Link: https://doi.org/10.3929/ethz-a-004039264
Rights / License: In Copyright - Non-Commercial Use Permitted

This page was generated automatically upon download from the ETH Zurich Research Collection. For more information please consult the Terms of use.

Diss. ETH No. 13905

Programmable Intellectual Property Modules for System Design by Reuse

A dissertation submitted to the
SWISS FEDERAL INSTITUTE OF TECHNOLOGY ZURICH
for the degree of
Doctor of Technical Sciences

presented by
THOMAS RÖWER
Dipl. Ing.
born August 1st, 1969
citizen of Germany

accepted on the recommendation of
Prof. Dr. W. Fichtner, examiner
Prof. Dr. L. Thiele, co-examiner

2000

Acknowledgments

I would like to thank my supervisor, Prof. Dr. Wolfgang Fichtner, for his overall support and for his faith in me and my work. I would also like to thank Prof. Dr. Lothar Thiele for reading and commenting on my thesis.

Special thanks to Hubert Kaeslin and Norbert Felber for their encouragement and support during my research work as well as for their proofreading of and commitment to my thesis.

Furthermore, I want to thank the secretaries for taking care of the administration and Christoph Wicki for excellent supervision of hard- and software. Hanspeter "Die Granate" Mathys was a great help in taking the chip photos included in this thesis. The support of Mathias Brändli and Robert Reutemann on the chip design process was of great value.

I acknowledge the financial support of KTI, a commission for technology and innovation supported by the Swiss government, and of Siemens Schweiz AG.

Several students worked with me during my thesis and provided very helpful discussions on different topics of this thesis. Peter Lüthi did a great job in implementing the register-based processor.
Very special thanks go to Markus Thalmann and Manfred Stadler, my co-workers on the ITRASYS project. Many ideas that are presented in my thesis originate from discussions with the two.

I also want to express my gratitude to all my colleagues at the Integrated Systems Laboratory who contributed to the good working environment; especially Michael Oberle made the IIS a fun place to work.

I owe very special gratefulness to my parents, who made this all possible in the first place.

I want to particularly thank my family: my wife, Susanne, for her unlimited patience and understanding, and our son, Alexander, for sleeping quite a lot at night.

Contents

Acknowledgments iii
Abstract ix
Zusammenfassung xi
1 Introduction 1
  1.1 Goals of this work 3
  1.2 Structure of the thesis 3
2 Hardware/software co-design 5
  2.1 Principles 5
  2.2 System-level hardware/software co-design 6
  2.3 An example implementation 8
    2.3.1 Description of the system 10
    2.3.2 Hardware/software partitioning 11
    2.3.3 Task scheduling and interrupt handling 12
    2.3.4 Application-specific instruction set processor 13
    2.3.5 Area efficiency 15
  2.4 Discussion and outlook 19
3 System design by reuse 23
  3.1 History of IC design 23
  3.2 System design and IP reuse 25
    3.2.1 System design flow 27
    3.2.2 IP module design 29
      Hard IP modules 29
      Soft IP modules 30
      IP design flow 31
    3.2.3 Functional verification 33
    3.2.4 Test 34
4 The case for programmable intellectual property modules 37
  4.1 Present situation 38
  4.2 Concept 42
  4.3 System design advantages 43
    4.3.1 Flexible hard-IPs 45
    4.3.2 Adaptable interfaces 45
    4.3.3 Block test 47
  4.4 Evaluation 48
5 The embedded processor 51
  5.1 Requirements 52
  5.2 The stack processor 53
    5.2.1 Architecture 53
    5.2.2 Qualitative adaptation of the processor core 56
    5.2.3 Numerical parametrization of the processor core 57
    5.2.4 System interfaces 60
    5.2.5 Implementation 60
  5.3 The register-based processor 61
    5.3.1 Architecture 62
    5.3.2 Qualitative adaptation of the processor core 66
    5.3.3 Numerical parametrization of the processor core 66
    5.3.4 System interfaces 67
    5.3.5 Implementation 69
  5.4 Comparison 70
    5.4.1 Compiler support 70
    5.4.2 Interrupt capabilities 71
    5.4.3 Parametrization 71
    5.4.4 Area efficiency 71
6 Functional verification of PIP modules 77
  6.1 Behavioral model 78
  6.2 Expected responses generation 82
  6.3 Functional verification of the customized RTL model 85
  6.4 Test bench and real-time constraints 85
7 A PIP module design example 87
  7.1 Architecture 87
  7.2 Adaptable interfaces 91
  7.3 Built-in self-test 95
  7.4 Parametrization of the PIP module 97
  7.5 Implementation results 100
8 Conclusions 105
A Parameter set of the PIP module 107
B Parameter set of the register-based processor IP module 113
Bibliography 117
Curriculum Vitae 131

Abstract

The design of integrated circuits (ICs) is currently undergoing a paradigm shift from "application-specific" to "reusable". Increasing system complexity asks for a new approach to IC design, because starting a design from scratch in each and every new project leads to severe violations of time-to-market requirements. Reusing large portions from previous projects for a new system may overcome the imposed efficiency problems.

Hardware/software co-design is one of the most prominent methodologies used to improve design efficiency and quality. However, system design by reuse is problematic in combination with system-level hardware/software co-design. The tight coupling between a system processor and several reusable hardware blocks does not fit well into a system design flow based on several independent intellectual property (IP) blocks.
This thesis proposes the concept of programmable intellectual property (PIP) modules to solve this problem. The key innovation is to include a processor in every major IP module. PIP modules offer the possibility to integrate highly reusable IP modules that have superior properties, because the embedded processor can be used for several system design tasks. Some of these are flexibility in the implementation of standards and protocols, system debugging after silicon production, or the ability to postpone decisions due to the high configurability. PIP modules fit perfectly into a system design flow based on system-level functional partitioning and IP-level hardware/software partitioning.

The most important building block of PIP modules is obviously the embedded processor. A detailed investigation of processor concepts showed that a register-based architecture using stack registers for fast interrupt switching is best suited for PIP modules.

A convenient functional verification flow is crucial for successful IP modules. A simulation-based flow suitable for highly parametrized PIP modules has been established. It uses a configuration-independent behavioral model to check the correctness of the synthesizable RTL model.

An experimental implementation of an STM-1/STS3 block demonstrates additional advantages of the PIP concept, like flexible hard-IPs, software-driven BIST, and adaptable interface protocols.

Zusammenfassung

The design of integrated circuits (ICs) is currently undergoing a profound change from "application-specific" to "reusable". The steadily growing complexity of integrated systems calls for a new approach to IC design. The ever shorter product cycles demanded by the market can no longer be met if every new project has to start its design again from scratch.
If large parts of previous projects can be carried over into a new IC, many of these efficiency problems can be solved.

The most frequently applied method for increasing efficiency and quality in the design of integrated circuits is the concurrent development of circuit parts and program code, known as hardware/software co-design. It is, however, very problematic to combine this method with a system design process based on the reuse of large parts of previous circuits. The very tight coupling between the system processor and several reusable circuit blocks does not fit well into a design process based on the use of IP modules (IP: intellectual property).

To solve this problem, this doctoral thesis proposes the principle of programmable, reusable building blocks (PIP modules). The most important innovation of this concept is to integrate a processor into every major IP module. PIP modules have outstanding properties, enabled by the integration of the processor, and offer many advantages in system design. For example, one achieves high flexibility in the implementation of protocols and standards, possible errors can still be corrected after production of the integrated circuit, and decisions can be deferred until late in the design process. PIP modules fit perfectly into a design process based on the partitioning of functionality at the system level and the partitioning into hardware and software at the module level.

The most important building block of PIP modules is obviously the integrated processor. A detailed investigation of different processor concepts showed that a register-based architecture, extended with shadow registers for fast context switching, is best suited.

Functional verification is extremely important for the success of IP modules.
For highly parametrizable PIP modules, a simulation-based approach was established in the course of this work. It uses a configuration-independent behavioral model to check the correctness of the RTL model.

The implementation of an STM-1/STS3 block brought further important advantages of PIP modules to light. The new concept enables the realization of flexible hard-IP modules, a BIST realized in software, and adaptable interface protocols.
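The fast interrupt switching mentioned in the abstracts (the English abstract speaks of stack registers, the German one of shadow registers / "Schattenregister") can be illustrated with a minimal sketch. This is not the thesis's processor core; all names, register counts, and bank counts are invented for the example, and the mechanism shown is the generic shadow-register-bank idea: an interrupt selects a spare register bank instead of saving registers to memory one by one.

```python
# Illustrative sketch of fast interrupt switching via shadow register banks.
# All names and sizes are assumptions, not taken from the thesis.
class RegisterFile:
    def __init__(self, n_regs=16, n_banks=2):
        # One active bank plus shadow banks; switching banks replaces the
        # usual save/restore of registers to memory on interrupt entry.
        self.banks = [[0] * n_regs for _ in range(n_banks)]
        self.active = 0

    def read(self, i):
        return self.banks[self.active][i]

    def write(self, i, v):
        self.banks[self.active][i] = v

    def enter_interrupt(self):
        # Effectively a single-cycle context switch: select the next bank.
        self.active = (self.active + 1) % len(self.banks)

    def exit_interrupt(self):
        self.active = (self.active - 1) % len(self.banks)

rf = RegisterFile()
rf.write(0, 42)          # main-context value
rf.enter_interrupt()
rf.write(0, 7)           # the handler uses its own bank; nothing is saved
rf.exit_interrupt()
assert rf.read(0) == 42  # the main context survives the interrupt untouched
```

The design trade-off this models is the one the thesis weighs in Chapter 5: extra register-file area in exchange for interrupt latency that no longer depends on the number of registers.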
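The simulation-based verification flow summarized in the abstracts — a configuration-independent behavioral model used as a golden reference for a highly parametrized implementation — can be sketched as follows. This is only an analogy in Python under invented names (the thesis's actual models are behavioral and synthesizable RTL hardware models): a parameter-free golden FIFO checks a depth- and width-parametrized implementation across several configurations.

```python
# Sketch of checking a parametrized implementation model against a
# configuration-independent behavioral (golden) model. All names are
# illustrative assumptions.
import random

class BehavioralFifo:
    """Golden model: an unbounded list, independent of any parameters."""
    def __init__(self):
        self.items = []
    def push(self, v):
        self.items.append(v)
    def pop(self):
        return self.items.pop(0)

class ParametrizedFifo:
    """'Implementation' model: a circular buffer whose depth and data
    width are configuration parameters, as in a parametrized module."""
    def __init__(self, depth, width):
        self.buf = [0] * depth
        self.depth, self.mask = depth, (1 << width) - 1
        self.rd = self.wr = self.count = 0
    def push(self, v):
        assert self.count < self.depth, "overflow"
        self.buf[self.wr] = v & self.mask
        self.wr = (self.wr + 1) % self.depth
        self.count += 1
    def pop(self):
        assert self.count > 0, "underflow"
        v = self.buf[self.rd]
        self.rd = (self.rd + 1) % self.depth
        self.count -= 1
        return v

def verify(depth, width, n_ops=1000, seed=0):
    """Drive both models with identical stimuli and compare responses."""
    rng = random.Random(seed)
    golden, dut = BehavioralFifo(), ParametrizedFifo(depth, width)
    level = 0
    for _ in range(n_ops):
        if level < depth and (level == 0 or rng.random() < 0.5):
            v = rng.randrange(1 << width)
            golden.push(v); dut.push(v)
            level += 1
        else:
            assert golden.pop() == dut.pop(), "response mismatch"
            level -= 1
    return True

# One behavioral model checks every configuration of the implementation.
for depth, width in [(4, 8), (16, 12), (64, 16)]:
    assert verify(depth, width)
```

The point mirrored here is the one the flow relies on: because the golden model takes no configuration parameters, a single reference suffices no matter how the implementation is parametrized.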