Simulation-Based Approach to Efficient Commonsense Reasoning in Very Large Knowledge Bases


The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19)

Simulation-Based Approach to Efficient Commonsense Reasoning in Very Large Knowledge Bases

Abhishek Sharma, Keith M. Goolsbey
Cycorp Inc., 7718 Wood Hollow Drive, Suite 250, Austin, TX 78731
[email protected], [email protected]

Abstract

Cognitive systems must reason with large bodies of general knowledge to perform complex tasks in the real world. However, due to the intractability of reasoning in large, expressive knowledge bases (KBs), many AI systems have limited reasoning capabilities. Successful cognitive systems have used a variety of machine learning and axiom selection methods to improve inference. In this paper, we describe a search heuristic that uses a Monte-Carlo simulation technique to choose inference steps. We test the efficacy of this approach on a very large and expressive KB, Cyc. Experimental results on hundreds of queries show that this method is highly effective in reducing inference time and improving question-answering (Q/A) performance.

Introduction

Deductive reasoning is an important component of many cognitive systems. Modern cognitive systems need large bodies of general knowledge to perform complex tasks (Lenat & Feigenbaum 1991, Forbus et al. 2007). However, efficient reasoning systems can be built only for small-to-medium-sized knowledge bases (KBs). Very large knowledge bases contain millions of rules and facts about the world in highly expressive languages. Due to the intractability of reasoning in such systems, even simple queries are timed out after several minutes. Therefore, researchers believe that resolution-based theorem provers are overwhelmed when they are expected to work on large expressive KBs (Hoder & Voronkov 2011).

The goal query in knowledge-based systems (KBS) is typically provable from a small number of ground atomic formulas (GAFs) and rules. However, unoptimized inference engines can find it difficult to distinguish between a small set of relevant rules and the millions of irrelevant ones. Hundreds of thousands of axioms that are irrelevant for the query can inundate the reasoner with millions of paths. Therefore, to make the search more efficient in such a KBS, an inference engine is expected to assess the utility of further expanding each incomplete path. A naïve ordering algorithm can cause unproductive backtracking. To solve this problem, researchers have used two types of search control knowledge: (i) axiom/premise selection heuristics, which attempt to find the small number of axioms that are the most relevant for answering a set of queries; and (ii) ordering heuristics, which improve the order of rule and node expansions.

In the current work, we describe a simulation-based approach for learning an ordering heuristic for controlling search in large knowledge-based systems (KBS). The key idea is to simulate several thousand paths from a node. New nodes are added to a search tree, and each node contains a value that predicts the expected number of answers from expanding the tree from that node. The search tree expansion guides simulations along promising paths, by selecting the nodes that have the highest potential values. The algorithm produces a highly selective and asymmetric search tree that quickly identifies good axiom sequences for queries. Moreover, the evaluation function is not hand-crafted: it depends solely on the outcomes of the simulated paths. This approach has the characteristics of a statistical anytime algorithm: the quality of the evaluation function improves with additional simulations. The evaluation function is used to order nodes in a search. Experimental results show that: (i) this approach helps in significantly reducing inference time, and (ii) by guiding the search towards more promising parts of the tree, it improves question-answering performance in time-constrained cognitive systems.

This paper is organized as follows: We start by discussing relevant previous work. Our approach to using simulations to learn a search heuristic is explained next. We conclude after analyzing the experimental results.

Copyright © 2019, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Related Work

Learning of search control knowledge plays an important role in the optimization of KBS for at least two reasons: First, the inference algorithms of KBS (e.g., backward chaining, tableaux algorithms in description logic (DL)) typically represent their search space as a graph. Hundreds of rules can apply to each node in a large KBS, and researchers have shown that the specific order of node and rule expansion can have a significant effect on efficiency (Tsarkov & Horrocks 2005). Second, first-order logic (FOL) theorem provers have been used as tools for reasoning with very expressive languages (e.g., OWL DL, the Semantic Web Rule Language (SWRL)), where the language does not correspond to any decidable fragment of FOL, or where reasoning with the language is beyond the scope of existing DL algorithms (Tsarkov et al. 2004, Horrocks & Voronkov 2006). Researchers have also examined the use of machine learning techniques to identify the best heuristics for problems (Bridge et al. 2014). There has also been work on premise selection algorithms (Hoder & Voronkov 2011, Sharma & Forbus 2013, Meng & Paulson 2009, Kaliszyk et al. 2015, Kaliszyk & Urban 2015, Alama et al. 2014). In contrast, we focus on designing ordering heuristics that will enable the system to work with all axioms. In (Tsarkov & Horrocks 2005), the authors proposed certain rule-ordering and expansion-ordering heuristics (e.g., a descending order of frequency of usage of symbols). In contrast, we believe that rule-ordering heuristics should be based on the search state. In other recent work (Sharma, Witbrock & Goolsbey 2016, Sharma & Goolsbey 2017), researchers have used different machine learning techniques to improve the efficiency of theorem provers. Our approach is different from all aforementioned research because none has shown how a Monte Carlo tree search/UCT-based approach can be used to improve reasoning in a very large and expressive KBS. The work in other fields (Chaudhuri 1998, Hutter et al. 2014, Brewka et al. 2011) is also less relevant because it does not address the complexity of reasoning needed for large and expressive KBS.

Background

We assume familiarity with the Cyc representation language (Lenat & Guha 1990). In Cyc, concepts are represented as collections. For example, "Cat" is the set of cats and only cats. Concept hierarchies are represented by the "genls" relation. For example, (genls Telephone ArtifactCommunication) holds. The "isa" relation is used to link things to any kind of collections of which they are instances. The set of the most specific collections of which an entity e is an instance is referred to as MostSpecificCollections(e).

Reasoning with the Cyc representation language is difficult due to the size of the KB and the expressiveness of the language. The Cyc representation language uses full first-order logic with higher-order extensions. Some examples of highly expressive features of its language include: (a) Cyc has more than 2000 rules with quantification over predicates; (b) it has 267 relations with variable arities; and (c) although first-order logic with three variables is undecidable (Tsarkov et al. 2004), Cyc has several thousand rules with more than three variables. The numbers of rules in Cyc with three, four, and five variables are 48,160, 23,813, and 14,014 respectively. Moreover, 29,716 rules have more than five variables. In its default inference mode, the Cyc inference engine uses the following types of rules/facts during inference: (i) 28,429 role inclusion axioms; (ii) 3,623 inverse role axioms; (iii) 494,405 concepts and 1.1 million concept inclusion axioms; (iv) 814 transitive roles; (v) 120,547 complex role inclusion axioms; (vi) 77,170 other axioms; and (vii) 35,528 binary roles and 10,508 roles with arities greater than two. The KB has 27.3 million assertions and 1.14 million individuals.

To search efficiently in such a large KBS, inference engines often use control strategies. They define: (i) the set of support, i.e., the set of important facts about the problem; and (ii) the set of usable axioms, i.e., the set of all axioms outside the set of support. At every inference step, the inference engine has to select an element from the set of usable axioms and resolve it with an element of the set of support. To perform efficient search, a heuristic control strategy measures the "weight" of each clause in the set of support, picks the "best" clause, and adds to the set of support the immediate consequences of resolving it with the elements of the usable list (Russell & Norvig 2003). Cyc uses a set of heuristic modules to identify the best clause from the set of support. If S is the set of all states, and A is the set of actions (i.e., inference steps), then a heuristic module is a tuple h_i = (w_i, f_i), where f_i is a function f_i : S × A → R, which assesses the quality of an inference step, and w_i is the weight of h_i. The net score of an inference step is ∑_i w_i·f_i(s, a), and the inference step with the highest score is selected. Cyc uses several heuristics, including the success rates of rules, decision trees (Sharma, Witbrock & Goolsbey 2016), regression-based models (Sharma, Witbrock & Goolsbey 2016), and a large database of useful rule sequences (Sharma & Goolsbey 2017). We can define a policy that uses these heuristics:

    Π_BASELINE(s) = argmax_a ∑_i w_i·f_i(s, a)    …(1)

In (1), we use all heuristics mentioned above to calculate
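Equation (1) defines clause selection as a weighted vote over heuristic modules: each module scores a candidate inference step, and the step with the highest weighted sum is expanded next. The sketch below illustrates that selection rule only; the two heuristic functions, their weights, and the step attributes are invented placeholders, not Cyc's actual modules (which include rule success rates, decision trees, and regression models).

```python
# Sketch of the baseline policy in Eq. (1): pick the inference step a
# maximizing sum_i w_i * f_i(s, a). The modules below are hypothetical
# stand-ins for illustration, not Cyc's real heuristic modules.

def make_heuristic_modules():
    # Each module h_i is a (weight w_i, scoring function f_i) pair,
    # with f_i : (state, action) -> float.
    return [
        (0.6, lambda s, a: a["rule_success_rate"]),    # e.g., success rate of a rule
        (0.4, lambda s, a: -a["num_open_variables"]),  # e.g., prefer more-bound steps
    ]

def baseline_policy(state, candidate_steps, modules):
    """Return the inference step maximizing the net score sum_i w_i * f_i(s, a)."""
    def net_score(action):
        return sum(w * f(state, action) for w, f in modules)
    return max(candidate_steps, key=net_score)

if __name__ == "__main__":
    steps = [
        {"rule": "R1", "rule_success_rate": 0.9, "num_open_variables": 3},
        {"rule": "R2", "rule_success_rate": 0.5, "num_open_variables": 0},
    ]
    best = baseline_policy({}, steps, make_heuristic_modules())
    print(best["rule"])
```

Note that the policy is purely greedy in the module scores; the paper's contribution is to replace this hand-weighted scoring with a node value learned from simulated proof paths.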
Recommended publications
  • Commonsense Reasoning for Natural Language Processing
Introductory Tutorial: Commonsense Reasoning for Natural Language Processing

Maarten Sap 1, Vered Shwartz 1,2, Antoine Bosselut 1,2, Yejin Choi 1,2, Dan Roth 3
1 Paul G. Allen School of Computer Science & Engineering, Seattle, WA, USA
2 Allen Institute for Artificial Intelligence, Seattle, WA, USA
3 Department of Computer and Information Science, University of Pennsylvania
{msap, vereds, antoineb, yejin}@cs.washington.edu, [email protected]

1 Introduction

Commonsense knowledge, such as knowing that "bumping into people annoys them" or "rain makes the road slippery", helps humans navigate everyday situations seamlessly (Apperly, 2010). Yet, endowing machines with such human-like commonsense reasoning capabilities has remained an elusive goal of artificial intelligence research for decades (Gunning, 2018). Commonsense knowledge and reasoning have received renewed attention from the natural language processing (NLP) community in recent years, yielding multiple exploratory research directions into automated commonsense understanding.

… settings, such as social interactions (Sap et al., 2019b; Rashkin et al., 2018a) and physical situations (Zellers et al., 2019; Talmor et al., 2019). We hope that in the future, machines develop the kind of intelligence required to, for example, properly assist humans in everyday situations (e.g., a chatbot that anticipates the needs of an elderly person; Pollack, 2005). Current methods, however, are still not powerful or robust enough to be deployed in open-domain production settings, despite the clear improvements provided by large-scale pretrained language models. This shortcoming is partially due to inadequacy in acquiring, understanding and reasoning about commonsense knowledge, topics which remain understudied by
  • Why Machines Don't
KI - Künstliche Intelligenz, https://doi.org/10.1007/s13218-019-00599-w, SURVEY

Why Machines Don't (yet) Reason Like People

Sangeet Khemlani 1 · P. N. Johnson-Laird 2,3
Received: 15 February 2019 / Accepted: 19 June 2019
© This is a U.S. government work and not under copyright protection in the U.S.; foreign copyright protection may apply 2019

Abstract: AI has never come to grips with how human beings reason in daily life. Many automated theorem-proving technologies exist, but they cannot serve as a foundation for automated reasoning systems. In this paper, we trace their limitations back to two historical developments in AI: the motivation to establish automated theorem-provers for systems of mathematical logic, and the formulation of nonmonotonic systems of reasoning. We then describe why human reasoning cannot be simulated by current machine reasoning or deep learning methodologies. People can generate inferences on their own instead of just evaluating them. They use strategies and fallible shortcuts when they reason. The discovery of an inconsistency does not result in an explosion of inferences—instead, it often prompts reasoners to abandon a premise. And the connectives they use in natural language have different meanings than those in classical logic. Only recently have cognitive scientists begun to implement automated reasoning systems that reflect these human patterns of reasoning. A key constraint of these recent implementations is that they compute, not proofs or truth values, but possibilities.

Keywords: Reasoning · Mental models · Cognitive models

1 Introduction

The great commercial success of deep learning has pushed … problems for mathematicians). Their other uses include the verification of computer programs and the design of computer chips.
  • A Politico-Social History of Algol (With a Chronology in the Form of a Log Book)
A Politico-Social History of Algol (With a Chronology in the Form of a Log Book)

R. W. Bemer

Introduction

This is an admittedly fragmentary chronicle of events in the development of the algorithmic language ALGOL. Nevertheless, it seems pertinent, while we await the advent of a technical and conceptual history, to outline the matrix of forces which shaped that history in a political and social sense. Perhaps the author's role is only that of recorder of visible events, rather than the complex interplay of ideas which have made ALGOL the force it is in the computational world. It is true, as Professor Ershov stated in his review of a draft of the present work, that "the reading of this history, rich in curious details, nevertheless does not enable the beginner to understand why ALGOL, with a history that would seem more disappointing than triumphant, changed the face of current programming". I can only state that the time scale and my own lesser competence do not allow the tracing of conceptual development in requisite detail. Books are sure to follow in this area, particularly one by Knuth. A further defect in the present work is the relatively lesser availability of European input to the log, although I could claim better access than many in the U.S.A. This is regrettable in view of the relatively stronger support given to ALGOL in Europe. Perhaps this calmer acceptance had the effect of reducing the number of significant entries for a log such as this. Following a brief view of the pattern of events come the entries of the chronology, or log, numbered for reference in the text.
  • Security Applications of Formal Language Theory
    Dartmouth College Dartmouth Digital Commons Computer Science Technical Reports Computer Science 11-25-2011 Security Applications of Formal Language Theory Len Sassaman Dartmouth College Meredith L. Patterson Dartmouth College Sergey Bratus Dartmouth College Michael E. Locasto Dartmouth College Anna Shubina Dartmouth College Follow this and additional works at: https://digitalcommons.dartmouth.edu/cs_tr Part of the Computer Sciences Commons Dartmouth Digital Commons Citation Sassaman, Len; Patterson, Meredith L.; Bratus, Sergey; Locasto, Michael E.; and Shubina, Anna, "Security Applications of Formal Language Theory" (2011). Computer Science Technical Report TR2011-709. https://digitalcommons.dartmouth.edu/cs_tr/335 This Technical Report is brought to you for free and open access by the Computer Science at Dartmouth Digital Commons. It has been accepted for inclusion in Computer Science Technical Reports by an authorized administrator of Dartmouth Digital Commons. For more information, please contact [email protected]. Security Applications of Formal Language Theory Dartmouth Computer Science Technical Report TR2011-709 Len Sassaman, Meredith L. Patterson, Sergey Bratus, Michael E. Locasto, Anna Shubina November 25, 2011 Abstract We present an approach to improving the security of complex, composed systems based on formal language theory, and show how this approach leads to advances in input validation, security modeling, attack surface reduction, and ultimately, software design and programming methodology. We cite examples based on real-world security flaws in common protocols representing different classes of protocol complexity. We also introduce a formalization of an exploit development technique, the parse tree differential attack, made possible by our conception of the role of formal grammars in security. 
These insights make possible future advances in software auditing techniques applicable to static and dynamic binary analysis, fuzzing, and general reverse-engineering and exploit development.
  • Review Articles
review articles DOI:10.1145/2701413

AI has seen great advances of many kinds recently, but there is one critical area where progress has been extremely slow: ordinary commonsense.

BY ERNEST DAVIS AND GARY MARCUS

Commonsense Reasoning and Commonsense Knowledge in Artificial Intelligence

WHO IS TALLER, Prince William or his baby son Prince George? Can you make a salad out of a polyester shirt? If you stick a pin into a carrot, does it make a hole in the carrot or in the pin? These types of questions may seem silly, but many intelligent tasks, such as understanding texts, computer vision, planning, and scientific reasoning require the same kinds of real-world knowledge and reasoning abilities. For instance, if you see a six-foot-tall person holding a two-foot-tall person in his arms, and you are told they are father and son, you do not have to ask which is which. If you need to make a salad for dinner and are out of lettuce, you do not waste time considering improvising by taking a shirt out of the closet and cutting it up.

key insights
  • To achieve human-level performance in domains such as natural language processing, vision, and robotics, basic knowledge of the commonsense world—time, space, physical interactions, people, and so on—will be necessary.
  • Although a few forms of commonsense reasoning, such as taxonomic reasoning and temporal reasoning, are well understood, progress has been slow.
  • Extant techniques for implementing commonsense include logical analysis, handcrafting large knowledge bases, Web mining, and crowdsourcing. Each of these is valuable, but none by itself is a full solution.
  • Intelligent machines need not replicate human cognition directly, but a better understanding of human commonsense might be a good place to start.
  • An Overview of Automated Reasoning
An Overview of Automated Reasoning
John Harrison, Intel Corporation
3rd February 2014 (10:30–11:30)

Talk overview: What is automated reasoning? · Early history and taxonomy · Automation, its scope and limits · Interactive theorem proving

What is automated reasoning? Attempting to perform logical reasoning in an automatic and algorithmic way. An old dream!

Hobbes (1651): "Reason ... is nothing but reckoning (that is, adding and subtracting) of the consequences of general names agreed upon, for the marking and signifying of our thoughts."

Leibniz (1685): "When there are disputes among persons, we can simply say: Let us calculate [calculemus], without further ado, to see who is right."

Nowadays, by 'automatic and algorithmic' we mean 'using a computer program'.
  • Downloading Programs
Futures of Artificial Intelligence through Technology Readiness Levels

Fernando Martínez-Plumed 1,2, Emilia Gómez 1, José Hernández-Orallo 2

Abstract: Artificial Intelligence (AI) offers the potential to transform our lives in radical ways. However, the main unanswered questions about this foreseen transformation are its depth, breadth and timelines. To answer them, not only do we lack the tools to determine what achievements will be attained in the near future, but we even ignore what various technologies in present-day AI are capable of. Many so-called breakthroughs in AI are associated with highly-cited research papers or good performance in some particular benchmarks. However, research breakthroughs do not directly translate into a technology that is ready to use in real-world environments. In this paper, we present a novel exemplar-based methodology to categorise and assess several AI technologies, by mapping them onto Technology Readiness Levels (TRL) (representing their depth in maturity and availability). We first interpret the nine TRLs in the context of AI, and identify several categories in AI to which they can be assigned. We then introduce a generality dimension, which represents increasing layers of breadth of the technology. These two dimensions lead to the new readiness-vs-generality charts, which show that higher TRLs are achievable for low-generality technologies, focusing on narrow or specific abilities, while high TRLs are still out of reach for more general capabilities. We include numerous examples of AI technologies in a variety of fields, and show their readiness-vs-generality charts, serving as exemplars. Finally, we show how the timelines of several AI technology exemplars at different generality layers can help forecast some short-term and mid-term trends for AI.
  • Commonsense Knowledge in Wikidata
Commonsense Knowledge in Wikidata

Filip Ilievski 1, Pedro Szekely 1, and Daniel Schwabe 2
1 Information Sciences Institute, University of Southern California
2 Dept. of Informatics, Pontificia Universidade Católica, Rio de Janeiro
filievski,[email protected], [email protected]

Abstract: Wikidata and Wikipedia have been proven useful for reasoning in natural language applications, like question answering or entity linking. Yet, no existing work has studied the potential of Wikidata for commonsense reasoning. This paper investigates whether Wikidata contains commonsense knowledge which is complementary to existing commonsense sources. Starting from a definition of common sense, we devise three guiding principles, and apply them to generate a commonsense subgraph of Wikidata (Wikidata-CS). Within our approach, we map the relations of Wikidata to ConceptNet, which we also leverage to integrate Wikidata-CS into an existing consolidated commonsense graph. Our experiments reveal that: 1) albeit Wikidata-CS represents a small portion of Wikidata, it is an indicator that Wikidata contains relevant commonsense knowledge, which can be mapped to 15 ConceptNet relations; 2) the overlap between Wikidata-CS and other commonsense sources is low, motivating the value of knowledge integration; 3) Wikidata-CS has been evolving over time at a slightly slower rate compared to the overall Wikidata, indicating a possible lack of focus on commonsense knowledge. Based on these findings, we propose three recommended actions to improve the coverage and quality of Wikidata-CS further.

Keywords: Commonsense Knowledge · Wikidata · Knowledge Graphs

1 Introduction

Common sense is "the basic ability to perceive, understand, and judge things that are shared by nearly all people and can be reasonably expected of nearly all people without need for debate" [10].
  • Natural Language Understanding with Commonsense Reasoning
E.T.S. DE INGENIEROS INFORMÁTICOS, UNIVERSIDAD POLITÉCNICA DE MADRID
MASTER THESIS, MSc IN ARTIFICIAL INTELLIGENCE (MUIA)

NATURAL LANGUAGE UNDERSTANDING WITH COMMONSENSE REASONING: APPLICATION TO THE WINOGRAD SCHEMA CHALLENGE

AUTHOR: ALFONSO LÓPEZ TORRES
SUPERVISOR: MARTÍN MOLINA GONZÁLEZ
JUNE, 2016

This is for my children Carla and Alonso, and my wife Véronique. Thanks for their unconditional support and patience (also for the coming adventures…)

Acknowledgments: I'd like to thank Martín for his advice and help. I was very lucky to be your student.

ABSTRACT

In 1950, Alan Turing proposed a test to evaluate the degree of human intelligence that a machine could exhibit. The main idea was really simple: carry out an open conversation between an evaluator and the machine. If the evaluator was unable to discern whether the examinee was a person or a machine, the test could be considered passed. Since then, over the last 60 years, numerous proposals have been presented that have exposed certain weaknesses of the test. Perhaps the most important is its focus on human intelligence, leaving aside other types of intelligence. The test largely forces the machine to adopt an anthropomorphic, imitative behavior with the sole aim of passing the test. To overcome these and other weak points, Hector Levesque proposed in 2011 a new challenge, "The Winograd Schema Challenge": a simple question-and-answer test about a sentence describing an everyday situation.
  • Evaluating Commonsense Reasoning in Neural Machine Translation
The Box is in the Pen: Evaluating Commonsense Reasoning in Neural Machine Translation

Jie He 1∗, Tao Wang 2∗, Deyi Xiong 1, and Qun Liu 3
1 College of Intelligence and Computing, Tianjin University, Tianjin, China
2 School of Computer Science and Technology, Soochow University, Suzhou, China
3 Huawei Noah's Ark Lab, Hong Kong, China
[email protected], [email protected], [email protected], [email protected]

Abstract: Does neural machine translation yield translations that are congenial with common sense? In this paper, we present a test suite to evaluate the commonsense reasoning capability of neural machine translation. The test suite consists of three test sets, covering lexical and contextless/contextual syntactic ambiguity that requires commonsense knowledge to resolve. We manually create 1,200 triples, each of which contains a source sentence and two contrastive translations, involving 7 different common sense types. Language models pretrained on large-scale corpora, such as BERT, GPT-2, achieve a commonsense reasoning accuracy of lower than 72% on target translations of this test suite.

… evidence for the need of commonsense knowledge in machine translation is "The box is in the pen", where machine translation is expected to perform reasoning on the relative sizes of "box" and "pen". Bar-Hillel also doubts that a machine, even equipped with extra-linguistic knowledge, would be able to reason with such knowledge spontaneously as human translators do (Bar-Hillel, 1960a; Macklovitch, 1995). Modern natural language processing (NLP) has made tremendous progress, not only in building abundant resources to develop linguistic insights, but also in plenty of methodological practices. On the one hand, machine translation has been substantially advanced with large-scale parallel data
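The contrastive protocol this abstract describes (a model scores a correct and a contrastive translation for each source sentence; accuracy is the fraction of triples where the correct one scores higher) can be sketched as follows. The toy scoring function is an invented placeholder for illustration, not an actual NMT model or BERT/GPT-2.

```python
# Minimal sketch of contrastive-pair evaluation over (source, correct,
# contrastive) triples. A real evaluation would replace toy_score with
# a translation or language model's log-probability; the word-overlap
# scorer here is a hypothetical stand-in.

def toy_score(source, candidate):
    # Placeholder "model": rewards word overlap between source and candidate.
    src_words = set(source.lower().split())
    cand_words = set(candidate.lower().split())
    return len(src_words & cand_words)

def contrastive_accuracy(triples, score=toy_score):
    """Fraction of triples where the correct translation outscores the contrastive one."""
    correct = sum(
        1 for src, good, bad in triples
        if score(src, good) > score(src, bad)
    )
    return correct / len(triples)

if __name__ == "__main__":
    triples = [
        ("the box is in the pen",
         "the box is in the pen",          # correct reading preserved
         "the box is in the ballpoint"),   # commonsense-violating reading
    ]
    print(contrastive_accuracy(triples))
```

The design keeps the metric model-agnostic: any scorer that maps (source, candidate) to a number can be plugged in via the `score` parameter.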
  • Data General Extended Algol 60 Compiler
DATA GENERAL EXTENDED ALGOL 60 COMPILER

Data General's Extended ALGOL is a powerful language which allows systems programmers to develop programs on minicomputers that would otherwise require the use of much larger, more expensive computers. No other minicomputer offers a language with the programming features and general applicability of Data General's Extended ALGOL.

ALGOL 60 is the most widely used language for describing programming algorithms. It has a flexible, generalized arithmetic organization and a modular, "building block" structure that provides clear, easily readable documentation. The language is powerful and concise, allowing the systems programmer to state algorithms without resorting to "tricks" to bypass the language. These characteristics of ALGOL are especially important in the development of working prototype systems. The clear documentation makes it easy for the programmer to …

… sequential I/O with optional formatting. These extensions complement the basic structure of ALGOL and significantly increase the convenience of ALGOL programming without making the language unwieldy.

FEATURES OF DATA GENERAL'S EXTENDED ALGOL: Character strings are implemented as an extended data type to allow easy manipulation of character data. The program may, for example, read in character strings, search for substrings, replace characters, and maintain character string tables efficiently. Multi-precision arithmetic allows up to 60 decimal digits of precision in integer or floating point calculations. Device-independent I/O provides for reading and writing in line mode, sequential mode, or random mode. Free form reading and writing is permitted for all data types, or output can be formatted according to a "picture" of the output line or lines.
  • Enhancing Reliability of RTL Controller-Datapath Circuits Via Invariant-Based Concurrent Test Yiorgos Makris, Ismet Bayraktaroglu, and Alex Orailoglu
IEEE TRANSACTIONS ON RELIABILITY, VOL. 53, NO. 2, JUNE 2004, p. 269

Enhancing Reliability of RTL Controller-Datapath Circuits via Invariant-Based Concurrent Test

Yiorgos Makris, Ismet Bayraktaroglu, and Alex Orailoglu

Abstract—We present a low-cost concurrent test methodology for enhancing the reliability of RTL controller-datapath circuits, based on the notion of path invariance. The fundamental observation supporting the proposed methodology is that the inherent transparency behavior of RTL components, typically utilized for hierarchical off-line test, renders rich sources of invariance within a circuit. Furthermore, additional sources of invariance are obtained by examining the algorithmic interaction between the controller and the datapath of the circuit. A judicious selection & combination of modular transparency functions, based on the algorithm implemented by the controller-datapath pair, yields a powerful set of invariant paths in a design. Compliance to the invariant behavior is checked whenever the latter is activated. Thus, such paths enable a simple, yet very efficient concurrent test capability, achieving fault security in excess of 90% while keeping the hardware overhead below 40% on complicated, difficult-to-test, sequential benchmark circuits. By exploiting fine-grained design …

I. INTRODUCTION

The ability to test the functionality of a circuit during usual operation is becoming an increasingly desirable property of modern designs. Identifying & discarding faulty results before they are further used constitutes a powerful design attribute of ASICs performing critical computations, such as DSP and ALU. While this capability can be provided by hardware duplication schemes, such methods incur considerable cost in terms of area overhead, and possible performance degradation.