The Design & Implementation of an Abstract Semantic Graph For
Total pages: 16. File type: PDF, size: 1020 KB.
Recommended publications
- Parser Tables for Non-LR(1) Grammars with Conflict Resolution Joel E. Denny, Brian A. Malloy
The IELR(1) Algorithm for Generating Minimal LR(1) Parser Tables for Non-LR(1) Grammars with Conflict Resolution
Joel E. Denny, Brian A. Malloy. School of Computing, Clemson University, Clemson, SC 29634, USA. Science of Computer Programming 75 (2010), 943–979. Received 17 July 2008; received in revised form 31 March 2009; accepted 12 August 2009; available online 10 September 2009.
Abstract: There has been a recent effort in the literature to reconsider grammar-dependent software development from an engineering point of view. As part of that effort, we examine a deficiency in the state of the art of practical LR parser table generation. Specifically, LALR sometimes generates parser tables that do not accept the full language that the grammar developer expects, but canonical LR is too inefficient to be practical particularly during grammar development. In response, many researchers have attempted to develop minimal LR parser table generation algorithms. In this paper, we demonstrate that a well known algorithm described by David Pager and implemented in Menhir, the most robust minimal LR(1) implementation we have discovered, does not always achieve the full power of canonical LR(1) when the given grammar is non-LR(1) coupled with a specification for resolving conflicts. We also detail an original minimal LR(1) algorithm, IELR(1) (Inadequacy Elimination LR(1)), which we have implemented as an extension of GNU Bison and which does not exhibit this deficiency.
Keywords: Grammarware; Canonical LR; LALR; Minimal LR; Yacc; Bison.
- Parsing Techniques
Dick Grune, Ceriel J.H. Jacobs. Parsing Techniques, 2nd edition (Monograph). Springer: Berlin, Heidelberg, New York, Hong Kong, London, Milan, Paris, Tokyo; September 27, 2007.
Contents (excerpt): Preface to the Second Edition; Preface to the First Edition; 1 Introduction (1.1 Parsing as a Craft; 1.2 The Approach Used; 1.3 Outline of the Contents; 1.4 The Annotated Bibliography); 2 Grammars as a Generating Device (2.1 Languages as Infinite Sets: 2.1.1 Language, 2.1.2 Grammars, 2.1.3 Problems with Infinite Sets, 2.1.4 Describing a Language through a Finite Recipe; 2.2 Formal Grammars: 2.2.1 The Formalism of Formal Grammars, 2.2.2 Generating Sentences from a Formal Grammar, 2.2.3 The Expressive Power of Formal Grammars; 2.3 The Chomsky Hierarchy of Grammars and Languages: 2.3.1 Type 1 Grammars, 2.3.2 Type 2 Grammars, 2.3.3 Type 3 Grammars, 2.3.4 Type 4 Grammars, 2.3.5 Conclusion; 2.4 Actually Generating Sentences from a Grammar: 2.4.1 The Phrase-Structure Case, 2.4.2 The CS Case, 2.4.3 The CF Case; 2.5 To Shrink or Not To Shrink; 2.6 Grammars that Produce the Empty Language; 2.7 The Limitations of CF and FS Grammars: 2.7.1 The uvwxy Theorem, 2.7.2 The uvw Theorem)
- Application of Graph Databases for Static Code Analysis of Web-Applications
Application of Graph Databases for Static Code Analysis of Web-Applications
Daniil Sadyrin [0000-0001-5002-3639], Andrey Dergachev [0000-0002-1754-7120], Ivan Loginov [0000-0002-6254-6098], Iurii Korenkov [0000-0002-8948-2776], and Aglaya Ilina [0000-0003-1866-7914]. ITMO University, Kronverkskiy prospekt, 49, St. Petersburg, 197101, Russia. [email protected], [email protected], [email protected], [email protected], [email protected]
Abstract: Graph databases offer a very flexible data model. We present an approach to static code analysis using graph databases. The main stage of the analysis algorithm is the construction of an ASG (Abstract Source Graph), which represents relationships between AST (Abstract Syntax Tree) nodes. The ASG is stored in a graph database (such as Neo4j), and queries against the database retrieve code properties for analysis. The approach is applied to detect and exploit Object Injection vulnerabilities in PHP web applications. This vulnerability occurs when unsanitized user data reaches the PHP unserialize function. Successful exploitation requires building an "object chain": a nested object whose deserialization triggers a sequence of method calls leading to a dangerous function call. During deserialization, certain "magic" PHP methods (__wakeup or __destruct) are invoked on the object. To create the "object chain", the methods of the classes declared in the web application must be analyzed to find sequences of methods reachable from the "magic" methods. The main idea of the authors' approach is to store the relationships between methods and functions in a graph database and to query it in the Cypher language to find the appropriate method calls.
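To make the querying step concrete, here is a minimal sketch using the official Neo4j Python driver. It is not the paper's implementation, and the graph schema it assumes (Method nodes with class, name and dangerous properties, connected by CALLS relationships) is hypothetical.

```python
# Minimal sketch (not the paper's implementation): query a Neo4j-stored ASG for
# call chains that start in a PHP "magic" method and end in a dangerous sink.
# Assumed (hypothetical) schema: (:Method {class, name, dangerous}) nodes
# connected by [:CALLS] relationships.
from neo4j import GraphDatabase

QUERY = """
MATCH path = (m:Method)-[:CALLS*1..6]->(sink:Method {dangerous: true})
WHERE m.name IN ['__wakeup', '__destruct']
RETURN [n IN nodes(path) | n.class + '::' + n.name] AS chain
LIMIT 25
"""

def find_object_chains(uri="bolt://localhost:7687", user="neo4j", password="secret"):
    driver = GraphDatabase.driver(uri, auth=(user, password))
    try:
        with driver.session() as session:
            # Each record is one candidate "object chain" of method calls.
            return [record["chain"] for record in session.run(QUERY)]
    finally:
        driver.close()

if __name__ == "__main__":
    for chain in find_object_chains():
        print(" -> ".join(chain))
```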
GLR* - an Efficient Noise-Skipping Parsing Algorithm for Context Free· Grammars
GLR* - An Efficient Noise-skipping Parsing Algorithm for Context-Free Grammars
Alon Lavie and Masaru Tomita. School of Computer Science, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213. email: lavie@cs.cmu.edu
Abstract: This paper describes GLR*, a parser that can parse any input sentence by ignoring unrecognizable parts of the sentence. In case the standard parsing procedure fails to parse an input sentence, the parser nondeterministically skips some word(s) in the sentence, and returns the parse with fewest skipped words. Therefore, the parser will return some parse(s) with any input sentence, unless no part of the sentence can be recognized at all. The problem can be defined in the following way: Given a context-free grammar G and a sentence S, find and parse S' - the largest subset of words of S such that S' ∈ L(G). The algorithm described in this paper is a modification of the Generalized LR (Tomita) parsing algorithm [Tomita, 1986]. The parser accommodates the skipping of words by allowing shift operations to be performed from inactive state nodes of the Graph Structured Stack. A heuristic similar to beam search makes the algorithm computationally tractable. There have been several other approaches to the problem of robust parsing, most of which are special purpose algorithms [Carbonell and Hayes, 1984], [Ward, 1991] and others. Because our approach is a modification to a standard context-free parsing algorithm, all the techniques and grammars developed for the standard parser can be applied as they are. Also, in case the input sentence is by itself grammatical, our parser behaves exactly as the standard GLR parser.
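To make the problem definition concrete, the sketch below finds the largest parseable subset of the input by brute-force search over skipped words, using NLTK's chart parser and a made-up toy grammar. It illustrates the problem GLR* solves, not the GLR* algorithm itself, which instead performs shifts from inactive nodes of the Graph Structured Stack and prunes with a beam-like heuristic.

```python
# Brute-force illustration of the noise-skipping problem (not the GLR* algorithm):
# find the largest subset S' of the input words such that S' is in L(G).
from itertools import combinations
import nltk

GRAMMAR = nltk.CFG.fromstring("""
S -> NP VP
NP -> Det N
VP -> V NP
Det -> 'the' | 'a'
N -> 'dog' | 'cat'
V -> 'chased'
""")

def parse_with_fewest_skips(words, grammar=GRAMMAR):
    parser = nltk.ChartParser(grammar)
    # Try skipping 0 words, then 1, then 2, ... and return the first success.
    for skipped in range(len(words)):
        for skip_positions in combinations(range(len(words)), skipped):
            kept = [w for i, w in enumerate(words) if i not in skip_positions]
            try:
                trees = list(parser.parse(kept))
            except ValueError:  # a kept word is not covered by the grammar
                continue
            if trees:
                return skipped, kept, trees
    return None

if __name__ == "__main__":
    # 'uh' and 'um' are noise words the toy grammar cannot recognize.
    result = parse_with_fewest_skips("the uh dog chased a um cat".split())
    if result:
        skipped, kept, trees = result
        print(f"skipped {skipped} word(s); parsed: {' '.join(kept)}")
        trees[0].pretty_print()
```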
- A Simple, Possibly Correct LR Parser for C11 Jacques-Henri Jourdan, François Pottier
A Simple, Possibly Correct LR Parser for C11
Jacques-Henri Jourdan (Inria Paris, MPI-SWS), François Pottier (Inria Paris). ACM Transactions on Programming Languages and Systems (TOPLAS), ACM, 2017, 39 (4), pp. 1-36. doi: 10.1145/3064848. HAL Id: hal-01633123, https://hal.archives-ouvertes.fr/hal-01633123, submitted on 11 Nov 2017.
- Dense Semantic Graph and Its Application in Single Document Summarisation
Dense Semantic Graph and its Application in Single Document Summarisation
Monika Joshi (1), Hui Wang (1) and Sally McClean (2). (1) University of Ulster, Co. Antrim, BT37 0QB, UK, [email protected], [email protected]; (2) University of Ulster, Co. Londonderry, BT52 1SA, UK, [email protected]
Abstract: Semantic graph representation of text is an important part of natural language processing applications such as text summarisation. We have studied two ways of constructing the semantic graph of a document from dependency parsing of its sentences. The first graph is derived from the subject-object-verb representation of the sentences, and the second graph is derived from considering more dependency relations in the sentence through a shortest-distance dependency path calculation, resulting in a dense semantic graph. We have shown through experiments that the dense semantic graph gives better performance in semantic-graph-based unsupervised extractive text summarisation.
1 Introduction: Information can be categorized into many forms: numerical, visual, text, and audio. Text is abundantly present in online resources. Online blogs, the Wikipedia knowledge base, patent documents and customer reviews are potential information sources for different user requirements. One of these requirements is to present a short summary of the originally larger document. The summary is expected to include important information from the original text documents. This is usually achieved by keeping the informative parts of the document and reducing repetitive information. There are two types of text summarisation: multiple-document summarisation and single-document summarisation. The former is aimed at removing repetitive content in a collection of documents.
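A minimal sketch of the general idea of graph-based extractive summarisation over subject-verb-object triples, not the authors' system: it builds a word graph with networkx, ranks nodes with PageRank, and scores each sentence by the rank of its words. The triples are hardcoded here; in practice they would come from dependency parsing.

```python
# Minimal sketch of graph-based extractive summarisation (not the paper's system).
# Nodes are words from subject-verb-object triples; sentences are scored by the
# PageRank of the words they contain, and the top-scoring sentences are kept.
import networkx as nx

# In practice these triples come from dependency parsing of each sentence.
SENTENCES = [
    ("Graphs model relations between entities.", [("graphs", "model", "relations")]),
    ("Semantic graphs represent documents.", [("graphs", "represent", "documents")]),
    ("Summarisation selects informative sentences.", [("summarisation", "selects", "sentences")]),
    ("Documents contain sentences.", [("documents", "contain", "sentences")]),
]

def summarise(sentences, top_k=2):
    graph = nx.Graph()
    for _, triples in sentences:
        for subj, verb, obj in triples:
            # Connect subject-verb and verb-object, mirroring an SOV representation.
            graph.add_edge(subj, verb)
            graph.add_edge(verb, obj)
    scores = nx.pagerank(graph)

    def sentence_score(item):
        _, triples = item
        words = {w for triple in triples for w in triple}
        return sum(scores.get(w, 0.0) for w in words)

    ranked = sorted(sentences, key=sentence_score, reverse=True)
    return [text for text, _ in ranked[:top_k]]

if __name__ == "__main__":
    for line in summarise(SENTENCES):
        print(line)
```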
- AutoWIG: Automatic Generation of Python Bindings for C++ Libraries
AutoWIG: automatic generation of python bindings for C++ libraries
Pierre Fernique and Christophe Pradal. EPI Virtual Plants, Inria, Montpellier, France; AGAP, CIRAD, INRA, Montpellier SupAgro, Univ Montpellier, Montpellier, France. Submitted 6 July 2017; accepted 26 February 2018; published 2 April 2018. Corresponding author: Pierre Fernique, [email protected]
Abstract: Most Python and R scientific packages incorporate compiled scientific libraries to speed up the code and reuse legacy libraries. While several semi-automatic solutions exist to wrap these compiled libraries, the process of wrapping a large library is cumbersome and time consuming. In this paper, we introduce AutoWIG, a Python package that automatically wraps compiled libraries into high-level languages using LLVM/Clang technologies and the Mako templating engine. Our approach is automatic, extensible, and applies to complex C++ libraries composed of thousands of classes or incorporating modern meta-programming constructs.
Subjects: Data Science, Scientific Computing and Simulation, Programming Languages, Software Engineering. Keywords: C++, Python, Automatic bindings generation.
Introduction: Many scientific libraries are written in low-level programming languages such as C and C++. Such libraries entail the usage of the traditional edit/compile/execute cycle in order to produce high-performance programs. This leads to lower computer processing time at the cost of high scientist coding time. At the opposite end of the spectrum, scripting languages such as MATLAB, Octave (John W. Eaton, David Bateman & Wehbring, 2014, for numerical work), Sage (The Sage Developers, 2015, for symbolic mathematics), R (R Core Team, 2014, for statistical analyses) or Python (Oliphant, 2007, for general purposes) provide an interactive framework that allows data scientists to explore their data, test new ideas, combine algorithmic approaches and evaluate their results on the fly.
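The sketch below illustrates the kind of C++ introspection such a tool automates, using the libclang Python bindings (clang.cindex) rather than AutoWIG's own API: it parses a header and lists the classes and methods a binding generator would need to wrap. The header path and compiler flags are placeholders.

```python
# Minimal sketch of C++ header introspection with libclang (clang.cindex).
# This is not AutoWIG's API; it only illustrates the parsing step that
# automatic binding generators build on. Paths and flags are placeholders.
import clang.cindex
from clang.cindex import CursorKind

# Depending on the installation, the libclang shared library may need to be
# located explicitly, e.g.:
# clang.cindex.Config.set_library_file("/usr/lib/llvm-14/lib/libclang.so")

def list_wrappable_api(header_path, clang_args=("-x", "c++", "-std=c++14")):
    index = clang.cindex.Index.create()
    tu = index.parse(header_path, args=list(clang_args))
    api = {}
    for cursor in tu.cursor.walk_preorder():
        if cursor.kind == CursorKind.CLASS_DECL and cursor.is_definition():
            methods = [
                (child.spelling, child.result_type.spelling)
                for child in cursor.get_children()
                if child.kind == CursorKind.CXX_METHOD
            ]
            api[cursor.spelling] = methods
    return api

if __name__ == "__main__":
    for cls, methods in list_wrappable_api("mylib.hpp").items():
        print(f"class {cls}")
        for name, return_type in methods:
            print(f"  {return_type} {name}(...)")
```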
OPTIMAL AMBIGUITY PACKING in CONTEXT-FREE PARSERS with Intaloner Laleavie VED Unicarolynfica Pensttionein Rose Language Technologies Inst
Optimal Ambiguity Packing in Context-Free Parsers with Interleaved Unification
Alon Lavie, Language Technologies Inst., Carnegie Mellon University, Pittsburgh, PA 15213, alavie@cs.cmu.edu; Carolyn Penstein Rose, Learning Res. and Dev. Center, University of Pittsburgh, Pittsburgh, PA 15213, rosecp@pitt.edu
Abstract: Ambiguity packing is a well known technique for enhancing the efficiency of context-free parsers. However, in the case of unification-augmented context-free parsers where parsing is interleaved with feature unification, the propagation of feature structures imposes difficulties on the ability of the parser to effectively perform ambiguity packing. We demonstrate that a clever heuristic for prioritizing the execution order of grammar rules and parsing actions can achieve a high level of ambiguity packing that is provably optimal. We present empirical evaluations of the proposed technique, performed with both a Generalized LR parser and a chart parser, that demonstrate its effectiveness.
1 Introduction: Efficient parsing algorithms for purely context-free grammars have long been known. Most natural language applications, however, require a far more linguistically detailed level of analysis that cannot be supported in a natural way by pure context-free grammars. Unification-based grammar formalisms (such as HPSG), on the other hand, are linguistically well founded, but are difficult to parse efficiently. Unification-augmented context-free grammar formalisms have thus become popular approaches where parsing can be based on an efficient context-free algorithm enhanced to produce linguistically rich representations. Whereas in some formalisms, such as the ANLT Grammar Formalism [2] [3], a context-free "backbone" is compiled from a pure unification grammar, in other formalisms [19] the grammar itself is written as a collection of context-free phrase structure rules augmented with unification constraints that apply to feature structures that are associated with the grammar constituents.
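The sketch below shows generic local ambiguity packing in a CKY-style chart parser; it is not the paper's prioritization heuristic and does not interleave feature unification. The toy grammar (X -> X X | 'a') is made up for the example: all analyses of the same symbol over the same span share a single chart entry, so the chart stays compact even though the number of distinct parse trees grows quickly.

```python
# Minimal sketch of ambiguity packing in a chart parser (generic CKY, not the
# paper's rule-prioritization heuristic for unification-augmented parsing).
# All derivations of the same nonterminal over the same span share one packed
# chart entry whose alternatives (local back-pointers) are listed together.
from collections import defaultdict

# Ambiguous CNF grammar: "a a a a" can be bracketed in several ways.
BINARY_RULES = [("X", "X", "X")]   # X -> X X
LEXICAL_RULES = [("X", "a")]       # X -> 'a'

def cky_packed(words):
    # chart[(i, j)][symbol] is the packed node for that symbol over span [i, j):
    # a list of alternative analyses, each a lexical entry or a split point.
    chart = defaultdict(lambda: defaultdict(list))
    n = len(words)
    for i, word in enumerate(words):
        for lhs, terminal in LEXICAL_RULES:
            if terminal == word:
                chart[(i, i + 1)][lhs].append(("word", word))
    for span in range(2, n + 1):
        for i in range(0, n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for lhs, left, right in BINARY_RULES:
                    if left in chart[(i, k)] and right in chart[(k, j)]:
                        # Ambiguity packing: append another alternative to the
                        # single shared node instead of duplicating the node.
                        chart[(i, j)][lhs].append(("split", k, left, right))
    return chart

if __name__ == "__main__":
    words = ["a", "a", "a", "a"]
    chart = cky_packed(words)
    root = chart[(0, len(words))]["X"]
    print(f"packed root node with {len(root)} local alternatives: {root}")
```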
- Context-Aware Scanning and Determinism-Preserving Grammar Composition, in Theory and Practice
Context-Aware Scanning and Determinism-Preserving Grammar Composition, in Theory and Practice
A thesis submitted to the faculty of the graduate school of the University of Minnesota by August Schwerdfeger in partial fulfillment of the requirements for the degree of Doctor of Philosophy, July 2010. © August Schwerdfeger 2010. All rights reserved.
Acknowledgments: I would like to thank all my colleagues in the MELT group for all their input and assistance through the course of the project, and especially for their help in providing ready-made tests and applications for my software. Thanks specifically to Derek Bodin and Ted Kaminski for their efforts in integrating the Copper parser generator into MELT's attribute grammar tools; to Lijesh Krishnan for developing the ableJ system of Java extensions and assisting me extensively with its further development; and to Yogesh Mali for implementing and testing the Promela grammar with Copper. I also thank my advisor, Eric Van Wyk, for his continuous guidance throughout my time as a student, and also for going well above and beyond the call of duty in helping me to edit and prepare this thesis for presentation. I thank the members of my thesis committee, Mats Heimdahl, Gopalan Nadathur, and Wayne Richter, for their time and efforts in serving and reviewing. I especially wish to thank Prof. Nadathur for his many detailed questions and helpful suggestions for improving this thesis. Work on this thesis has been partially funded by National Science Foundation Grants 0347860 and 0905581. Support has also been received from funds provided by the Institute of Technology (soon to be renamed the College of Science and Engineering) and the Department of Computer Science and Engineering at the University of Minnesota.
- Elkhound: A Fast, Practical GLR Parser Generator
Elkhound: A Fast, Practical GLR Parser Generator
Scott McPeak and George C. Necula, University of California, Berkeley. {smcpeak,necula}@cs.berkeley.edu. Published in Proc. of the Conference on Compiler Construction, 2004, pp. 73–88.
Abstract: The Generalized LR (GLR) parsing algorithm is attractive for use in parsing programming languages because it is asymptotically efficient for typical grammars, and can parse with any context-free grammar, including ambiguous grammars. However, adoption of GLR has been slowed by high constant-factor overheads and the lack of a general, user-defined action interface. In this paper we present algorithmic and implementation enhancements to GLR to solve these problems. First, we present a hybrid algorithm that chooses between GLR and ordinary LR on a token-by-token basis, thus achieving competitive performance for deterministic input fragments. Second, we describe a design for an action interface and a new worklist algorithm that can guarantee bottom-up execution of actions for acyclic grammars. These ideas are implemented in the Elkhound GLR parser generator. To demonstrate the effectiveness of these techniques, we describe our experience using Elkhound to write a parser for C++, a language notorious for being difficult to parse. Our C++ parser is small (3500 lines), efficient and maintainable, employing a range of disambiguation strategies.
1 Introduction: The state of the practice in automated parsing has changed little since the introduction of YACC (Yet Another Compiler-Compiler), an LALR(1) parser generator, in 1975 [1]. An LALR(1) parser is deterministic: at every point in the input, it must be able to decide which grammar rule to use, if any, utilizing only one token of lookahead [2].
- A Simple, Possibly Correct LR Parser for C11
A simple, possibly correct LR parser for C11
Jacques-Henri Jourdan, Inria Paris, MPI-SWS; François Pottier, Inria Paris.
Abstract: The syntax of the C programming language is described in the C11 standard by an ambiguous context-free grammar, accompanied with English prose that describes the concept of "scope" and indicates how certain ambiguous code fragments should be interpreted. Based on these elements, the problem of implementing a compliant C11 parser is not entirely trivial. We review the main sources of difficulty and describe a relatively simple solution to the problem. Our solution employs the well-known technique of combining an LALR(1) parser with a "lexical feedback" mechanism. It draws on folklore knowledge and adds several original aspects, including: a twist on lexical feedback that allows a smooth interaction with lookahead; a simplified and powerful treatment of scopes; and a few amendments in the grammar. Although not formally verified, our parser avoids several pitfalls that other implementations have fallen prey to. We believe that its simplicity, its mostly-declarative nature, and its high similarity with the C11 grammar are strong informal arguments in favor of its correctness. Our parser is accompanied with a small suite of "tricky" C11 programs. We hope that it may serve as a reference or a starting point in the implementation of compilers and analysis tools.
CCS Concepts: Software and its engineering → Parsers. Additional Key Words and Phrases: Compilation; parsing; ambiguity; lexical feedback; C89; C99; C11.
ACM Reference Format: Jacques-Henri Jourdan and François Pottier. 2017. A simple, possibly correct LR parser for C11. ACM Trans. Program. Lang. Syst. 39, 4 (2017), 1–36.
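A minimal sketch of the folklore "lexical feedback" idea (the classic lexer hack), not the authors' refined mechanism: the parser maintains a scope-aware table recording which names currently denote typedefs, and the lexer consults it to decide whether an identifier should be emitted as a TYPEDEF_NAME or an ordinary IDENTIFIER token.

```python
# Minimal sketch of lexical feedback for C-like parsing (the folklore "lexer
# hack"), not the refined mechanism described in the paper: the lexer asks a
# scope-aware symbol table whether an identifier currently names a type.

class ScopedNameTable:
    """Tracks, per scope, whether a name currently denotes a typedef."""
    def __init__(self):
        self.scopes = [{}]                  # stack of {name: is_typedef}

    def open_scope(self):
        self.scopes.append({})

    def close_scope(self):
        self.scopes.pop()

    def declare(self, name, is_typedef):    # called by the parser on declarations
        self.scopes[-1][name] = is_typedef

    def is_type(self, name):
        for scope in reversed(self.scopes): # innermost declaration wins
            if name in scope:
                return scope[name]
        return False

def classify_identifier(name, table):
    """Feedback point: the token kind depends on parser-maintained state."""
    return ("TYPEDEF_NAME", name) if table.is_type(name) else ("IDENTIFIER", name)

if __name__ == "__main__":
    table = ScopedNameTable()
    table.declare("T", is_typedef=True)        # typedef int T;
    print(classify_identifier("T", table))     # ('TYPEDEF_NAME', 'T')
    table.open_scope()
    table.declare("T", is_typedef=False)       # int T;  (shadows the typedef)
    print(classify_identifier("T", table))     # ('IDENTIFIER', 'T')
    table.close_scope()
    print(classify_identifier("T", table))     # ('TYPEDEF_NAME', 'T') again
```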
- DART@AI*IA 2013 Proceedings
Cristian Lai, Giovanni Semeraro, Alessandro Giuliani (Eds.). Proceedings of the 7th International Workshop on Information Filtering and Retrieval (DART 2013), Workshop of the XIII AI*IA Conference, December 6, 2013, Turin, Italy. http://aixia2013.i-learn.unito.it/course/view.php?id=26
Preface: The series of DART workshops provides an interactive and focused platform for researchers and practitioners to present and discuss new and emerging ideas. Focusing on new challenges in intelligent information filtering and retrieval, DART aims to investigate novel systems and tools for web scenarios and semantic computing, and thus contributes to the discussion and comparison of novel solutions based on intelligent techniques and applied in real-world applications. Information Retrieval attempts to address similar filtering and ranking problems for pieces of information such as links, pages, and documents. Information Retrieval systems generally focus on the development of global retrieval techniques, often neglecting individual user needs and preferences. Information Filtering has drastically changed the way information seekers find what they are searching for: filtering systems effectively prune large information spaces and help users select the items that best meet their needs, interests, preferences, and tastes. These systems rely strongly on various machine learning tools and algorithms for learning how to rank items and predict user evaluations. Submitted proposals received two or three review reports from Program Committee members. Based on the recommendations of the reviewers, 7 full papers were selected for publication and presentation at DART 2013. When organizing a scientific conference, one always has to count on the efforts of many volunteers.