Agile Parsing to Transform Web Applications


Thomas Dean¹ and Mykyta Synytskyy²
¹ Electrical and Computer Engineering, Queen's University, [email protected]
² Amazon.com, [email protected]

Abstract. Syntactic analysis lies at the heart of many transformation tools. Grammars are used to provide a structure to guide the application of transformations. Agile parsing is a technique in which grammars are adapted on a transformation-by-transformation basis to simplify transformation tasks. This paper gives an overview of agile parsing techniques and how they may be applied to web applications. We give examples from several transformations that have been used in the web application domain.

1 Introduction

Many program transformation tools [3,4,6,7,8,21,25] are syntactic in nature. That is, they parse the input into an abstract syntax tree (AST) or graph (ASG) as part of the transformation process. There is good reason for this: syntax is the framework on which the semantics of most modern languages is defined. One example is denotational semantics [24], which maps the syntax to a formal mathematical domain. The syntax of the language, and thus the structure of the tree or graph used by these tools, is determined by a grammar. The grammar is typically some form of context-free grammar, although it may be augmented with other information [1,2]. In most cases, the transformation rules that may be applied to the input are constrained by the resulting data structure.

Grammars are used by transformation languages to impose a structure on the input that can be used by the rules. These are general-purpose grammars, a compromise crafted to be suitable for a wide variety of tasks. For some tasks, however, minor changes to the grammar can make a significant difference in the performance of the transformations. We call changing the grammar on a transformation-by-transformation basis agile parsing [13].

One particular area of analysis for which agile parsing is useful is web applications. Web applications are written in a variety of languages and are the subject of a variety of analysis and transformation tasks. We have used agile parsing techniques on several web application transformation projects [11,14,16,22,23,26,27]. We do not discuss the details of the transformations, as they are covered elsewhere. Instead we focus on how grammars can be crafted for web applications and how they are customized for each particular transformation task.

2 Agile Parsing

Agile parsing means customizing the grammar to the task. It is a collection of techniques from a variety of sources that we have found useful for transformation tasks. In this section, we give an introduction to agile parsing, how it is accomplished in the TXL programming language, and a quick overview of some of the techniques.

The parsing techniques generally serve several needs. The first is to change the way the input is parsed. For example, in the C reference grammar the keyword typedef is simply given as a storage specifier, eliminating any syntactic difference between variable and type declarations. That is, there is a single non-terminal, declaration, which consists of a sequence of declaration specifiers followed by the declarator list. A declaration of a global variable and a type definition using typedef are therefore parsed as the same non-terminal. For many transformations, this is a reasonable choice. However, a transformation that targets only type definitions must include conditions in the rule to match only those declarations that contain the typedef keyword. Splitting the declaration production into two non-terminals, one which requires a typedef keyword and one which does not, lets the parser make the distinction. Rule sets that depend on the distinction become much simpler, since they no longer have to encode it as conditions in the rule; they can simply target the appropriate non-terminal. The change to the grammar is local and applies only to the given transformation task.
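As a concrete illustration (a minimal sketch under our own naming assumptions, not the exact override used in any of the projects above), such a split can be expressed in TXL as a grammar override. The non-terminals declaration_specifier and declarator_list stand in for the corresponding productions of a C base grammar:

% Hedged sketch: split declarations so type definitions get their own non-terminal.
% declaration_specifier and declarator_list are assumed to come from a C base grammar;
% the ordered parse tries type_declaration before variable_declaration.
redefine declaration
    [type_declaration]
  | [variable_declaration]
end redefine

define type_declaration
    'typedef [repeat declaration_specifier] [declarator_list] ';
end define

define variable_declaration
    [repeat declaration_specifier] [declarator_list] ';
end define

A transformation that targets only type definitions can then scope its rules to [type_declaration] without any guard conditions.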
The second need is to modify the grammar to allow information to be stored within the tree. An example is to modify the grammar so that XML-style markup can be added to various non-terminals. While some transformation systems permit metadata to be attached to the parse tree, sometimes the problem calls for data more complex than the metadata mechanism can handle. The markup can then be used to store temporary information partway through the transformation. Markup can also be used as a vehicle for communicating the results of an analysis to the user. The third need is to robustly parse a mixture of languages, such as embedded SQL in COBOL [18] or, in this case, dynamic scripting languages in HTML. The last need is to unify input and output grammars so that the rules may translate from one to the other while maintaining a consistent and well-typed parse tree.

2.1 Agile Parsing in TXL

TXL is a pure functional programming language particularly designed to support rule-based source-to-source transformation [7,8,10]. Each TXL program has two parts: a structure specification of the input to be transformed, which is expressed as an unrestricted context-free grammar; and a set of one or more transformation rules, which are specified as pattern/replacement pairs that define what actions will be performed on the input. Each pattern/replacement pair in a transformation rule is specified by example, and may be arbitrarily parameterized for more flexibility.

Figure 1 shows a simple TXL grammar for expressions. Square brackets denote the use of a non-terminal. Prefixing a non-terminal symbol with the keyword repeat denotes a sequence of that non-terminal, and the vertical bar indicates alternative productions. The terminal symbols in the grammar are either literal values, such as the operator symbols in Fig. 1, or general token classes, which are also referenced using square brackets ([id] for identifier in Fig. 1).

define expression
    [term] [repeat addop_term]
end define

define addop_term
    [add_op] [term]
end define

define add_op
    + | -
end define

define term
    [factor] [repeat mulop_factor]
end define

define mulop_factor
    [mul_op] [factor]
end define

define factor
    [id]
end define

Fig. 1. Simple Expression Grammar in TXL
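As an illustration of the rule half of a TXL program (a minimal sketch of our own, not a rule taken from the paper), the following parameterized rule works over the grammar of Fig. 1 and renames every occurrence of one identifier factor to another; the rule name and both parameters are illustrative:

% Hedged sketch: a parameterized rule over the expression grammar of Fig. 1.
% OldName and NewName are formal parameters bound by the caller; the rule
% repeatedly searches its scope and rewrites matching factors until none remain.
rule renameFactor OldName [id] NewName [id]
    replace [factor]
        OldName
    by
        NewName
end rule

Because both the pattern and the replacement are parsed against the grammar, the rule can only ever produce a well-formed [factor].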
Agile parsing is supported in TXL through the redefine keyword, which changes an existing grammar definition. Figure 2 shows an example in which the factor production is changed to include XML markup on the identifier. The ellipsis ('...') in the factor redefinition stands for the previous productions of the factor non-terminal: a factor is whatever it was before, or it may be an identifier annotated with XML markup.

redefine factor
    ...
  | [XmlStart][id][XMLEnd]
end redefine

define XmlStart
    <[id] [repeat XMLParm]>
end define

define XMLEnd
    </[id]>
end define

define XMLParm
    [id] = [stringlit]
end define

Fig. 2. Adding XML Markup to Identifiers in Expressions

Subsequent transformation programs may choose to insist that all such identifiers already be marked up in the input, by removing the first line and the vertical bar from the factor redefinition. The TXL processor first tokenizes the input using a standard set of tokens, which may be extended by the grammar definition, parses the input using the grammar, and then applies the rules. In TXL, the rules are constrained by the grammar to keep the tree well formed. Thus the grammar includes not only the productions for parsing the input, but also the productions for the output and intermediate results. This may lead to ambiguities, which are resolved through an ordered parse: the order of the alternatives in the grammar determines how the input is parsed.

3 Agile Parsing in Web Applications

Web applications deliver services over the HTTP protocol, usually to a browser of some sort. Early web applications provided dynamic content to standard web browsers using server-side scripting such as CGI scripts, servlets, JSP, and ASP. More recent innovations include Really Simple Syndication (RSS), SOAP, and AJAX.

Web applications present a particular challenge for parsing and for transformation. While some of the files in a web application contain conventional languages such as Java, the core files of the application (the web pages) are typically composed of a mixture of several languages. The web pages typically contain HTML, augmented with JavaScript, and server-side languages such as Java, Visual Basic, or PHP. Other components of the application may be in XML, such as various configuration files or the XML transfer schemas used in SOAP. For some web transformation tasks it may be possible to convert to a single language [12], but if we wish to translate between client-side and server-side technologies, we must be able to represent all of the languages in a single parse. A grammar for web applications must also be robust: most browsers accept malformed HTML, so our approach must accept and deal with malformed HTML as well.

3.1 Island Grammars and Robust Parsing

The first agile parsing technique is island grammars [5,16,18,19,23]. Island grammars, originally introduced by van Deursen et al. [5], divide the input into interesting sequences of tokens, called islands, and uninteresting tokens, called water. Grammar productions that recognize the islands are given precedence over the water productions, which typically match any token or keyword.
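As a small self-contained illustration of the idea (a sketch under our own assumptions, not the multilingual web grammar used in the projects above), an island grammar in TXL can be structured as a sequence of segments in which the island alternative is listed before the catch-all water alternative; here the island is an HTML form start tag with simple attributes, and everything else is water:

% Hedged sketch of an island grammar in TXL (illustrative, not the authors' grammar).
% The island alternative is listed first, so the ordered parse prefers it;
% anything that does not parse as an island falls through to water.
define program
    [repeat segment]
end define

define segment
    [island]
  | [water]
end define

% Example island: an HTML form start tag with simple attributes.
define island
    '< 'form [repeat attribute] '>
end define

define attribute
    [id] '= [stringlit]
  | [id]
end define

% Water consumes any other single token or keyword.
define water
    [token]
  | [key]
end define

Transformation rules can then target [island] (or a more specific island non-terminal) directly, while the surrounding water is carried through the parse untouched.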