Towards an Open Set of Real-World Benchmarks for Model Queries and Transformations

Amine Benelallam (AtlanMod team, Inria, Mines-Nantes, LINA, Nantes, France)
Massimo Tisi (AtlanMod team, Inria, Mines-Nantes, LINA, Nantes, France)
István Ráth (Budapest University of Technology and Economics, Budapest, Hungary)
Benedek Izsó (Budapest University of Technology and Economics, Budapest, Hungary)
Dimitrios S. Kolovos (Enterprise Systems Group, University of York, York, United Kingdom)

BigMDE 2014, University of York, York, United Kingdom, July 2014. HAL Id: hal-01035450 (https://hal.inria.fr/hal-01035450), submitted on 22 Jul 2014.

ABSTRACT
With the growing size and complexity of systems under design, industry needs a generation of Model-Driven Engineering (MDE) tools, especially for model query and transformation, with the proven capability to handle large-scale scenarios. While researchers are proposing several technical solutions in this direction, the community lacks a set of shared scalability benchmarks that would simplify the quantitative assessment of advancements and enable cross-evaluation of different proposals. Benchmarks in previous work have been synthesized to stress specific features of model management, lacking both generality and industrial validity. In this paper, we initiate an effort to define a set of shared benchmarks, gathering queries and transformations from real-world MDE case studies. We make these cases available for community evaluation via a public MDE benchmark repository.
Categories and Subject Descriptors
D.2.2 [Software Engineering]: Design Tools and Techniques—Computer-aided software engineering (CASE); D.2.8 [Software Engineering]: Metrics—Performance measures, Complexity measures; K.6.3 [Management of Computing and Information Systems]: Software Management—Software selection

General Terms
Performance, Experimentation, Measurement

Keywords
Benchmarking, Very Large Models, Model transformations, Model queries

1. INTRODUCTION
Over the past decade, technologies around Model-Driven Engineering (MDE) have been offering a systematic approach to software development on top of models. This elegant approach has gained considerable popularity and engaged both researchers and industry. However, MDE has not yet been able to attract large-scale industrial systems, owing to the serious scalability issues that the current generation of MDE tools is facing, namely (i) the growing complexity of the systems under design, and (ii) the huge amount of data they may represent. There is therefore a pressing need for a new generation of tools capable of managing, querying, and transforming Very Large Models (VLMs). Existing empirical assessments [7, 36, 21] have indeed confirmed this necessity.

In spite of the advances in Model Transformation (MT) tools, additional research is still needed for them to gain the attention of large-scale industrial systems. Aiming at improving performance and computation cost, many works have elaborated new features for existing model transformation engines (e.g., incrementality [18, 33, 9] and parallelization [10, 38, 51]). Others [14] chose to develop new tools on top of existing frameworks that effectively handle concurrency and parallelization.

One of the most computationally expensive tasks in modeling applications is the evaluation of model queries. While there exist a number of benchmarks for queries over relational databases [52] or graph stores [12, 50], modeling tool workloads are significantly different. Specifically, modeling tools use more complex queries than typical transactional systems [29], and real-world performance is affected more by response time (i.e., the execution time of a specific operation such as validation or transformation) than by throughput (i.e., the number of parallel transactions).

In order to overcome these limitations and achieve scalability in MDE, several works [16, 37] have drawn research roadmaps advancing the use of the Cloud for distributed and collaborative processing of VLMs. Hence, to be able to compare this new generation of tools and measure their performance, a transformation benchmark is required that takes into consideration the well-known scalability issues of MTs [35]. Likewise, benchmarks can serve as a good reference to help engineers choose what best fits their solution when developing new tools.

Most existing benchmarks [32, 8, 54, 55] have focused on properties other than scalability (e.g., transformation sequence length, transformation strategy, matching size, etc.). Moreover, these benchmarks provide neither any clue on how to measure transformation scalability from a theoretical point of view, nor real-world case studies. In contrast to these benchmarks, Izsó et al. [29] are the first to provide precise metrics, based on instance models and query specifications, to predict the performance of graph queries, resulting in indicators that help in selecting a suitable technology. In addition, these benchmarks are specific to either model transformations or model queries. On the other hand, there exist reference benchmarks for databases with a higher level of maturity [15, 49]. In [15], Cattell et al. present the OO1 benchmark, which measures the performance of specific characteristics of the most frequently used operations on engineering objects, chosen according to feedback from industry. This benchmark does come with a precise specification, measures, and an evaluation. In [49], Schmidt et al. introduce XMark, a benchmark for XML data management that covers a large range of queries drawn from real-world scenarios, each of which stresses a particular aspect of XML query engines.

In this paper we present a set of benchmarks for model transformation and query engines. Our proposal is to select a set of transformations and queries from real-world cases and to make them available to the public. Two of the four included benchmarks deal with model transformations, while the other two deal with model queries. The models that feed each benchmark are of increasing sizes and of different kinds/extensions. They are either concrete, coming from real projects (i.e., reverse-engineered Java project models), or generated using deterministic model instantiators. These instantiators can easily be customized to generate models that suit the benchmarking of a specific feature.

These benchmarks are part of the results of the Scalable Modeling …
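The deterministic model instantiators mentioned above are not described further in this excerpt. As a minimal sketch only, and assuming hypothetical names (ModelElement, instantiate, size, seed) that do not come from the benchmark suite, the following Python snippet illustrates how a seeded, parameterized generator can produce reproducible synthetic models of a chosen size.

```python
# Hypothetical sketch of a deterministic model instantiator: the same seed and
# size always yield the same synthetic model, so benchmark runs on generated
# inputs stay reproducible and comparable across tools.
import random
from dataclasses import dataclass, field

@dataclass
class ModelElement:
    eid: int
    kind: str                                        # e.g. "Package", "Class", "Method"
    references: list = field(default_factory=list)   # outgoing edges to other elements

def instantiate(size: int, seed: int = 42, kinds=("Package", "Class", "Method")):
    """Generate `size` elements with pseudo-random but reproducible links."""
    rng = random.Random(seed)                        # local RNG keeps determinism self-contained
    elements = [ModelElement(i, kinds[i % len(kinds)]) for i in range(size)]
    for elem in elements[1:]:
        # Link each element to a few earlier ones, keeping the model connected.
        for _ in range(rng.randint(1, 3)):
            elem.references.append(elements[rng.randrange(0, elem.eid)])
    return elements

if __name__ == "__main__":
    small = instantiate(size=1_000)
    large = instantiate(size=100_000)
    print(len(small), len(large))                    # models of increasing size, same generator
```

Varying the kind distribution or the reference fan-out is the sort of customization that would let such an instantiator stress one specific feature, as the paper suggests.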
A related case study addresses program comprehension (i.e., queries over source code) using a graph or model transformation tool. One of the main challenges in the definition of program comprehension operators as transformations is scalability with respect to input size. The case study is divided into two independent tasks: (i) a simple filtering query that selects a subgraph of its input according to a given condition, and (ii) a complex query that computes a control flow graph and a program dependence graph. These queries are performed over the JDTAST metamodel, the Java metamodel used in early versions of MoDisco [13] to discover Java projects. The benchmark comes with five sets of increasing size, ranging from 7 × 10^5 up to 5 × 10^9.

The work in [55] is considered one of the early systematic MDE benchmarks dedicated to Graph Transformations (GT). It proposes a methodology for quantitative benchmarking in order to assess the performance of GT tools. The overall aim is to uniformly measure the performance of a system under a deterministic, …

The experiment in [54] compares the performance of three model transformation engines: ATL, QVT-R, and QVT-O. The comparison is based on two transformation examples targeting metamodels with different structural representations: a linear representation (Class2RDBMS) and a tree-like representation (RSS2ATOM). The benchmarking set involves randomly generated input models of increasing size (up to hundreds of thousands of elements). Like the previous work [55], the benchmark sets are tuned according to particular features such as the size of the input models, their complexity (complex interconnection structure), and the transformation strategies. In order to study the impact of different implementation strategies in ATL, the Class2RDBMS transformation was implemented in different programming styles: the first expresses input model navigation in the right-hand side of the bindings, the second uses ATL attribute helpers, and the third uses the imperative part of ATL.
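The three ATL programming styles are only named in the text above. Purely as a language-agnostic analogy, not ATL code and not the actual Class2RDBMS rules, the following Python sketch contrasts the same toy class-to-table mapping written with the navigation inlined at each binding, with a cached helper, and imperatively; every name in it is invented for illustration.

```python
# Analogy only: three ways of deriving a table's columns from a class,
# mirroring (1) navigation inlined in the binding, (2) a cached attribute
# helper, and (3) an imperative construction. This is not ATL syntax.
from functools import lru_cache

class Clazz:                                   # toy source metamodel element
    def __init__(self, name, attrs, parent=None):
        self.name, self.attrs, self.parent = name, attrs, parent

def _supers(c):
    """Walk the inheritance chain, starting from the class itself."""
    chain = []
    while c is not None:
        chain.append(c)
        c = c.parent
    return chain

# (1) "Binding" style: the navigation expression is evaluated in place and
#     re-walks the inheritance chain every time the binding is computed.
def to_table_binding(c):
    return {"name": c.name,
            "columns": [a for k in _supers(c) for a in k.attrs]}

# (2) "Helper" style: the derived value is factored out and cached, so
#     repeated bindings over the same class reuse the computed result.
@lru_cache(maxsize=None)
def all_attrs(c):
    return tuple(a for k in _supers(c) for a in k.attrs)

def to_table_helper(c):
    return {"name": c.name, "columns": list(all_attrs(c))}

# (3) "Imperative" style: the target element is built step by step.
def to_table_imperative(c):
    table = {"name": c.name, "columns": []}
    k = c
    while k is not None:
        table["columns"].extend(k.attrs)
        k = k.parent
    return table

base = Clazz("Person", ["id", "name"])
emp = Clazz("Employee", ["salary"], parent=base)
assert to_table_binding(emp) == to_table_helper(emp) == to_table_imperative(emp)
```

The analogy hints at why the styles can behave differently under load: the inlined navigation may be re-evaluated for every binding, the helper trades memory for reuse, and the imperative variant bypasses the engine's declarative evaluation, which is the kind of variation the experiment in [54] studies.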
