SMartyTesting: A Model-Based Testing Approach for Deriving Software Product Line Test Sequences

Kleber L. Petry 1,a, Edson OliveiraJr 1,b, Leandro T. Costa 2,c, Aline Zanin 3,d and Avelino F. Zorzo 3,e
1 State University of Maringá, Maringá, Brazil
2 Unisinos, Porto Alegre, Brazil
3 PUCRS, Porto Alegre, Brazil

Keywords: Activity Diagram, Model-Based Testing, Sequence Diagrams, SMarty, Software Product Line, UML, Variability.

Abstract: Code reuse and testing approaches to ensure and to increase productivity and quality in software development have grown considerably among process models in recent decades. Software Product Line (SPL) is a technique in which non-opportunistic reuse is the core of the development process. Given the inherent variability in products derived from an SPL, an effective way to ensure the quality of such products is to use testing techniques that take SPL variability into account at all stages. There are several approaches for SPL variability management, especially those based on the Unified Modeling Language (UML). The SMarty approach provides users with the identification and representation of variability in UML models using stereotypes and tagged values. SMarty currently offers a verification technique for its models, such as sequence diagrams, in the form of checklist-based inspections. However, SMarty does not provide a way to validate models using, for example, Model-Based Testing (MBT). Thus, this paper presents SMartyTesting, an approach to assist the generation of test sequences from SMarty sequence diagrams. To evaluate the feasibility of such an approach, we performed an empirical comparative study with an existing SPL MBT approach (SPLiT-MBt) using activity diagrams, taking into account two criteria: sequence differentiation and number of sequences generated. Results indicate that SMartyTesting is feasible for generating test sequences from SMarty sequence diagrams. Preliminary evidence relies on generating more test sequences using sequence diagrams than activity diagrams, thus potentially increasing SPL test coverage.

a https://orcid.org/0000-0001-6949-596X
b https://orcid.org/0000-0002-4760-1626
c https://orcid.org/0000-0001-6084-8896
d https://orcid.org/0000-0002-2542-573X
e https://orcid.org/0000-0002-0790-6759

1 INTRODUCTION

Code reuse and step reduction in the software development process are techniques that have been adopted by academia and industry over the years (Almeida, 2019). Furthermore, several approaches have been developed with the purpose of increasing software reusability and, consequently, return on investment (ROI), for example, Software Product Line (SPL). SPL provides a software reuse-based development process to achieve greater productivity, cost, time and risk reduction, and to provide higher product quality (Pohl et al., 2005).

An SPL provides artifacts that can be reused based on the inherent variability; thus, traditional software development processes are not suitable for the context of SPL, as they do not support variability management, especially those based on the Unified Modeling Language (UML) (Raatikainen et al., 2019).

Several variability management approaches have been proposed (Raatikainen et al., 2019), for example, Stereotype-based Management of Variability (SMarty) (OliveiraJr et al., 2010). These approaches use UML stereotypes to represent variability in UML elements of an SPL.

Even though successful SPL approaches have been proposed in the past, one of the biggest SPL challenges remains, i.e., product testing, especially testing of model-based SPLs, due to the inherent variability and the amount of potential products to be


Petry, K., OliveiraJr, E., Costa, L., Zanin, A. and Zorzo, A.
SMartyTesting: A Model-Based Testing Approach for Deriving Software Product Line Test Sequences.
DOI: 10.5220/0010373601650172
In Proceedings of the 23rd International Conference on Enterprise Information Systems (ICEIS 2021) - Volume 2, pages 165-172
ISBN: 978-989-758-509-8
Copyright © 2021 by SCITEPRESS – Science and Technology Publications, Lda. All rights reserved

ICEIS 2021 - 23rd International Conference on Enterprise Information Systems

tested (Petry et al., 2020; Machado et al., 2014; Lamancha et al., 2013).

As testing all products is unfeasible, in this paper we consider generating test sequences to be reused when testing products generated from an SPL. To address some of the above-mentioned issues, we specified a Model-Based Testing (MBT) approach, named SMartyTesting, which uses UML sequence diagrams to generate SPL test sequences. Such diagrams contain variability modeled according to the SMarty approach. We chose sequence diagrams due to their large amount of detail and the possibility of representing more variability than any other UML diagram. Therefore, we want to answer the following research question: "Is SMartyTesting feasible to derive test sequences from sequence diagrams?"

2 BACKGROUND AND RELATED WORK

2.1 Software Product Lines and Variability Management

A Software Product Line (SPL) is a set of systems that share common and manageable characteristics (Pohl et al., 2005). Pohl et al. (2005) developed a framework for SPL engineering, which aims to incorporate the core concepts of traditional product line engineering, providing reuse and mass customization through variability. Such framework is divided into two main phases: Domain Engineering, in which similarities and variability of SPLs are identified and represented; and Application Engineering, in which SPL-specific products are built by reusing domain artifacts, exploring the variability of an SPL.

Variability is the term used to differentiate products from an SPL. It is usually described by: i) variation point, which is a specific location in a generic artifact; ii) variant, which represents the possible elements to be chosen to resolve a variation point; and iii) constraints between variants, which establish relationships between one or more variants to resolve their respective variation points or variability at a given resolution time (Pohl et al., 2005). There are several approaches to manage variability, especially those based on UML (Raatikainen et al., 2019). These include the Stereotype-based Management of Variability (SMarty) (OliveiraJr et al., 2010).

The motivation for choosing SMarty among other UML-based variability management approaches is that it can be easily extended, it has a low learning curve, it supports many models, and it is able to represent variability information in UML elements by using tagged values and stereotypes; in addition, different from other approaches, it defines a stereotype to represent inclusive variants.

SMartyProfile provides the following stereotypes: <<variability>> to represent the concept of variability; <<variationPoint>> to represent a variation point; <<mandatory>> to represent variants present in every product; <<optional>> to represent variants that might be present in a product; <<alternative_OR>> to represent variants of an inclusive group; <<alternative_XOR>> to represent variants of an exclusive group; <<mutex>> to denote the concept of mutual exclusion between two variants; and <<requires>> to represent variants that need another one to be part of a product.

2.2 Model-Based Testing of SPLs

Model-Based Testing (MBT) aims to automate the generation of test artifacts, e.g. test cases and test sequences, based on system models describing the software requirements. The basic idea is to identify and build an abstract model that represents the behavior of the System Under Test (SUT). Based on this model, it is possible to generate a large number of test cases even in product modeling (Devroey, 2014).

Such test cases, which are derived from models, are known as the abstract test suite, and their level of abstraction is closely related to the level of model abstraction (Isa et al., 2017). The advantages of the MBT approach are that test generation starts early in the development cycle and that test cases can be created automatically from a template. Test cases can be represented using the Unified Modeling Language (UML), decision trees, statecharts, domain ontologies, or activity and/or state diagrams (Isa et al., 2017).

MBT can be applied to an SPL context. For example, Machado et al. (2014) point out that SPL tests should be considered in Domain and Application Engineering. Within the interest of testing, two items should be considered: the product requirements set and the quality of the variability model under test.

Based on this scenario, one of the biggest challenges in SPL testing is related to the particularities of each model. To this end, MBT seeks to create Domain Engineering models to generate test cases that can be reused in the Application Engineering phase. Machado et al. (2014), for example, focus on the early generation of SPL domain modeling tests.
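As an illustration of how these stereotypes constrain product derivation, the following Python sketch (our own illustrative code, not part of any SMarty tooling; all class, function, and variant names are hypothetical) resolves a variation point while enforcing <<requires>>- and <<mutex>>-style constraints:

```python
# Illustrative sketch (not SMarty tooling): resolving a variation point
# while enforcing <<requires>> and <<mutex>> constraints. Names are made up.
from dataclasses import dataclass, field

@dataclass
class Variant:
    name: str
    stereotype: str                      # e.g. "mandatory", "optional", "alternative_OR"
    requires: set = field(default_factory=set)
    mutex: set = field(default_factory=set)

def resolve(variants, chosen):
    """Return a product configuration, or raise if a constraint is violated."""
    # Mandatory variants are always part of the product.
    config = {v.name for v in variants if v.stereotype == "mandatory"} | set(chosen)
    for v in variants:
        if v.name in config:
            if v.requires - config:
                raise ValueError(f"{v.name} requires {v.requires - config}")
            if v.mutex & config:
                raise ValueError(f"{v.name} excludes {v.mutex & config}")
    return config

variants = [
    Variant("play_game", "mandatory"),
    Variant("bowling", "alternative_OR"),
    Variant("pong", "alternative_OR", requires={"play_game"}),
]
print(sorted(resolve(variants, {"pong"})))  # ['play_game', 'pong']
```

A resolution that picks a variant whose <<requires>> target is missing, or that combines two <<mutex>> variants, fails fast instead of producing an invalid product configuration.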


2.3 The SPLiT-MBt Method

The Software Product Line Testing Method Based on System Models (SPLiT-MBt) (Costa, 2016) is a method to support the automatic generation of functional test cases from activity diagrams modeled according to SMarty. The idea is to generate test artifacts during Domain Engineering and reuse them during Application Engineering. Figure 1 presents the steps of the SPLiT-MBt method.

The method is applied in two steps. The first one occurs during Domain Engineering, when test and variability information are extracted from UML models. SPLiT-MBt assumes that these models were previously designed by the SPL analyst using SMarty. Therefore, a test analyst uses SPLiT-MBt to add test information to two UML diagrams, i.e. Use Case and Activity Diagrams. Then, once the Use Case and Activity Diagrams are annotated with test information, the test analyst uses SPLiT-MBt to generate Finite State Machines (FSMs) from these UML diagrams. These FSMs are extended in an SPL context and are used as input to generate test sequences with variability information. These test sequences are generated by extending conventional test sequence generation methods to an SPL context, e.g., the Harmonized State Identifiers (HSI) method. The test sequences generated by applying these modified methods are stored in a test repository, and the variability present in these sequences is resolved during Application Engineering.

The second step of SPLiT-MBt takes place during Application Engineering, when the variability present in those test sequences is resolved to meet the specific requirements of each system. The generated test sequences are stored in a repository, which can be used in Application Engineering.

2.4 Related Work

Based on the MBT concepts, Petry et al. (2020) analyzed 44 primary studies, from which a small subset takes into account sequence diagrams to generate SPL test sequences, as we do with SMartyTesting (Section 3). Such studies are discussed as follows, as we did with SPLiT-MBt in Section 2.3.

Lamancha et al. (2009) describe an MBT approach, which takes into account the OMG Testing Profile for deployment of industrial software tools. Process inputs are templates described in UML 2.0, while outputs consist of artifacts according to such profile. The transformation process for generating test sequences is based on OMG Query-View-Transformation (QVT) 1.2 scripts. It takes use case diagrams and sequence diagrams as input artifacts. They work directly with the UML metamodel, performing a conversion into test sequences and test cases. However, it is not explicit whether they consider variability for testing products.

Lamancha et al. (2013) present an MBT approach for model-driven projects and SPLs. The approach uses OMG standards and defines transformations from design models to test models. Furthermore, it was implemented as a framework using modeling tools and QVT transformations. Its contributions, applied in a conversion implementation, provide improvements in the use of the QVT model, as well as specificity in generating test cases from a previously converted sequence diagram. Lamancha et al. (2010) also present improvements on the use of sequence diagrams, bringing a micro-view testing artifact into a macro-view testing process without losing the properties and details of the sequence diagram. We do not use QVT transformations in our work.

3 THE SMartyTesting APPROACH

This section presents the characterization and design of the SMartyTesting approach for generating test sequences in SPL Domain Engineering from SMarty sequence diagrams.

SMartyTesting starts at an early stage of SPL development, taking into consideration use cases, their basic and alternative flows, and sequence diagrams (Stage 1). After that, it manually converts a sequence diagram into an activity diagram (Stage 2). Finally, SMartyTesting, in its Stage 3, uses the SPLiT-MBt method infrastructure based on the Domain Test Sequence Generation step (c) from Figure 1 to automate the test sequence generation.

3.1 Modeling Sequence Diagrams from Use Cases

This is a knowledge-based process in which a UML-based SPL expert takes into account the existing Use Cases and their basic and alternative flow descriptions for a certain SPL to model a Sequence Diagram containing variabilities. Such modeling may be performed using general-purpose UML tools, such as Astah (http://astah.net) or our tool named SMartyModeling.
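The three stages above can be pictured as a pipeline. The sketch below is our own illustration, not SMartyTesting's implementation: all function names are hypothetical, Stage 2 is a manual activity in the approach, and Stage 3 delegates to the SPLiT-MBt infrastructure; toy stand-in functions make the pipeline runnable end to end.

```python
# Hedged sketch of SMartyTesting's three stages as a pipeline (illustrative only).
def smarty_testing(use_cases):
    sd = model_sequence_diagram(use_cases)       # Stage 1: expert models SD with variability
    ad = convert_to_activity_diagram(sd)         # Stage 2: manual SD -> AD conversion
    return generate_test_sequences(ad)           # Stage 3: SPLiT-MBt HSI-based generation

# Toy stand-ins so the pipeline runs end to end:
def model_sequence_diagram(ucs):
    return [("msg", uc) for uc in ucs]

def convert_to_activity_diagram(sd):
    return [("activity", name) for _, name in sd]

def generate_test_sequences(ad):
    return [[name] for _, name in ad]

print(smarty_testing(["Play Selected Game", "Save Game"]))
```

The design choice worth noting is that only Stage 1 is new modeling effort; Stages 2 and 3 reuse existing conversion rules and generation machinery.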


Figure 1: The SPLiT-MBt method (Costa, 2016).
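The Domain Test Sequence Generation step of Figure 1 rests on extracting an FSM from the annotated activity diagram. A minimal sketch of that extraction, assuming a simple edge-list representation (the function name, diagram encoding, and node labels below are our illustrative choices, not SPLiT-MBt's actual data structures):

```python
# Illustrative sketch: turning an activity diagram (control flows between
# activities) into an FSM transition table. Not SPLiT-MBt's real implementation.
def activity_to_fsm(flows):
    """flows: list of (source_activity, event, target_activity).
    Returns an FSM as {state: [(input_event, next_state), ...]}."""
    fsm = {}
    for src, event, dst in flows:
        fsm.setdefault(src, []).append((event, dst))
    return fsm

flows = [
    ("Menu", "select game", "Play"),
    ("Play", "game over", "SaveDialog"),
    ("SaveDialog", "click save", "Saved"),
]
fsm = activity_to_fsm(flows)
print(fsm["Menu"])  # [('select game', 'Play')]
```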

3.2 Converting Sequence Diagrams to Activity Diagrams

As we are taking advantage of the existing SPLiT-MBt HSI-based engine to generate test sequences, we need to convert sequence diagrams into activity diagrams. An activity diagram demonstrates the flow of control from one activity to another, as well as the concurrency of activities. Therefore, while a sequence diagram is closer to methods and source code, an activity diagram is closer to use cases in terms of abstraction.

Based on Garousi et al. (2005), converting a sequence diagram into an activity diagram poses no risk of distorting its properties, maintaining the original characteristics and the variability contained in the sequence diagram. Validation of this conversion was performed by Swain et al. (2010). Furthermore, the Garousi et al. (2005) proposal consists of a control flow analysis methodology based on UML 2.0 Sequence Diagrams (SD). This technique can be used throughout the development cycle and in other testing approaches that require model understanding and execution, among several SD-based applications.

Based on well-defined activity diagrams, the proposed Control Flow Analysis of UML 2.0 Sequence Diagrams (Garousi et al., 2005) brings an extended activity diagram metamodel (Figure 2) to support control flow analysis of sequence diagrams. Thus, one can define an Object Constraint Language (OCL, https://www.omg.org/spec/OCL) mapping for describing the rules that apply to UML models. OCL application is formally performed and verifiable with consistency rules between an SD and an extended activity diagram using the Constraint Control Flow Graph (CCFG). The CCFG has all necessary classes and associations, as well as support for concurrent control flow paths, which are a generalization of the conventional control flow path concept (Garousi et al., 2005).

Figure 2: CCFG metamodel (Extended Activity Diagrams) (Garousi et al., 2005).

The mapping consists of the use of an SD metamodel (Figure 3) and a set of rules to be used in such conversion, in which the CCFG metamodel (Figure 2) is considered as validator. To perform the activity mapping using the CCFG, a set of rules created from the Garousi et al. (2005) metamodels is used. The rules are presented in Table 1.

3.3 Automating Test Sequence Generation

SPLiT-MBt makes use of the HSI generation method. According to Costa (2016), the reason for choosing


this method is that it is one of the least restrictive methods with respect to the properties that Finite State Machines (FSMs) should have. For example, HSI is capable of interpreting full and partial FSMs (Costa, 2016). Furthermore, the HSI method allows full coverage of existing faults and generates shorter test sequences than other methods, which contributes to an optimized test process. These factors are very relevant in the context of SPL, because the more features in an SPL, the more test cases it takes to test SPL products (Engström and Runeson, 2011).

SPLiT-MBt accepts an input file in the XML format, reads the file, and validates all artifact input requirements. If the file is correct, the structure is assembled, the activity diagram is converted into an FSM at runtime, and test sequences containing variabilities are generated from the respective activity diagram. Therefore, such sequences are ready to be used for testing SPL products by resolving their variabilities. The test scripts generated by SPLiT-MBt have a tabular format. These scripts are imported by a testing tool, e.g. MTM, for test execution.

An example of how SMartyTesting is used is presented along with its evaluation in Section 4.

4 FEASIBILITY STUDY OF SMartyTesting

This study aims to characterize the SMartyTesting approach, with the purpose of identifying its feasibility with respect to its capacity to generate test sequences from sequence diagrams, from the perspective of SPL researchers, in the context of lecturers and graduate students of Software Engineering.

Based on the above-mentioned goal, we defined the following research questions: RQ.1 - Can SMartyTesting generate more test sequences from sequence diagrams than SPLiT-MBt using activity diagrams?; and RQ.2 - Is there any difference among generated test sequences from sequence diagrams compared to activity diagrams?

To achieve the objective of this study, the following criteria were defined: CT.1: Number of generated test sequences - the total number of test sequences generated by each approach (SMartyTesting and SPLiT-MBt); and CT.2: Differentiation between generated sequences - the input artifacts of each approach differ from each other because of the particularities of the initial model; thus, one can obtain different sequence paths, demonstrating that different paths have been taken.

Figure 3: UML sequence diagram metamodel (Garousi et al., 2005).

Table 1: Rules used for sequence diagram to activity diagram conversion (Garousi et al., 2005).

Ord. | Sequence Diagram Element | CCFG Activity Diagram Element
1 | Interaction | Activity
2 | First message end | Flow between InitialNode and first control node
3 | SynchCall/SynchSignal | CallNode
4 | AsynchCall or AsynchSignal | (CallNode+ForkNode) or ReplyNode
5 | Message SendEvent and ReceiveEvent | ControlFlow
6 | Lifeline | ObjectPartition
7 | par CombinedFragment | ForkNode
8 | loop CombinedFragment | DecisionNode
9 | alt/opt CombinedFragment | DecisionNode
10 | break CombinedFragment | ActivityEdge
11 | Last message ends | Flow between end control nodes and FinalNode
12 | InteractionOccurrence | Control flow across CCFGs
13 | Polymorphic message | DecisionNode
14 | Nested InteractionFragment | Nested CCFGs

For the generation of the test sequences, we selected two diagrams. Table 2 contains the characteristics and variability of each diagram.

Table 2: Diagrams used in the process of generating test sequences.

Models | Feature | Variability
Fig. 4 (AD), Fig. 7 (SD) | Play Selected Game: the representation of the game menu, through which the game to be played is selected. | variation point; variability; alternative OR
Fig. 5 (AD), Fig. 6 (SD) | Save Game: the action of saving the game. | mandatory

For test sequence generation with SPLiT-MBt, Table 3 lists the generated test sequences of an activity diagram for the AGM SPL Play Selected Game (Costa, 2016) (Figure 4).

Table 3: SPLiT-MBt test sequences generated from Figure 4.

Test Sequence | Step | Action/Description | Expected result
Test Case 1 | 1 | - Select Play from menu; | Initialization: creates the default instances of the required classes. Initialize the game.
Test Case 1 | 2 | - Left-click Button to begin play; | Start the game action and the animation begins.
Test Case 1 | 3 | VP {. - {; - b{alternative OR}; - c{alternative OR}; - a{alternative OR}}; }. | Initialize the game: the paddles and disc begin to move; the ball begins to move; move racket horizontally to follow mouse track.
Test Case 1 | 4 | {. - {; - Responds to Won/Lost/Tied dialog; - Responds to Won/Lost/Tied dialog; - Responds to Won/Lost/Tied dialog }; }. | Responds to Won/Lost/Tied dialog: return to the initial state of the tray; playback dialog is displayed again.
Test Case 1 | 5 | - Respond "yes" in the dialog to play again; | Initialization: returns the game board to its initialized state, ready to play.
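To illustrate the general idea of deriving test sequences from an FSM, the sketch below enumerates loop-free paths through a made-up game-menu FSM. Note that SPLiT-MBt itself uses the HSI method; plain path enumeration is only a simplified stand-in, and all state and event names are hypothetical.

```python
# Simplified stand-in for test-sequence generation: enumerate loop-free
# paths through an FSM. SPLiT-MBt uses HSI; this is illustration only.
def sequences(fsm, state, path=()):
    outgoing = fsm.get(state, [])
    if not outgoing:                            # final state reached: emit one sequence
        yield [event for event, _ in path]
    for event, nxt in outgoing:
        if nxt not in [s for _, s in path]:     # bound loops by skipping revisited states
            yield from sequences(fsm, nxt, path + ((event, nxt),))

fsm = {
    "menu": [("select bowling", "game"), ("select pong", "game")],
    "game": [("finish", "save")],
    "save": [],
}
for seq in sequences(fsm, "menu"):
    print(seq)  # ['select bowling', 'finish'] then ['select pong', 'finish']
```

Each emitted path corresponds to one abstract test sequence; variability would appear as alternative branches out of a variation-point state, as in the "select bowling"/"select pong" pair above.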


Figure 4: Activity diagram of Play Selected Game (Costa, 2016).

Table 4 lists the generated test sequences of Figure 5, which is an activity diagram for the AGM SPL Save Game.

Figure 6: Sequence diagram for Save Game (Marcolino et al., 2017).

Figure 5: Activity diagram of Save Game (Costa, 2016).

Table 5: SMartyTesting test sequences generated from Figure 8.

Test Sequence | Step | Action/Description | Expected result
Test Case 1 | 1 | 1:loadGame(-) - Game Player start method loadGame{Mandatory}; | loadGame is loaded.
Test Case 1 | 2 | 2:getNumRecords() - Game menu after loading use method getNumRecords{Mandatory}; | Makes access data from recordStore.
Test Case 1 | 3 | 3:return - recordStore send return messages; | Score data is returned by getNumRecords to GameMenu.
Test Case 1 | 4 | VP 3:return {. - {; - Bw option is selected{alternative OR}; - Bigm option is selected{alternative OR}; - Pgm option is selected{alternative OR}}; }. | Instance feature of option bowling; instance feature of option bigm brickles; instance feature of option pong.
Test Case 1 | 5 | 5-7-9:return {. - {; - Return of option bw; - Return of option bigm; - Return of option pgm }; }. | Returns after bw instanceof GameMenu is executed; returns after bigm instanceof GameMenu is executed; returns after pgm instanceof GameMenu is executed.
Test Case 1 | 6 | 10:return - Returning information to the Game Player; | Player gets return from chosen action.

Table 4: SPLiT-MBt test sequences generated from Figure 5.

Test Sequence | Step | Action/Description | Expected result
Test Case 1 | 1 | - Finish the game; | Save your game: save GAME window is shown.
Test Case 1 | 2 | - Click SAVE GAME button; | Save failed: SAVE failed message is shown.
Test Case 1 | 3 | - Click close SAVE THE GAME; | Close window: the SAVE GAME window is closed.
Test Case 2 | 1 | - Finish the game; | Save your game: save GAME window is shown.
Test Case 2 | 2 | - Click SAVE GAME button; | Save successful: SAVE GAME message is shown.
Test Case 2 | 3 | - Click close SAVE GAME button; | Close window: the SAVE GAME window is closed.

For test sequence generation with SMartyTesting, we used the sequence diagrams created by Marcolino et al. (2017), which are equivalent to those created by Costa (2016). This equivalence is due to the used level of abstraction. An example is in Figure 5, which represents Save Game, in which two conditions are observed: save successful or save failed. In this paper, we also represent the save success condition in Figure 6.

Figure 7 depicts a sequence diagram and Figure 8 the converted activity diagram of the AGM Play Selected Game (Marcolino et al., 2017). Thus, Table 5 displays the generated test sequences for the activity diagram of Figure 8.

Figure 6 depicts the sequence diagram for the AGM SPL Save Game (Marcolino et al., 2017) and Figure 9 its converted activity diagram. Table 6 displays the generated test sequences from Figure 9.

The number of test sequences generated (CT.1) for each diagram using the original SPLiT-MBt compared to SMartyTesting is: Play Selected Game, six sequences for SMartyTesting and five for SPLiT-MBt; and Save Game, 12 for SMartyTesting and six for SPLiT-MBt.

Based on these data, we can observe that when using sequence diagrams (SMartyTesting) the number of test


Figure 7: Sequence diagram for Play Selected Game (Marcolino et al., 2017).
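The conversion of a sequence diagram like Figure 7 into the activity diagram of Figure 8 follows the Table 1 rules, which can be read as a lookup from sequence diagram element kinds to CCFG node kinds. A toy illustration with a subset of the rules (the function name and element tuples are our own, not Garousi et al.'s notation):

```python
# Subset of the Garousi et al. (2005) Table 1 rules as a lookup table.
RULES = {
    "Interaction": "Activity",
    "SynchCall": "CallNode",
    "Lifeline": "ObjectPartition",
    "par": "ForkNode",
    "loop": "DecisionNode",
    "alt": "DecisionNode",
    "opt": "DecisionNode",
    "break": "ActivityEdge",
}

def map_elements(sd_elements):
    """Map (name, kind) sequence diagram elements to CCFG node kinds."""
    return [(name, RULES[kind]) for name, kind in sd_elements]

sd = [("SaveGame", "Interaction"), ("saveGame()", "SynchCall"), ("GameMenu", "Lifeline")]
print(map_elements(sd))
```

A real implementation would also carry over message ordering and the variability stereotypes attached to each element, which a flat lookup like this ignores.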

Table 6: SMartyTesting test sequences generated from Figure 9.

Test Sequence | Step | Action/Description | Expected result
Test Case 1 | 1 | 1:saveGame(-) - GamePlayer start method saveGame{mandatory}; | GameMenu is loaded.
Test Case 1 | 2 | 2:getGameOver(-) - GameMenu start method getGameOver{mandatory}; | Board check action.
Test Case 1 | 3 | 3:return - Board returns request; | Value returns to selected game.
Test Case 1 | 4 | VP 3:return {. - {; - bw is selected{alternative OR}; - bigm is selected{alternative OR}; - pgm is selected{alternative OR}}; }. | Start method instanceof GameMenu (for each selected option).
Test Case 1 | 5 | 5-9-13:openRecordStore() {. - {; - bw send data to method openRecordStore; - bigm send data to method openRecordStore; - pgm send data to method openRecordStore}; }. | Data are used by openRecordStore.
Test Case 1 | 6 | 6-10-14:return - openRecordStore returns to bw - bigm - pgm; | Confirms that data are available.
Test Case 1 | 7 | 7-11-15:return - Return data to GameMenu; | Available data is returned to be added.
Test Case 1 | 8 | 16:addRecord() - triggered method addRecord{mandatory}; | Data is saved.
Test Case 1 | 9 | 17:return - returns action of addRecord; | Confirms data persistence.
Test Case 1 | 10 | 18:setMessage(message="Game Saved!") - triggers confirmation message{mandatory}; | Confirmation message displayed.
Test Case 1 | 11 | 19:closeRecord() - start method closeRecord{mandatory}; | Method terminates operation.
Test Case 1 | 12 | 20:return - return operation confirmation; | Successfully completed return; confirmation to user.

Figure 8: Activity diagram for Play Selected Game from Figure 7.

sequences tends to be considerably higher than when using activity diagrams directly (SPLiT-MBt). We understand, therefore, that this can be determined by the level of abstraction of the diagram: the more abstract, the fewer test sequences. If we consider the level of abstraction of each diagram, we believe that SMartyTesting has the potential to generate more test sequences than SPLiT-MBt. Therefore, there is preliminary evidence that SMartyTesting is able to generate a larger number of test sequences using sequence diagrams than SPLiT-MBt using activity diagrams directly, answering RQ.1.

For the difference between generated sequences (CT.2), we looked at the input diagrams of both approaches, whose similarities are established by equivalence, and identified that there is a significant difference among the generated test sequences, answering RQ.2. We also believe this is due to the abstraction level of each diagram, as we expected. Besides, there is a different applicability for each type of diagram, and because sequence diagrams are more detailed, it is expected that they generate different test sequences compared to a higher-level diagram. However, in certain cases the test sequences are almost equivalent. Therefore, we believe that this depends on


the level of detail at which an SPL engineer models sequence diagrams.

Figure 9: Activity diagram for Save Game from Figure 6.

5 CONCLUSION

We compared the feasibility of SMartyTesting to SPLiT-MBt according to two criteria: number of generated test sequences and difference of sequences using both approaches.

Results point out that SMartyTesting is capable of generating more test sequences based on the two used diagrams. We understand that the larger the number of test sequences, the greater the test coverage, due to the lower abstraction level of sequence diagrams compared to activity diagrams. Test sequences generated by both approaches are overall similar. We believe this depends on the level of detail expressed by the SPL engineer when modeling sequence diagrams.

We plan as future work the full automation of SMartyTesting by implementing a module to convert sequence diagrams to finite state machines with no need of the activity diagram as an intermediate artifact.

REFERENCES

Almeida, E. S. (2019). Software Reuse and Product Line Engineering. In Cha, S., Taylor, R. N., and Kang, K., editors, Handbook of Software Engineering, pages 321-348. Springer International Publishing, Cham.
Costa, L. T. (2016). SPLiT-MBt: A model-based testing method for software product lines. PhD thesis, Pontifícia Universidade Católica do Rio Grande do Sul.
Devroey, X. (2014). Behavioural model based testing of software product lines. In SPLC. ACM.
Engström, E. and Runeson, P. (2011). Software product line testing - a systematic mapping study. Information and Software Technology, 53(1):2-13.
Garousi, V., Briand, L. C., and Labiche, Y. (2005). Control flow analysis of UML 2.0 sequence diagrams. In ECMDA-FA, pages 160-174. Springer.
Isa, M. A. B., Razak, S. B. A., Jawawi, D. N. B. A., and Fuh, O. L. (2017). Model-based testing for software product line: A systematic literature review. Int. Jour. Soft. Eng. and Tech., 2(2).
Lamancha, B. P., Mateo, P. R., de Guzmán, I. R., Usaola, M. P., and Velthius, M. P. (2009). Automated model-based testing using the UML testing profile and QVT. In MODEVVA, page 6. ACM.
Lamancha, B. P., Polo, M., and Piattini, M. (2013). Systematic review on software product line testing. In CSDT, pages 58-71. Springer.
Lamancha, B. P., Usaola, M. P., and Velthius, M. P. (2010). A model based testing approach for model-driven development and software product lines. In ENASE, pages 193-208. Springer.
Machado, I., Mcgregor, J. D., Cavalcanti, Y. C., and Almeida, E. S. (2014). On strategies for testing software product lines: A systematic literature review. Inf. Sof. Tech., 56(10):1183-1199.
Marcolino, A. S., OliveiraJr, E., Gimenes, I. M., and Barbosa, E. F. (2017). Variability resolution and product configuration with SMarty: An experimental study on UML class diagrams. Journal of Computer Science, 13(8):307-319.
OliveiraJr, E., Gimenes, I. M. S., and Maldonado, J. C. (2010). Systematic management of variability in UML-based software product lines. Jour. Univ. Comp. Sci., 16(17):2374-2393.
Petry, K. L., OliveiraJr, E., and Zorzo, A. F. (2020). Model-based testing of software product lines: Mapping study and research roadmap. Journal of Systems and Software, 167:110608.
Pohl, K., Böckle, G., and van Der Linden, F. J. (2005). Software product line engineering: foundations, principles and techniques. Springer.
Raatikainen, M., Tiihonen, J., and Männistö, T. (2019). Software product lines and variability modeling: A tertiary study. Journal of Systems and Software, 149:485-510.
Swain, S. K., Mohapatra, D. P., and Mall, R. (2010). Test case generation based on use case and sequence diagram. International Journal of Software Engineering, 3(2):21-52.
