Automatic Test Amplification for Executable Models


Behavioral models are important assets that must be thoroughly verified early in the design process. This can be achieved with manually written test cases that embed carefully hand-picked domain-specific input data. However, such test cases may not always reach the desired level of quality, such as high coverage or the ability to localize faults efficiently. Test amplification is a promising emerging approach for improving a test suite by automatically generating new test cases out of existing manually written ones. Yet, while ad-hoc test amplification solutions have been proposed for a few programming languages, no solution currently exists for amplifying the test cases of behavioral models. In this paper, we fill this gap with an automated and generic approach. Given an executable DSL, a conforming behavioral model, and an existing test suite, the proposed approach generates new regression test cases in three steps: (i) generating new test inputs by applying a set of generic modifiers to the existing test inputs; (ii) running the model under test with the new inputs and generating assertions from the execution traces; and (iii) selecting the new test cases that increase the mutation score. We provide tool support for the approach atop the Eclipse GEMOC Studio and show its applicability in an empirical study. In the experiment, we applied the approach to 71 test suites written for models conforming to two different DSLs; for 67 of the 71 cases, it successfully improved the mutation score by between 3.17% and 54.11%, depending on the initial setup.
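The three-step loop from the abstract can be sketched in miniature. Everything below is illustrative and not the tool's actual API: the "model" is a plain Python function, a test case is an input sequence, the generated assertion is the recorded output trace, and the "mutants" are hand-made faulty variants standing in for mutation operators.

```python
def run_model(model, inputs):
    """Step (ii): execute the model on an input sequence, recording its trace."""
    return [model(x) for x in inputs]

def killed(mutants, inputs, expected_trace):
    """Indices of mutants whose trace diverges from the expected one."""
    return {i for i, m in enumerate(mutants)
            if run_model(m, inputs) != expected_trace}

def amplify(model, mutants, seed_tests, modifiers):
    """Return new (inputs, trace) test cases that raise the mutation score."""
    already = set()  # mutants already killed by the original suite
    for inputs in seed_tests:
        already |= killed(mutants, inputs, run_model(model, inputs))
    amplified = []
    for inputs in seed_tests:
        for modify in modifiers:                   # step (i): new test inputs
            new_inputs = modify(list(inputs))
            trace = run_model(model, new_inputs)   # step (ii): oracle = trace
            newly = killed(mutants, new_inputs, trace) - already
            if newly:                              # step (iii): keep improvers
                amplified.append((new_inputs, trace))
                already |= newly
    return amplified

# Tiny demonstration: the model clamps negatives to zero; two of the three
# faulty mutants only misbehave on negative inputs, which the seed test
# ([1, 2]) never exercises. A modifier appending a negative value yields a
# new test case that kills them, so it is selected.
model = lambda x: max(x, 0)
mutants = [lambda x: x, lambda x: min(x, 0), lambda x: abs(x)]
new_tests = amplify(model, mutants,
                    seed_tests=[[1, 2]],
                    modifiers=[lambda xs: xs + [-5], lambda xs: xs[:-1]])
```

The selection criterion here (keeping only tests that kill previously surviving mutants) mirrors the mutation-score filter of step (iii); the real approach operates on executable models and their mutation operators rather than on plain functions.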

In Proceedings of the 25th ACM/IEEE International Conference on Model Driven Engineering Languages and Systems
Faezeh Khorram
PhD student in Computer Science, Marie-Curie Early Stage Researcher (Lowcomote project)

My research interests include Model-Driven Engineering (MDE), Domain-Specific Languages (DSLs), the debugging and testing of models, and software testing.