Results 1–10 of 10
Verification and Validation of Declarative Model-to-Model Transformations Through Invariants
, 2009
Cited by 44 (6 self)
In this paper we propose a method to derive OCL invariants from declarative model-to-model transformations in order to enable their verification and analysis. For this purpose we have defined a number of invariant-based verification properties which provide increasing degrees of confidence about transformation correctness, such as whether a rule (or the whole transformation) is satisfiable by some model, executable or total. We also provide some heuristics for generating meaningful scenarios that can be used to semi-automatically validate the transformations. As a proof of concept, the method is instantiated for two prominent …
Automatic Model Generation Strategies for Model Transformation Testing
 in Theory and Practice of Model Transformations
, 2009
Cited by 21 (8 self)
Abstract. Testing model transformations requires input models which are graphs of interconnected objects that must conform to a metamodel and meta-constraints from heterogeneous sources such as well-formedness rules, transformation preconditions, and test strategies. Manually specifying such models is tedious since models must simultaneously conform to several meta-constraints. We propose automatic model generation via constraint satisfaction using our tool Cartier for model transformation testing. Due to the virtually infinite number of models in the input domain we compare strategies based on input domain partitioning to guide model generation. We qualify the effectiveness of these strategies by performing mutation analysis on the transformation using generated sets of models. The test sets obtained using partitioning strategies give mutation scores of up to 87% vs. 72% in the case of unguided/random generation. These scores are based on analysis of 360 automatically generated test models for the representative transformation of UML class diagram models to RDBMS models.
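The mutation-analysis measurement reported in the abstract above reduces to a simple ratio: the fraction of seeded faulty transformation variants ("mutants") killed by at least one generated test model. A minimal sketch of that computation, with an invented kill matrix rather than data from the Cartier study:

```python
# Hypothetical sketch of a mutation-score computation. A mutant is "killed"
# if at least one test model exposes its fault; the score is the fraction of
# killed mutants. The matrix below is illustrative only.

def mutation_score(kill_matrix):
    """kill_matrix[m][t] is True if test model t kills mutant m."""
    killed = sum(1 for row in kill_matrix if any(row))
    return killed / len(kill_matrix)

# Toy example: 4 mutants, 3 test models; one mutant survives every test.
partitioned = [
    [True, False, False],
    [False, True, False],
    [False, False, True],
    [False, False, False],  # surviving mutant
]
print(mutation_score(partitioned))  # 0.75
```

A score of 0.87 vs. 0.72, as in the abstract, means the partitioning-guided test sets killed noticeably more seeded faults than random generation did.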
Static analysis of model transformations for effective test generation
 In: ISSRE, IEEE
, 2012
Cited by 5 (1 self)
Abstract—Model transformations are an integral part of several computing systems that manipulate interconnected graphs of objects called models in an input domain specified by a metamodel and a set of invariants. Test models are used to look for faults in a transformation. A test model contains a specific set of objects, their interconnections and values for their attributes. Can we automatically generate an effective set of test models using knowledge from the transformation? We present a white-box testing approach that uses static analysis to guide the automatic generation of test inputs for transformations. Our static analysis uncovers knowledge about how the input model elements are accessed by transformation operations. This information is called the input metamodel footprint due to the transformation. We transform footprint, input metamodel, its invariants, and transformation preconditions to a constraint satisfaction problem in Alloy. We solve the problem to generate sets of test models containing traces of the footprint. Are these test models effective? With the help of a case study transformation we evaluate the effectiveness of these test inputs. We use mutation analysis to show that the test models generated from footprints are more effective (97.62% avg. mutation score) in detecting faults than previously developed approaches based on input domain coverage criteria (89.9% avg.) and unguided generation (70.1% avg.).
Using models of partial knowledge to test model transformations
 In ICMT
, 2012
Cited by 4 (2 self)
Abstract. Testers often use partial knowledge to build test models. This knowledge comes from sources such as requirements, known faults, existing inputs, and execution traces. In Model-Driven Engineering, test inputs are models executed by model transformations. Modelers build them using partial knowledge while meticulously satisfying several well-formedness rules imposed by the modelling language. This manual process is tedious and language constraints can force users to create complex models even for representing simple knowledge. In this paper, we want to simplify the development of test models by presenting an integrated methodology and semi-automated tool that allow users to build only small partial test models directly representing their testing intent. We argue that partial models are more readable and maintainable and can be automatically completed to full input models while considering language constraints. We validate this approach by evaluating the size and fault-detecting effectiveness of partial models compared to traditionally built test models. We show that they can detect the same bugs/faults with a greatly reduced development effort.
An Invariant-Based Method for the Analysis of Declarative Model-to-Model Transformations
Cited by 4 (1 self)
Abstract. In this paper we propose a method to derive OCL invariants from declarative specifications of model-to-model transformations. In particular we consider two of the most prominent approaches for specifying such transformations: Triple Graph Grammars and QVT. Once the specification is expressed in the form of invariants, the transformation developer can use such description to verify properties of the original transformation (e.g. whether it defines a total, surjective or injective function), and to validate the transformation by the automatic generation of valid pairs of source and target models.
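The function properties named in the abstract above (total, surjective, injective) can be illustrated by brute-force checks over small finite sets of models. This is only a sketch of what the properties mean on an explicit source-target relation; the paper itself works symbolically on OCL invariants, and the model names here are invented:

```python
# Illustrative checks of the transformation properties mentioned above,
# viewing a transformation as a relation between source and target models.

def is_total(relation, sources):
    """Every source model has at least one target."""
    return all(any(s == a for a, _ in relation) for s in sources)

def is_surjective(relation, targets):
    """Every target model is produced from some source."""
    return all(any(t == b for _, b in relation) for t in targets)

def is_injective(relation):
    """No two distinct sources map to the same target."""
    source_of = {}
    for a, b in relation:
        if b in source_of and source_of[b] != a:
            return False
        source_of[b] = a
    return True

# Toy transformation relating class models to table models.
rel = [("ClassA", "TableA"), ("ClassB", "TableB")]
print(is_total(rel, ["ClassA", "ClassB"]))       # True
print(is_surjective(rel, ["TableA", "TableB"]))  # True
print(is_injective(rel))                         # True
```

Encoded as OCL invariants over a finite search bound, the same questions become satisfiability problems that a model finder can answer automatically, which is what enables the paper's verification and scenario generation.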
Testing Model Transformations: A Case for Test Generation from Input Domain Models
Cited by 1 (0 self)
ABSTRACT: Model transformations can automate critical tasks in model-driven development. Thorough validation techniques are required to ensure their correctness. In this lecture we focus on testing model transformations. In particular, we present an approach for systematic selection of input test data. This approach is based on a key characteristic of model transformations: their input domain is formally captured in a metamodel. A major challenge for test generation is that metamodels usually model an infinite set of possible input models for the transformation. We start with a general motivation of the need for specific test selection techniques in the presence of very large and possibly infinite input domains. We also present two existing black-box strategies to systematically select test data: category-partition and combinatorial interaction testing. Then, we detail specific criteria based on metamodel coverage to select data for model transformation testing. We introduce object and model fragments to capture specific structural constraints that should be satisfied by input test data. These fragments are the basis for the definition of coverage criteria and for automatic generation of test data. They also serve to drive the automatic generation of models for testing.
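The category-partition strategy mentioned in the abstract above splits each input characteristic into a small set of value partitions and combines them into test frames. A minimal sketch with invented categories, not the lecture's actual coverage criteria:

```python
# Hypothetical category-partition sketch: each metamodel characteristic is
# split into partitions, and test frames are drawn from the cross product.
# The categories and values below are illustrative only.
from itertools import product

categories = {
    "num_classes": ["0", "1", "many"],
    "has_inheritance": ["yes", "no"],
    "attribute_types": ["none", "primitive", "class-typed"],
}

# Full cross product of partition choices (category-partition test frames).
frames = [dict(zip(categories, combo)) for combo in product(*categories.values())]
print(len(frames))  # 3 * 2 * 3 = 18
```

In practice the cross product is pruned by constraints between categories, and each surviving frame is turned into a concrete input model that satisfies the metamodel's well-formedness rules.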
Automatic Model Generation Strategies for Model Transformation Testing (author manuscript, published in ICMT 2009)
, 2010
Abstract. Testing model transformations requires input models which are graphs of interconnected objects that must conform to a metamodel and meta-constraints from heterogeneous sources such as well-formedness rules, transformation preconditions, and test strategies. Manually specifying such models is tedious since models must simultaneously conform to several meta-constraints. We propose automatic model generation via constraint satisfaction using our tool Cartier for model transformation testing. Due to the virtually infinite number of models in the input domain we compare strategies based on input domain partitioning to guide model generation. We qualify the effectiveness of these strategies by performing mutation analysis on the transformation using generated sets of models. The test sets obtained using partitioning strategies give mutation scores of up to 87% vs. 72% in the case of unguided/random generation. These scores are based on analysis of 360 automatically generated test models for the representative transformation of UML class diagram models to RDBMS models.
Managing Variability Complexity in Aspect-Oriented Modelling
Abstract. Aspect-Oriented Modeling (AOM) approaches propose to model reusable aspects that can be composed into different systems at the model level. To improve reusability, several contributions have pointed out the need for variability in AOM approaches. Nevertheless, supporting variability makes aspect design more complex, and introducing several dimensions of variability (advice, pointcut and weaving) creates a combinatorial explosion of variants and a risk of inconsistency in the aspect model. As the integration of an aspect model may be complex, it is essential that the AOM framework ensures the consistency of the resulting model. This paper presents an approach describing how to ensure that an aspect model with variability can be safely integrated into an existing model. The verifications include static checking of aspect model consistency and dynamic checking through testing, with a focus on the parts of the model that are impacted by the aspect.
Testing Model Transformations: A Case for Test Generation from Input Domain Models (author manuscript, published in Model Driven Engineering for Distributed Real-time Embedded Systems, ISTE (Ed.), 2009)
, 2010
ABSTRACT: Model transformations can automate critical tasks in model-driven development. Thorough validation techniques are required to ensure their correctness. In this lecture we focus on testing model transformations. In particular, we present an approach for systematic selection of input test data. This approach is based on a key characteristic of model transformations: their input domain is formally captured in a metamodel. A major challenge for test generation is that metamodels usually model an infinite set of possible input models for the transformation. We start with a general motivation of the need for specific test selection techniques in the presence of very large and possibly infinite input domains. We also present two existing black-box strategies to systematically select test data: category-partition and combinatorial interaction testing. Then, we detail specific criteria based on metamodel coverage to select data for model transformation testing. We introduce object and model fragments to capture specific structural constraints that should be satisfied by input test data. These fragments are the basis for the definition of coverage criteria and for automatic generation of test data. They also serve to drive the automatic generation of models for testing.