Results 1 - 5 of 5
A Unified Test Case Prioritization Approach
Abstract - Cited by 4 (1 self)
Test case prioritization techniques attempt to re-order test cases in a manner that increases the rate at which faults are detected during regression testing. Coverage-based test case prioritization techniques typically use one of two overall strategies, a total strategy or an additional strategy. These strategies prioritize test cases based on the total number of code (or code-related) elements covered per test case and the number of additional (not-yet-covered) code (or code-related) elements covered per test case, respectively. In this article, we present a unified test case prioritization approach that encompasses both the total and additional strategies. Our unified test case prioritization approach includes two models (“basic” and “extended”) by which a spectrum of test case prioritization techniques ranging from a purely total to a purely additional technique can be defined by specifying the value of a parameter referred to as the fp value. To evaluate our approach, we performed an empirical study on 28 Java objects and 40 C objects, considering the impact of three internal factors (model type, choice of fp value, and coverage type) and three external factors (coverage granularity, test case granularity, and programming/testing paradigm), all of which can be manipulated by our approach. Our results demonstrate that a wide range of techniques derived from our basic and extended models with uniform fp values can outperform purely total techniques, and are competitive with purely additional techniques. Considering the influence of each internal and external factor studied, the ...
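The abstract does not give the exact scoring formula, but a minimal sketch of the idea is a greedy prioritizer whose per-test score blends the total and additional element counts. The linear interpolation below (fp = 0.0 recovering the purely total strategy, fp = 1.0 the purely additional one) is an assumption for illustration, not the paper's actual model:

```python
# Hypothetical sketch of a unified total/additional prioritization.
# Assumption (not stated in the abstract): the score interpolates linearly,
# fp = 0.0 -> purely "total" strategy, fp = 1.0 -> purely "additional".

def prioritize(coverage, fp):
    """Greedily order test names by a blended total/additional score.

    coverage: dict mapping test name -> set of covered code elements
    fp: float in [0, 1] blending total (0.0) and additional (1.0) scoring
    """
    remaining = dict(coverage)
    covered = set()          # elements covered by tests chosen so far
    order = []
    while remaining:
        def score(test):
            total = len(remaining[test])                 # all elements the test covers
            additional = len(remaining[test] - covered)  # not-yet-covered elements
            return (1.0 - fp) * total + fp * additional
        best = max(sorted(remaining), key=score)  # sorted() makes ties deterministic
        order.append(best)
        covered |= remaining.pop(best)
    return order
```

With fp = 1.0 the score of a test drops once its elements are covered by earlier picks, reproducing the classic additional strategy's re-ranking behavior.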
An Empirical Study on the Scalability of Selective Mutation Testing
Abstract - Cited by 1 (0 self)
Abstract—Software testing plays an important role in ensuring software quality by running a program with test suites. Mutation testing is designed to evaluate whether a test suite is adequate in detecting faults. Due to the expensive cost of mutation testing, selective mutation testing was proposed to select a subset of mutants whose effectiveness is similar to that of the whole set of generated mutants. Although selective mutation testing has been widely investigated in recent years, many people still doubt whether it works well for large programs. To study the scalability of selective mutation testing, we systematically explore how program size impacts selective mutation testing through four projects (including 12 versions altogether). Based on the empirical study, for programs smaller than 16 KLOC, selective mutation testing has surprisingly good scalability. In particular, for a program whose number of lines of executable code is E, the number of mutants used in selective mutation testing is proportional to E^c, where c is a constant whose value is between 0.05 and 0.25.
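The E^c law implies the selected-mutant count grows very slowly with program size. A quick arithmetic check makes this concrete; the constant factor k below is invented, and only the exponent range comes from the abstract:

```python
# Back-of-the-envelope check of the reported scaling law: the number of
# selected mutants grows roughly as E**c with c in [0.05, 0.25], where E is
# the number of executable lines. The scale factor k is a made-up example.

def selected_mutants(executable_lines, c, k=100.0):
    """Estimated mutant count under the E**c law (k is a hypothetical scale)."""
    return k * executable_lines ** c

# A 16x larger program needs only 2x the mutants even at the upper bound c = 0.25:
ratio = selected_mutants(16_000, 0.25) / selected_mutants(1_000, 0.25)  # 16**0.25 == 2.0
```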
Empirically Detecting False Test Alarms Using Association Rules
Abstract
Abstract—Applying code changes to software systems and testing these code changes can be a complex task that involves many different types of software testing strategies, e.g. system and integration tests. However, not all test failures reported during code integration hint at code defects. Testing large systems such as the Microsoft Windows operating system requires complex test infrastructures, which may lead to test failures caused by faulty tests and test infrastructure issues. Such false test alarms are particularly annoying, as they draw engineers' attention and require manual inspection without providing any benefit. The goal of this work is to use empirical data to minimize the number of false test alarms reported during system and integration testing. To achieve this goal, we use association rule learning to identify patterns among failing test steps that are typical of false test alarms and can be used to classify them automatically. A successful classification of false test alarms is particularly valuable for product teams, as manual test failure inspection is an expensive and time-consuming process that not only costs engineering time and money but also slows down product development. We evaluated our approach on system and integration tests executed during Windows 8.1 and Microsoft Dynamics AX development. Performing more than 10,000 classifications for each product, our model shows a mean precision between 0.85 and 0.90, predicting between 34% and 48% of all false test alarms. Keywords—software testing; association rules; false test alarms; classification model; test improvement
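The core idea, mining rules of the form {failing test steps} -> false alarm and flagging new failures that match a high-confidence rule, can be sketched in a few lines. The step names, thresholds, and single-pass miner below are invented simplifications, not the paper's actual pipeline:

```python
# Hypothetical sketch of association-rule classification of false test alarms.
# Antecedents are small subsets of failing test steps; a rule is kept when it
# has enough support and high confidence of indicating a false alarm.
from itertools import combinations

def mine_rules(failures, min_support=2, min_confidence=0.8):
    """failures: list of (set of failing step names, is_false_alarm: bool).
    Returns step subsets (size 1-2) that reliably indicate a false alarm."""
    support, false_hits = {}, {}
    for steps, is_false in failures:
        for r in (1, 2):
            for combo in combinations(sorted(steps), r):
                support[combo] = support.get(combo, 0) + 1
                if is_false:
                    false_hits[combo] = false_hits.get(combo, 0) + 1
    return {c for c, n in support.items()
            if n >= min_support and false_hits.get(c, 0) / n >= min_confidence}

def classify(steps, rules):
    """Predict 'false alarm' if any mined rule's antecedent is present."""
    return any(set(c) <= set(steps) for c in rules)

# Invented history: failures involving both infrastructure steps were false alarms.
history = [
    ({"setup_vm", "deploy"}, True),
    ({"setup_vm", "deploy"}, True),
    ({"unit_core"}, False),
    ({"unit_core", "deploy"}, False),
]
rules = mine_rules(history)
```

Here "deploy" alone is rejected (confidence 2/3 < 0.8), while "setup_vm" alone passes, which is exactly the filtering that keeps such a classifier precise.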
Convolutional Neural Networks over Tree Structures for Programming Language Processing
Abstract
Programming language processing (similar to natural language processing) is a hot research topic in the field of software engineering; it has also aroused growing interest in the artificial intelligence community. However, different from a natural language sentence, a program contains rich, explicit, and complicated structural information. Hence, traditional NLP models may be inappropriate for programs. In this paper, we propose a novel tree-based convolutional neural network (TBCNN) for programming language processing, in which a convolution kernel is designed over programs' abstract syntax trees to capture structural information. TBCNN is a generic architecture for programming language processing; our experiments show its effectiveness in two different program analysis tasks: classifying programs according to functionality, and detecting code snippets of certain patterns. TBCNN outperforms baseline methods, including several neural models for NLP.
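A heavily simplified sketch of one tree-based convolution step: each AST node's feature vector is mixed with its children's through small weight matrices, the tree analogue of an NLP convolution window. The weight shapes, the (features, children) tuple encoding, and the recursive pooling below are invented for illustration and are much simpler than TBCNN's actual position-dependent kernels:

```python
# Hypothetical sketch of a convolution kernel over an AST (not the paper's
# exact TBCNN formulation, which uses position-weighted child matrices).
import numpy as np

def tree_conv(node, W_self, W_child, b):
    """node: (feature_vector, list of child nodes); returns one conv feature."""
    feats, children = node
    out = W_self @ feats + b                      # transform the node itself
    for child in children:                        # fold in each child subtree
        out = out + W_child @ tree_conv(child, W_self, W_child, b)
    return np.tanh(out)                           # nonlinearity

# Tiny example: a 2-level "AST" with 3-dim node features, 4-dim conv output.
rng = np.random.default_rng(0)
W_self = rng.standard_normal((4, 3))
W_child = rng.standard_normal((4, 4))
b = rng.standard_normal(4)
tree = (np.ones(3), [(np.zeros(3), []), (np.ones(3), [])])
feature = tree_conv(tree, W_self, W_child, b)
```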
Predicting Consistency-Maintenance Requirement of Code Clones at Copy-and-Paste Time
Abstract
Abstract—Code clones have always been a double-edged sword in software development. On one hand, cloning is a very convenient way to reuse existing code and to save coding effort. On the other hand, since developers may need to ensure consistency among cloned code segments, code clones can lead to extra maintenance effort and even bugs. Recent studies on the evolution of code clones show that only some code clones experience consistent changes during their evolution history. Therefore, if we can accurately predict whether a code clone will experience consistent changes, we will be able to provide useful recommendations to developers on leveraging the convenience of some code cloning operations, while avoiding other code cloning operations to reduce future consistency maintenance effort. In this paper, we define a code cloning operation as consistency-maintenance-required if its generated code clones experience consistent changes in the software evolution history, and we propose a novel approach that automatically predicts whether a code cloning operation requires consistency maintenance at the time point of performing copy-and-paste operations. Our insight is that whether a code cloning operation requires consistency maintenance may relate to the characteristics of the code to be cloned and the characteristics of its context. Based on a number of attributes extracted from the cloned code and the context of the code cloning operation, we use Bayesian Networks, a machine-learning technique, to predict whether an intended code cloning operation requires consistency maintenance. We evaluated our approach on four subjects — two large-scale Microsoft software projects, and two popular open-source software projects — under two usage scenarios: 1) recommend developers to perform only the cloning operations ...
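The paper trains Bayesian Networks over attributes of the cloned code and its context; as a simplification, the sketch below uses a plain naive Bayes classifier (a special case with no inter-attribute dependencies) over boolean attributes. The attribute names and training data are invented for illustration:

```python
# Hypothetical sketch: predict at copy-and-paste time whether a cloning
# operation will require consistency maintenance. Naive Bayes stands in for
# the paper's Bayesian Networks; attributes and data are made up.
import math

def train_naive_bayes(samples):
    """samples: list of (attrs: dict name -> bool, needs_maintenance: bool)."""
    by_class = {True: [], False: []}
    for attrs, label in samples:
        by_class[label].append(attrs)
    names = sorted({a for attrs, _ in samples for a in attrs})
    model = {}
    for label, rows in by_class.items():
        prior = (len(rows) + 1) / (len(samples) + 2)          # Laplace-smoothed
        cond = {a: (sum(r.get(a, False) for r in rows) + 1) / (len(rows) + 2)
                for a in names}
        model[label] = (prior, cond)
    return model

def needs_consistency_maintenance(model, attrs):
    """Compare log-posteriors of the two classes for one cloning operation."""
    def log_score(label):
        prior, cond = model[label]
        return math.log(prior) + sum(
            math.log(p if attrs.get(a, False) else 1.0 - p)
            for a, p in cond.items())
    return log_score(True) > log_score(False)

# Invented training data: clones inside one file tended to change consistently.
history = ([({"in_same_file": True}, True)] * 3 +
           [({"in_same_file": False}, False)] * 3)
model = train_naive_bayes(history)
```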