Dynamically Discovering Likely Program Invariants (2000)

by M D Ernst

Results 1 - 10 of 112 citing documents

An Overview of JML Tools and Applications

by Lilian Burdy, Yoonsik Cheon, David Cok, Michael D. Ernst, Joe Kiniry, Gary T. Leavens, K. Rustan M. Leino, Erik Poll, 2003
"... The Java Modeling Language (JML) can be used to specify the detailed design of Java classes and interfaces by adding annotations to Java source files. The aim of JML is to provide a specification language that is easy to use for Java programmers and that is supported by a wide range of tools for ..."
Abstract - Cited by 368 (54 self) - Add to MetaCart
The Java Modeling Language (JML) can be used to specify the detailed design of Java classes and interfaces by adding annotations to Java source files. The aim of JML is to provide a specification language that is easy to use for Java programmers and that is supported by a wide range of tools for specification type-checking, runtime debugging, static analysis, and verification. This paper

Citation Context

...can be time-consuming, tedious, and error-prone, so tools that can help in this task can be of great benefit. 6.1 Invariant Detection with Daikon 6.1.1 Overview and Goals The Daikon invariant detector [31,32] is a tool that provides assistance in creating a specification. Daikon outputs observed program properties in JML syntax (as well as other output formats) and automatically inserts them into a target...
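To make the Daikon-to-JML workflow described above concrete, here is a minimal sketch of what dynamically detected properties can look like once written into a Java source file as JML annotations. The class and the particular properties (a bound on size, the effect of push) are invented for illustration; only the annotation syntax (//@ requires, //@ ensures, \old, invariant, spec_public) is JML as described in the paper.

    public class BoundedStack {
        private /*@ spec_public @*/ Object[] elements = new Object[16];
        private /*@ spec_public @*/ int size = 0;

        // Properties of this kind can be observed over test executions and
        // emitted as a JML object invariant plus pre/postconditions.
        //@ public invariant 0 <= size && size <= elements.length;

        //@ requires size < elements.length;
        //@ ensures size == \old(size) + 1;
        //@ ensures elements[\old(size)] == o;
        public void push(Object o) {
            elements[size++] = o;
        }
    }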

Chianti: A tool for change impact analysis of java programs

by Xiaoxia Ren, Fenil Shah, Frank Tip, Barbara G. Ryder, Ophelia Chesley - Conference on Object-Oriented Programming, Systems, Languages, and Applications, 2004
"... This paper reports on the design and implementation of Chianti, a change impact analysis tool for Java that is implemented in the context of the Eclipse environment. Chianti analyzes two versions of an application and decomposes their difference into a set of atomic changes. Change impact is then re ..."
Abstract - Cited by 148 (5 self) - Add to MetaCart
This paper reports on the design and implementation of Chianti, a change impact analysis tool for Java that is implemented in the context of the Eclipse environment. Chianti analyzes two versions of an application and decomposes their difference into a set of atomic changes. Change impact is then reported in terms of affected (regression or unit) tests whose execution behavior may have been modified by the applied changes. For each affected test, Chianti also determines a set of affecting changes that were responsible for the test’s modified behavior. This latter step of isolating the changes that induce the failure of one specific test from those changes that only affect other tests can be used as a debugging technique in situations where a test fails unexpectedly after a long editing session. We evaluated Chianti on a year (2002) of CVS data from M. Ernst’s Daikon system, and found that, on average, 52% of Daikon’s unit tests are affected. Furthermore, each affected unit test, on average, is affected by only 3.95% of the atomic changes. These findings suggest that our change impact analysis is a promising technique for assisting developers with program understanding and debugging.

Citation Context

...prototype change impact analysis tool, and its validation against the 2002 revision history (taken from the developers’ CVS repository) of Daikon, a realistic Java system developed by M. Ernst et al. [7]. Essentially, in this initial study we substituted CVS updates obtained at intervals throughout the year for programmer edits, thus acquiring enough data to make some initial conclusions about our ap...
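The following is not Chianti's implementation, only a minimal sketch of the affected-tests idea under two assumptions: we already know, per test, the set of methods it executed on the old version, and we already have the set of methods touched by the atomic changes. All names and data are hypothetical.

    import java.util.*;

    public class AffectedTestsSketch {
        public static void main(String[] args) {
            // Hypothetical per-test coverage (methods each test executed on the old version).
            Map<String, Set<String>> methodsPerTest = Map.of(
                "StackTest.testPush", Set.of("Stack.push", "Stack.size"),
                "StackTest.testPop", Set.of("Stack.pop", "Stack.size"));

            // Hypothetical set of methods touched by the atomic changes between the two versions.
            Set<String> changedMethods = Set.of("Stack.push");

            // A test is reported as potentially affected if it executed any changed method.
            for (Map.Entry<String, Set<String>> entry : methodsPerTest.entrySet()) {
                if (entry.getValue().stream().anyMatch(changedMethods::contains)) {
                    System.out.println(entry.getKey() + " may be affected by the edit");
                }
            }
        }
    }

In this toy run only StackTest.testPush would be reported, mirroring the paper's idea of narrowing attention to the tests whose behavior the applied changes could have altered.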

Improving Test Suites via Operational Abstraction

by Michael Harder, Jeff Mellen, Michael D. Ernst - In Proceedings of the 25th International Conference on Software Engineering, 2003
"... This paper presents the operational difference technique for generating, augmenting, and minimizing test suites. The technique is analogous to structural code coverage techniques, but it operates in the semantic domain of program properties rather than the syntactic domain of program text. The opera ..."
Abstract - Cited by 103 (12 self) - Add to MetaCart
This paper presents the operational difference technique for generating, augmenting, and minimizing test suites. The technique is analogous to structural code coverage techniques, but it operates in the semantic domain of program properties rather than the syntactic domain of program text. The operational difference technique automatically selects test cases; it assumes only the existence of a source of test cases. The technique dynamically generates operational abstractions (which describe observed behavior and are syntactically identical to formal specifications) from test suite executions. Test suites can be generated by adding cases until the operational abstraction stops changing. The resulting test suites are as small, and detect as many faults, as suites with 100% branch coverage, and are better at detecting certain common faults.

Citation Context

...alue coverage, a variety of data coverage. The Roast tool constructs such suites and supports dependent domains, which can reduce the size of test suites compared to full cross-product domains. Ernst [Ern00] uses the term "value coverage" to refer to covering all of a variable's values (including boundary values); the current research builds on that work. Hamlet's probable correctness theory [Ham87] call...
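A minimal sketch of the augmentation loop the abstract describes: keep trying candidate test cases and retain one only if it changes the operational abstraction, stopping once the abstraction has been stable for a fixed number of consecutive candidates. Here generateAbstraction is a toy stand-in for running an invariant detector such as Daikon over the suite; the real detector, the stopping window, and the candidate source are all assumptions for illustration.

    import java.util.*;

    public class OperationalDifferenceSketch {
        // Toy stand-in for an invariant detector: "observed properties" of the inputs seen so far.
        static Set<String> generateAbstraction(List<Integer> suite) {
            Set<String> properties = new TreeSet<>();
            for (int input : suite) {
                properties.add("sign=" + Integer.signum(input));
                properties.add("parity=" + (Math.abs(input) % 2));
            }
            return properties;
        }

        public static void main(String[] args) {
            Random random = new Random(0);
            List<Integer> suite = new ArrayList<>();
            Set<String> abstraction = generateAbstraction(suite);
            int window = 20, unchanged = 0;

            while (unchanged < window) {
                int candidate = random.nextInt(201) - 100;   // next candidate test input
                suite.add(candidate);
                Set<String> next = generateAbstraction(suite);
                if (next.equals(abstraction)) {
                    suite.remove(suite.size() - 1);          // no new behavior observed; discard the case
                    unchanged++;
                } else {
                    abstraction = next;                      // candidate exposed new behavior; keep it
                    unchanged = 0;
                }
            }
            System.out.println("Kept " + suite.size() + " cases; abstraction: " + abstraction);
        }
    }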

Static and Dynamic Analysis: Synergy and Duality

by Michael D. Ernst - In WODA 2003: ICSE Workshop on Dynamic Analysis, 2003
"... This paper presents two sets of observations relating static and dynamic analysis. The first concerns synergies between static and dynamic analysis. Wherever one is utilized, the other may also be applied, often in a complementary way, and existing analyses should inspire different approaches to the ..."
Abstract - Cited by 96 (3 self) - Add to MetaCart
This paper presents two sets of observations relating static and dynamic analysis. The first concerns synergies between static and dynamic analysis. Wherever one is utilized, the other may also be applied, often in a complementary way, and existing analyses should inspire different approaches to the same problem. Furthermore, existing static and dynamic analyses often have very similar structure and technical approaches. The second observation is that some static and dynamic approaches are similar in that each considers, and generalizes from, a subset of all possible executions. Researchers need to develop new analyses that complement existing ones. More importantly, researchers need to erase the boundaries between static and dynamic analysis and create unified analyses that can operate in either mode, or in a mode that blends the strengths of both approaches.

Temporal-Safety Proofs for Systems Code

by Thomas A. Henzinger, Ranjit Jhala, Rupak Majumdar, George C. Necula, Grégoire Sutre, Westley Weimer, 2002
"... We present a methodology and tool for verifying and certifying systems code. The veri cation is based on the lazy-abstraction paradigm for intertwining the following three logical steps: construct a predicate abstraction from the code, model check the abstraction, and automatically re ne the a ..."
Abstract - Cited by 88 (11 self) - Add to MetaCart
We present a methodology and tool for verifying and certifying systems code. The verification is based on the lazy-abstraction paradigm for intertwining the following three logical steps: construct a predicate abstraction from the code, model check the abstraction, and automatically refine the abstraction based on counterexample analysis. The certification is based on the proof-carrying code paradigm. Lazy abstraction enables the automatic construction of small proof certificates. The methodology is implemented in Blast, the Berkeley Lazy Abstraction Software Verification Tool. We describe our experience applying Blast to Linux and Windows device drivers. Given the C code for a driver and for a temporal-safety monitor, Blast automatically generates an easily checkable correctness certificate if the driver satisfies the specification, and an error trace otherwise.

Citation Context

...ow loop invariants can be inferred automatically for proofs of type and memory safety, but the problem of inferring invariants for behavioral properties, such as temporal safety, remains largely open [11]. We show that lazy abstraction can be used naturally and efficiently to construct small correctness proofs for temporal-safety properties in a PCC based framework. The proof generation is intertwined...

Automatic Generation of Program Specifications

by Jeremy W. Nimmer, Michael D. Ernst - In ISSTA 2002, Proceedings of the 2002 International Symposium on Software Testing and Analysis, 2002
"... Producing specifications by dynamic (runtime) analysis of program executions is potentially unsound, because the analyzed executions may not fully characterize all possible executions of the program. In practice, how accurate are the results of a dynamic analysis? This paper describes the results of ..."
Abstract - Cited by 82 (16 self) - Add to MetaCart
Producing specifications by dynamic (runtime) analysis of program executions is potentially unsound, because the analyzed executions may not fully characterize all possible executions of the program. In practice, how accurate are the results of a dynamic analysis? This paper describes the results of an investigation into this question, determining how much specifications generalized from program runs must be changed in order to be verified by a static checker.

Relational Queries Over Program Traces

by Simon Goldsmith, et al., 2005
"... Instrumenting programs with code to monitor runtime behavior is a common technique for profiling and debugging. In practice, instrumentation is either inserted manually by programmers, or automatically by specialized tools that monitor particular properties. We propose Program Trace Query Language ( ..."
Abstract - Cited by 76 (2 self) - Add to MetaCart
Instrumenting programs with code to monitor runtime behavior is a common technique for profiling and debugging. In practice, instrumentation is either inserted manually by programmers, or automatically by specialized tools that monitor particular properties. We propose Program Trace Query Language (PTQL), a language based on relational queries over program traces, in which programmers can write expressive, declarative queries about program behavior. We also describe our compiler, PARTIQLE. Given a PTQL query and a Java program, PARTIQLE instruments the program to execute the query on-line. We apply several PTQL queries to a set of benchmark programs, including the Apache Tomcat Web server. Our queries reveal significant performance bugs in the jack SpecJVM98 benchmark, in Tomcat, and in the IBM Java class library, as well as some correct though uncomfortably subtle code in the Xerces XML parser. We present performance measurements demonstrating that our prototype system has usable performance.
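PTQL's actual syntax is not reproduced here. As a rough illustration of the underlying idea (treating trace events as a relation and asking declarative questions over it), the sketch below records method-call events as rows and runs a query-like filter over them with Java streams; the event fields, the 10 ms threshold, and all names are invented.

    import java.util.*;
    import java.util.stream.*;

    // One row per method invocation in the trace.
    record Call(String method, String thread, long startNanos, long endNanos) {}

    public class TraceQuerySketch {
        public static void main(String[] args) {
            List<Call> trace = List.of(
                new Call("Parser.parse", "main", 0, 40_000_000),
                new Call("Cache.get", "main", 1_000, 2_000),
                new Call("Socket.read", "worker", 5_000, 90_000_000));

            // Roughly: SELECT method FROM trace WHERE endNanos - startNanos > 10 ms
            List<String> slowCalls = trace.stream()
                .filter(c -> c.endNanos() - c.startNanos() > 10_000_000)
                .map(Call::method)
                .collect(Collectors.toList());

            System.out.println("Calls longer than 10 ms: " + slowCalls);
        }
    }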

Automated Support for Program Refactoring using Invariants

by Yoshio Kataoka, Michael D. Ernst, William G. Griswold, David Notkin - In ICSM, 2001
"... Program refactoring --- transforming a program to improve readability, structure, performance, abstraction, maintainability, or other characteristics --- is not applied in practice as much as might be desired. One deterrent is the cost of detecting candidates for refactoring and of choosing the appr ..."
Abstract - Cited by 75 (11 self) - Add to MetaCart
Program refactoring --- transforming a program to improve readability, structure, performance, abstraction, maintainability, or other characteristics --- is not applied in practice as much as might be desired. One deterrent is the cost of detecting candidates for refactoring and of choosing the appropriate refactoring transformation. This paper demonstrates the feasibility of automatically finding places in the program that are candidates for specific refactorings. The approach uses program invariants: when particular invariants hold at a program point, a specific refactoring is applicable. Since most programs lack explicit invariants, an invariant detection tool called Daikon is used to infer the required invariants. We developed an invariant pattern matcher for several common refactorings and applied it to an existing Java code base. Numerous refactorings were detected, and one of the developers of the code base assessed their efficacy.
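As a toy version of the invariant-pattern-matching idea, the sketch below scans detected method-entry invariants (represented here simply as strings) and flags a method as a candidate for a remove-parameter refactoring whenever an invariant says a parameter always equals a field. The encoding of invariants, the single pattern, and the suggested refactoring are illustrative assumptions, not the paper's catalog of patterns.

    import java.util.*;
    import java.util.regex.*;

    public class RefactoringCandidateSketch {
        public static void main(String[] args) {
            // Hypothetical Daikon-style invariants observed at method entry.
            Map<String, List<String>> entryInvariants = Map.of(
                "Account.withdraw(amount, owner)", List.of("owner == this.owner", "amount > 0"),
                "Account.deposit(amount)", List.of("amount > 0"));

            // Pattern: a parameter that is always equal to a field is redundant.
            Pattern paramEqualsField = Pattern.compile("(\\w+) == this\\.(\\w+)");

            entryInvariants.forEach((method, invariants) -> {
                for (String invariant : invariants) {
                    Matcher m = paramEqualsField.matcher(invariant);
                    if (m.matches()) {
                        System.out.println(method + ": parameter '" + m.group(1)
                            + "' always equals field '" + m.group(2)
                            + "'; candidate for a remove-parameter refactoring");
                    }
                }
            });
        }
    }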

Static verification of dynamically detected program invariants: Integrating Daikon and ESC/Java

by Jeremy W. Nimmer, Michael D. Ernst, 2001
"... This paper shows how to integrate two complementary techniques for manipulating program invariants: dynamic detection and static verification. Dynamic detection proposes likely invariants based on program executions, but the resulting properties are not guaranteed to be true over all possible execut ..."
Abstract - Cited by 73 (5 self) - Add to MetaCart
This paper shows how to integrate two complementary techniques for manipulating program invariants: dynamic detection and static verification. Dynamic detection proposes likely invariants based on program executions, but the resulting properties are not guaranteed to be true over all possible executions. Static verification checks that properties are always true, but it can be difficult and tedious to select a goal and to annotate programs for input to a static checker. Combining these techniques overcomes the weaknesses of each: dynamically detected invariants can annotate a program or provide goals for static verification, and static verification can confirm properties proposed by a dynamic tool. We have
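A hedged illustration of the round trip the abstract describes: properties a dynamic run might suggest (a bound on a counter, the effect of a method) written as JML annotations that a static checker in the ESC/Java family could then try to verify. The class is invented; only the annotation syntax is JML.

    public class IntQueue {
        private /*@ spec_public non_null @*/ int[] data = new int[8];
        private /*@ spec_public @*/ int count = 0;

        // Dynamically proposed, statically checkable properties.
        //@ public invariant 0 <= count && count <= data.length;

        //@ requires count < data.length;
        //@ ensures count == \old(count) + 1;
        public void enqueue(int x) {
            data[count++] = x;
        }
    }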

Debugging temporal specifications with concept analysis

by Glenn Ammons, David Mandelin, James R. Larus - In ACM SIGPLAN Conference on Programming Language Design and Implementation, 2003
"... ABSTRACT Program verification tools (such as model checkers and static ana-lyzers) can find many errors in programs. These tools need formal specifications of correct program behavior, but writing a correctspecification is difficult, just as writing a correct program is difficult. Thus, just as we n ..."
Abstract - Cited by 62 (0 self) - Add to MetaCart
Program verification tools (such as model checkers and static analyzers) can find many errors in programs. These tools need formal specifications of correct program behavior, but writing a correct specification is difficult, just as writing a correct program is difficult. Thus, just as we need methods for debugging programs, we need methods for debugging specifications. This paper describes a novel method for debugging formal, temporal specifications. A straightforward way to debug a specification is based on manually examining the short program execution traces that program verification tools generate from specification violations and that specification miners extract from programs. This method is tedious and error-prone because there may be hundreds or thousands of traces to inspect. Our method uses concept analysis to automatically group traces into highly similar clusters. By examining clusters instead of individual traces, a person can debug a specification with less work. To test our method, we implemented a tool, Cable, for debugging specifications. We have used Cable to debug specifications produced by Strauss, our specification miner. We found that using Cable to debug these specifications requires, on average, less than one third as many user decisions as debugging by examining all traces requires. In one case, using Cable required only 28 decisions, while debugging by examining all traces required 224.

Citation Context

...all "noise". In our opinion, ranking and clustering are complementary: ranking tells the user what reports to inspect first, while clustering helps the user avoid inspecting redundant report=-=s. Daikon [8]-=-, a tool for dynamically discovering arithmetic invariants, uses statistical confidence checks to suppress invariants that appear to have occurred by chance. In our case, we found that some buggy trac...
