Results 1–10 of 15
Monte Carlo Model Checking
In Proc. of Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2005), volume 3440 of LNCS, 2005
Cited by 56 (4 self)
Abstract. We present MC², what we believe to be the first randomized, Monte Carlo algorithm for temporal-logic model checking, the classical problem of deciding whether or not a property specified in temporal logic holds of a system specification. Given a specification S of a finite-state system, an LTL (Linear Temporal Logic) formula ϕ, and parameters ɛ and δ, MC² takes N = ln(δ) / ln(1 − ɛ) random samples (random walks ending in a cycle, i.e., lassos) from the Büchi automaton B = B_S × B_¬ϕ to decide if L(B) = ∅. Should a sample reveal an accepting lasso l, MC² returns false with l as a witness. Otherwise, it returns true and reports that, with probability less than δ, pZ < ɛ, where pZ is the expectation of an accepting lasso in B. It does so in time O(N · D) and space O(D), where D is B's recurrence diameter, using a number of samples N that is optimal to within a constant factor. Our experimental results demonstrate that MC² is fast, memory-efficient, and scales very well.
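The sample-size formula quoted in this abstract is easy to reproduce. A minimal Python sketch (the function name is ours, not the paper's):

```python
import math

def mc2_sample_count(eps: float, delta: float) -> int:
    """N = ceil(ln(delta) / ln(1 - eps)) random lassos.

    If none of the N samples is an accepting lasso, then with
    probability at least 1 - delta the probability pZ of drawing
    an accepting lasso is below eps.
    """
    return math.ceil(math.log(delta) / math.log(1.0 - eps))

print(mc2_sample_count(0.01, 0.05))   # 299
print(mc2_sample_count(0.001, 0.05))  # 2995
```

Note how shrinking ɛ by 10× grows N by roughly 10×, consistent with the O(N · D) time bound's dependence on the precision parameters.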
Probabilistically accurate program transformations
In SAS, 2011
Cited by 38 (14 self)
Abstract. The standard approach to program transformation involves the use of discrete logical reasoning to prove that the transformation does not change the observable semantics of the program. We propose a new approach that, in contrast, uses probabilistic reasoning to justify the application of transformations that may change, within probabilistic accuracy bounds, the result that the program produces. Our new approach produces probabilistic guarantees of the form P(|D| ≥ B) ≤ ɛ, ɛ ∈ (0, 1), where D is the difference between the results that the transformed and original programs produce, B is an acceptability bound on the absolute value of D, and ɛ is the maximum acceptable probability of observing large |D|. We show how to use our approach to justify the application of loop perforation (which transforms loops to execute fewer iterations) to a set of computational patterns.
A few graph-based relational numerical abstract domains
In Static Analysis: Proceedings of the 9th International Symposium, 2002
Cited by 22 (1 self)
Abstract. This article presents the systematic design of a class of relational numerical abstract domains from non-relational ones. Constructed domains represent sets of invariants of the form (vj − vi ∈ C), where vj and vi are two variables, and C lives in an abstraction of P(Z), P(Q), or P(R). We will call this family of domains weakly relational domains. The underlying concept allowing this construction is an extension of potential graphs and shortest-path closure algorithms in exotic-like algebras. Example constructions are given in order to retrieve well-known domains; such domains can be used in the Abstract Interpretation framework in order to design various static analyses. A major benefit of this construction is its modularity, allowing one to quickly implement new abstract domains from existing ones.
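The shortest-path closure underlying these weakly relational domains can be sketched for the simplest instance, a difference-bound matrix where each C is abstracted by a single upper bound (our illustration, not the paper's construction):

```python
import itertools

INF = float("inf")

def shortest_path_closure(m):
    """Floyd-Warshall closure of a difference-bound matrix.

    m[i][j] is an upper bound c in the constraint v_j - v_i <= c
    (INF when no constraint is known).  The closure makes every
    implied constraint explicit by composing bounds along paths.
    """
    n = len(m)
    for k, i, j in itertools.product(range(n), repeat=3):
        m[i][j] = min(m[i][j], m[i][k] + m[k][j])
    return m

# v1 - v0 <= 3 and v2 - v1 <= 4 together imply v2 - v0 <= 7.
m = [[0, 3, INF],
     [INF, 0, 4],
     [INF, INF, 0]]
print(shortest_path_closure(m)[0][2])  # 7
```

The paper generalizes exactly this closure step from (min, +) on numbers to other algebras, which is what makes the construction modular.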
An overview of semantics for the validation of numerical programs
2005
Cited by 20 (9 self)
Abstract. In this article, we introduce a simple formal semantics for floating-point numbers with errors which is expressive enough to be formally compared to the other methods. Next, we define formal semantics for the interval, stochastic, automatic differentiation, and error series methods. This enables us to formally compare the properties calculated in each semantics to our reference, simple semantics. Since most of these methods were developed to verify numerically intensive codes, we also discuss their adequacy to the formal validation of software and to static analysis. Finally, this study is completed by experimental results.
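Of the semantics compared in this paper, the interval method is the simplest to sketch. The outward-rounding model below is our rough illustration, not the paper's formal semantics:

```python
import math

def interval_add(a, b):
    """Interval addition with outward rounding as a crude model of
    floating-point rounding error: nudge each bound one float
    outward, which over-approximates the half-ulp rounding step."""
    lo = a[0] + b[0]
    hi = a[1] + b[1]
    return (math.nextafter(lo, -math.inf), math.nextafter(hi, math.inf))

# Sum 0.1 ten times; in plain floats this gives 0.9999999999999999.
x = (0.1, 0.1)
acc = (0.0, 0.0)
for _ in range(10):
    acc = interval_add(acc, x)

# The exact real value is guaranteed to lie inside the interval.
print(acc[0] <= 1.0 <= acc[1])  # True
```

This is the soundness property interval semantics trade for precision: the enclosure always holds, but the interval may be much wider than the actual error.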
Static Analysis for Probabilistic Programs: Inferring Whole Program Properties from Finitely Many Paths.
Cited by 14 (1 self)
We propose an approach for the static analysis of probabilistic programs that sense, manipulate, and control based on uncertain data. Examples include programs used in risk analysis, medical decision making, and cyber-physical systems. Correctness properties of such programs take the form of queries that seek the probabilities of assertions over program variables. We present a static analysis approach that provides guaranteed interval bounds on the values (assertion probabilities) of such queries. First, we observe that for probabilistic programs, it is possible to conclude facts about the behavior of the entire program by choosing a finite, adequate set of its paths. We provide strategies for choosing such a set of paths and verifying its adequacy. The queries are evaluated over each path by a combination of symbolic execution and probabilistic volume-bound computations. Each path yields interval bounds that can be summed up with a "coverage" bound to yield an interval that encloses the probability of the assertion for the program as a whole. We demonstrate promising results on a suite of benchmarks from many different sources, including robotic manipulators and medical decision making programs.
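The combination step described above, summing per-path bounds with a coverage bound, can be sketched as follows; the function and the numbers are hypothetical, not from the paper:

```python
def query_bounds(path_bounds, coverage):
    """Combine per-path interval bounds on an assertion's probability.

    path_bounds: [(lo_i, hi_i)] per analyzed path, bounding
                 P(assertion holds AND execution follows path i).
    coverage:    lower bound on total probability of analyzed paths.
    The unexplored mass (at most 1 - coverage) may or may not satisfy
    the assertion, so it widens only the upper bound.
    """
    lo = sum(b[0] for b in path_bounds)
    hi = sum(b[1] for b in path_bounds) + (1.0 - coverage)
    return lo, min(hi, 1.0)

# Two analyzed paths covering at least 90% of the input distribution.
lo, hi = query_bounds([(0.20, 0.25), (0.30, 0.35)], coverage=0.90)
print(round(lo, 6), round(hi, 6))  # 0.5 0.7
```

Tightening coverage (analyzing more paths) shrinks only the upper bound, which matches the intuition that unexplored paths cannot lower a probability that is already witnessed.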
Open source model checking
In Proceedings of the Workshop on Software Model Checking, 2005
Cited by 10 (4 self)
Abstract. We present GMC², a software model checker for GCC, the open-source compiler from the Free Software Foundation (FSF). GMC², which is part of the GMC static-analysis and model-checking tool suite for GCC under development at SUNY Stony Brook, can be seen as an extension of Monte Carlo model checking to the setting of concurrent, procedural programming languages. Monte Carlo model checking is a newly developed technique that utilizes the theory of geometric random variables, statistical hypothesis testing, and random sampling of lassos in Büchi automata to realize a one-sided error, randomized algorithm for LTL model checking. To handle the function call/return mechanisms inherent in procedural languages such as C/C++, the version of Monte Carlo model checking implemented in GMC² is optimized for pushdown-automaton models. Our experimental results demonstrate that this approach yields an efficient and scalable software model checker for GCC.
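Random sampling of lassos, the core primitive named above, can be sketched on a small explicit transition graph (a toy example, nothing like GMC²'s pushdown-optimized machinery):

```python
import random

def sample_lasso(succ, start, rng):
    """One random walk from `start` that ends as soon as a state
    repeats, i.e. a lasso: a finite path followed by a cycle.

    succ: dict mapping each state to its list of successor states.
    Returns the visited states; the final state is the first repeat.
    """
    walk, seen = [start], {start}
    state = start
    while True:
        state = rng.choice(succ[state])
        walk.append(state)
        if state in seen:
            return walk
        seen.add(state)

# A tiny strongly connected transition graph (hypothetical example).
succ = {0: [1], 1: [2, 0], 2: [0]}
rng = random.Random(7)
lasso = sample_lasso(succ, 0, rng)
print(lasso[0] == 0 and lasso[-1] in lasso[:-1])  # True: walk closes a cycle
```

A model checker would additionally test whether the cycle part of each sampled lasso visits an accepting state.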
Static Analyses of Floating-Point Operations
In SAS’01, volume 2126 of LNCS, 2001
Cited by 9 (0 self)
Computers manipulate approximations of real numbers, called floating-point numbers. The calculations they make are accurate enough for most applications. Unfortunately, in some (catastrophic) situations, the floating-point operations lose so much precision that they quickly become irrelevant. In this article, we review some of the problems one can encounter, focusing on the IEEE 754-1985 standard. We give a (sketch of a) semantics of its basic operations, then abstract them (in the sense of abstract interpretation) to extract information about the possible loss of precision. The expected application is abstract debugging of software ranging from simple on-board systems (which use more and more off-the-shelf microprocessors with floating-point units) to scientific codes. The abstract analysis is demonstrated on simple examples and compared with related work.
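A classic instance of the precision loss discussed above is catastrophic cancellation; a few lines suffice to exhibit it (our illustration):

```python
# Subtracting nearly equal numbers cancels the leading digits and
# leaves mostly rounding noise from the earlier addition.
x = 1e-15
naive = (1.0 + x) - 1.0   # mathematically equal to x
rel_err = abs(naive - x) / x
print(rel_err > 0.05)  # True: >5% relative error from one add/subtract pair
```

The rounding of 1.0 + x to the nearest double is the whole source of the error; the subtraction merely exposes it, which is exactly the kind of silent degradation the paper's abstract semantics is designed to flag.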
Backwards abstract interpretation of probabilistic programs
In European Symposium on Programming Languages and Systems (ESOP '01), number 2028 in Lecture Notes in Computer Science, 2001
Quantitative model checking
In Proc. of ISoLA’04, the 1st Int. Symposium on Leveraging Applications of Formal Methods, 2004
Cited by 5 (1 self)
Abstract. We present QMC, a one-sided error Monte Carlo decision procedure for the LTL model-checking problem S |= ϕ. Besides serving as a randomized algorithm for LTL model checking, QMC delivers quantitative information about the likelihood that S |= ϕ. In particular, given a specification S of a finite-state system, an LTL formula ϕ, and parameters ɛ and δ, QMC performs random sampling to compute an estimate p̃Z of the expectation pZ that the language L(B) of the Büchi automaton B = B_S × B_¬ϕ is empty; B is such that L(B) = ∅ iff S |= ϕ. A random sample in our case is a lasso, i.e., an initialized random walk through B ending in a cycle. The estimate p̃Z output by QMC is an (ɛ, δ)-approximation of pZ (one that is within a factor of 1 ± ɛ with probability at least 1 − δ), and is computed using a number of samples N that is optimal to within a constant factor, in expected time O(N · D) and expected space O(D), where D is B's recurrence diameter. Experimental results demonstrate that QMC is fast, memory-efficient, and scales extremely well.
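QMC's quantitative output can be imitated with a plain fixed-sample Monte Carlo estimator; note that the real algorithm uses an optimal (ɛ, δ) sampling scheme, which this sketch deliberately does not implement:

```python
import random

def estimate_p(sample_accepting, n, rng):
    """Plain Monte Carlo estimate of pZ, the probability that a
    random lasso is accepting, from n independent samples.
    (QMC chooses n adaptively to meet the (eps, delta) guarantee;
    a fixed n keeps this sketch short.)"""
    hits = sum(sample_accepting(rng) for _ in range(n))
    return hits / n

# Hypothetical sampler: a lasso is accepting with probability 0.3.
rng = random.Random(0)
p_hat = estimate_p(lambda r: r.random() < 0.3, 20_000, rng)
print(abs(p_hat - 0.3) < 0.02)  # True: estimate close to the true pZ
```

Replacing the synthetic sampler with an actual random walk over B turns this into the quantitative half of the procedure the abstract describes.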
Compositional Solution Space Quantification for Probabilistic Software Analysis
In PLDI
Cited by 5 (1 self)
Probabilistic software analysis aims at quantifying how likely a target event is to occur during program execution. Current approaches rely on symbolic execution to identify the conditions to reach the target event and try to quantify the fraction of the input domain satisfying these conditions. Precise quantification is usually limited to linear constraints, while only approximate solutions can be provided in general through statistical approaches. However, statistical approaches may fail to converge to an acceptable accuracy within a reasonable time. We present a compositional statistical approach for the efficient quantification of solution spaces for arbitrarily complex constraints over bounded floating-point domains. The approach leverages interval constraint propagation to improve the accuracy of the estimation by focusing the sampling on the regions of the input domain containing the sought solutions. Preliminary experiments show significant improvement over previous approaches in both result accuracy and analysis time.
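The focusing idea above can be sketched on a toy constraint x² + y² ≤ r². The propagation step here is hand-coded for this single constraint, not a general interval constraint propagation solver:

```python
import random

def narrowed_box(r):
    """Toy interval propagation for x*x + y*y <= r*r starting from
    domains x, y in [-2, 2]: each variable narrows to [-r, r]."""
    return (-r, r), (-r, r)

def estimate_fraction(r, n, rng):
    """Estimate the fraction of the ORIGINAL box [-2, 2]^2 that
    satisfies x*x + y*y <= r*r, sampling only inside the narrowed
    box and rescaling by the ratio of box areas."""
    (xl, xh), (yl, yh) = narrowed_box(r)
    hits = 0
    for _ in range(n):
        x = rng.uniform(xl, xh)
        y = rng.uniform(yl, yh)
        hits += x * x + y * y <= r * r
    narrowed_area = (xh - xl) * (yh - yl)
    return (hits / n) * narrowed_area / 16.0  # original area is 4 * 4

rng = random.Random(1)
est = estimate_fraction(1.0, 50_000, rng)
print(abs(est - 3.141592653589793 / 16.0) < 0.01)  # True: close to pi/16
```

Sampling the full box would waste roughly 15 of every 16 samples on regions containing no solutions, which is the accuracy-for-time trade the paper's compositional approach targets.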