Statistical Model Checking for Markov Decision Processes
Abstract

Cited by 18 (1 self)
Abstract—Statistical Model Checking (SMC) is a computationally very efficient verification technique based on selective system sampling. One well identified shortcoming of SMC is that, unlike probabilistic model checking, it cannot be applied to systems featuring nondeterminism, such as Markov Decision Processes (MDP). We address this limitation by developing an algorithm that resolves nondeterminism probabilistically, and then uses multiple rounds of sampling and Reinforcement Learning to provably improve resolutions of nondeterminism with respect to satisfying a Bounded Linear Temporal Logic (BLTL) property. Our algorithm thus reduces an MDP to a fully probabilistic Markov chain on which SMC may be applied to give an approximate solution to the problem of checking the probabilistic BLTL property. We integrate our algorithm in a parallelised modification of the PRISM simulation framework. Extensive validation with both new and PRISM benchmarks demonstrates that the approach scales very well in scenarios where symbolic algorithms fail to do so.
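The first step of the approach this abstract describes — resolving nondeterminism probabilistically so that plain sampling-based SMC applies — can be sketched on a toy example. The three-state MDP, the uniform scheduler, and all names below are invented for illustration (this is not a PRISM benchmark, and the reinforcement-learning improvement loop from the paper is omitted):

```python
import random

# Hypothetical 3-state MDP: state -> {action: [(prob, successor), ...]}
MDP = {
    0: {"a": [(0.5, 1), (0.5, 2)], "b": [(1.0, 2)]},
    1: {"a": [(1.0, 1)]},                 # goal state, absorbing
    2: {"a": [(0.7, 0), (0.3, 2)]},
}
GOAL, BOUND, RUNS = 1, 10, 20_000

def step(state, policy):
    """Resolve nondeterminism with a memoryless probabilistic policy, then sample a successor."""
    actions = list(MDP[state])
    action = random.choices(actions, weights=[policy[state][a] for a in actions])[0]
    probs, succs = zip(*MDP[state][action])
    return random.choices(succs, weights=probs)[0]

def simulate(policy):
    """One sampled path: does it satisfy the bounded reachability property F<=BOUND goal?"""
    s = 0
    for _ in range(BOUND):
        if s == GOAL:
            return True
        s = step(s, policy)
    return s == GOAL

def estimate(policy):
    return sum(simulate(policy) for _ in range(RUNS)) / RUNS

# A uniform resolution of nondeterminism turns the MDP into a Markov chain,
# on which statistical model checking (here, plain Monte Carlo estimation) applies.
uniform = {s: {a: 1 / len(acts) for a in acts} for s, acts in MDP.items()}
print(f"estimated P(F<=10 goal) under uniform scheduler: {estimate(uniform):.3f}")
```

In the paper's algorithm, the sampled paths would then be fed to a reinforcement-learning step that shifts the policy weights toward resolutions more (or less) likely to satisfy the BLTL property; here the policy stays fixed.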
Information Hiding in Probabilistic Concurrent Systems
, 2010
Abstract

Cited by 11 (4 self)
Information hiding is a general concept which refers to the goal of preventing an adversary from inferring secret information from the observables. Anonymity and Information Flow are examples of this notion. We study the problem of information hiding in systems characterized by the presence of randomization and concurrency. It is well known that the nondeterminism arising from the possible interleavings and interactions of the parallel components can cause unintended information leaks. One way to solve this problem is to fix the strategy of the scheduler beforehand. In this work, we propose a milder restriction on the schedulers, and we define the notion of strong (probabilistic) information hiding under various notions of observables. Furthermore, we propose a method, based on the notion of automorphism, to verify that a system satisfies the property of strong information hiding, namely strong anonymity or noninterference, depending on the context.
Safe Equivalences for Security Properties
, 2010
Abstract

Cited by 5 (3 self)
In the field of Security, process equivalences have been used to characterize various information-hiding properties (for instance secrecy, anonymity and noninterference) based on the principle that a protocol P with a variable x satisfies such a property if and only if, for every pair of secrets s1 and s2, P[s1/x] is equivalent to P[s2/x]. We argue that, in the presence of nondeterminism, the above principle relies on the assumption that the scheduler “works for the benefit of the protocol”, and this is usually not a safe assumption. Non-safe equivalences, in this sense, include complete-trace equivalence and bisimulation. We present a formalism in which we can specify admissible schedulers and, correspondingly, safe versions of these equivalences. We prove that safe bisimulation is still a congruence. Finally, we show that safe equivalences can be used to establish information-hiding properties.
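The P[s1/x] ≡ P[s2/x] principle can be illustrated on a toy, fully probabilistic protocol, where no scheduler is involved and trace equivalence reduces to equality of the induced distributions over observable traces. The one-bit masking protocol and all names below are invented for illustration and are not from the paper:

```python
from fractions import Fraction as F

def protocol(secret):
    """Toy protocol P[secret/x]: XOR-mask the one-bit secret with a fair coin.
    Returns the induced distribution over observable traces."""
    dist = {}
    for coin in (0, 1):
        trace = (secret ^ coin,)  # the only observable is the masked bit
        dist[trace] = dist.get(trace, F(0)) + F(1, 2)
    return dist

def equivalent(p1, p2):
    # For fully probabilistic systems, complete-trace equivalence is
    # equality of the distributions over observable traces.
    return p1 == p2

# Masking makes the observables independent of the secret,
# so the two instantiations of the protocol are equivalent.
print(equivalent(protocol(0), protocol(1)))  # prints True
```

The subtlety the paper addresses begins once nondeterminism enters: a scheduler that resolves choices differently depending on the secret can break such an equivalence, which is why the authors restrict attention to admissible schedulers.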
An algorithmic approximation of the infimum reachability probability for probabilistic finite automata
 CoRR
, 2010
Model checking concurrent programs with nondeterminism and randomization.
, 2010
Abstract

Cited by 2 (0 self)
For concurrent probabilistic programs having process-level nondeterminism, it is often necessary to restrict the class of schedulers that resolve nondeterminism to obtain sound and precise model checking algorithms. In this paper, we introduce two classes of schedulers called view-consistent and locally Markovian schedulers and consider the model checking problem of concurrent, probabilistic programs under these alternate semantics. Specifically, given a Büchi automaton Spec, a threshold x ∈ [0, 1], and a concurrent program P, the model checking problem asks if the measure of computations of P that satisfy Spec is at least x, under all view-consistent (or locally Markovian) schedulers. We give precise complexity results for the model checking problem (for different classes of Büchi automata specifications) and contrast it with the complexity under the standard semantics that considers all schedulers.
ness, (Un)Decidability and Algorithms
Contractual Date of Delivery to the CEC: 30 Sep 2013; Actual Date of Delivery to the CEC: 30 Sep 2013
, 2013
Abstract
Probabilistic model checking computes the probability values of a given property quantifying over all possible schedulers. It turns out that the maximum and minimum probabilities calculated in this way are overestimations on models of distributed systems in which components are loosely coupled and share little information with each other (and hence arbitrary schedulers may be too powerful). Therefore, we introduced definitions that characterise the schedulers that properly capture the idea of distributed behaviour in probabilistic and nondeterministic systems modelled as a set of interacting components. In this article, we provide an overview of the work we have done in recent years, which includes: (1) the definitions of distributed and strongly distributed schedulers, providing motivation and intuition; (2) expressiveness results, comparing them to restricted versions such as deterministic or finite-memory variants; (3) undecidability results, in particular that the model checking problem is not decidable in general when restricting to distributed schedulers; (4) a counterexample-guided refinement technique that, using standard probabilistic model checking, allows one to increase the precision of the actual bounds in the distributed setting; and (5) a revision of the partial order reduction technique for probabilistic model checking. We conclude with an extensive review of related work dealing with approaches similar to ours.
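The quantification "over all possible schedulers" that this abstract calls an overestimation is, for maximum reachability, the standard value-iteration computation on an MDP: at each state, maximize the expected value over the available actions. A minimal sketch on an invented toy MDP (the distributed-scheduler restriction the article proposes would only lower the resulting bound):

```python
# Toy MDP (invented for illustration): state -> {action: [(prob, successor), ...]}
MDP = {
    0: {"risky": [(0.5, 1), (0.5, 2)], "safe": [(0.9, 1), (0.1, 2)]},
    1: {"loop": [(1.0, 1)]},  # goal state, absorbing
    2: {"loop": [(1.0, 2)]},  # failure sink, absorbing
}

def max_reach(mdp, goal, eps=1e-9):
    """Maximum probability of reaching `goal` under arbitrary (unrestricted)
    schedulers, via value iteration to a fixed point."""
    v = {s: float(s == goal) for s in mdp}
    while True:
        nv = {
            s: 1.0 if s == goal else
               max(sum(p * v[t] for p, t in succ) for succ in mdp[s].values())
            for s in mdp
        }
        if max(abs(nv[s] - v[s]) for s in mdp) < eps:
            return nv
        v = nv

# The unrestricted maximizing scheduler always picks "safe" from state 0,
# so Pmax(reach goal) from state 0 is 0.9.
print(max_reach(MDP, goal=1)[0])  # prints 0.9
```

In a distributed setting where the component choosing between "risky" and "safe" cannot observe everything, the admissible schedulers form a strict subset of all schedulers, and the achievable maximum can be strictly below this value-iteration bound.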