Results 1 - 10 of 68
Quantitative Stochastic Parity Games
"... We study perfect-information stochastic parity games. These are two-player nonterminating games which are played on a graph with turn-based probabilistic transitions. A play results in an infinite path and the conflicting goals of the two players are!-regular path properties, formalized as parity w ..."
Abstract
-
Cited by 69 (32 self)
- Add to MetaCart
(Show Context)
We study perfect-information stochastic parity games. These are two-player nonterminating games which are played on a graph with turn-based probabilistic transitions. A play results in an infinite path, and the conflicting goals of the two players are ω-regular path properties, formalized as parity winning conditions. The qualitative solution of such a game amounts to computing the set of vertices from which a player has a strategy to win with probability 1 (or with positive probability). The quantitative solution amounts to computing the value of the game in every vertex, i.e., the highest probability with which a player can guarantee satisfaction of his own objective in a play that starts from the vertex. For the important special case of one-player stochastic parity games (parity Markov decision processes) we give polynomial-time algorithms both for the qualitative and the quantitative solution. The running time of the qualitative solution is O(d · m^{3/2}) for graphs with m edges and d priorities. The quantitative solution is based on a linear-programming formulation.
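For orientation, the quantitative solution of a parity MDP reduces (after the qualitative analysis) to computing maximal reachability probabilities, for which the textbook linear program has roughly the following shape; this is the standard formulation, not necessarily the exact one used in the paper.

```latex
% Standard LP for maximal reachability probabilities in an MDP with
% transition function \delta and target set T (illustrative; states from
% which T is unreachable are assumed removed, their value being 0).
\begin{align*}
  \text{minimize }   & \textstyle\sum_{s} x_s \\
  \text{subject to } & x_s = 1 && \text{for } s \in T, \\
                     & x_s \ge \textstyle\sum_{t} \delta(s,a)(t)\, x_t && \text{for all } s \notin T \text{ and all actions } a.
\end{align*}
```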
Discounting the future in systems theory
- In Automata, Languages, and Programming, LNCS 2719
, 2003
"... ..."
(Show Context)
Quantitative Solution of Omega-Regular Games
"... We consider two-player games played for an infinite number of rounds, with ω-regular winning conditions. The games may be concurrent, in that the players choose their moves simultaneously and independently, and probabilistic, in that the moves determine a probability distribution for the successor s ..."
Abstract
-
Cited by 60 (18 self)
- Add to MetaCart
We consider two-player games played for an infinite number of rounds, with ω-regular winning conditions. The games may be concurrent, in that the players choose their moves simultaneously and independently, and probabilistic, in that the moves determine a probability distribution for the successor state. We introduce quantitative game µ-calculus, and we show that the maximal probability of winning such games can be expressed as fixpoint formulas in this calculus. We develop the arguments both for deterministic and for probabilistic concurrent games; as a special case, we solve probabilistic turn-based games with ω-regular winning conditions, a problem that was also open. We also characterize the optimality, and the memory requirements, of the winning strategies. In particular, we show that while memoryless strategies suffice for winning games with safety and reachability conditions, Büchi conditions require the use of strategies with infinite memory. The existence of optimal strategies, as opposed to ε-optimal ones, is only guaranteed in games with safety winning conditions.
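As a flavour of the fixpoint formulas mentioned above, the value of a reachability objective in a concurrent game can be written as a least fixpoint of a quantitative one-step operator; the rendering below is schematic, in notation of mine rather than the paper's.

```latex
% Schematic quantitative fixpoint for a reachability objective with
% target set T; Pre is the one-step value operator over the players'
% mixed moves xi_1, xi_2 (illustrative shape, not quoted from the paper).
v = \mu x.\, \bigl( \mathbf{1}_T \sqcup \mathrm{Pre}(x) \bigr),
\qquad
\mathrm{Pre}(x)(s) = \sup_{\xi_1} \inf_{\xi_2}
  \sum_{a,b,t} \xi_1(a)\,\xi_2(b)\,\delta(s,a,b)(t)\, x(t).
```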
Game-based abstraction for Markov decision processes
, 2006
"... In this paper we present a novel abstraction technique for Markov decision processes (MDPs), which are widely used for modelling systems that exhibit both probabilistic and nondeterministic behaviour. In the field of model checking, abstraction has proved an extremely successful tool to combat the s ..."
Abstract
-
Cited by 53 (15 self)
- Add to MetaCart
(Show Context)
In this paper we present a novel abstraction technique for Markov decision processes (MDPs), which are widely used for modelling systems that exhibit both probabilistic and nondeterministic behaviour. In the field of model checking, abstraction has proved an extremely successful tool to combat the state-space explosion problem. In the probabilistic setting, however, little practical progress has been made in this area. We propose an abstraction method for MDPs based on stochastic two-player games. The key idea behind this approach is to maintain a separation between nondeterminism present in the original MDP and nondeterminism introduced through abstraction, each type being represented by a different player in the game. Crucially, this allows us to obtain distinct lower and upper bounds for both the best- and worst-case performance (minimum or maximum probabilities) of the MDP. We have implemented our techniques and illustrate their practical utility by applying them to a quantitative analysis of the Zeroconf dynamic network configuration protocol.
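A rough sketch of the bounds this separation yields, in notation that is mine rather than the paper's: one player's strategies resolve the nondeterminism introduced by abstraction, the other's resolve the MDP's own nondeterminism, and the resulting game values bracket the MDP's extremal reachability probabilities.

```latex
% Rough shape of the bounds (notation mine): sigma_1 ranges over strategies
% resolving abstraction-introduced nondeterminism, sigma_2 over strategies
% resolving the MDP's own nondeterminism; p^{\sigma_1,\sigma_2} is the
% reachability probability in the abstract game under those strategies.
\inf_{\sigma_1}\inf_{\sigma_2} p^{\sigma_1,\sigma_2}
  \le p^{\min}_{\mathrm{MDP}} \le
  \sup_{\sigma_1}\inf_{\sigma_2} p^{\sigma_1,\sigma_2},
\qquad
\inf_{\sigma_1}\sup_{\sigma_2} p^{\sigma_1,\sigma_2}
  \le p^{\max}_{\mathrm{MDP}} \le
  \sup_{\sigma_1}\sup_{\sigma_2} p^{\sigma_1,\sigma_2}.
```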
Computing Minimum and Maximum Reachability Times in Probabilistic Systems
, 1999
"... A Markov decision process is a generalization of a Markov chain in which both probabilistic and nondeterministic choice coexist. Given a Markov decision process with costs associated with the transitions and a set of target states, the stochastic shortest path problem consists in computing the minim ..."
Abstract
-
Cited by 51 (2 self)
- Add to MetaCart
(Show Context)
A Markov decision process is a generalization of a Markov chain in which both probabilistic and nondeterministic choice coexist. Given a Markov decision process with costs associated with the transitions and a set of target states, the stochastic shortest path problem consists in computing the minimum expected cost of a control strategy that guarantees to reach the target. In this paper, we consider the classes of stochastic shortest path problems in which the costs are all non-negative, or all non-positive. Previously, these two classes of problems could be solved only under the assumption that the policies that minimize or maximize the expected cost also lead to the target with probability 1. This assumption does not necessarily hold for Markov decision processes that arise as models for distributed probabilistic systems. We present efficient methods for solving these two classes of problems without relying on additional assumptions. The methods are based on algorithms to transform th...
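For context, the sketch below shows the textbook Bellman recursion for the non-negative-cost case; it is only guaranteed correct under the proper-policy assumption that the paper's transformation removes, and the MDP encoding used here is an assumption of the sketch, not taken from the paper.

```python
# Illustrative value iteration for the stochastic shortest path problem
# with non-negative costs (textbook recursion; the paper's contribution
# is a transformation that makes the usual proper-policy assumption
# unnecessary, which this sketch does not reproduce).
# mdp: dict mapping state -> list of (cost, {successor: probability}) actions;
# every successor is assumed to appear as a key of mdp or in targets.
def min_expected_cost(mdp, targets, iters=10_000, tol=1e-9):
    v = {s: 0.0 for s in mdp}          # current estimate of cost-to-target
    for s in targets:
        v[s] = 0.0                     # target states cost nothing
    for _ in range(iters):
        delta = 0.0
        for s, actions in mdp.items():
            if s in targets or not actions:
                continue
            # Bellman update: best action = cheapest cost plus expected future cost.
            best = min(cost + sum(p * v[t] for t, p in succ.items())
                       for cost, succ in actions)
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < tol:
            break
    return v
```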
Concurrent Omega-Regular Games
, 2000
"... We consider two-player games which are played on a finite state space for an infinite number of rounds. The games are concurrent, that is, in each round, the two players choose their moves independently and simultaneously; the current state and the two moves determine a successor state. We consider ..."
Abstract
-
Cited by 42 (12 self)
- Add to MetaCart
We consider two-player games which are played on a finite state space for an infinite number of rounds. The games are concurrent, that is, in each round, the two players choose their moves independently and simultaneously; the current state and the two moves determine a successor state. We consider omega-regular winning conditions on the resulting infinite state sequence. To model the independent choice of moves, both players are allowed to use randomization for selecting their moves. This gives rise to the following qualitative modes of winning, which can be studied without numerical considerations concerning probabilities: sure-win (player 1 can ensure winning with certainty), almost-sure-win (player 1 can ensure winning with probability 1), limit-win (player 1 can ensure winning with probability arbitrarily close to 1), bounded-win (player 1 can ensure winning with probability bounded away from 0), positive-win (player 1 can ensure winning with positive probability), and exist-win (player 1 can ensure that at least one possible outcome of the game satisfies the winning condition). We provide algorithms for computing the sets of winning states for each of these winning modes. In particular, we solve concurrent Rabin-chain games in n^{O(m)} time, where n is the size of the game structure and m is the number of pairs in the Rabin-chain condition. While this complexity is in line with traditional turn-based games, where in each state only one of the two players has a choice of moves, our algorithms are considerably more involved than those for turn-based games. This is because concurrent games violate two of the most fundamental properties of turn-based games. First, concurrent games are not determined, but rather exhibit a more general duality property which involves multiple modes of winning. Second, winning strategies for concurrent games may require infinite memory.
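One concrete way in which concurrent games are harder than turn-based ones: a single value-iteration step at a state is no longer a plain max or min but the value of a one-shot zero-sum matrix game over the two players' simultaneous moves. The sketch below computes such a matrix-game value with an off-the-shelf LP solver; the setup and names are mine, not the paper's.

```python
# Illustrative: value of a one-shot zero-sum matrix game, the basic step
# underlying value iteration for concurrent games (not the paper's algorithm).
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(A):
    """Value of the zero-sum matrix game A for the row (maximizing) player."""
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    # Variables: mixed strategy x over the m rows, plus the game value v.
    c = np.zeros(m + 1)
    c[-1] = -1.0                                  # maximize v <=> minimize -v
    A_ub = np.hstack([-A.T, np.ones((n, 1))])     # v <= x . A[:, j] for each column j
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])   # sum(x) = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]
```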
Controller synthesis for probabilistic systems
- In Proceedings of IFIP TCS’2004
, 2004
"... Supported by the DFG-Project “VERIAM ” and the DFG-NWO-Project “VOSS”. Supported by the European Research Training Network “Games”. Abstract Controller synthesis addresses the question of how to limit the internal behavior of a given implementation to meet its specification, regardless of the behavi ..."
Abstract
-
Cited by 32 (0 self)
- Add to MetaCart
Controller synthesis addresses the question of how to limit the internal behavior of a given implementation to meet its specification, regardless of the behavior enforced by the environment. In this paper, we consider a model with probabilism and nondeterminism where the nondeterministic choices in some states are assumed to be controllable, while the others are under the control of an unpredictable environment. We first consider probabilistic computation tree logic as specification formalism, discuss the role of strategy types for the controller and show the NP-hardness of the controller synthesis problem. The second part of the paper presents a controller synthesis algorithm for automata-specifications which relies on a reduction to the synthesis problem for PCTL with fairness.
Recursive concurrent stochastic games
- In Proc. of 33rd Int. Coll. on Automata, Languages, and Programming (ICALP’06)
, 2006
"... Abstract. We study Recursive Concurrent Stochastic Games (RCSGs), extending our recent analysis of recursive simple stochastic games [16, 17] to a concurrent setting where the two players choose moves simultaneously and independently at each state. For multi-exit games, our earlier work already show ..."
Abstract
-
Cited by 30 (4 self)
- Add to MetaCart
(Show Context)
We study Recursive Concurrent Stochastic Games (RCSGs), extending our recent analysis of recursive simple stochastic games [16, 17] to a concurrent setting where the two players choose moves simultaneously and independently at each state. For multi-exit games, our earlier work already showed undecidability for basic questions like termination, thus we focus on the important case of single-exit RCSGs (1-RCSGs). We first characterize the value of a 1-RCSG termination game as the least fixed point solution of a system of nonlinear minimax functional equations, and use it to show PSPACE decidability for the quantitative termination problem. We then give a strategy improvement technique, which we use to show that player 1 (maximizer) has ε-optimal randomized Stackless & Memoryless (r-SM) strategies for all ε > 0, while player 2 (minimizer) has optimal r-SM strategies. Thus, such games are r-SM-determined. These results mirror and generalize in a strong sense the randomized memoryless determinacy results for finite stochastic games, and extend the classic Hoffman-Karp [22] strategy improvement approach from the finite to an infinite state setting. The proofs in our infinite-state setting are very different however, relying on subtle analytic properties of certain power series that arise from studying 1-RCSGs. We show that our upper bounds, even for qualitative (probability 1) termination, cannot be improved, even to NP, without a major breakthrough, by giving two reductions: first a P-time reduction from the long-standing square-root sum problem to the quantitative termination decision problem for finite concurrent stochastic games, and then a P-time reduction from the latter problem to the qualitative termination problem for 1-RCSGs.
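Schematically, and in paraphrase rather than the paper's exact notation, the nonlinear minimax system whose least non-negative solution gives the 1-RCSG termination values combines three kinds of right-hand sides: weighted sums at probabilistic vertices, products at call sites (composition of sub-game terminations), and one-shot matrix-game values where the players move simultaneously.

```latex
% Paraphrased shape of the fixed-point system x = P(x) for a 1-RCSG
% termination game; the termination values are its least non-negative
% solution. M_u(x) denotes the matrix whose (a,b) entry is the expression
% for the successor reached under the simultaneous moves a, b.
x_u =
\begin{cases}
  1 & \text{if } u \text{ is the exit,} \\
  \sum_{v} p_{uv}\, x_v & \text{if } u \text{ is a probabilistic vertex,} \\
  x_{\mathit{call}} \cdot x_{\mathit{return}} & \text{if } u \text{ enters a box (recursive call),} \\
  \operatorname{val}\bigl( M_u(x) \bigr) & \text{if the players move simultaneously at } u.
\end{cases}
```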