Results 1–10 of 12
A deterministic subexponential algorithm for solving parity games
 SODA
, 2006
Abstract

Cited by 44 (2 self)
The existence of polynomial time algorithms for the solution of parity games is a major open problem. The fastest known algorithms for the problem are randomized algorithms that run in subexponential time. These algorithms are all ultimately based on the randomized subexponential simplex algorithms of Kalai and of Matousek, Sharir and Welzl. Randomness seems to play an essential role in these algorithms. We use a completely different, and elementary, approach to obtain a deterministic subexponential algorithm for the solution of parity games. The new algorithm, like the existing randomized subexponential algorithms, uses only polynomial space, and it is almost as fast as the randomized subexponential algorithms mentioned above.
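The abstract's algorithm is involved, but the basic building block of most parity-game solvers is the attractor computation: the set of vertices from which one player can force the play into a target set. A minimal sketch follows; the dict-based graph encoding, the 0/1 player convention, and the assumption of no parallel edges are illustrative choices, not the paper's notation.

```python
def attractor(vertices, edges, owner, target, player):
    """Vertices from which `player` can force the play into `target`.

    owner[v] in {0, 1} says who moves at v. Backward fixpoint: a vertex
    joins the attractor once the player owns it and has one edge into
    the attractor, or the opponent owns it and has no edge escaping it.
    Assumes no parallel edges.
    """
    preds = {v: set() for v in vertices}
    out_degree = {v: 0 for v in vertices}
    for u, v in edges:
        preds[v].add(u)
        out_degree[u] += 1

    attr = set(target)
    escapes = dict(out_degree)   # opponent edges not yet known to enter attr
    frontier = list(target)
    while frontier:
        v = frontier.pop()
        for u in preds[v]:
            if u in attr:
                continue
            if owner[u] == player:
                attr.add(u)
                frontier.append(u)
            else:
                escapes[u] -= 1
                if escapes[u] == 0:  # every edge of u leads into attr
                    attr.add(u)
                    frontier.append(u)
    return attr
```

For example, with vertices a, b, c, where Odd owns a and both of a's edges enter the Even attractor of {c}, vertex a is attracted as well; adding a self-loop at a gives Odd an escape and a drops out.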
Better quality in synthesis through quantitative objectives
 In CoRR, abs/0904.2638
, 2009
Abstract

Cited by 21 (9 self)
Most specification languages express only qualitative constraints. However, among two implementations that satisfy a given specification, one may be preferred to another. For example, if a specification asks that every request is followed by a response, one may prefer an implementation that generates responses quickly but does not generate unnecessary responses. We use quantitative properties to measure the “goodness” of an implementation. Using games with corresponding quantitative objectives, we can synthesize “optimal” implementations, which are preferred among the set of possible implementations that satisfy a given specification. In particular, we show how automata with lexicographic mean-payoff conditions can be used to express many interesting quantitative properties for reactive systems. In this framework, the synthesis of optimal implementations requires the solution of lexicographic mean-payoff games (for safety requirements), and the solution of games with both lexicographic mean-payoff and parity objectives (for liveness requirements). We present algorithms for solving both kinds of novel graph games.
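For intuition on mean-payoff values: the value of a play that eventually loops on a cycle is the average edge weight on that cycle, and in the lexicographic variant each edge carries a vector of weights whose per-dimension averages are compared in priority order. A small illustrative sketch; the tuple-of-weights encoding is an assumption, not the paper's definitions.

```python
from fractions import Fraction

def lex_mean_payoff(cycle):
    """Per-dimension average weight of a play looping forever on `cycle`,
    where each element of `cycle` is a tuple of edge weights."""
    k = len(cycle[0])
    n = len(cycle)
    return tuple(Fraction(sum(w[d] for w in cycle), n) for d in range(k))

# Tuples of Fractions compare lexicographically, matching the preference
# order between two ultimately periodic plays: the first dimension (highest
# priority) decides, ties fall through to the next dimension.
assert lex_mean_payoff([(1, 0), (3, 2)]) > lex_mean_payoff([(2, 0), (2, 0)])
```

Exact `Fraction` arithmetic avoids the floating-point ties that would otherwise blur lexicographic comparisons between plays with equal high-priority averages.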
Coalgebraic automata theory: Basic results
 Logical Methods in Computer Science
"... Vol. 4 (4:10) 2008, pp. 1–43, www.lmcs-online.org ..."
Environment Assumptions for Synthesis
, 2008
Abstract

Cited by 11 (3 self)
The synthesis problem asks to construct a reactive finite-state system from an ω-regular specification. Initial specifications are often unrealizable, which means that there is no system that implements the specification. A common reason for unrealizability is that assumptions on the environment of the system are incomplete. We study the problem of correcting an unrealizable specification ϕ by computing an environment assumption ψ such that the new specification ψ → ϕ is realizable. Our aim is to construct an assumption ψ that constrains only the environment and is as weak as possible. We present a two-step algorithm for computing assumptions. The algorithm operates on the game graph that is used to answer the realizability question. First, we compute a safety assumption that removes a minimal set of environment edges from the graph. Second, we compute a liveness assumption that puts fairness conditions on some of the remaining environment edges. We show that the problem of finding a minimal set of fair edges is computationally hard, and we use probabilistic games to compute a locally minimal fairness assumption.
Games and Model Checking for Guarded Logics
 In Proceedings of LPAR 2001, Lecture Notes in Computer Science 2250
, 2000
Abstract

Cited by 9 (4 self)
We investigate the model checking problems for guarded first-order and fixed point logics by reducing them to parity games. This approach is known to provide good results for the modal µ-calculus and is very closely related to automata-based methods. To obtain good results also for guarded logics, optimized constructions of games have to be provided. Further, we study the structure of parity games, isolate 'easy' cases that admit efficient algorithmic solutions, and determine their relationship to specific fragments of guarded fixed point logics.
A Sub-Quadratic Algorithm for Conjunctive and Disjunctive BESs
, 2004
Abstract

Cited by 6 (2 self)
We present an algorithm for conjunctive and disjunctive Boolean equation systems (BESs), which arise frequently in the verification and analysis of finite-state concurrent systems. In contrast to the previously best known O(e²) time solutions, our algorithm computes the solution of such a fixpoint equation system with size e and alternation depth d in O(e log d) time.
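For intuition on why disjunctive BESs are easy: in the alternation-free, purely disjunctive case, solving reduces to plain graph reachability, since a variable is true iff its dependency graph reaches a variable whose right-hand side contains the constant true. The sketch below covers only that special case under an assumed encoding; the paper's O(e log d) algorithm additionally handles alternation.

```python
from collections import deque

def solve_disjunctive_bes(equations):
    """equations: var -> (deps, const), encoding X = const ∨ (∨ deps).

    Alternation-free, purely disjunctive case only: a variable is true
    iff a variable with const == True is reachable via dependencies.
    Solved by a backward BFS from the true constants.
    """
    preds = {x: set() for x in equations}
    for x, (deps, _) in equations.items():
        for y in deps:
            preds[y].add(x)
    true_vars = {x for x, (_, const) in equations.items() if const}
    queue = deque(true_vars)
    while queue:
        y = queue.popleft()
        for x in preds[y]:
            if x not in true_vars:
                true_vars.add(x)
                queue.append(x)
    return true_vars
```

The backward BFS visits each dependency edge at most once, so this special case runs in time linear in the size of the equation system.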
Faster and Dynamic Algorithms for Maximal End-Component Decomposition and Related Graph Problems in Probabilistic Verification
Abstract

Cited by 5 (4 self)
We present faster and dynamic algorithms for the following problems arising in probabilistic verification: computation of the maximal end-component (MEC) decomposition of Markov decision processes (MDPs), and of the almost-sure winning set for reachability and parity objectives in MDPs. We achieve the following running times for static algorithms in MDPs with graphs of n vertices and m edges: (1) O(m · min{√m, n^{2/3}}) for the MEC decomposition, improving the long-standing O(m · n) bound; (2) O(m · n^{2/3}) for reachability objectives, improving the previous O(m · √m) bound for m > n^{4/3}; and (3) O(m · min{√m, n^{2/3}} · log d) for parity objectives with d priorities, improving the previous O(m · √m · d) bound. We also give incremental and decremental algorithms in linear time for MEC decomposition and reachability objectives, and in O(m · log d) time for parity objectives.
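The classical O(m · n) baseline that the abstract improves on repeatedly computes SCCs and prunes actions that can leave their component; the surviving components are the maximal end-components. A self-contained sketch of that baseline (not the paper's faster algorithm); the dict encoding of MDP actions as lists of successor sets is an illustrative assumption.

```python
def sccs(nodes, edges):
    """Kosaraju's SCC algorithm; edges: node -> set of successors."""
    order, seen = [], set()
    for s in nodes:
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(edges.get(s, ())))]
        while stack:                      # iterative DFS, record postorder
            v, it = stack[-1]
            for w in it:
                if w in nodes and w not in seen:
                    seen.add(w)
                    stack.append((w, iter(edges.get(w, ()))))
                    break
            else:
                stack.pop()
                order.append(v)
    rev = {v: set() for v in nodes}
    for v in nodes:
        for w in edges.get(v, ()):
            if w in nodes:
                rev[w].add(v)
    comps, seen = [], set()
    for s in reversed(order):             # sweep reversed graph
        if s in seen:
            continue
        comp, work = set(), [s]
        seen.add(s)
        while work:
            v = work.pop()
            comp.add(v)
            for w in rev[v]:
                if w not in seen:
                    seen.add(w)
                    work.append(w)
        comps.append(comp)
    return comps

def mec_decomposition(states, actions):
    """actions: state -> list of successor sets (one per action).

    Classical fixpoint: repeatedly drop actions whose successors leave
    their SCC and states with no actions left; the stable SCCs are the
    maximal end-components.
    """
    act = {s: [set(a) for a in actions.get(s, [])] for s in states}
    alive = set(states)
    while True:
        edges = {s: set().union(*act[s]) if act[s] else set() for s in alive}
        comps = sccs(alive, edges)
        comp_of = {s: i for i, c in enumerate(comps) for s in c}
        changed = False
        for s in list(alive):
            kept = [a for a in act[s]
                    if all(t in alive and comp_of[t] == comp_of[s] for t in a)]
            if len(kept) != len(act[s]):
                act[s] = kept
                changed = True
            if not kept:
                alive.discard(s)
                changed = True
        if not changed:
            return comps
```

Each round removes at least one action or state, so the loop runs at most O(n + m) times with an O(m) SCC pass per round, matching the O(m · n) bound the paper improves.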
Open implication
Abstract

Cited by 4 (2 self)
We argue that the usual trace-based notions of implication and equivalence for linear temporal logics are too strong and should be complemented by the weaker notions of open implication and open equivalence. Although open implication is harder to compute, it can be used to advantage both in model checking and in synthesis. We study the difference between trace-based equivalence and open equivalence and describe an algorithm to compute open implication of Linear Temporal Logic formulas with an asymptotically optimal complexity. We also show how to compute open implication while avoiding Safra's construction. We have implemented an open-implication solver for Generalized Reactivity(1) specifications. In a case study, we show that open equivalence can be used to justify the use of an alternative specification that allows us to synthesize much smaller systems in far less time.
Mechanizing the powerset construction for restricted classes of ω-automata
 Tech. Rep. 228, Institut für Informatik, Albert-Ludwigs-Universität Freiburg
, 2007
Abstract

Cited by 4 (1 self)
Automata over infinite words provide a powerful framework, which we can use to solve various decision problems. However, automated reasoning with restricted classes of automata over infinite words is often simpler and more efficient. For instance, weak deterministic Büchi automata, which recognize the ω-regular languages in the Borel class Fσ ∩ Gδ, can be handled algorithmically almost as efficiently as deterministic automata over finite words. In this paper, we show how and when we can determinize automata over infinite words by the standard powerset construction for finite words. The presented construction is more efficient than all-purpose constructions for automata that recognize languages in Fσ ∩ Gδ. Further, based on the powerset construction, we present an improved automata construction that handles quantification in the automata-based approach for FO(R, Z, +, <) much more efficiently.
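The finite-word subset construction the report builds on can be sketched as below. Applying it to an ω-automaton is only language-preserving for the restricted classes the report identifies, and the acceptance condition (the subtle part) is omitted here; the dict-based transition encoding is an illustrative assumption.

```python
from collections import deque

def powerset_construction(initial, delta, alphabet):
    """Standard subset construction for finite-word automata.

    delta: (state, letter) -> set of successor states.
    Returns the deterministic start state (a frozenset) and the
    deterministic transition map over the reachable state sets.
    """
    start = frozenset(initial)
    trans, seen, queue = {}, {start}, deque([start])
    while queue:
        current = queue.popleft()
        for letter in alphabet:
            # deterministic successor: union of nondeterministic moves
            nxt = frozenset(t for s in current
                            for t in delta.get((s, letter), ()))
            trans[(current, letter)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return start, trans
```

Only reachable subsets are materialized, so the output is often far smaller than the worst-case 2^n states, which is part of why the restricted-class determinization is practical.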
State of Büchi Complementation
Abstract

Cited by 1 (1 self)
Büchi complementation has been studied for five decades, since the formalism was introduced in 1960. Known complementation constructions can be classified into Ramsey-based, determinization-based, rank-based, and slice-based approaches. For the performance of these approaches, there have been several complexity analyses but very few experimental results. What is especially lacking is a comparative experiment on all four approaches to see how they perform in practice. In this paper, we review the state of Büchi complementation, propose several optimization heuristics, and perform comparative experiments on the four approaches. The experimental results show that the determinization-based Safra-Piterman construction outperforms the other three, and that our heuristics substantially improve both the Safra-Piterman construction and the slice-based construction.