Results 1–10 of 33
An Automata-Theoretic Approach to Branching-Time Model Checking
 JOURNAL OF THE ACM
, 1998
Cited by 297 (65 self)
Translating linear temporal logic formulas to automata has proven to be an effective approach for implementing linear-time model checking, and for obtaining many extensions and improvements to this verification method. On the other hand, for branching temporal logic, automata-theoretic techniques have long been thought to introduce an exponential penalty, making them essentially useless for model checking. Recently, Bernholtz and Grumberg have shown that this exponential penalty can be avoided, though they did not match the linear complexity of non-automata-theoretic algorithms. In this paper we show that alternating tree automata are the key to a comprehensive automata-theoretic framework for branching temporal logics. Not only, as was shown by Muller et al., can they be used to obtain optimal decision procedures, but, as we show here, they also make it possible to derive optimal model-checking algorithms. Moreover, the simple combinatorial structure that emerges from the a...
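The fixpoint view underlying branching-time model checking can be made concrete. The following is a minimal, illustrative Python sketch — not the paper's automata-theoretic construction — that evaluates the CTL modalities EX, E[p U q], and EG over an explicit Kripke structure given as a successor map; all names are hypothetical.

```python
# Illustrative explicit-state CTL evaluation via fixpoints (hypothetical names).

def sat_EX(phi_states, succ):
    """States with SOME successor satisfying phi."""
    return {s for s in succ if succ[s] & phi_states}

def sat_EU(p_states, q_states, succ):
    """E[p U q]: least fixpoint -- q-states, or p-states with a successor
    already known to satisfy E[p U q]."""
    result = set(q_states)
    changed = True
    while changed:
        new = {s for s in succ if s in p_states and succ[s] & result}
        changed = not new <= result
        result |= new
    return result

def sat_EG(p_states, succ):
    """EG p: greatest fixpoint -- p-states keeping a successor in the set."""
    result = set(p_states)
    changed = True
    while changed:
        keep = {s for s in result if succ[s] & result}
        changed = keep != result
        result = keep
    return result

# Toy structure: 0 -> {0,1}, 1 -> {2}, 2 -> {2}; p holds in {0,1}, q in {2}.
succ = {0: {0, 1}, 1: {2}, 2: {2}}
p, q = {0, 1}, {2}
print(sat_EX(q, succ))     # {1, 2}
print(sat_EU(p, q, succ))  # {0, 1, 2}
print(sat_EG(p, succ))     # {0}
```

Each operator runs in time polynomial in the structure, which is the non-automata-theoretic baseline the paper's framework matches.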
On the fixed parameter complexity of graph enumeration problems definable in monadic second-order logic
, 2001
Models of Computation: Exploring the Power of Computing
Cited by 56 (6 self)
Theoretical computer science treats any computational subject for which a good model can be created. Research on formal models of computation was initiated in the 1930s and 1940s by Turing, Post, Kleene, Church, and others. In the 1950s and 1960s programming languages, language translators, and operating systems were under development and therefore became both the subject and basis for a great deal of theoretical work. The power of computers of this period was limited by slow processors and small amounts of memory, and thus theories (models, algorithms, and analysis) were developed to explore the efficient use of computers as well as the inherent complexity of problems. The former subject is known today as algorithms and data structures, the latter computational complexity. The focus of theoretical computer scientists in the 1960s on languages is reflected in the first textbook on the subject, Formal Languages and Their Relation to Automata by John Hopcroft and Jeffrey Ullman. This influential book led to the creation of many language-centered theoretical computer science courses; many introductory theory courses today continue to reflect the content of this book and the interests of theoreticians of the 1960s and early 1970s. Although
Fast parallel circuits for the quantum Fourier transform
 PROCEEDINGS 41ST ANNUAL SYMPOSIUM ON FOUNDATIONS OF COMPUTER SCIENCE (FOCS’00)
, 2000
Cited by 54 (2 self)
We give new bounds on the circuit complexity of the quantum Fourier transform (QFT). We give an upper bound of O(log n + log log(1/ε)) on the circuit depth for computing an approximation of the QFT with respect to the modulus 2^n with error bounded by ε. Thus, even for exponentially small error, our circuits have depth O(log n). The best previous depth bound was O(n), even for approximations with constant error. Moreover, our circuits have size O(n log(n/ε)). We also give an upper bound of O(n (log n)^2 log log n) on the circuit size of the exact QFT modulo 2^n, for which the best previous bound was O(n^2). As an application of the above depth bound, we show that Shor's factoring algorithm may be based on quantum circuits with depth only O(log n) and polynomial size, in combination with classical polynomial-time pre- and post-processing. In the language of computational complexity, this implies that factoring is in the complexity class ZPP^BQNC, where BQNC is the class of problems computable with bounded-error probability by quantum circuits with polylogarithmic depth and polynomial size. Finally, we prove an Ω(log n) lower bound on the depth complexity of approximations of the
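The size bounds can be made concrete on the textbook circuit. Below is a small Python sketch, assuming the usual decomposition of the QFT into Hadamards and controlled phase rotations R_k: it builds the exact QFT as the normalized DFT matrix, checks unitarity, and counts gates when rotations beyond a cutoff m are dropped, which is the standard route from Θ(n^2) exact size toward the O(n log(n/ε)) size mentioned above. Function names are hypothetical.

```python
import cmath

def qft_matrix(n):
    """Exact QFT on n qubits as a 2^n x 2^n matrix: the normalized DFT."""
    N = 2 ** n
    return [[cmath.exp(2j * cmath.pi * j * k / N) / N ** 0.5
             for k in range(N)] for j in range(N)]

def is_unitary(M, tol=1e-9):
    """Check that the columns of M are orthonormal (M dagger M = I)."""
    N = len(M)
    return all(abs(sum(M[r][a].conjugate() * M[r][b] for r in range(N))
                   - (1 if a == b else 0)) < tol
               for a in range(N) for b in range(N))

def gate_count(n, m=None):
    """Gates in the textbook QFT circuit: n Hadamards plus controlled
    rotations; keeping only rotations R_k with k <= m (the approximate QFT)
    gives O(n*m) gates instead of Theta(n^2)."""
    m = n if m is None else m
    return n + sum(min(n - 1 - q, m - 1) for q in range(n))

assert is_unitary(qft_matrix(3))
print(gate_count(8))        # exact circuit: 36 gates
print(gate_count(8, m=3))   # truncated circuit: 21 gates
```

Choosing m = O(log(n/ε)) in this count is what yields the quasi-linear size regime discussed in the abstract.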
Verification of Fair Transition Systems
, 1998
Cited by 19 (9 self)
In program verification, we check that an implementation meets its specification. Both the specification and the implementation describe the possible behaviors of the program, though at different levels of abstraction. We distinguish between two approaches to implementation of specifications. The first approach is trace-based implementation, where we require every computation of the implementation to correlate to some computation of the specification. The second approach is tree-based implementation, where we require every computation tree embodied in the implementation to correlate to some computation tree embodied in the specification. The two approaches to implementation are strongly related to the linear-time versus branching-time dichotomy in temporal logic. In this work we examine the trace-based and the tree-based approaches from a complexity-theoretic point of view. We consider and compare the complexity of verification of fair transition systems, modeling both the implement...
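The tree-based notion of implementation is closely related to the simulation preorder, which can be computed as a greatest fixpoint. A minimal Python sketch under the assumption of state-labeled finite transition systems; all names are hypothetical:

```python
def simulation(impl_succ, spec_succ, impl_label, spec_label):
    """Pairs (i, s) such that spec state s simulates impl state i:
    the greatest fixpoint of the usual stepwise-refinement condition."""
    rel = {(i, s) for i in impl_succ for s in spec_succ
           if impl_label[i] == spec_label[s]}
    changed = True
    while changed:
        changed = False
        for i, s in list(rel):
            # every impl step must be matched by a spec step staying in rel
            if not all(any((i2, s2) in rel for s2 in spec_succ[s])
                       for i2 in impl_succ[i]):
                rel.discard((i, s))
                changed = True
    return rel

# Impl makes a single a->b step; the spec allows a->b and a->c,
# so the spec simulates the implementation.
impl_succ, impl_label = {0: {1}, 1: set()}, {0: 'a', 1: 'b'}
spec_succ = {0: {1, 2}, 1: set(), 2: set()}
spec_label = {0: 'a', 1: 'b', 2: 'c'}
rel = simulation(impl_succ, spec_succ, impl_label, spec_label)
print((0, 0) in rel)   # True
```

Trace inclusion, the linear-time notion, is coarser than simulation and in general much harder to check, which is the complexity gap the paper quantifies for fair transition systems.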
Symmetries and the Complexity of Pure Nash Equilibrium
, 2006
Cited by 18 (3 self)
Strategic games may exhibit symmetries in a variety of ways. A common aspect of symmetry, enabling the compact representation of games even when the number of players is unbounded, is that players cannot (or need not) distinguish between the other players. We define four classes of symmetric games by considering two additional properties: identical payoff functions for all players and the ability to distinguish oneself from the other players. Based on these varying notions of symmetry, we investigate the computational complexity of pure Nash equilibria. It turns out that in all four classes of games equilibria can be found efficiently when only a constant number of actions is available to each player, a problem that has been shown intractable for other succinct representations of multiplayer games. We further show that identical payoff functions simplify the search for equilibria, while a growing number of actions renders it intractable. Finally, we show that our results extend to wider classes of threshold symmetric games where players are unable to determine the exact number of players playing a certain action.
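For the fully symmetric case (identical payoff functions, players unable to distinguish each other), the efficiency claim for constantly many actions is easy to see: profiles only matter up to action counts, so with k actions there are only O(n^(k-1)) of them to check. A brute-force Python sketch under that anonymity assumption; the representation and names are hypothetical:

```python
from itertools import combinations_with_replacement
from collections import Counter

def pure_equilibria(n, actions, payoff):
    """Pure Nash equilibria of a fully symmetric (anonymous) game.
    payoff(a, counts) is a player's utility for playing action a given the
    counts of all n players' actions (including her own). Profiles are
    enumerated up to action counts, of which there are polynomially many
    for a constant number of actions."""
    equilibria = []
    for profile in combinations_with_replacement(actions, n):
        counts = Counter(profile)
        stable = True
        for a in set(profile):            # some player currently plays a
            for b in actions:             # unilateral deviation a -> b
                if b == a:
                    continue
                dev = counts.copy()
                dev[a] -= 1
                dev[b] += 1
                if payoff(b, dev) > payoff(a, counts):
                    stable = False
        if stable:
            equilibria.append(dict(counts))
    return equilibria

# Toy anonymous coordination game: each player earns the number of players
# (herself included) choosing her own action.
eq = pure_equilibria(3, ['x', 'y'], lambda a, c: c[a])
print(eq)   # [{'x': 3}, {'y': 3}]
```

By symmetry one deviation check per action played suffices, which is what keeps the whole search polynomial for fixed k.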
Ideal Membership in Polynomial Rings over the Integers
 J. Amer. Math. Soc
Cited by 16 (2 self)
We present a new approach to the ideal membership problem for polynomial rings over the integers: given polynomials f0, f1, ..., fn ∈ Z[X], where X = (X1, ..., XN) is an N-tuple of indeterminates, are there g1, ..., gn ∈ Z[X] such that f0 = g1f1 + · · · + gnfn? We show that the degree of the polynomials g1, ..., gn can be bounded by (2d)^(2^(O(N^2))) (h + 1), where d is the maximum total degree and h the maximum height of the coefficients of f0, ..., fn. Some related questions, primarily concerning linear equations in R[X], where R is the ring of integers of a number field, are also treated.
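For orientation, the simplest instance of the question (one variable, a single generator, n = 1 and N = 1) reduces membership to exact division with an integer quotient. A small Python sketch of that special case only — not of the paper's multivariate bound; names are hypothetical:

```python
from fractions import Fraction

def divides_in_ZX(f1, f0):
    """Decide membership f0 in the principal ideal (f1) of Z[X].
    Polynomials are coefficient lists, lowest degree first. Returns the
    integer-coefficient quotient g with f0 = g * f1, or None."""
    if all(c == 0 for c in f0):
        return [0]
    if len(f0) < len(f1):
        return None
    rem = [Fraction(c) for c in f0]
    quot = [Fraction(0)] * (len(f0) - len(f1) + 1)
    lead = Fraction(f1[-1])
    for i in range(len(quot) - 1, -1, -1):
        c = rem[i + len(f1) - 1] / lead   # eliminate the current top term
        quot[i] = c
        for j, a in enumerate(f1):
            rem[i + j] -= c * a
    if any(r != 0 for r in rem) or any(q.denominator != 1 for q in quot):
        return None                        # fails over Q, or only over Z
    return [int(q) for q in quot]

# x^2 - 1 = (x - 1)(x + 1): membership holds with g = x - 1.
print(divides_in_ZX([1, 1], [-1, 0, 1]))   # [-1, 1]
# x^2 + 1 is not a multiple of x + 1 in Z[X].
print(divides_in_ZX([1, 1], [1, 0, 1]))    # None
```

With several generators and variables no such direct division works, which is why effective degree bounds like the one above are the key tool.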
Parallel Approximation Algorithms for Maximum Weighted Matching in General Graphs
 Information Processing Letters
, 2000
Cited by 15 (0 self)
The problem of computing a matching of maximum weight in a given edge-weighted graph is not known to be P-hard or in RNC. This paper presents four parallel approximation algorithms for this problem. The first is an RNC approximation scheme, i.e., an RNC algorithm that computes a matching of weight at least 1 − ε times the maximum for any given constant ε > 0. The second one is an NC approximation algorithm achieving an approximation ratio of 1/(2 + ε) for any fixed ε > 0. The third and fourth algorithms only need to know the total order of the weights, so they are useful when the edge weights require a large amount of memory to represent. The third one is an NC approximation algorithm that finds a matching of weight at least 2/(3Δ + 2) times the maximum, where Δ is the maximum degree of the graph. The fourth one is an RNC algorithm that finds a matching of weight at least 1/(2Δ + 4) times the maximum on average, and runs in O(log Δ) time, not depending on the size of the graph.
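As a point of comparison, the standard sequential greedy algorithm already achieves ratio 1/2 by scanning edges in non-increasing weight order; the paper's contribution is approaching such guarantees in parallel. A Python sketch of the sequential baseline, with hypothetical names:

```python
def greedy_matching(edges):
    """Greedy 1/2-approximation for maximum weight matching: take each edge
    in non-increasing weight order whose endpoints are both still free.
    (Sequential baseline; the paper's algorithms are NC/RNC parallel.)"""
    matched = set()
    matching = []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in matched and v not in matched:
            matched |= {u, v}
            matching.append((u, v, w))
    return matching

# Path a-b-c-d with weights 2, 3, 2: greedy takes only (b, c) for weight 3,
# while the optimum {(a, b), (c, d)} has weight 4 -- within the 1/2 bound.
m = greedy_matching([('a', 'b', 2), ('b', 'c', 3), ('c', 'd', 2)])
print(m)   # [('b', 'c', 3)]
```

Note that greedy needs only the total order of the weights, the same access model assumed by the paper's third and fourth algorithms.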
Computing the minimal covering set
 In Proceedings of the 11th Conference on Theoretical Aspects of Rationality and Knowledge
, 2007
Cited by 14 (11 self)
We present the first polynomial-time algorithm for computing the minimal covering set of a (weak) tournament. The algorithm draws upon a linear programming formulation of a subset of the minimal covering set known as the essential set. On the other hand, we show that no efficient algorithm exists for two variants of the minimal covering set, the minimal upward covering set and the minimal downward covering set, unless P equals NP. Finally, we observe a strong relationship between von Neumann-Morgenstern stable sets and upward covering on the one hand, and the Banks set and downward covering on the other.
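The covering relation itself is easy to compute by brute force; it is the minimality requirement that calls for the linear-programming machinery described above. A Python sketch of covering and the uncovered set in a tournament, with hypothetical names ("beats" is the dominance relation):

```python
def uncovered_set(alternatives, beats):
    """Uncovered set of a tournament: x covers y iff x beats y and also
    beats everything y beats; the uncovered set collects the alternatives
    covered by nobody. (Brute force over all ordered pairs.)"""
    dominion = {x: {z for z in alternatives if beats(x, z)}
                for x in alternatives}
    covered = {y for y in alternatives
               for x in alternatives
               if x != y and beats(x, y) and dominion[y] <= dominion[x]}
    return set(alternatives) - covered

# 3-cycle a > b > c > a: no alternative covers another, so all are uncovered.
cyc = {('a', 'b'), ('b', 'c'), ('c', 'a')}
u = uncovered_set(['a', 'b', 'c'], lambda x, y: (x, y) in cyc)
print(u)   # {'a', 'b', 'c'}
```

The upward and downward variants mentioned in the abstract replace the dominion comparison with the corresponding condition on dominators, and it is their minimal covering sets that turn out to be intractable.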