Results 1–10 of 136
Almost Everywhere High Nonuniform Complexity
, 1992
Abstract

Cited by 170 (34 self)
We investigate the distribution of nonuniform complexities in uniform complexity classes. We prove that almost every problem decidable in exponential space has essentially maximum circuit-size and space-bounded Kolmogorov complexity almost everywhere. (The circuit-size lower bound actually exceeds, and thereby strengthens, the Shannon 2^n/n lower bound for almost every problem, with no computability constraint.) In exponential time complexity classes, we prove that the strongest relativizable lower bounds hold almost everywhere for almost all problems. Finally, we show that infinite pseudorandom sequences have high nonuniform complexity almost everywhere. The results are unified by a new, more powerful formulation of the underlying measure theory, based on uniform systems of density functions, and by the introduction of a new nonuniform complexity measure, the selective Kolmogorov complexity. This research was supported in part by NSF Grants CCR-8809238 and CCR-9157382 and in ...
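The "almost every" phenomena in this line of work rest on a standard counting argument, sketched here for orientation (this sketch is not taken from the paper itself): short descriptions are scarce.

```latex
% There are at most \sum_{i < n-c} 2^i < 2^{n-c} programs of length
% below n-c, so at most a 2^{-c} fraction of the 2^n strings of
% length n can have Kolmogorov complexity less than n-c:
\#\{\, x \in \{0,1\}^n : K(x) < n - c \,\} \;<\; 2^{\,n-c}.
```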
Constructing Conditional Plans by a Theorem-Prover
 Journal of Artificial Intelligence Research
, 1999
Abstract

Cited by 142 (6 self)
Research on conditional planning rejects the assumptions that there is no uncertainty or incompleteness in the knowledge of the state and dynamics of the system the plans operate on. Without these assumptions, the sequences of operations that achieve the goals depend on the initial state and on the outcomes of nondeterministic changes in the system. This setting raises the questions of how to represent plans and how to search for them, and the answers are quite different from those in the simpler classical framework. In this paper, we approach conditional planning from a new viewpoint that is motivated by the use of satisfiability algorithms in classical planning. Translating conditional planning to formulae in the propositional logic is not feasible because of inherent computational limitations. Instead, we translate conditional planning to quantified Boolean formulae. We discuss three formalizations of conditional planning as quantified Boolean formulae, and pr...
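The quantifier structure behind this translation can be illustrated on a toy domain (the domain and encoding here are hypothetical, not the paper's): plan existence has the shape "exists a plan such that, for all initial states, the plan achieves the goal", which a brute-force check makes explicit.

```python
# Toy conditional-planning instance: a robot must ensure a door is
# closed but does not know whether it starts open or closed. A plan
# maps the observation (is the door open?) to an action. Plan
# existence has the QBF shape: exists plan . forall init . goal.
from itertools import product

def execute(plan, door_open):
    # apply the action the plan prescribes for the current observation
    action = plan[door_open]
    if action == "close" and door_open:
        door_open = False
    return door_open

ACTIONS = ["close", "noop"]

# exists plan (a table from observation to action) ...
plan_exists = any(
    # ... such that forall initial states the goal (door closed) holds
    all(not execute({True: a_open, False: a_closed}, init)
        for init in (True, False))
    for a_open, a_closed in product(ACTIONS, repeat=2)
)
print(plan_exists)  # True: "close if open, noop if closed" works
```

The inner `all` plays the role of the universal quantifier over nondeterministic initial states; real encodings replace the enumeration with quantified Boolean variables.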
The quantitative structure of exponential time
 Complexity theory retrospective II
, 1997
Abstract

Cited by 90 (13 self)
Recent results on the internal, measure-theoretic structure of the exponential time complexity classes E and EXP are surveyed. The measure structure of these classes is seen to interact in informative ways with bi-immunity, complexity cores, polynomial-time reductions, completeness, circuit-size complexity, Kolmogorov complexity, natural proofs, pseudorandom generators, the density of hard languages, randomized complexity, and lowness. Possible implications for the structure of NP are also discussed.
Analog Computation via Neural Networks
 THEORETICAL COMPUTER SCIENCE
, 1994
Abstract

Cited by 87 (8 self)
We pursue a particular approach to analog computation, based on dynamical systems of the type used in neural networks research. Our systems have a fixed structure, invariant in time, corresponding to an unchanging number of "neurons". If allowed exponential time for computation, they turn out to have unbounded power. However, under polynomial-time constraints there are limits on their capabilities, though they are still more powerful than Turing machines. (A similar but more restricted model was shown to be polynomial-time equivalent to classical digital computation in previous work [20].) Moreover, there is a precise correspondence between nets and standard nonuniform circuits with equivalent resources, and as a consequence one has lower-bound constraints on what they can compute. This relationship is perhaps surprising, since our analog devices do not change in any manner with input size. We note that these networks are not likely to solve NP-hard problems in polynomial time, as the equality ...
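A minimal simulation of a net of this kind, assuming a saturated-linear activation and synchronous affine updates (the two-neuron weights below are purely illustrative, not taken from the paper):

```python
def sigma(x):
    # saturated-linear activation: identity on [0, 1], clipped outside
    return max(0.0, min(1.0, x))

def step(x, u, A, B, c):
    # one synchronous update: x'_i = sigma(A_i . x + B_i . u + c_i),
    # where x is the neuron state and u the external input
    return [sigma(sum(A[i][j] * x[j] for j in range(len(x)))
                  + sum(B[i][k] * u[k] for k in range(len(u)))
                  + c[i])
            for i in range(len(x))]

# tiny 2-neuron net with one input line (illustrative weights)
A = [[0.5, 0.25], [0.0, 1.0]]
B = [[1.0], [0.0]]
c = [0.0, -0.5]
x = [0.0, 0.0]
for bit in [1.0, 0.0, 1.0]:
    x = step(x, [bit], A, B, c)
print(x)  # [1.0, 0.0]
```

Note the structure is fixed: the same matrices A, B, c are applied at every step, regardless of input length.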
Structure in Approximation Classes
, 1996
Abstract

Cited by 75 (14 self)
In this paper we obtain new results on the structure of several computationally defined approximation classes. In particular, after defining a new approximation-preserving reducibility to be used for as many approximation classes as possible, we give the first examples of natural NPO-complete problems and the first examples of natural APX-intermediate problems. Moreover, we state new connections between the approximability properties and the query complexity of NPO problems.
Succinct Quantum Proofs for Properties of Finite Groups
 In Proc. IEEE FOCS
, 2000
Abstract

Cited by 64 (3 self)
In this paper we consider a quantum computational variant of nondeterminism based on the notion of a quantum proof, which is a quantum state that plays a role similar to a certificate in an NP-type proof. Specifically, we consider quantum proofs for properties of black-box groups, which are finite groups whose elements are encoded as strings of a given length and whose group operations are performed by a group oracle. We prove that for an arbitrary group oracle there exist succinct (polynomial-length) quantum proofs for the Group Non-Membership problem that can be checked with small error in polynomial time on a quantum computer. Classically this is impossible: it is proved that there exists a group oracle relative to which this problem does not have succinct proofs that can be checked classically with bounded error in polynomial time (i.e., the problem is not in MA relative to the group oracle constructed). By considering a certain subproblem of the Group Non-Membership problem we obtain a simple proof that there exists an oracle relative to which BQP is not contained in MA. Finally, we show that quantum proofs for non-membership and classical proofs for various other group properties can be combined to yield succinct quantum proofs for other group properties not having succinct proofs in the classical setting, such as verifying that a number divides the order of a group and verifying that a group is not a simple group.
Models of Computation  Exploring the Power of Computing
Abstract

Cited by 57 (7 self)
Theoretical computer science treats any computational subject for which a good model can be created. Research on formal models of computation was initiated in the 1930s and 1940s by Turing, Post, Kleene, Church, and others. In the 1950s and 1960s programming languages, language translators, and operating systems were under development and therefore became both the subject and basis for a great deal of theoretical work. The power of computers of this period was limited by slow processors and small amounts of memory, and thus theories (models, algorithms, and analysis) were developed to explore the efficient use of computers as well as the inherent complexity of problems. The former subject is known today as algorithms and data structures, the latter as computational complexity. The focus of theoretical computer scientists in the 1960s on languages is reflected in the first textbook on the subject, Formal Languages and Their Relation to Automata by John Hopcroft and Jeffrey Ullman. This influential book led to the creation of many language-centered theoretical computer science courses; many introductory theory courses today continue to reflect the content of this book and the interests of theoreticians of the 1960s and early 1970s. Although ...
Magic Functions
, 1999
Abstract

Cited by 55 (0 self)
We consider three apparently unrelated fundamental problems in distributed computing, cryptography and complexity theory and prove that they are essentially the same problem.
The Complexity and Distribution of Hard Problems
 SIAM JOURNAL ON COMPUTING
, 1993
Abstract

Cited by 45 (16 self)
Measure-theoretic aspects of the ≤ᴾₘ-reducibility structure of the exponential time complexity classes E = DTIME(2^linear) and E₂ = DTIME(2^polynomial) are investigated. Particular attention is given to the complexity (measured by the size of complexity cores) and distribution (abundance in the sense of measure) of languages that are ≤ᴾₘ-hard for E and other complexity classes. Tight upper and lower bounds on the size of complexity cores of hard languages are derived. The upper bound says that the ≤ᴾₘ-hard languages for E are unusually simple, in the sense that they have smaller complexity cores than most languages in E. It follows that the ≤ᴾₘ-complete languages for E form a measure-0 subset of E (and similarly in E₂). This latter fact is seen to be a special case of a more general theorem, namely, that every ≤ᴾₘ-degree (e.g., the degree of all ≤ᴾₘ-complete languages for NP) has measure 0 in E and in E₂.
Complexity of Planning with Partial Observability
 ICAPS 2004. Proceedings of the Fourteenth International Conference on Automated Planning and Scheduling
, 2004
Abstract

Cited by 39 (3 self)
We show that for conditional planning with partial observability the problem of testing existence of plans with success probability 1 is 2EXP-complete. This result completes the complexity picture for non-probabilistic propositional planning. We also give new proofs for the EXP-hardness of conditional planning with full observability and the EXPSPACE-hardness of conditional planning without observability. The proofs demonstrate how lack of full observability allows the encoding of exponential-space Turing machines in the planning problem, and how the necessity to have branching in plans corresponds to the move to a complexity class defined in terms of alternation from the corresponding deterministic complexity class. Lack of full observability necessitates the use of belief states, the number of which is exponential in the number of states, and alternation corresponds to the choices a branching plan can make.
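The exponential blow-up from states to belief states can be seen directly: if a belief state is a nonempty set of world states the agent considers possible, then n states yield 2^n - 1 belief states. A small sketch (the set-of-states view of belief states is standard, though the enumeration here is purely illustrative):

```python
# Enumerate all belief states over a set of n world states:
# every nonempty subset is a possible belief state, so there
# are 2**n - 1 of them -- exponential in n.
from itertools import combinations

def belief_states(states):
    """All nonempty subsets of the given state set."""
    s = list(states)
    return [frozenset(c)
            for r in range(1, len(s) + 1)
            for c in combinations(s, r)]

n = 4
print(len(belief_states(range(n))))  # 15, i.e. 2**4 - 1
```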