Results 1–10 of 11
Theory and Algorithms for Plan Merging, 1992. Cited by 69 (3 self).
Abstract: Merging operators in a plan can yield significant savings in the cost to execute a plan. This paper provides a formal theory for plan merging and presents both optimal and efficient heuristic algorithms for finding minimum-cost merged plans. The optimal plan-merging algorithm applies a dynamic programming method to handle multiple linear plans and is extended to partially ordered plans in a novel way. Furthermore, with worst- and average-case complexity analysis and empirical tests, we demonstrate that efficient and well-behaved approximation algorithms are applicable for optimizing large plans.
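The dynamic-programming idea can be illustrated on the simplest case: two linear plans sharing some operators. The toy sketch below (not the paper's algorithm; operator names and costs are invented) computes a minimum-cost common supersequence of the two operator sequences, so that a shared operator is paid for once.

```python
# Illustrative sketch only: merge two linear plans by computing a
# minimum-cost common supersequence of their operator sequences, so a
# shared operator executes (and is paid for) once. A toy in the spirit
# of the paper's dynamic-programming approach, not its algorithm.

def merge_cost(plan_a, plan_b, cost):
    """plan_a, plan_b: lists of operator names; cost: name -> cost."""
    m, n = len(plan_a), len(plan_b)
    # dp[i][j] = min cost to realise plan_a[:i] together with plan_b[:j]
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = dp[i - 1][0] + cost[plan_a[i - 1]]
    for j in range(1, n + 1):
        dp[0][j] = dp[0][j - 1] + cost[plan_b[j - 1]]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            best = min(dp[i - 1][j] + cost[plan_a[i - 1]],
                       dp[i][j - 1] + cost[plan_b[j - 1]])
            if plan_a[i - 1] == plan_b[j - 1]:   # shared operator: merge it
                best = min(best, dp[i - 1][j - 1] + cost[plan_a[i - 1]])
            dp[i][j] = best
    return dp[m][n]

a = ["load", "drive", "unload"]
b = ["load", "fly", "unload"]
costs = {"load": 2, "drive": 5, "fly": 9, "unload": 2}
print(merge_cost(a, b, costs))  # 18.0: load and unload are merged
```

Executing the plans separately would cost 22; merging the shared load and unload operators saves 4.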
On approximation preserving reductions: Complete problems and robust measures, 1987. Cited by 35 (0 self).
Abstract: We investigate the well-known anomalous differences in the approximability properties of NP-complete optimization problems. We define a notion of polynomial-time reduction between optimization problems, and introduce conditions guaranteeing that such reductions preserve various types of approximate solutions. We then prove that a weighted version of the satisfiability problem, the traveling salesperson problem, and the zero-one integer programming problem are in a strong sense approximation complete for the class of NP minimization problems. Finally, we discuss the reasons that cause the standard relative-error approximation quality measure to break down in computationally simple problem transformations, and give a general construction for producing quality measures that are more robust with respect to an arbitrary given class of invertible transformations.
Limitations of the Upward Separation Technique, 1990. Cited by 16 (0 self).
Abstract: This paper was presented at the 16th International Colloquium on Automata, Languages, and Programming [3].
Algorithmic Information Theory, 1989. Cited by 13 (0 self).
Abstract: We present a critical discussion of the claim (most forcefully propounded by Chaitin) that algorithmic information theory sheds new light on Gödel's first incompleteness theorem.
An Upward Measure Separation Theorem. Theoretical Computer Science, 1991. Cited by 6 (3 self).
Abstract: It is shown that almost every language in ESPACE is very hard to approximate with circuits. It follows that P ≠ BPP implies that E is a measure 0 subset of ESPACE.
1 Introduction
Hartmanis and Yesha [HY84] proved that P is a proper subset of P/Poly ∩ PSPACE if and only if E is a proper subset of ESPACE. (See Section 2 for notation and terminology used in this introduction.) This refined the downward separation result E ⊊ ESPACE ⟹ P ⊊ PSPACE of Book [Boo74] and also led immediately to the upward separation result P ⊊ BPP ⟹ E ⊊ ESPACE (1.1) of Hartmanis and Yesha [HY84]. (Work of Gill [Gil77], Adleman [Adl78], and Bennett and Gill [BG81] had already established that BPP is contained in P/Poly ∩ PSPACE.) It is reasonable to conjecture that BPP is in fact a proper subset of P/Poly ∩ PSPACE, and hence that the P ⊊ BPP hypothesis might yield a stronger conclusion than the separation of E from ESPACE. This paper supports this intuition by proving the fo...
Suffix Trees and String Complexity. Advances in Cryptology: Proc. of EUROCRYPT, LNCS 658, 1992. Cited by 3 (0 self).
Abstract: Let s = (s_1, s_2, ..., s_n) be a sequence of characters where s_i ∈ Z_p for 1 ≤ i ≤ n. One measure of the complexity of the sequence s is the length of the shortest feedback shift register that will generate s, which is known as the maximum order complexity of s [17, 18]. We provide a proof that the expected length of the shortest feedback register to generate a sequence of length n is less than 2 log_p n + o(1), and also give several other statistics of interest for distinguishing random strings. The proof is based on relating the maximum order complexity to a data structure known as a suffix tree.
1 Introduction
A common form of stream cipher is the so-called running key cipher [4, 9], a deterministic approximation to the one-time pad. A running key cipher generates an ultimately periodic sequence s = (s_1, s_2, ..., s_n), s_i ∈ Z_p, 1 ≤ i ≤ n, for a given seed or key K. Encryption is performed as with the one-time pad, using s as the key stream, but perfect secu...
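As a concrete handle on the definition, the maximum order complexity of s is the smallest m such that every length-m window of s is always followed by the same next symbol (so an order-m feedback shift register can reproduce s from its first m symbols). The naive O(n²) sketch below is my own illustration, not code from the paper; a suffix tree yields the same quantity in linear time via its deepest branching node.

```python
def max_order_complexity(s):
    """Smallest m such that each length-m window of s uniquely
    determines the next symbol. Naive O(n^2) check; the paper relates
    this quantity to the suffix tree of s for a linear-time view."""
    n = len(s)
    for m in range(n):
        nxt = {}          # maps a length-m window to its successor
        consistent = True
        for i in range(n - m):
            w = tuple(s[i:i + m])
            if w in nxt and nxt[w] != s[i + m]:
                consistent = False   # same window, two different successors
                break
            nxt[w] = s[i + m]
        if consistent:
            return m
    return n

print(max_order_complexity([0, 1, 0, 1, 1]))  # 3
print(max_order_complexity([0, 0, 0, 0]))     # 0: constant sequence
```

For [0, 1, 0, 1, 1], windows of length 2 fail ((0, 1) is followed by both 0 and 1) but every length-3 window has a unique successor, so the complexity is 3.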
Philosophical Issues in Kolmogorov Complexity. In Proceedings on Automata, Languages and Programming (ICALP '92), 1992. Cited by 3 (0 self).
Abstract: ... this article at a conceptual level, it is sufficient to know that the Kolmogorov complexity of a finite string x ...
Random Languages for Non-Uniform Complexity Classes. Journal of Complexity, 1991. Cited by 2 (0 self).
Abstract: A language A is considered to be random for a class C if for every language B in C the fraction of the strings on which A and B coincide is approximately 1/2. We show that there exist languages in DSPACE(f(n)) which are random for the non-uniform class DSPACE(g(n))/h(n), where n, g(n) and h(n) are in o(f(n)). Non-uniform complexity classes were introduced by Karp and Lipton [Karp and Lipton 1980] and allow an advice string that depends only on the length of the input as additional information. This paper extends a result by Wilber [Wilber 1983], who proved bounds for the existence of random languages for (uniform) time and space classes. Huynh [Huynh 1987] provides a result for the special case of P/poly-random languages in EXPSPACE. Here we explore a different method using strings with high generalized Kolmogorov complexity [Hartmanis 1983]. A characterization of the non-uniform space classes in terms of Kolmogorov complexity is given. This generalizes a result of [Balcázar, Díaz, and...
Symmetry of Information and One-Way Functions. Inform. Proc. Letters, 1993. Cited by 2 (1 self).
Abstract: Symmetry of information (in Kolmogorov complexity) is a concept that comes from formalizing the idea of how much information about a string y is contained in a string x. The situation is symmetric because it can be shown that the amount of information contained in the string y about the string x is almost exactly the same as that contained in x about y. In this paper we address symmetry of information in resource-bounded environments. While we show that symmetry still holds in space-bounded environments, it probably does not hold in time-bounded environments. We show that if it holds for polynomial time bounds, then one-way functions cannot exist.
1 Introduction
Keywords: computational complexity, Kolmogorov complexity, one-way functions.
In probability theory, the phenomenon of dependence between random variables is well known. Cast in terms of classical Shannon entropy [Sha48, Sha49], the quantity of information in a random variable Y about another random variable X is I(X;Y) = ...
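The unbounded-resource analogue from Shannon theory can be checked numerically: for any joint distribution, the information in Y about X, H(X) − H(X|Y), equals the information in X about Y, H(Y) − H(Y|X). The toy joint distribution below is illustrative only (not from the paper).

```python
from math import log2

# Worked Shannon-entropy analogue of symmetry of information.
# Toy joint distribution p(x, y); values chosen arbitrarily.
p = {(0, 0): 0.5, (0, 1): 0.25, (1, 0): 0.125, (1, 1): 0.125}

def H(dist):
    """Shannon entropy in bits of a probability mass function."""
    return -sum(q * log2(q) for q in dist.values() if q > 0)

# Marginal distributions of X and Y.
px, py = {}, {}
for (x, y), q in p.items():
    px[x] = px.get(x, 0.0) + q
    py[y] = py.get(y, 0.0) + q

H_joint = H(p)
H_x_given_y = H_joint - H(py)      # H(X|Y) = H(X,Y) - H(Y)
H_y_given_x = H_joint - H(px)      # H(Y|X) = H(X,Y) - H(X)

i_xy = H(px) - H_x_given_y         # information in Y about X
i_yx = H(py) - H_y_given_x         # information in X about Y
print(round(i_xy, 4), round(i_yx, 4))  # both 0.0157: symmetric
```

Both routes reduce to H(X) + H(Y) − H(X,Y); the paper asks whether the Kolmogorov-complexity counterpart of this identity survives resource bounds.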
A View of Structural Complexity Theory. Cited by 1 (1 self).
Abstract:
Introduction
At several recent conferences, the question "What is Structural Complexity Theory?" has been the source of some lively discussions. At this time there does not exist one commonly accepted answer, but the intersection of almost all answers is nonempty. The purpose of this paper is to describe one answer to this question. We will not describe recent technical results in detail, although some will be mentioned as examples, but rather will provide comments about themes and paradigms which may be useful in organizing much of the material. We assume that the reader is familiar with (or has access to) the book Structural Complexity I, by Balcázar, Díaz, and Gabarró [BDG88]. What is desired in the formulation of a theory of computational complexity is a method for dealing with the quantitative aspects of computing. Such a method would depend upon a general theory that would provide a means for defining and studying the "inherent difficulty" of computing funct...