Results 1–10 of 14
Time-Space Tradeoffs for Satisfiability
Journal of Computer and System Sciences, 1997
Abstract

Cited by 37 (1 self)
We give the first nontrivial model-independent time-space tradeoffs for satisfiability. Namely, we show that SAT cannot be solved simultaneously in n^{1+o(1)} time and n^{1-ε} space for any ε > 0 on general random-access nondeterministic Turing machines. In particular, SAT cannot be solved deterministically by a Turing machine using quasilinear time and √n space. We also give lower bounds for logspace-uniform NC^1 circuits and branching programs. Our proof uses two basic ideas. First we show that if SAT can be solved nondeterministically in a small amount of time, then we can collapse a nonconstant number of levels of the polynomial-time hierarchy. We combine this work with a result of Nepomnjascii that shows that a nondeterministic computation of superlinear time and sublinear space can be simulated in alternating linear time. A simple diagonalization yields our main result. We discuss how these bounds lead to a new approach to separating the complexity classes NL a...
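To make the problem behind these lower bounds concrete: satisfiability asks whether a CNF formula has a satisfying assignment. A naive exhaustive-search solver (shown only for illustration; it has nothing to do with the lower-bound machinery in the abstract and runs in exponential, not quasilinear, time) can be sketched as:

```python
from itertools import product

def brute_force_sat(clauses, n_vars):
    """Try all 2^n assignments. A formula is a list of clauses; each clause
    is a list of nonzero ints, DIMACS-style: -2 means 'not x2'."""
    for bits in product([False, True], repeat=n_vars):
        # a clause is satisfied if some literal evaluates to True
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause)
               for clause in clauses):
            return bits
    return None  # unsatisfiable

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(brute_force_sat([[1, 2], [-1, 2], [-2, 3]], 3))  # → (False, True, True)
```

The lower bounds above say that even much cleverer algorithms cannot get simultaneously close to linear time and strongly sublinear space on general machines.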
Time-Space Lower Bounds for Satisfiability
JACM, 2005
Abstract

Cited by 28 (8 self)
We establish the first polynomial time-space lower bounds for satisfiability on general models of computation. We show that for any constant c less than the golden ratio there exists a positive constant d such that no deterministic random-access Turing machine can solve satisfiability in time n^c and space n^d, where d approaches 1 when c does. On co-nondeterministic instead of deterministic machines, we prove the same for any constant c less than √2. Our lower bounds apply to nondeterministic linear time and to almost all natural NP-complete problems known. In fact, they even apply to the class of languages that can be solved on a nondeterministic machine in linear time and space n^{1/c}. Our proofs follow the paradigm of indirect diagonalization. We also use that paradigm to prove time-space lower bounds for languages higher up in the polynomial-time hierarchy.
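The two constants in this abstract are the golden ratio (the positive root of x² = x + 1) and √2; a quick numeric check makes the exponent thresholds concrete:

```python
from math import sqrt

# Golden ratio: deterministic-machine bound on the time exponent c.
phi = (1 + sqrt(5)) / 2
# sqrt(2): co-nondeterministic-machine bound on c.
root2 = sqrt(2)

print(f"golden ratio = {phi:.4f}")  # ≈ 1.6180
print(f"sqrt(2)      = {root2:.4f}")  # ≈ 1.4142

# Defining identity of the golden ratio: phi^2 = phi + 1.
assert abs(phi**2 - (phi + 1)) < 1e-12
```

So, for instance, time n^{1.6} with small space is ruled out for deterministic machines, while the co-nondeterministic bound stops just above n^{1.41}.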
Time-Space Tradeoffs for Nondeterministic Computation
In Proceedings of the 15th IEEE Conference on Computational Complexity, 2000
Abstract

Cited by 23 (2 self)
We show new tradeoffs for satisfiability and nondeterministic linear time. Satisfiability cannot be solved on general-purpose random-access Turing machines in time n^{1.618} and space n^{o(1)}. This improves recent results of Fortnow and of Lipton and Viglas. In general, for any constant a less than the golden ratio, we prove that satisfiability cannot be solved in time n^a and space n^b for some positive constant b. Our techniques allow us to establish this result for b < (1/2)((a+2)/a² − a). We can do better for a close to the golden ratio; for example, satisfiability cannot be solved by a random-access Turing machine using n^{1.46} time and n^{0.11} space. We also show tradeoffs for nondeterministic linear-time computations using sublinear space. For example, there exists a language computable in nondeterministic linear time and n^{0.619} space that cannot be computed in deterministic n^{1.618} time and n^{o(1)} space. Higher up the polynomial-time hierarchy we can get be...
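The space-exponent bound in this scrape is garbled; one plausible reading of the flattened fraction is b < (1/2)((a+2)/a² − a). Treat that as a best-effort reconstruction, not the paper's verified statement. Under that reading, the abstract's "we can do better" claim checks out numerically: at a = 1.46 the formula allows only b ≈ 0.08, less than the quoted 0.11:

```python
# Hypothetical reconstruction of the garbled bound (an assumption, not verified).
def b_bound(a: float) -> float:
    return 0.5 * ((a + 2) / a**2 - a)

val = b_bound(1.46)
print(round(val, 3))  # → 0.082, below the 0.11 achieved by the separate argument
```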
Time-Space Tradeoffs in the Counting Hierarchy
2001
Abstract

Cited by 18 (3 self)
We extend the lower bound techniques of [14] to the unbounded-error probabilistic model. A key step in the argument is a generalization of Nepomnjascii's theorem from the Boolean setting to the arithmetic setting. This generalization is made possible by the recent discovery of logspace-uniform TC^0 circuits for iterated multiplication [9]. Here is an ...
Machine Models and Linear Time Complexity
SIGACT News, 1993
Abstract

Cited by 5 (3 self)
...lower bounds. Machine models. Suppose that for every machine M_1 in model 𝓜_1 running in time t = t(n) there is a machine M_2 in 𝓜_2 which computes the same partial function in time g = g(t, n). If g = O(t) + O(n) we say that model 𝓜_2 simulates 𝓜_1 linearly. If g = O(t) the simulation has constant-factor overhead; if g = O(t log t) it has a factor-of-O(log t) overhead, and so on. The simulation is online if each step of M_1 i...
Linear time and memory-efficient computation
1992
Abstract

Cited by 4 (0 self)
A realistic model of computation called the Block Move (BM) model is developed. The BM regards computation as a sequence of finite transductions in memory, and operations are timed according to a memory cost parameter µ. Unlike previous memory-cost models, the BM provides a rich theory of linear time, and in contrast to what is known for Turing machines, the BM is proved to be highly robust for linear time. Under a wide range of µ parameters, many forms of the BM model, ranging from a fixed-word-size RAM down to a single finite automaton iterating itself on a single tape, are shown to simulate each other up to constant factors in running time. The BM is proved to enjoy efficient universal simulation, and to have a tight deterministic time hierarchy. Relationships among BM and TM time complexity classes are studied. Key words. Computational complexity, theory of computation, machine models, Turing machines, random-access machines, simulation, memory hierarchies, finite automata, linear time, caching. AMS/MOS classification: 68Q05, 68Q10, 68Q15, 68Q68.
On Quasilinear Time Complexity Theory
1994
Abstract

Cited by 3 (0 self)
This paper furthers the study of quasilinear time complexity initiated by Schnorr and by Gurevich and Shelah. We show that the fundamental properties of the polynomial-time hierarchy carry over to the quasilinear-time hierarchy.
On the Difference Between Turing Machine Time and Random-Access Machine Time (Extended Abstract)
Abstract

Cited by 2 (0 self)
We introduce a model of computation called the Block Move (BM) model. The BM extends the Block Transfer (BT) model of Aggarwal, Chandra, and Snir [1], who studied time complexity under various memory access cost functions ranging from µ_1(a) := a to µ_log(a) := ⌈log₂ a⌉. We show that up to factors of log t in the total running time t, BMs under µ_1 are equivalent to multitape Turing machines, and BMs under µ_log are equivalent to log-cost RAMs. We also prove that for any well-behaved µ the BM classes DµTIME[t(n)] form a tight deterministic time hierarchy. Whether there is any hierarchy at all when µ rather than t varies is tied to longstanding open problems of determinism vs. nondeterminism. Keywords: Computational complexity, theory of computation, Turing machines, random-access machines, models, simulation, finite automata. 1. Introduction. It is widely believed that random-access machines (RAMs) are more efficient and powerful than multitape Turing machines (TMs). However, no...
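The two cost regimes named in this abstract can be sketched numerically. The snippet below charges each touch of address a either µ_1(a) = a or µ_log(a) = ⌈log₂ a⌉ and totals one pass over addresses 1..n; it is illustrative only, since the BM/BT models actually charge per block move, not per single access:

```python
from math import ceil, log2

def mu_1(a: int) -> int:
    """Cost of touching address a under the flat regime: proportional to a."""
    return a

def mu_log(a: int) -> int:
    """Cost of touching address a under the logarithmic regime: ceil(log2 a)."""
    return ceil(log2(a))

# Total cost of touching addresses 1..n once under each regime.
n = 8
flat = sum(mu_1(a) for a in range(1, n + 1))    # n(n+1)/2 = 36
logc = sum(mu_log(a) for a in range(1, n + 1))  # 17, roughly n*log2(n)
print(flat, logc)  # → 36 17
```

The gap between the two totals (quadratic vs. near-linear in n) is what makes the choice of µ matter for the equivalences with Turing machines and log-cost RAMs stated above.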
On superlinear lower bounds in complexity theory
In Proc. 10th Annual IEEE Conference on Structure in Complexity Theory, 1995
Abstract

Cited by 1 (1 self)
This paper first surveys the near-total lack of superlinear lower bounds in complexity theory for “natural” computational problems with respect to many models of computation. We note that the dividing line between models where such bounds are known and those where none are known comes when the model allows nonlocal communication with memory at unit cost. We study a model that imposes a “fair cost” for nonlocal communication, and obtain modest superlinear lower bounds for some problems via a Kolmogorov-complexity argument. Then we look to the larger picture of what it will take to prove really striking lower bounds, and pull from our and others’ work a concept of information vicinity that may offer new tools and modes of analysis to a young field that rather lacks them.