Results 1–6 of 6
Two Heads are Better than Two Tapes
, 1994
"... . We show that a Turing machine with two singlehead onedimensional tapes cannot recognize the set f x2x 0 j x 2 f0; 1g and x 0 is a prefix of x g in real time, although it can do so with three tapes, two twodimensional tapes, or one twohead tape, or in linear time with just one tape. In ..."
Abstract

Cited by 9 (5 self)
We show that a Turing machine with two single-head one-dimensional tapes cannot recognize the set { x2x′ | x ∈ {0,1}* and x′ is a prefix of x } in real time, although it can do so with three tapes, two two-dimensional tapes, or one two-head tape, or in linear time with just one tape. In particular, this settles the long-standing conjecture that a two-head Turing machine can recognize more languages in real time if its heads are on the same one-dimensional tape than if they are on separate one-dimensional tapes. 1. Introduction. The Turing machines commonly used and studied in computer science have separate tapes for input/output and for storage, so that we can conveniently study both storage as a dynamic resource and the more complex storage structures required for efficient implementation of practical algorithms [HS65]. Early researchers [MRF67] asked specifically whether two-head storage is more powerful if both heads are on the same one-dimensional storage tape than if t...
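The language in the abstract above is concrete enough to illustrate with a small membership checker: a word is in the language exactly when it is a binary string x, the separator symbol 2, and then a prefix x′ of x. The sketch below is a plain linear-time scan (the function name is ours), not the real-time Turing-machine constructions the paper studies.

```python
# Membership test for L = { x2x' : x in {0,1}*, x' a prefix of x },
# the language from the abstract above. Illustrative only; this is an
# ordinary linear scan, not a real-time multi-tape TM construction.
def in_language(w: str) -> bool:
    if w.count("2") != 1:          # exactly one separator symbol "2"
        return False
    x, xp = w.split("2")
    return (set(x) <= {"0", "1"}   # x is a binary string
            and set(xp) <= {"0", "1"}
            and x.startswith(xp))  # x' must be a prefix of x

# Examples: "101", the separator "2", then candidate prefixes of "101".
print(in_language("1012101"))  # True: x' = x is allowed
print(in_language("10121"))    # True: proper prefix "1"
print(in_language("101211"))   # False: "11" is not a prefix of "101"
```

The difficulty the paper addresses is not membership per se (which is easy offline) but recognizing L in real time, i.e. answering after each input symbol with no delay.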
Machine Models and Linear Time Complexity
 SIGACT News
, 1993
"... wer bounds. Machine models. Suppose that for every machine M 1 in model M 1 running in time t = t(n) there is a machine M 2 in M 2 which computes the same partial function in time g = g(t; n). If g = O(t)+O(n) we say that model M 2 simulates M 1 linearly. If g = O(t) the simulation has constantf ..."
Abstract

Cited by 5 (3 self)
...lower bounds. Machine models. Suppose that for every machine M_1 in model M_1 running in time t = t(n) there is a machine M_2 in M_2 which computes the same partial function in time g = g(t, n). If g = O(t) + O(n) we say that model M_2 simulates M_1 linearly. If g = O(t) the simulation has constant-factor overhead; if g = O(t log t) it has a factor-of-O(log t) overhead, and so on. The simulation is online if each step of M_1 i...
On superlinear lower bounds in complexity theory
 In Proc. 10th Annual IEEE Conference on Structure in Complexity Theory
, 1995
"... This paper first surveys the neartotal lack of superlinear lower bounds in complexity theory, for “natural” computational problems with respect to many models of computation. We note that the dividing line between models where such bounds are known and those where none are known comes when the mode ..."
Abstract

Cited by 1 (1 self)
This paper first surveys the near-total lack of superlinear lower bounds in complexity theory for "natural" computational problems with respect to many models of computation. We note that the dividing line between models where such bounds are known and those where none are known comes when the model allows nonlocal communication with memory at unit cost. We study a model that imposes a "fair cost" for nonlocal communication, and obtain modest superlinear lower bounds for some problems via a Kolmogorov-complexity argument. Then we look to the larger picture of what it will take to prove really striking lower bounds, and pull from our and others' work a concept of information vicinity that may offer new tools and modes of analysis to a young field that rather lacks them.
On the Leftmost Derivation in Matrix Grammars
, 1997
"... Matrix grammars are one of the classical topics of formal languages, more specically, regulated rewriting. Although this type of control on the work of contextfree grammars is one of the earliest, matrix grammars still raise interesting questions (not to speak about old open problems in this area). ..."
Abstract

Cited by 1 (0 self)
Matrix grammars are one of the classical topics of formal languages, more specifically, regulated rewriting. Although this type of control on the work of context-free grammars is one of the earliest, matrix grammars still raise interesting questions (not to speak about old open problems in this area). One such class of problems concerns the leftmost derivation (in grammars without appearance checking). The main point of this paper is the systematic study of all possibilities of defining leftmost derivation in matrix grammars. Twelve types of such a restriction are defined, only four of which have been discussed in the literature. For seven of them, we find a proof of a characterization of recursively enumerable languages (by matrix grammars with arbitrary context-free rules but without appearance checking). The other three cases characterize the recursively enumerable languages modulo a morphism and an intersection with a regular language. In this way, we solve nearly all problems listed as open on ...
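To make the notion concrete: in a matrix grammar (without appearance checking), a matrix is a sequence of context-free rules that must be applied in order, each rewriting one occurrence of its left-hand nonterminal. The sketch below is a toy example of ours, not from the paper, using the leftmost occurrence for each rule, one of the leftmost restrictions the abstract refers to; the grammar generates { aⁿbⁿcⁿ : n ≥ 1 }, which no context-free grammar can.

```python
# Toy matrix-grammar derivation step, assumed semantics: apply each
# context-free rule (lhs, rhs) of the matrix in order, rewriting the
# leftmost occurrence of lhs; the matrix fails if some lhs is absent.
def apply_matrix(sentential: str, matrix):
    for lhs, rhs in matrix:
        i = sentential.find(lhs)
        if i == -1:
            return None  # not applicable (no appearance checking)
        sentential = sentential[:i] + rhs + sentential[i + 1:]
    return sentential

# A classic-style example generating { a^n b^n c^n : n >= 1 }.
m_start  = [("S", "ABC")]
m_grow   = [("A", "aA"), ("B", "bB"), ("C", "cC")]  # grow all three counts
m_finish = [("A", "a"), ("B", "b"), ("C", "c")]     # terminate together

s = apply_matrix("S", m_start)   # "ABC"
s = apply_matrix(s, m_grow)      # "aAbBcC"
s = apply_matrix(s, m_finish)
print(s)                         # "aabbcc"
```

The control comes from bundling rules: A, B and C can only grow (or terminate) in lockstep, which is what lifts matrix grammars beyond context-free power.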
Fast nondeterministic recognition of context-free languages using two queues
"... We show how to accept a contextfree language nondeterministically in O ( n log n) time on a twoqueue machine. Keywords: Algorithms, Formal Languages, Theory of Computation. 1 ..."
Abstract
We show how to accept a context-free language nondeterministically in O(n log n) time on a two-queue machine. Keywords: Algorithms, Formal Languages, Theory of Computation.