Results 1–8 of 8
Two Heads are Better than Two Tapes
, 1994
"... . We show that a Turing machine with two singlehead onedimensional tapes cannot recognize the set f x2x 0 j x 2 f0; 1g and x 0 is a prefix of x g in real time, although it can do so with three tapes, two twodimensional tapes, or one twohead tape, or in linear time with just one tape. In ..."
Abstract

Cited by 9 (5 self)
We show that a Turing machine with two single-head one-dimensional tapes cannot recognize the set { x2x′ | x ∈ {0, 1}* and x′ is a prefix of x } in real time, although it can do so with three tapes, two two-dimensional tapes, or one two-head tape, or in linear time with just one tape. In particular, this settles the long-standing conjecture that a two-head Turing machine can recognize more languages in real time if its heads are on the same one-dimensional tape than if they are on separate one-dimensional tapes. 1. Introduction The Turing machines commonly used and studied in computer science have separate tapes for input/output and for storage, so that we can conveniently study both storage as a dynamic resource and the more complex storage structures required for efficient implementation of practical algorithms [HS65]. Early researchers [MRF67] asked specifically whether two-head storage is more powerful if both heads are on the same one-dimensional storage tape than if t...
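Stripped of the machine model, membership in the language above is easy to test; the paper's content is entirely about whether the test can be carried out in real time on two separate one-dimensional tapes. A minimal Python sketch of the bare membership check (the function name is ours, not the paper's):

```python
def in_language(w: str) -> bool:
    """Check membership in { x2x' | x in {0,1}*, x' a prefix of x }.

    This ignores the machine model entirely; the paper asks whether
    this check can be done in *real time* on two single-head
    one-dimensional tapes, not whether it is decidable.
    """
    if w.count("2") != 1:          # exactly one separator symbol '2'
        return False
    x, x_prime = w.split("2")      # x before the '2', x' after it
    if any(c not in "01" for c in x + x_prime):
        return False               # both halves must be binary strings
    return x.startswith(x_prime)   # x' must be a prefix of x

# e.g. in_language("010201") is True (x = "010", x' = "01"),
#      in_language("010211") is False ("11" is not a prefix of "010").
```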
Machine Models and Linear Time Complexity
 SIGACT News
, 1993
"... wer bounds. Machine models. Suppose that for every machine M 1 in model M 1 running in time t = t(n) there is a machine M 2 in M 2 which computes the same partial function in time g = g(t; n). If g = O(t)+O(n) we say that model M 2 simulates M 1 linearly. If g = O(t) the simulation has constantf ..."
Abstract

Cited by 5 (3 self)
... lower bounds. Machine models. Suppose that for every machine M1 in model M1 running in time t = t(n) there is a machine M2 in M2 which computes the same partial function in time g = g(t, n). If g = O(t) + O(n) we say that model M2 simulates M1 linearly. If g = O(t) the simulation has constant-factor overhead; if g = O(t log t) it has a factor-of-O(log t) overhead, and so on. The simulation is online if each step of M1 i...
On the Leftmost Derivation in Matrix Grammars
, 1997
"... Matrix grammars are one of the classical topics of formal languages, more specically, regulated rewriting. Although this type of control on the work of contextfree grammars is one of the earliest, matrix grammars still raise interesting questions (not to speak about old open problems in this area). ..."
Abstract

Cited by 1 (0 self)
Matrix grammars are one of the classical topics of formal languages, more specifically, regulated rewriting. Although this type of control on the work of context-free grammars is one of the earliest, matrix grammars still raise interesting questions (not to speak about old open problems in this area). One such class of problems concerns the leftmost derivation (in grammars without appearance checking). The main point of this paper is the systematic study of all possibilities of defining leftmost derivation in matrix grammars. Twelve types of such a restriction are defined, only four of which have been discussed in the literature. For seven of them, we find a proof of a characterization of recursively enumerable languages (by matrix grammars with arbitrary context-free rules but without appearance checking). Three other cases characterize the recursively enumerable languages modulo a morphism and an intersection with a regular language. In this way, we solve nearly all problems listed as open on ...
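For readers unfamiliar with the model: a matrix grammar groups context-free rules into sequences ("matrices") that must be applied as a unit, and the leftmost restrictions studied in the paper differ in exactly which occurrence or rule must be rewritten first. A toy Python sketch, using the classic matrices generating { a^n b^n c^n | n ≥ 1 } and one particular leftmost convention (each rule rewrites the leftmost occurrence of its left-hand side); the helper names are ours, not the paper's:

```python
# Each matrix is a sequence of context-free rules applied atomically.
MATRICES = [
    [("S", "XY")],                 # m0: start
    [("X", "aX"), ("Y", "bYc")],   # m1: pump a, b, c together
    [("X", "a"), ("Y", "bc")],     # m2: terminate
]

def apply_matrix(sent: str, matrix) -> "str | None":
    """Apply every rule of the matrix once, each to the leftmost
    occurrence of its left-hand side; None if some rule fails."""
    for lhs, rhs in matrix:
        i = sent.find(lhs)
        if i < 0:
            return None
        sent = sent[:i] + rhs + sent[i + 1:]
    return sent

# Derivation of a^3 b^3 c^3: start, pump twice, terminate.
s = "S"
for m in (MATRICES[0], MATRICES[1], MATRICES[1], MATRICES[2]):
    s = apply_matrix(s, m)
print(s)  # aaabbbccc
```

Varying which occurrence (or which applicable rule) counts as "leftmost" yields different derivation modes, which is precisely the axis along which the paper's twelve restriction types differ.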
On superlinear lower bounds in complexity theory
 In Proc. 10th Annual IEEE Conference on Structure in Complexity Theory
, 1995
"... This paper first surveys the neartotal lack of superlinear lower bounds in complexity theory, for “natural” computational problems with respect to many models of computation. We note that the dividing line between models where such bounds are known and those where none are known comes when the mode ..."
Abstract

Cited by 1 (1 self)
This paper first surveys the near-total lack of superlinear lower bounds in complexity theory for “natural” computational problems with respect to many models of computation. We note that the dividing line between models where such bounds are known and those where none are known comes when the model allows nonlocal communication with memory at unit cost. We study a model that imposes a “fair cost” for nonlocal communication, and obtain modest superlinear lower bounds for some problems via a Kolmogorov-complexity argument. Then we look to the larger picture of what it will take to prove really striking lower bounds, and pull from our and others’ work a concept of information vicinity that may offer new tools and modes of analysis to a young field that rather lacks them.
Fast nondeterministic recognition of context-free languages using two queues
"... We show how to accept a contextfree language nondeterministically in O ( n log n) time on a twoqueue machine. Keywords: Algorithms, Formal Languages, Theory of Computation. 1 ..."
Abstract

Cited by 1 (0 self)
We show how to accept a context-free language nondeterministically in O(n log n) time on a two-queue machine. Keywords: Algorithms, Formal Languages, Theory of Computation.
Two Tapes versus One for Off-line Turing Machines
 Birkhäuser Verlag, Basel
"... Abst rac t. We prove the first superlinear lower bound for a concrete, polynomial time recognizable d cision problem on a Taring machine with one work tape and a twoway input tape (also called offline 1tape Turing machine). In particular, for offline Turing machines we show that two tapes are bet ..."
Abstract
We prove the first superlinear lower bound for a concrete, polynomial-time recognizable decision problem on a Turing machine with one work tape and a two-way input tape (also called an off-line 1-tape Turing machine). In particular, for off-line Turing machines we show that two tapes are better than one and that three pushdown stores are better than two (both in the deterministic and in the nondeterministic case). Key words: off-line 1-tape Turing machines; two tapes; lower bounds; time; nondeterminism. Subject classifications: 68Q05, 68Q25. 1. Introduction A 1-tape off-line Turing machine (see Hennie 1965, p. 166) is a Turing machine (TM) with one work tape and an additional two-way input tape, i.e., an input tape with end markers on which the associated read-only input head can move without restriction in both directions. These TMs are used as the standard model for the analysis of the space complexity of TM computations. In addition, they are of interest as an intermediate model between the relatively slow 1-tape TM without input tape and the relatively powerful 2-tape TM. No nontrivial lower bounds are known for the recognition of polynomial-time computable languages on 2-tape Turing machines. On the other hand, lower-bound arguments for concrete languages on restricted TMs have progressed from 1-tape TMs without input tape (Hennie 1965, Rabin 1963) to ...
Elsevier
, 1989
"... The complexity of matrix transposition on onetape offline Turing machines with output tape* ..."
Abstract
The complexity of matrix transposition on one-tape off-line Turing machines with output tape