Results 1–10 of 13
A Survey of Combinatorial Gray Codes
SIAM Review, 1996
Abstract

Cited by 84 (2 self)
The term combinatorial Gray code was introduced in 1980 to refer to any method for generating combinatorial objects so that successive objects differ in some prespecified, small way. This notion generalizes the classical binary reflected Gray code scheme for listing n-bit binary numbers so that successive numbers differ in exactly one bit position, as well as work in the 1960s and 1970s on minimal-change listings for other combinatorial families, including permutations and combinations. The area of combinatorial Gray codes was popularized by Herbert Wilf in his invited address at the SIAM Discrete Mathematics Conference in 1988 and his subsequent SIAM monograph, in which he posed some open problems and variations on the theme. This resulted in much recent activity in the area, and most of the problems posed by Wilf are now solved. In this paper, we survey the area of combinatorial Gray codes, describe recent results, variations, and trends, and highlight some open problems. ...
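The binary reflected Gray code mentioned in this abstract has a well-known closed form, g(i) = i XOR (i >> 1); a minimal sketch (ours, not taken from the survey):

```python
def gray_codes(n):
    """List all n-bit codewords of the binary reflected Gray code.

    Successive codewords differ in exactly one bit position."""
    return [i ^ (i >> 1) for i in range(1 << n)]

codes = gray_codes(3)
print([format(c, "03b") for c in codes])
# → ['000', '001', '011', '010', '110', '111', '101', '100']

# successive codewords differ in exactly one bit:
assert all(bin(a ^ b).count("1") == 1 for a, b in zip(codes, codes[1:]))
```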
Are bitvectors optimal?
Abstract

Cited by 54 (7 self)
... We show lower bounds that come close to our upper bounds (for a large range of n and ε): schemes that answer queries with just one bit probe and error probability ε must use Ω((n / (ε log(1/ε))) · log m) bits of storage; if the error is restricted to queries not in S, then the scheme must use Ω((n² / (ε² log(n/ε))) · log m) bits of storage. We also ...
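For context, the baseline the title asks about is the plain m-bit characteristic vector, which answers membership for a set S ⊆ {0, ..., m−1} with a single bit probe and no error; a minimal sketch (the function names are ours, not the paper's):

```python
def build_bitvector(S, m):
    """Store S ⊆ {0, ..., m-1} as an m-bit characteristic vector."""
    bits = [0] * m
    for x in S:
        bits[x] = 1
    return bits

def member(bits, x):
    """Answer a membership query by probing a single bit."""
    return bits[x] == 1

bv = build_bitvector({2, 5, 7}, m=10)
assert member(bv, 5) and not member(bv, 3)
```

The paper's question is how much less space a randomized one-probe scheme can use once a small error probability ε is allowed.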
Lower bounds for Union-Split-Find related problems on random access machines
1994
Abstract

Cited by 49 (3 self)
We prove Ω(√(log log n)) lower bounds on the random access machine complexity of several dynamic, partially dynamic, and static data structure problems, including the union-split-find problem, dynamic prefix problems, and one-dimensional range query problems. The proof techniques include a general technique using perfect hashing for reducing static data structure problems (with a restriction on the size of the structure) to partially dynamic data structure problems (with no such restriction), thus providing a way to transfer lower bounds. We use a generalization of a method due to Ajtai for proving the lower bounds on the static problems, but describe the proof in terms of communication complexity, revealing a striking similarity to the proof used by Karchmer and Wigderson for proving lower bounds on the monotone circuit depth of connectivity.

1 Introduction and summary of results

In this paper we give lower bounds for the complexity of implementing several dynamic and sta ...
Complexity Models for Incremental Computation
1994
Abstract

Cited by 42 (4 self)
We present a new complexity-theoretic approach to incremental computation. We define complexity classes that capture the intuitive notion of incremental efficiency and study their relation to existing complexity classes. We show that problems that have small sequential space complexity also have small incremental time complexity. We show that all common LOGSPACE-complete problems for P are also incr-POLYLOGTIME-complete for P. We introduce a restricted notion of completeness called NRP-completeness and show that problems which are NRP-complete for P are also incr-POLYLOGTIME-complete for P. We also give incrementally complete problems for NLOGSPACE, LOGSPACE, and nonuniform NC¹. We show that under certain restrictions, problems which have efficient dynamic solutions also have efficient parallel solutions. We also consider a nonuniform model of incremental computation and show that in this model most problems have almost linear complexity. In addition, we present some techniques f ...
Cell probe complexity – a survey
 In 19th Conference on the Foundations of Software Technology and Theoretical Computer Science (FSTTCS), 1999. Advances in Data Structures Workshop
Abstract

Cited by 28 (0 self)
The cell probe model is a general, combinatorial model of data structures. We give a survey of known results about the cell probe complexity of static and dynamic data structure problems, with an emphasis on techniques for proving lower bounds.
Dynamic Word Problems
1993
Abstract

Cited by 17 (6 self)
Let M be a fixed finite monoid. We consider the problem of implementing a data type containing a vector x = (x_1, x_2, ..., x_n) ∈ M^n, initially (1, 1, ..., 1), with two kinds of operations: for each i ∈ {1, ..., n} and a ∈ M, an operation change_{i,a} which changes x_i to a, and a single operation product returning ∏_{i=1}^{n} x_i. This is the dynamic word problem for M. If, in addition, we have for each j ∈ {1, ..., n} an operation prefix_j returning ∏_{i=1}^{j} x_i, we get the dynamic prefix problem for M. We analyze the complexity of these problems in the cell probe or decision assignment tree model for two natural cell sizes, 1 bit and log n bits. We obtain a partial classification of the complexity based on algebraic properties of M.
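The change and product operations described above can be supported with O(log n) monoid multiplications per update via a segment tree; a sketch under the assumption that n is a power of two (this illustrates the problem's interface, not the paper's cell-probe bounds):

```python
class DynamicWord:
    """Maintain x in M^n under change(i, a) and product(), using a
    segment tree: O(log n) monoid multiplications per change.
    Assumes n is a power of two so leaf order matches x_1, ..., x_n."""

    def __init__(self, n, op, identity):
        self.n, self.op = n, op
        self.tree = [identity] * (2 * n)

    def change(self, i, a):
        """Set x_i := a (i is 0-indexed) and recompute the ancestors."""
        i += self.n
        self.tree[i] = a
        while i > 1:
            i //= 2
            self.tree[i] = self.op(self.tree[2 * i], self.tree[2 * i + 1])

    def product(self):
        """Return x_1 · x_2 · ... · x_n (stored at the root)."""
        return self.tree[1]

# Example with the free monoid of strings under concatenation:
w = DynamicWord(4, lambda a, b: a + b, "")
for i, c in enumerate("abcd"):
    w.change(i, c)
assert w.product() == "abcd"
w.change(2, "X")
assert w.product() == "abXd"
```

The ordered recombination at internal nodes is what makes this correct even for non-commutative monoids, which is exactly the setting where the word problem's algebraic structure matters.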
Hardness Results for Dynamic Problems by Extensions of Fredman and Saks' Chronogram Method
In Proc. 25th Int. Coll. Automata, Languages, and Programming, number 1443 in Lecture Notes in Computer Science, 1998
Abstract

Cited by 8 (3 self)
We introduce new models for dynamic computation based on the cell probe model of Fredman and Yao. We give these models access to nondeterministic queries or to the right answer ±1 as an oracle. We prove that for the dynamic partial sum problem, these new powers do not help: the problem retains its lower bound of Ω(log n / log log n). From ...
New Lower Bound Techniques For Dynamic Partial Sums and Related Problems
SIAM Journal on Computing, 2003
Abstract

Cited by 8 (1 self)
We study the complexity of the dynamic partial sum problem in the cell-probe model. We give the model access to nondeterministic queries and prove that the problem remains hard. We give the model access to the right answer as an oracle and prove that the problem remains hard. This suggests which kind of information is hard to maintain. From these results, we derive a number of lower bounds for dynamic algorithms and data structures: we prove lower bounds for dynamic algorithms for existential range queries, reachability in directed graphs, planarity testing, planar point location, incremental parsing, and fundamental data structure problems like maintaining the majority of the prefixes of a string of bits. We prove a lower bound for reachability in grid graphs in terms of the graph's width. We characterize the complexity of maintaining the value of any symmetric function on the prefixes of a bit string.

Keywords: cell-probe model, partial sum, dynamic algorithm, data structure

AMS subject classifications: 68Q17, 68Q10, 68Q05, 68P05
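On the upper-bound side, the classical data structure for dynamic partial sums is the Fenwick (binary indexed) tree, which supports updates and prefix-sum queries in O(log n) word operations; a minimal sketch for contrast with the lower bounds above:

```python
class PartialSums:
    """Fenwick tree: update(i, delta) and prefix_sum(i), both O(log n)."""

    def __init__(self, n):
        self.n = n
        self.t = [0] * (n + 1)   # 1-indexed internal array

    def update(self, i, delta):
        """Add delta to x_i (1-indexed)."""
        while i <= self.n:
            self.t[i] += delta
            i += i & -i          # step to the next covering node

    def prefix_sum(self, i):
        """Return x_1 + ... + x_i."""
        s = 0
        while i > 0:
            s += self.t[i]
            i -= i & -i          # strip the lowest set bit
        return s

ps = PartialSums(8)
ps.update(3, 5)
ps.update(7, 2)
assert ps.prefix_sum(6) == 5
assert ps.prefix_sum(8) == 7
```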
Integer Representations towards Efficient Counting in the Bit Probe Model
Abstract
We consider the problem of representing numbers in close to optimal space while supporting increment, decrement, addition, and subtraction operations efficiently. We study the problem in the bit probe model and analyse the number of bits read and written to perform the operations, both in the worst case and in the average case. A counter is space-optimal if it represents any number in the range [0, ..., 2^n − 1] using exactly n bits. We provide a space-optimal counter which supports increment and decrement operations by reading at most n − 1 bits and writing at most 3 bits in the worst case. To the best of our knowledge, this is the first such representation which supports these operations by always reading strictly fewer than n bits. For redundant counters, where we only need to represent numbers in the range [0, ..., L] for some integer L < 2^n − 1 using n bits, we define the efficiency of the counter as the ratio between L + 1 and 2^n. We present various representations that achieve different trade-offs between the read and write complexities and the efficiency. We also give another representation of integers that uses n + O(log n) bits to represent integers in the range [0, ..., 2^n − 1] and supports efficient addition and subtraction operations, improving the space complexity of an earlier representation by Munro and Rahman [Algorithmica, 2010].
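For comparison with these read/write bounds, the classical Gray code counter writes exactly one bit per increment but may read all n bits (to compute the parity); a sketch of that textbook scheme, not the paper's (n − 1)-read construction:

```python
def gray_increment(bits):
    """Increment an n-bit binary reflected Gray code counter in place.

    Writes exactly 1 bit; reads up to n bits. bits[0] is the least
    significant bit. Wraps around after 2^n - 1 increments."""
    if sum(bits) % 2 == 0:
        bits[0] ^= 1                 # even parity: flip the lowest bit
    else:
        i = bits.index(1)            # position of the lowest set bit
        if i + 1 < len(bits):
            bits[i + 1] ^= 1         # flip the bit just above it
        else:
            bits[i] ^= 1             # lowest set bit is the top bit: wrap

bits = [0, 0, 0]
seen = []
for _ in range(8):
    gray_increment(bits)
    seen.append(tuple(bits))
assert len(set(seen)) == 8           # cycles through all 8 codewords
assert bits == [0, 0, 0]             # and wraps back to the start
```

This illustrates the trade-off the abstract studies: minimizing bits written (here, always 1) while the number of bits read stays large in the worst case.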