Results 1-10 of 16
Symmetric Logspace is Closed Under Complement
 Chicago Journal of Theoretical Computer Science, 1994
Abstract

Cited by 26 (1 self)
We present a Logspace, many-one reduction from the undirected st-connectivity problem to its complement. This shows that SL = co-SL.
Improved Optimal Shared Memory Simulations, and the Power of Reconfiguration
 In Proceedings of the 3rd Israel Symposium on Theory of Computing and Systems
Abstract

Cited by 8 (6 self)
We present time-processor optimal randomized algorithms for simulating a shared memory machine (EREW PRAM) on a distributed memory machine (DMM). The first algorithm simulates each step of an n-processor EREW PRAM on an n-processor DMM with O(log log n / log log log n) delay with high probability. This simulation is work optimal and can be made time-processor optimal. The best previous optimal simulations require O(log log n) delay. We also study reconfigurable DMMs, which are a "complete network version" of the well-studied reconfigurable meshes. We show an algorithm that simulates each step of an n-processor EREW PRAM on an n-processor reconfigurable DMM with only O(log n) delay with high probability. We further show how to make this simulation time-processor optimal.

1 Introduction

Parallel machines that communicate via a shared memory (Parallel Random Access Machines, PRAMs) are the most commonly used machine model for describing parallel algorithms [J92]. The PRAM is relative...
Efficient Self Simulation Algorithms for Reconfigurable Arrays
, 1995
Abstract

Cited by 7 (1 self)
There are several reconfiguring-network models of parallel computation considered in the published literature, depending on their switching capabilities. Can these reconfigurable models be the basis for the design of massively parallel computers? Perhaps the most fundamental related issue is virtual parallelism, or the self-simulation problem: given an algorithm designed for a large reconfigurable mesh, can it be executed efficiently on a smaller reconfigurable mesh? In this work we give several positive answers to the self-simulation problem. We show that the simulation of a reconfiguring mesh by a smaller one can be carried out optimally, using standard methods, on the model in which buses are established along rows or along columns. A novel technique is shown to achieve asymptotically optimal self-simulation on models which allow buses to switch column and row edges, provided that a bus is a "linear" path of connected edges. Finally, for models in which a bus is any ...
Refining Randomness
, 1996
Abstract

Cited by 7 (2 self)
deny we succeeded with Gal. I want to conclude with my family. I will never forget the love and support I received from my brother and sisters even when we deeply disagreed. Nothing will divide us! Many warm wishes to you, Deanna and Harry. I appreciate your letting us go our way. Many times when I look at Gal I appreciate the sacrifice you have made. We love you very much. We wish our small world was smaller. Finally, to the one who brought light into my lonely life. To the one with whom I share my life, happy or sad. Dear Paula and lovely Gal, my soul and blood: I love you.

Contents
1 Introduction
1.1 Randomness Has Lots of Structure
1.1.1 An Example: Random Walks
1.2 Is Randomness Feasible?
1.2.1 Chaos, Quantum Mechanics and Crude Randomness
1.2.2 Refining Crude Randomness ...
On the Relation Between Parallel Real-Time Computations and Sublogarithmic Space
 Proceedings of the Fourteenth Conference on Parallel and Distributed Computing and Systems
, 2001
Abstract

Cited by 6 (6 self)
We show that all the problems solvable by a nondeterministic machine with logarithmic work space (NLOGSPACE) can be solved in real time by a parallel machine, no matter how tight the real-time constraints are. We also show that, once real-time constraints are dropped, several other real-time problems are in effect solvable in nondeterministic logarithmic space. Therefore, we conjecture that NLOGSPACE contains exactly all the computations that admit efficient (poly(n) processors) real-time parallel implementations. The issue of approximate real-time solutions for problems not solvable in real time is also investigated. In the process, we determine the computational power of reconfigurable multiple bus machines (RMBMs) with polynomially bounded resources (processors and buses) and running in constant time, which is found to be exactly the same as the power of fusing directed RMBMs with O(n²) processors and O(n) buses, each of width 1, as well as exactly the same as the power of directed reconfigurable networks (DRNs) of polynomially bounded size and constant running time.
Simulating shared memory in real time: On the computation power of reconfigurable meshes
 In Proceedings of the 2nd IEEE Workshop on Reconfigurable Architectures
, 1995
Abstract

Cited by 6 (1 self)
We consider randomized simulations of shared memory on a distributed memory machine (DMM) where the n processors and the n memory modules of the DMM are connected via a reconfigurable architecture. We first present a randomized simulation of a CRCW PRAM on a reconfigurable DMM having a complete reconfigurable interconnection. It guarantees delay O(log* n), with high probability. Next we study a reconfigurable mesh DMM (RMDMM). Here the n processors and n modules are connected via an n × n reconfigurable mesh. It was already known that an n × m reconfigurable mesh can simulate in constant time an n-processor CRCW PRAM with shared memory of size m. In this paper we present a randomized step-by-step simulation of a CRCW PRAM with arbitrarily large shared memory on an RMDMM. It guarantees constant delay with high probability, i.e., it simulates in real time. Finally we prove a lower bound showing that size Θ(n²) for the reconfigurable mesh is necessary for real-time simulations. © 1997 Academic Press. * Supported by DFG-Graduiertenkolleg "Parallele Rechnernetzwerke in der Produktionstechnik."
Parallel Real-Time Complexity Theory
, 2002
Abstract

Cited by 3 (0 self)
We present a new complexity-theoretic approach to real-time computations. We define timed ω-languages as a new formal model for such computations, which we believe allows a unified treatment of all variants of real-time computations that are meaningful in practice. To our knowledge, no such practically meaningful formal definition exists at this time. In order to support our claim that timed ω-languages capture all the real-time characteristics that are important in practice, we use this formalism to model the two most important features of real-time algorithms, namely the presence of deadlines and the real-time arrival of input data. We emphasize the expressive power of our model by using it to formalize aspects from the areas of real-time database systems and ad hoc networks. We also offer a complexity-theoretic characterization of parallel real-time computations. First, we define complexity classes that capture the intuitive notion of resource requirements for real-time computations in a parallel environment. Then, we show that real-time algorithms form an infinite hierarchy with respect to the number of processors used, and ...
On the Communication Capability of the Self-Reconfigurable Gate Array Architecture
 In 9th Reconfigurable Architectures Workshop, Proc. Int. Parallel and Distrib. Proc. Symp.
, 2002
Abstract

Cited by 3 (3 self)
The self-reconfigurable gate array (SRGA) architecture consists of an array of processing elements connected by row and column trees. In this paper, we study the communication capability of this interconnection fabric. We derive a necessary condition for any set of k one-to-one communications to be performed in t steps, for any 1 ≤ t ≤ k. Next we identify a property of the communication set, called partitionability, for which this necessary condition is sufficient as well. Then we show two classes of communication sets to possess this property. As a special case of one of these results, we show that the set of 1-step communications of a segmentable bus requires at most two steps on the SRGA architecture. This result implies that the communication ability of the bit-model HV-RMesh, a special case of the bit-model RMesh, can be emulated by the SRGA architecture without significant overhead.
Fast, Efficient Mutual and Self Simulations for Shared Memory and Reconfigurable Mesh
 In Proceedings of the 7th IEEE Symposium on Parallel and Distributed Processing
, 1995
Abstract

Cited by 2 (0 self)
This paper studies relations between the parallel random access machine (PRAM) model and the reconfigurable mesh (RMesh) model, by providing mutual simulations between the models. We present an algorithm simulating one step of an (n lg lg n)-processor CRCW PRAM on an n × n RMesh with delay O(lg lg n) with high probability. We use our PRAM simulation to obtain the first efficient self-simulation algorithm of an RMesh with general switches: an algorithm running on an n × n RMesh is simulated on a p × p RMesh with delay O((n/p)² + lg n lg lg p) with high probability, which is optimal for all p ≤ n/√(lg n lg lg n). Finally, we consider the simulation of the RMesh on the PRAM. We show that a 2 × n RMesh can be optimally simulated on a CRCW PRAM in Θ(α(n)) time, where α(·) is the slow-growing inverse Ackermann function. In contrast, a PRAM with a polynomial number of processors cannot simulate the 3 × n RMesh in less than Ω(lg n / lg lg n) ...
SIZE MATTERS: LOGARITHMIC SPACE IS REAL TIME
Abstract

Cited by 1 (1 self)
We show that all the problems solvable by a nondeterministic machine with logarithmic work space (NL) can be solved in real time by a parallel machine, no matter how tight the real-time constraints are. We also show that several other real-time problems are in effect solvable in nondeterministic logarithmic space once their real-time constraints are dropped and they become non-real-time. We thus conjecture that NL contains exactly all the problems that admit feasible real-time parallel algorithms. The issue of real-time optimization problems is also investigated. We identify the class of such problems that are solvable in real time. In the process, we determine the computational power of directed reconfigurable multiple bus machines (DRMBMs) with polynomially bounded resources and running in constant time, which is found to be the same as the power of directed reconfigurable networks with the same properties. We also show that write conflict resolution rules such as Priority or even Common do not add computational power over the Collision rule, and that a bus of width 1 (a wire) suffices for any constant-time computation on DRMBMs. Key Words: real-time computation, timed ω-language, parallel complexity, reconfigurable multiple bus machine, independence system, matroid.