A FASTER STRONGLY POLYNOMIAL MINIMUM COST FLOW ALGORITHM
, 1991
Abstract

Cited by 116 (10 self)
In this paper, we present a new strongly polynomial time algorithm for the minimum cost flow problem, based on a refinement of the Edmonds-Karp scaling technique. Our algorithm solves the uncapacitated minimum cost flow problem as a sequence of O(n log n) shortest path problems on networks with n nodes and m arcs and runs in O(n log n (m + n log n)) time. Using a standard transformation, this approach yields an O(m log n (m + n log n)) algorithm for the capacitated minimum cost flow problem. This algorithm improves the best previous strongly polynomial time algorithm, due to Z. Galil and E. Tardos, by a factor of n^2/m. Our algorithm for the capacitated minimum cost flow problem is even more efficient if the number of arcs with finite upper bounds, say m', is much less than m. In this case, the running time of the algorithm is O((m' + n) log n (m + n log n)).
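The shortest-path-per-iteration structure the abstract describes can be sketched with the classical successive shortest paths method. This is a minimal illustration, not the paper's refined scaling algorithm (it uses plain Bellman-Ford and is not strongly polynomial); all names are illustrative, and it assumes no parallel arcs with differing costs and no negative cycles.

```python
from collections import defaultdict

def min_cost_flow(n, edges, s, t, flow_target):
    """Successive shortest paths: repeatedly augment along a minimum-cost
    s-t path in the residual network. Illustrative sketch only; assumes
    no parallel arcs with different costs and no negative cycles."""
    cap = defaultdict(lambda: defaultdict(int))
    cost = defaultdict(lambda: defaultdict(int))
    for u, v, c, w in edges:
        cap[u][v] += c
        cost[u][v] = w
        cost[v][u] = -w  # residual (reverse) arc undoes the cost
    total_cost = 0
    sent = 0
    while sent < flow_target:
        # Bellman-Ford: shortest path by cost in the residual graph
        dist = [float('inf')] * n
        pred = [None] * n
        dist[s] = 0
        for _ in range(n - 1):
            for u in range(n):
                for v in cap[u]:
                    if cap[u][v] > 0 and dist[u] + cost[u][v] < dist[v]:
                        dist[v] = dist[u] + cost[u][v]
                        pred[v] = u
        if dist[t] == float('inf'):
            break  # no augmenting path remains
        # bottleneck capacity along the chosen path
        delta = flow_target - sent
        v = t
        while v != s:
            delta = min(delta, cap[pred[v]][v])
            v = pred[v]
        # push delta units and update residual capacities
        v = t
        while v != s:
            u = pred[v]
            cap[u][v] -= delta
            cap[v][u] += delta
            v = u
        sent += delta
        total_cost += delta * dist[t]
    return sent, total_cost
```

Each iteration solves one shortest path problem, mirroring the abstract's decomposition; the paper's contribution is bounding the number of such iterations by O(n log n) via scaling.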
Optical Communication for Pointer Based Algorithms
, 1988
Abstract

Cited by 54 (1 self)
In this paper we study the Local Memory PRAM. This model allows unit cost communication but assumes that the shared memory is divided into modules. This model is motivated by a consideration of potential optical computers. We show that fundamental problems such as list ranking and parallel tree contraction can be implemented on this model in O(log n) time using n/log n processors. To solve the list ranking problem we introduce a general asynchronous technique which has relevance to a number of problems.

1 Introduction

We consider a model of parallel computation that is especially suited to pointer based computation. We motivate this model by showing that basic problems, like list ranking and parallel tree contraction, can be performed in O(log n) time using only n/log n processors. We also show that any step on this model can be simulated in unit time by a machine with an optical communication architecture. Thus we contend that the basic problem of list ra...
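The standard pointer-jumping approach to list ranking, which the abstract's O(log n) bound evokes, can be sketched as a sequential simulation of the synchronous parallel rounds. This is an illustrative sketch of the classical technique, not the paper's asynchronous method.

```python
def list_rank(succ):
    """Rank each node (distance to the tail) by pointer jumping.
    succ[i] is node i's successor; the tail points to itself.
    Sequentially simulates the O(log n) synchronous parallel rounds."""
    n = len(succ)
    nxt = list(succ)
    rank = [0 if succ[i] == i else 1 for i in range(n)]
    while any(nxt[i] != nxt[nxt[i]] for i in range(n)):
        # one synchronous round: every node adds its successor's rank,
        # then jumps its pointer two hops ahead
        rank = [rank[i] + rank[nxt[i]] for i in range(n)]
        nxt = [nxt[nxt[i]] for i in range(n)]
    return rank
```

Each round halves the pointer distance to the tail, so the loop runs O(log n) times; on a PRAM the inner comprehensions would execute in parallel, one node per processor.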
Horizons of Parallel Computation
 JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING
, 1993
Abstract

Cited by 39 (3 self)
This paper considers the ultimate impact of fundamental physical limitations, notably the speed of light and device size, on parallel computing machines. Although we fully expect an innovative and very gradual evolution to the limiting situation, we take here the provocative view of exploring the consequences of the accomplished attainment of the physical bounds. The main result is that scalability holds only for neighborly interconnections, such as the square mesh, of bounded-size synchronous modules, presumably of the area-universal type. We also discuss the ultimate infeasibility of latency hiding, the violation of intuitive maximal speedups, and the emerging novel processor-time tradeoffs.
The Complexity of Computation on the Parallel Random Access Machine
, 1993
Abstract

Cited by 32 (4 self)
PRAMs also approximate the situation where communication to and from shared memory is much more expensive than local operations, for example, where each processor is located on a separate chip and access to shared memory is through a combining network. Not surprisingly, abstract PRAMs can be much more powerful than restricted instruction set PRAMs.

THEOREM 21.16 Any function of n variables can be computed by an abstract EROW PRAM in O(log n) steps using n/log_2 n processors and n/(2 log_2 n) shared memory cells.

PROOF Each processor begins by reading log_2 n input values and combining them into one large value. The information known by the processors is combined in a binary-tree-like fashion. In each round, the remaining processors are grouped into pairs. In each pair, one processor communicates the information it knows about the input to the other processor and then leaves the computation. After ⌈log_2 n⌉ rounds, one processor knows all n input values. Then this processor computes th...
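The pairwise-combining rounds in the proof can be simulated sequentially to see the round count. This is an illustrative sketch under stated assumptions (each processor's knowledge is modeled as a list; `combine` is a hypothetical merge function), not an actual PRAM implementation.

```python
def tree_combine(values, combine):
    """Simulate the proof's binary-tree-style combining: in each round,
    surviving processors pair up, one partner absorbs the other's
    information, and the other leaves the computation. After
    ceil(log2 n) rounds a single processor holds everything."""
    known = [[v] for v in values]      # what each processor knows
    alive = list(range(len(values)))   # processors still computing
    rounds = 0
    while len(alive) > 1:
        rounds += 1
        survivors = []
        for j in range(0, len(alive) - 1, 2):
            a, b = alive[j], alive[j + 1]
            known[a] = combine(known[a], known[b])  # b tells a, then leaves
            survivors.append(a)
        if len(alive) % 2 == 1:
            survivors.append(alive[-1])  # odd processor waits a round
        alive = survivors
    return known[alive[0]], rounds
```

With n = 8 inputs the simulation finishes in exactly 3 rounds, matching the ⌈log_2 n⌉ bound in the theorem.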
Randomised Techniques in Combinatorial Algorithmics
, 1999
Abstract

Cited by 20 (7 self)
Chapter 1: Introduction
  1.1 Algorithmic Background
  1.2 Technical Preliminaries
    1.2.1 Problems
    1.2.2 Parallel Computational Complexity
    1.2.3 Probability
    1.2.4 Graphs
    1.2.5 Random Graphs
    1.2.6 Group Theory
  1.3 Concluding Remarks
Chapter 2: Parallel Uniform Generation of Unlabelled Graphs
  2.1 Introduction
  2.2 Sampling O...
Almost-Everywhere Complexity Hierarchies for Nondeterministic Time
, 1993
"... this paper, if T is time-constructible, then ..."
Tables Should Be Sorted (on Random Access Machines)
, 1995
Abstract

Cited by 14 (4 self)
We consider the problem of storing an n element subset S of a universe of size m, so that membership queries (is x ∈ S?) can be answered efficiently. The model of computation is a random access machine with the standard instruction set (direct and indirect addressing, conditional branching, addition, subtraction, and multiplication). We show that if s memory registers are used to store S, where n ≤ s ≤ m/n^ε, then query time Ω(log n) is necessary in the worst case. That is, under these conditions, the solution consisting of storing S as a sorted table and doing binary search is optimal. The condition s ≤ m/n^ε is essentially optimal; we show that if n + m/n^{o(1)} registers may be used, query time o(log n) is possible.
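The upper-bound side the abstract calls optimal, storing S as a sorted table and answering queries by binary search, is short enough to sketch directly (an illustrative sketch; the function names are ours, not the paper's):

```python
from bisect import bisect_left

def make_table(S):
    """Store the n-element set S as a sorted table of n registers."""
    return sorted(S)

def member(table, x):
    """Answer 'is x in S?' by binary search: O(log n) probes into the table."""
    i = bisect_left(table, x)
    return i < len(table) and table[i] == x
```

The lower bound in the paper says that, in the stated range of table sizes, no cleverer layout on a standard-instruction RAM can beat these Θ(log n) probes in the worst case.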
Fine Separation of Average Time Complexity Classes
 SIAM Journal on Computing
, 1997
Abstract

Cited by 13 (2 self)
We extend Levin's definition of average polynomial time to arbitrary time bounds in accordance with the following general principles: (1) It essentially agrees with Levin's notion when applied to polynomial time bounds. (2) If a language L belongs to DTIME(T(n)), for some time bound T(n), then every distributional problem (L, µ) is T on the µ-average. (3) If L does not belong to DTIME(T(n)) almost everywhere, then no distributional problem (L, µ) is T on the µ-average. We present hierarchy theorems for average-case complexity, for arbitrary time bounds, that are as tight as the well-known Hartmanis-Stearns [HS65] hierarchy theorem for deterministic complexity. As a consequence, for every time bound T(n), there are distributional problems (L, µ) that can be solved using only a slight increase in time but that cannot be solved on the µ-average in time T(n). Keywords: computational complexity, average time complexity classes, hierarchy, Average-P, logarithmico-exponential. ACM Computing R...
A Short History of Computational Complexity
 IEEE CONFERENCE ON COMPUTATIONAL COMPLEXITY
, 2002
Abstract

Cited by 11 (1 self)
... this article mention all of the amazing research in computational complexity theory. We survey various areas in complexity, choosing papers more for their historical value than for the importance of the results. We hope that this gives an insight into the richness and depth of this still quite young field.
Processor-Time Tradeoffs under Bounded-Speed Message Propagation: Part I, Upper Bounds
 Theory of Computing Systems
, 1995
Abstract

Cited by 10 (3 self)
Upper bounds are derived for the processor-time tradeoffs of machines such as linear arrays and two-dimensional meshes, which are compatible with the physical limitation expressed by bounded-speed propagation of messages (due to the finiteness of the speed of light). It is shown that parallelism and locality combined may yield speedups superlinear in the number of processors. The speedups are inherent, due to the optimality of the obtained tradeoffs as established in a companion paper. Simulations are developed of multiprocessor machines by analogous machines with fewer processors. A crucial role is played by the hierarchical nature of the memory system. A divide-and-conquer technique for hierarchical memories is developed, based on the graph-theoretic notion of topological separator. For multiprocessors, this technique also requires a careful balance of memory access and interprocessor communication costs, which leads to nonintuitive orchestrations of the simulation process. ...