Results 1–10 of 18
A new approach to the minimum cut problem
 Journal of the ACM
, 1996
Abstract

Cited by 95 (8 self)
Abstract. This paper presents a new approach to finding minimum cuts in undirected graphs. The fundamental principle is simple: the edges in a graph’s minimum cut form an extremely small fraction of the graph’s edges. Using this idea, we give a randomized, strongly polynomial algorithm that finds the minimum cut in an arbitrarily weighted undirected graph with high probability. The algorithm runs in O(n^2 log^3 n) time, a significant improvement over the previous Õ(mn) time bounds based on maximum flows. It is simple and intuitive and uses no complex data structures. Our algorithm can be parallelized to run in RNC with n^2 processors; this gives the first proof that the minimum cut problem can be solved in RNC. The algorithm does more than find a single minimum cut; it finds all of them. With minor modifications, our algorithm solves two other problems of interest. Our algorithm finds all cuts with value within a multiplicative factor of α of the minimum cut’s in expected Õ(n^{2α}) time, or in RNC with n^{2α} processors. The problem of finding a minimum multiway cut of a graph into r pieces is solved in expected Õ(n^{2(r-1)}) time, or in RNC with n^{2(r-1)} processors. The “trace” of the algorithm’s execution on these two problems forms a new compact data structure for representing all small cuts and all multiway cuts in a graph. This data structure can be efficiently transformed into the …
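The contraction principle behind this result can be illustrated with a minimal sketch: repeatedly merge the endpoints of a randomly chosen edge, and because min-cut edges are so rare, a minimum cut survives a whole trial with non-trivial probability. This is a simplified illustration of the basic contraction idea, not the paper's O(n^2 log^3 n) recursive algorithm; the function name and trial count are hypothetical.

```python
import random

def contract_min_cut(edges, n, trials=200):
    """Estimate the size of a minimum cut of an undirected multigraph
    by repeated random edge contraction (basic illustrative variant)."""
    best = len(edges)
    for _ in range(trials):
        parent = list(range(n))          # union-find over super-vertices

        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]   # path halving
                v = parent[v]
            return v

        remaining = n
        while remaining > 2:
            u, v = random.choice(edges)  # pick a random edge to contract
            ru, rv = find(u), find(v)
            if ru != rv:                 # ignore self-loops inside a super-vertex
                parent[ru] = rv
                remaining -= 1
        # Edges whose endpoints sit in different super-vertices cross the cut.
        cut = sum(1 for u, v in edges if find(u) != find(v))
        best = min(best, cut)
    return best
```

With enough independent trials the probability of missing the minimum cut shrinks geometrically, which is exactly the leverage the paper's recursive variant exploits more efficiently.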
The Measured Cost of Conservative Garbage Collection
 Software: Practice and Experience
, 1993
Abstract

Cited by 79 (6 self)
this paper, I evaluate the costs of different dynamic storage management algorithms, including domain-specific allocators, widely used general-purpose allocators, and a publicly available conservative garbage collection algorithm. Surprisingly, I find that programmer enhancements often have little effect on program performance. I also find that the true cost of conservative garbage collection is not the CPU overhead, but the memory system overhead of the algorithm. I conclude that conservative garbage collection is a promising alternative to explicit storage management and that the performance of conservative collection is likely to improve in the future. C programmers should now seriously consider using conservative garbage collection instead of explicitly calling free in programs they write.
Markov Chain Monte Carlo Simulations and Their Statistical Analysis, World Scientific
, 2004
Sense and Denotation as Algorithm and Value
, 1990
Abstract

Cited by 23 (3 self)
this paper the author was partially supported by an NSF grant.
Preserving confidentiality of high-dimensional tabular data: Statistical and computational issues
 Statistics and Computing
, 2003
Abstract

Cited by 14 (10 self)
Dissemination of information derived from large contingency tables formed from confidential data is a major responsibility of statistical agencies. In this paper we present solutions to several computational and algorithmic problems that arise in the dissemination of cross-tabulations (marginal subtables) from a single underlying table. These include data structures that exploit sparsity to support efficient computation of marginals and algorithms such as iterative proportional fitting, as well as a generalized form of the shuttle algorithm that computes sharp bounds on (small, confidentiality-threatening) cells in the full table from arbitrary sets of released marginals. We give examples illustrating the techniques.
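The iterative proportional fitting step mentioned above can be sketched in a few lines: alternately rescale rows and columns of a table until its margins match the released marginals. A minimal two-dimensional sketch follows; the function name and dense-list representation are assumptions, whereas the paper's data structures exploit sparsity for much larger tables.

```python
def ipf(table, row_targets, col_targets, iters=50):
    """Iterative proportional fitting on a small dense 2-D table:
    rescale rows, then columns, until both margins fit the targets."""
    t = [row[:] for row in table]
    for _ in range(iters):
        for i, row in enumerate(t):              # match the row sums
            s = sum(row)
            if s:
                t[i] = [x * row_targets[i] / s for x in row]
        for j in range(len(t[0])):               # match the column sums
            s = sum(row[j] for row in t)
            if s:
                for row in t:
                    row[j] *= col_targets[j] / s
    return t
```

Each sweep preserves the table's interaction structure while pulling the margins toward the targets, which is why IPF is the standard tool for fitting a table to released marginals.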
Sorting on a Parallel Pointer Machine with Applications to Set Expression Evaluation
 J. ACM
, 1989
Abstract

Cited by 14 (5 self)
We present optimal algorithms for sorting on parallel CREW and EREW versions of the pointer machine model. Intuitively, one can view our methods as being based on a parallel mergesort using linked lists rather than arrays (the usual parallel data structure). We also show how to exploit the "locality" of our approach to solve the set expression evaluation problem, a problem with applications to database querying and logic programming, in O(log n) time using O(n) processors. Interestingly, this is an asymptotic improvement over what seems possible using previous techniques. Categories and Subject Descriptors: E.1 [Data Structures]: arrays, lists; F.2.2 [Analysis of Algorithms and Problem Complexity]: Nonnumerical Algorithms and Problems: sorting and searching. General Terms: Algorithms, Theory, Verification. Additional Key Words and Phrases: parallel algorithms, PRAM, pointer machine, linking automaton, expression evaluation, mergesort, cascade merging
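The list-based merging the abstract alludes to can be shown with a sequential linked-list mergesort; this sketch omits the parallel cascading that yields the paper's bounds, and the class and function names are hypothetical.

```python
class Node:
    """A singly linked list cell, the pointer-machine-friendly structure."""
    def __init__(self, val, nxt=None):
        self.val, self.nxt = val, nxt

def merge_sort(head):
    """Sort a linked list by splitting it in half and merging the
    sorted halves, using only pointer manipulation (no arrays)."""
    if head is None or head.nxt is None:
        return head
    # Find the midpoint with slow/fast pointers and cut the list there.
    slow, fast = head, head.nxt
    while fast and fast.nxt:
        slow, fast = slow.nxt, fast.nxt.nxt
    mid, slow.nxt = slow.nxt, None
    left, right = merge_sort(head), merge_sort(mid)
    # Merge the two sorted halves behind a dummy head.
    dummy = tail = Node(None)
    while left and right:
        if left.val <= right.val:
            tail.nxt, left = left, left.nxt
        else:
            tail.nxt, right = right, right.nxt
        tail = tail.nxt
    tail.nxt = left or right
    return dummy.nxt
```

Because every step follows a constant number of pointers, the routine fits the pointer machine model, which is the setting the paper parallelizes.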
Data-Flow Frameworks for Worst-Case Execution Time Analysis
 Real-Time Systems
, 2000
Abstract

Cited by 12 (8 self)
The purpose of this paper is to introduce frameworks based on data-flow equations for estimating the worst-case execution time (WCET) of (real-time) programs. These frameworks accommodate several different WCET analysis techniques, ranging from naïve approaches to exact analysis, provided exact knowledge of the program behaviour is available. However, data-flow frameworks can also be used for symbolic analysis based on information derived automatically from the source code of the program. As a by-product we show that slightly modified elimination methods can be employed for solving WCET data-flow equations, while iteration algorithms cannot be used for this purpose.
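On an acyclic control-flow graph, WCET data-flow equations of the shape W(b) = cost(b) + max over successors W(s) can be solved by elimination, visiting blocks once in dependency order. The sketch below is a deliberately simplified illustration under the assumption of a loop-free graph; the names are hypothetical, and the paper's frameworks handle much richer program behaviour.

```python
def wcet_bound(costs, succs, entry):
    """Solve W(b) = cost(b) + max_{s in succ(b)} W(s) on an acyclic
    control-flow graph by memoized elimination from the entry block."""
    memo = {}

    def w(block):
        if block not in memo:
            nexts = succs.get(block, [])
            # A block with no successors contributes only its own cost.
            memo[block] = costs[block] + (max(w(s) for s in nexts) if nexts else 0)
        return memo[block]

    return w(entry)
```

For a diamond-shaped graph the bound is simply the cost of the most expensive path through the diamond, which is the exact answer when every path is feasible.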
Cluster Algorithms for Spin Models on MIMD Parallel Computers
Abstract

Cited by 6 (2 self)
Parallel computers are ideally suited to the Monte Carlo simulation of spin models using the standard Metropolis algorithm, since it is regular and local. However, local algorithms have the major drawback that near a phase transition the number of sweeps needed to generate a statistically independent configuration increases as the square of the lattice size. New algorithms have recently been developed which dramatically reduce this ‘critical slowing down’ by updating clusters of spins at a time. The highly irregular and non-local nature of these algorithms means that they are much more difficult to parallelize efficiently. Here we introduce the new cluster algorithms, explain some sequential algorithms for identifying and labelling connected clusters of spins, and then outline some parallel algorithms which have been implemented on MIMD machines.
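One of the sequential cluster-identification steps mentioned here can be sketched with union-find: link neighbouring lattice sites that carry equal spins, then read off one label per site. This is a minimal, hypothetical version on a small open-boundary square lattice; cluster algorithms would additionally make the bonds probabilistic.

```python
def label_clusters(spins, width):
    """Label connected clusters of equal spins on a 2-D square lattice
    (stored row-major, open boundaries) via union-find."""
    n = len(spins)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(n):
        # Union with the right neighbour (if not at a row boundary).
        if (i + 1) % width != 0 and spins[i] == spins[i + 1]:
            parent[find(i)] = find(i + 1)
        # Union with the neighbour below (if any).
        if i + width < n and spins[i] == spins[i + width]:
            parent[find(i)] = find(i + width)

    return [find(i) for i in range(n)]   # one representative label per site
```

The irregular, data-dependent shape of the resulting clusters is precisely what makes the parallel versions discussed in the text non-trivial.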
DISTRIBUTED OPTIMISTIC SIMULATION OF DEVS AND CELL-DEVS MODELS WITH PCD++
, 2006
Abstract

Cited by 4 (1 self)
DEVS is a sound formal modeling and simulation (M&S) framework based on generic dynamic system concepts. Cell-DEVS is a DEVS-based formalism intended to model complex physical systems as cell spaces. Time Warp is the most well-known optimistic synchronization protocol for parallel and distributed simulations. This work is devoted to developing new techniques for executing DEVS and Cell-DEVS models in parallel and distributed environments based on the WARPED kernel, an implementation of the Time Warp protocol. The resulting optimistic simulator, called PCD++, is built as a new simulation engine for CD++, an M&S toolkit that implements the DEVS and Cell-DEVS formalisms. Algorithms in CD++ and the WARPED kernel are redesigned to carry out optimistic simulations using a non-hierarchical approach that reduces the communication overhead. The message-passing paradigm is analyzed using a high-level abstraction called the wall clock time slice. A two-level user-controlled state-saving mechanism is proposed to achieve efficient and flexible state saving at runtime. This mechanism is integrated with both the copy state-saving and periodic state-saving strategies to realize a hybrid technique that gives simulator developers the full power to dynamically choose the best possible …
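The periodic state-saving idea can be sketched as follows: snapshot only every k-th event, and on a straggler message roll back to the latest snapshot at or before its timestamp. This is a deliberately simplified, hypothetical illustration of snapshot-and-rollback, not the PCD++ implementation; the class name and interface are assumptions.

```python
class PeriodicStateSaver:
    """Periodic state saving for an optimistic logical process:
    keep a snapshot every `period` events for later rollback."""
    def __init__(self, period):
        self.period, self.count, self.saved = period, 0, []

    def on_event(self, time, state):
        self.count += 1
        if self.count % self.period == 0:
            self.saved.append((time, dict(state)))   # copy state-saving step

    def rollback(self, straggler_time):
        # Discard snapshots not strictly older than the straggler,
        # then restore the most recent surviving one (None if there is none).
        while self.saved and self.saved[-1][0] >= straggler_time:
            self.saved.pop()
        return self.saved[-1] if self.saved else None
```

Saving only every k-th state trades memory and save time against longer re-execution after a rollback, which is exactly the tuning knob a hybrid copy/periodic scheme exposes.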