Results 1–10 of 18
Fast Parallel Algorithm for the Maximal Independent Set Problem
 Proc. 16th Annual ACM Symposium on Theory of Computing
, 1984
Abstract

Cited by 75 (1 self)
Abstract. A parallel algorithm is presented that accepts as input a graph G and produces a maximal independent set of vertices in G. On a PRAM without the concurrent write or concurrent read features, the algorithm executes in O((log n)^4) time and uses O((n/(log n))^3) processors, where n is the number of vertices in G. The algorithm has several novel features that may find other applications. These include the use of balanced incomplete block designs to replace random sampling by deterministic sampling, and the use of a “dynamic pigeonhole principle” that generalizes the conventional pigeonhole principle.
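For readers unfamiliar with the problem itself, a maximal independent set is one to which no further vertex can be added without creating an edge inside the set. A minimal sequential sketch of that specification (not the paper's parallel PRAM algorithm) is:

```python
# Greedy maximal independent set: add each vertex unless a neighbour
# is already in the set. Illustrates the problem, not the PRAM solution.

def greedy_mis(adj):
    """adj: dict mapping vertex -> set of neighbour vertices."""
    mis = set()
    for v in adj:
        if not (adj[v] & mis):   # no neighbour of v chosen yet
            mis.add(v)
    return mis

# Path graph 0-1-2-3: greedily picks 0, skips 1, picks 2, skips 3.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(sorted(greedy_mis(adj)))  # [0, 2]
```

The result is independent by construction and maximal because every skipped vertex already had a chosen neighbour.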
Analysis of the binary Euclidean algorithm
 Directions and Recent Results in Algorithms and Complexity
, 1976
Abstract

Cited by 29 (2 self)
The binary Euclidean algorithm is a variant of the classical Euclidean algorithm. It avoids multiplications and divisions, except by powers of two, so is potentially faster than the classical algorithm on a binary machine. We describe the binary algorithm and consider its average case behaviour. In particular, we correct some errors in the literature, discuss some recent results of Vallée, and describe a numerical computation which supports a conjecture of Vallée.
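The algorithm the abstract describes can be sketched directly; in its standard formulation it uses only shifts, parity tests, and subtraction:

```python
def binary_gcd(a, b):
    """Binary Euclidean algorithm for gcd(a, b), a, b >= 0.
    Replaces division with shifts (division by powers of two)."""
    if a == 0:
        return b
    if b == 0:
        return a
    # Factor out the common power of two.
    shift = 0
    while (a | b) & 1 == 0:
        a >>= 1
        b >>= 1
        shift += 1
    # Make a odd; the gcd of an odd and any number ignores factors of 2.
    while a & 1 == 0:
        a >>= 1
    while b:
        while b & 1 == 0:
            b >>= 1
        if a > b:
            a, b = b, a   # keep a <= b, both odd
        b -= a            # even difference; next loop strips its 2s
    return a << shift

print(binary_gcd(48, 18))  # 6
```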
Global Search Methods For Solving Nonlinear Optimization Problems
, 1997
Abstract

Cited by 15 (1 self)
... these new methods, we develop a prototype, called Novel (Nonlinear Optimization Via External Lead), that solves nonlinear constrained and unconstrained problems in a unified framework. We show experimental results in applying Novel to solve nonlinear optimization problems, including (a) the learning of feedforward neural networks, (b) the design of quadrature-mirror-filter digital filter banks, (c) the satisfiability problem, (d) the maximum satisfiability problem, and (e) the design of multiplierless quadrature-mirror-filter digital filter banks. Our method achieves better solutions than existing methods, or achieves solutions of the same quality but at a lower cost.
A Short History of Computational Complexity
 IEEE CONFERENCE ON COMPUTATIONAL COMPLEXITY
, 2002
Abstract

Cited by 11 (1 self)
This article cannot mention all of the amazing research in computational complexity theory. We survey various areas in complexity, choosing papers more for their historical value than necessarily the importance of the results. We hope that this gives an insight into the richness and depth of this still quite young field.
Evaluating Parallel Algorithms: Theoretical and Practical Aspects
, 1990
Abstract

Cited by 5 (4 self)
The motivation for the work reported in this thesis has been to lessen the gap between theory and practice within the field of parallel computing. When looking for new and faster parallel algorithms for use in massively parallel systems, it is tempting to investigate promising alternatives from the large body of research done on parallel algorithms within the field of theoretical computer science. These algorithms are mainly described for the PRAM (Parallel Random Access Machine) model of computation. This thesis proposes a method for evaluating the practical value of PRAM algorithms. The approach is based on implementing PRAM algorithms for execution on a CREW (Concurrent Read Exclusive Write) PRAM simulator. Measurement and analysis of the implemented algorithms on finite problems provide new and more practically oriented results than those traditionally obtained by asymptotic analysis (O-notation). The evaluation method is demonstrated by investigating the practical value of a new and important parallel sorting algorithm from theoretical ...
A Randomized BSP/CGM Algorithm for the Maximal Independent Set Problem
 PARALLEL PROCESSING LETTERS
, 1999
Abstract

Cited by 4 (1 self)
This paper presents a randomized parallel algorithm for the Maximal Independent Set problem. Our algorithm uses a BSP-like computer with p processors and requires that (n+m)/p = Ω(p) for a graph with n vertices and m edges. Under this scalability assumption, and after a preprocessing phase, it computes a maximal independent set after O(log p) communication rounds, with high probability, each round requiring linear computation time O((n+m)/p). The preprocessing phase is deterministic and important in order to ensure that degree computations can be implemented efficiently. For this, we give an optimal parallel BSP/CGM algorithm for the p-quantiles search problem, which runs in O((m log p)/p) time and a constant number of communication rounds, and could be of interest in its own right, as shown in the text.
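The round structure of randomized MIS algorithms can be illustrated with a Luby-style sketch: each live vertex draws a random value and joins the set when its value beats all live neighbours, after which winners and their neighbours are retired. This is a sequential simulation in the spirit of such algorithms, not the paper's BSP/CGM implementation:

```python
import random

def luby_round(adj, alive):
    """One randomized round: a live vertex wins if its random draw
    exceeds the draws of all its live neighbours."""
    r = {v: random.random() for v in alive}
    return {v for v in alive
            if all(r[v] > r[u] for u in adj[v] if u in alive)}

def randomized_mis(adj):
    mis, alive = set(), set(adj)
    while alive:
        winners = luby_round(adj, alive)
        mis |= winners
        for v in winners:          # retire winners and their neighbours
            alive.discard(v)
            alive -= adj[v]
    return mis

adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
mis = randomized_mis(adj)
assert all(not (adj[v] & mis) for v in mis)   # independent
```

Each round at least the vertex holding the globally largest draw wins, so the live set shrinks and the loop terminates.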
A Status Report on the P versus NP Question
Abstract

Cited by 1 (1 self)
We survey some of the history of the most famous open question in computing: the P versus NP question. We summarize some of the progress that has been made to date, and assess the current situation.
Global Search Methods for Solving Nonlinear Optimization Problems
Abstract
In this thesis, we present new methods for solving nonlinear optimization problems. These problems are difficult to solve because the nonlinear constraints form feasible regions that are difficult to find, and the nonlinear objectives contain local minima that trap descent-type search methods. In order to find good solutions in nonlinear optimization, we focus on the following two key issues: how to handle nonlinear constraints and how to escape from local minima. We use a Lagrange-multiplier-based formulation to handle nonlinear constraints, and develop Lagrangian methods with dynamic control to provide faster and more robust convergence. We extend the traditional Lagrangian theory for the continuous space to the discrete space and develop efficient discrete Lagrangian methods. To overcome local minima, we design a new trace-based global-search method that relies on an external traveling trace to pull a search trajectory out of a local optimum in a continuous fashion without having to restart the search from a new starting point. Good starting points identified in the global search are used in the local search to identify true local optima. By combining these new methods, we develop a prototype, called Novel (Nonlinear Optimization Via External Lead), ...
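The saddle-point idea behind Lagrangian methods (descend on the primal variable, ascend on the multiplier) can be sketched on a toy problem; the objective, constraint, and step size below are illustrative choices, not the thesis's formulation:

```python
# First-order Lagrangian method on: minimize f(x) = x**2
# subject to h(x) = x - 1 = 0. The Lagrangian is L(x, lam) =
# f(x) + lam * h(x); its saddle point is x = 1, lam = -2.

def lagrangian_method(lr=0.1, steps=2000):
    x, lam = 0.0, 0.0
    for _ in range(steps):
        grad_x = 2 * x + lam   # dL/dx
        h = x - 1.0            # constraint violation = dL/dlam
        x -= lr * grad_x       # descend on the primal variable
        lam += lr * h          # ascend on the multiplier
    return x, lam

x, lam = lagrangian_method()
print(round(x, 3), round(lam, 3))  # 1.0 -2.0
```

Descent alone would slide to the unconstrained minimum x = 0; the growing multiplier penalizes the constraint violation until the iterates settle at the feasible optimum.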