The Quickest Transshipment Problem
Mathematics of Operations Research, 1995
Abstract

Cited by 72 (1 self)
A dynamic network consists of a graph with capacities and transit times on its edges. The quickest transshipment problem is defined by a dynamic network with several sources and sinks; each source has a specified supply and each sink has a specified demand. The problem is to send exactly the right amount of flow out of each source and into each sink in the minimum overall time. Variations of
Convex Nondifferentiable Optimization: A Survey Focussed On The Analytic Center Cutting Plane Method.
1999
Abstract

Cited by 71 (2 self)
We present a survey of nondifferentiable optimization problems and methods, with special focus on the analytic center cutting plane method. We propose a self-contained convergence analysis that uses the formalism of the theory of self-concordant functions, but for the main results we give direct proofs based on the properties of the logarithmic function. We also provide an in-depth analysis of two extensions that are very relevant to practical problems: the case of multiple cuts and the case of deep cuts. We further examine extensions to problems with feasible sets partially described by an explicit barrier function, and to the case of nonlinear cuts. Finally, we review several implementation issues and discuss some applications.
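The cutting-plane loop the survey studies can be caricatured in one dimension, where the analytic center of the cuts collected so far maximizes a logarithmic barrier and can be located by ternary search. This is only an illustrative sketch under our own assumptions (the test function, interval, and iteration counts are hypothetical choices, not the method as analyzed in the survey):

```python
import math

def analytic_center_1d(lower_cuts, upper_cuts, steps=40):
    """The analytic center maximizes sum(log(x - a)) + sum(log(b - x)).
    The barrier is strictly concave, so ternary search finds its peak."""
    lo, hi = max(lower_cuts), min(upper_cuts)
    def barrier(x):
        # max(..., 1e-300) guards against rounding at the interval ends.
        return (sum(math.log(max(x - a, 1e-300)) for a in lower_cuts)
                + sum(math.log(max(b - x, 1e-300)) for b in upper_cuts))
    for _ in range(steps):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if barrier(m1) < barrier(m2):
            lo = m1
        else:
            hi = m2
    return (lo + hi) / 2.0

def accpm_minimize(grad, a, b, iters=40):
    """Minimize a convex function on [a, b]: query a subgradient at the
    analytic center of the cuts collected so far, then cut there."""
    lower_cuts, upper_cuts, centers = [a], [b], []
    for _ in range(iters):
        x = analytic_center_1d(lower_cuts, upper_cuts)
        centers.append(x)
        if grad(x) > 0:
            upper_cuts.append(x)  # minimizer lies to the left of x
        else:
            lower_cuts.append(x)
    return centers[-1]

# Hypothetical test problem: minimize (x - 3)^2 over [0, 10].
x_star = accpm_minimize(lambda x: 2.0 * (x - 3.0), 0.0, 10.0)
```

In higher dimensions the center is computed by Newton's method on the barrier rather than by search, which is where the self-concordance machinery the survey discusses comes in.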
Solving convex programs by random walks
Journal of the ACM, 2002
Abstract

Cited by 68 (12 self)
Minimizing a convex function over a convex set in n-dimensional space is a basic, general problem with many interesting special cases. Here, we present a simple new algorithm for convex optimization based on sampling by a random walk. It extends naturally to minimizing quasiconvex functions and to other generalizations.
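As a toy illustration of the sampling idea, with plain uniform sampling standing in for the paper's random walk and a one-dimensional problem standing in for a convex body in n dimensions, one can estimate the centroid of the current feasible set from random samples and cut with a gradient query there (the test function and parameters are our own illustrative choices):

```python
import random

def minimize_convex_1d(grad, lo, hi, samples=64, iters=50, seed=0):
    """Estimate the centroid of the current interval from random samples,
    query a (sub)gradient there, and discard the half the gradient rules
    out.  Uniform sampling stands in for the paper's random walk."""
    rng = random.Random(seed)
    for _ in range(iters):
        center = sum(rng.uniform(lo, hi) for _ in range(samples)) / samples
        if grad(center) > 0:
            hi = center  # the minimizer lies to the left of center
        else:
            lo = center
    return (lo + hi) / 2

# Hypothetical test problem: minimize (x - 3)^2 over [0, 10].
x_min = minimize_convex_1d(lambda x: 2.0 * (x - 3.0), 0.0, 10.0)
```

The point of the paper is that in high dimension a random walk still yields usable centroid estimates, so the scheme needs only a membership/gradient oracle rather than any barrier machinery.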
The Perceptron algorithm vs. Winnow: linear vs. logarithmic mistake bounds when few input variables are relevant
Abstract

Cited by 61 (8 self)
This paper addresses the familiar problem of predicting with a linear classifier. The ...
Condition-Based Complexity of Convex Optimization in Conic Linear Form via the Ellipsoid Algorithm
1998
Abstract

Cited by 47 (20 self)
A convex optimization problem in conic linear form is an optimization problem of the form CP(d): maximize c^T ...
A simple polynomial-time rescaling algorithm for solving linear programs
In Proceedings of the 36th Annual ACM Symposium on Theory of Computing (STOC), 2004
Abstract

Cited by 47 (6 self)
We show that the perceptron algorithm, combined with periodic rescaling, solves linear programs in polynomial time. The algorithm requires no matrix inversions and no barrier functions.
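A minimal sketch of the perceptron phase for homogeneous feasibility (find x with a·x > 0 for every constraint row a); the paper's periodic rescaling step, which is what yields the polynomial-time bound, is deliberately omitted, and the instance below is made up:

```python
def normalize(v):
    norm = sum(c * c for c in v) ** 0.5
    return [c / norm for c in v]

def perceptron_feasibility(A, max_iters=10000):
    """Classical perceptron updates only: while some constraint row a has
    a . x <= 0, nudge x toward that row's unit normal."""
    rows = [normalize(a) for a in A]
    x = [sum(coords) for coords in zip(*rows)]  # nonzero start (assumed)
    for _ in range(max_iters):
        violated = [a for a in rows
                    if sum(ai * xi for ai, xi in zip(a, x)) <= 0]
        if not violated:
            return normalize(x)
        x = [xi + ai for xi, ai in zip(x, violated[0])]
    return None  # slow progress; rescaling is what fixes this in the paper

# Made-up feasible instance in two dimensions.
A = [[1.0, 0.2], [0.3, 1.0], [1.0, -0.4]]
x = perceptron_feasibility(A)
```

The classical phase alone can take time inversely proportional to the width of the feasible cone; the rescaling step periodically stretches space to fatten that cone, which is the source of the polynomial bound.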
Fast Algorithms for Approximate Semidefinite Programming using the Multiplicative Weights Update Method
Abstract

Cited by 45 (6 self)
Semidefinite programming (SDP) relaxations appear in many recent approximation algorithms, but the only general technique for solving such SDP relaxations is via interior point methods. We use a Lagrangian-relaxation based technique (modified from the papers of Plotkin, Shmoys, and Tardos (PST), and Klein and Lu) to derive faster algorithms for approximately solving several families of SDP relaxations. The algorithms are based upon some improvements to the PST ideas, which lead to new results even for their framework, as well as improvements in approximate eigenvalue computations by using random sampling.
A Framework for Exploiting Task and Data-Parallelism on Distributed Memory Multicomputers
IEEE Transactions on Parallel and Distributed Systems, 1997
Abstract

Cited by 34 (0 self)
offer significant advantages over shared memory multiprocessors in terms of cost and scalability. Unfortunately, utilizing all the available computational power in these machines involves a tremendous programming effort on the part of users, which creates a need for sophisticated compiler and runtime support for distributed memory machines. In this paper, we explore a new compiler optimization for regular scientific applications: the simultaneous exploitation of task and data parallelism. Our optimization is implemented as part of the PARADIGM HPF compiler framework we have developed. The intuitive idea behind the optimization is the use of task parallelism to control the degree of data parallelism of individual tasks. The reason this provides increased performance is that data parallelism offers diminishing returns as the number of processors used is increased. By controlling the number of processors used for each data-parallel task in an application and by concurrently executing these tasks, we make program execution more efficient and, therefore, faster. A practical implementation of a task- and data-parallel scheme of execution for an application on a distributed memory multicomputer also involves data redistribution, which introduces overhead. However, as our experimental results show, this overhead is not a problem: execution of a program using task and data parallelism together can be significantly faster than its execution using data parallelism alone. This makes our proposed optimization practical and extremely useful.
Approximation Algorithms for Steiner and Directed Multicuts
Journal of Algorithms, 1996
Abstract

Cited by 29 (1 self)
In this paper we consider the Steiner multicut problem. This is a generalization of the minimum multicut problem where, instead of separating node pairs, the goal is to find a minimum-weight set of edges that separates all given sets of nodes. A set is considered separated if it is not contained in a single connected component. We show an O(log^3(kt)) approximation algorithm for the Steiner multicut problem, where k is the number of sets and t is the maximum cardinality of a set. This improves the O(t log k) bound that easily follows from the previously known multicut results. We also consider an extension of multicuts to the directed case, namely the problem of finding a minimum-weight set of edges whose removal ensures that none of the strongly connected components includes one of the prespecified k node pairs. We describe an O(log^2 k) approximation algorithm for this directed multicut problem. If k n, this represents an improvement over the O(log n log ...
Efficient Algorithms Using The Multiplicative Weights Update Method
2006
Abstract

Cited by 28 (1 self)
Algorithms based on convex optimization, especially linear and semidefinite programming, are ubiquitous in Computer Science. While there are polynomial time algorithms known to solve such problems, quite often the running time of these algorithms is very high. Designing simpler and more efficient algorithms is important for practical impact. In this thesis, we explore applications of the Multiplicative Weights method in the design of efficient algorithms for various optimization problems. This method, which was repeatedly discovered in quite diverse fields, is an algorithmic technique which maintains a distribution on a certain set of interest and updates it iteratively by multiplying the probability mass of elements by suitably chosen factors based on feedback obtained by running another algorithm on the distribution. We present a single meta-algorithm which unifies all known applications of this method in a common framework. Next, we generalize the method to the setting of symmetric matrices rather than real numbers. We derive the following applications of the resulting Matrix Multiplicative Weights algorithm:
1. The first truly general, combinatorial, primal-dual method for designing efficient algorithms for semidefinite programming. Using these techniques, we obtain significantly faster algorithms for obtaining O(√log n) approximations to various graph partitioning problems, such as Sparsest Cut and Balanced Separator in both directed and undirected weighted graphs, and constraint satisfaction problems such as Min UnCut and Min 2CNF Deletion.
2. An Õ(n³) time derandomization of the Alon-Roichman construction of expanders using Cayley graphs. The algorithm yields a set of O(log n) elements which generates an expanding Cayley graph in any group of n elements.
3. An Õ(n³) time deterministic O(log n) approximation algorithm for the quantum hypergraph covering problem.
4. An alternative proof of a result of Aaronson that the γ-fat-shattering dimension of quantum states on n qubits is O(n/γ²).
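The update rule described in this abstract can be sketched in the classic experts setting. This is a hypothetical instance with an illustrative learning rate and loss sequence, not the thesis's meta-algorithm in full generality:

```python
def multiplicative_weights(losses, eta=0.5):
    """Maintain weights over n 'experts'; after each round, multiply each
    expert's weight by (1 - eta * loss), shrinking the mass of elements
    that performed badly.  Returns the final normalized distribution."""
    n = len(losses[0])
    weights = [1.0] * n
    for round_losses in losses:
        weights = [w * (1.0 - eta * l)
                   for w, l in zip(weights, round_losses)]
    total = sum(weights)
    return [w / total for w in weights]

# Illustrative run: expert 0 always incurs loss 0, expert 1 always loss 1;
# nearly all probability mass ends up on expert 0.
dist = multiplicative_weights([[0.0, 1.0]] * 10)
```

The matrix generalization the thesis develops replaces the weight vector by a positive semidefinite matrix and the per-element factors by matrix exponentials, but the multiplicative shrink-on-bad-feedback structure is the same.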