Results 11–20 of 99
The Quickest Transshipment Problem
 Mathematics of Operations Research
, 1995
Abstract

Cited by 56 (1 self)
A dynamic network consists of a graph with capacities and transit times on its edges. The quickest transshipment problem is defined by a dynamic network with several sources and sinks; each source has a specified supply and each sink has a specified demand. The problem is to send exactly the right amount of flow out of each source and into each sink in the minimum overall time. Variations of ...
The Perceptron algorithm vs. Winnow: linear vs. logarithmic mistake bounds when few input variables are relevant
Abstract

Cited by 54 (8 self)
This paper addresses the familiar problem of predicting with a linear classifier. The ...
Convex Nondifferentiable Optimization: A Survey Focussed On The Analytic Center Cutting Plane Method.
, 1999
Abstract

Cited by 53 (2 self)
We present a survey of nondifferentiable optimization problems and methods, with special focus on the analytic center cutting plane method. We propose a self-contained convergence analysis that uses the formalism of the theory of self-concordant functions, but for the main results we give direct proofs based on the properties of the logarithmic function. We also provide an in-depth analysis of two extensions that are very relevant to practical problems: the case of multiple cuts and the case of deep cuts. We further examine extensions to problems whose feasible sets are partially described by an explicit barrier function, and to the case of nonlinear cuts. Finally, we review several implementation issues and discuss some applications.
Solving convex programs by random walks
 Journal of the ACM
, 2002
Abstract

Cited by 49 (10 self)
Minimizing a convex function over a convex set in n-dimensional space is a basic, general problem with many interesting special cases. Here, we present a simple new algorithm for convex optimization based on sampling by a random walk. It extends naturally to minimizing quasiconvex functions and to other generalizations.
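To make the "optimize by sampling" idea concrete, here is a minimal sketch of a random-walk descent over a convex set. It is only a caricature of the paper's method (which samples near-uniformly from shrinking level sets), and all names (`random_walk_minimize`, `in_set`, the radius schedule) are hypothetical: at each step it proposes a uniform random point in a small ball around the current iterate and moves there only if the point stays in the set and does not increase the objective.

```python
import math
import random

def random_walk_minimize(f, in_set, x0, radius=0.2, steps=5000, seed=1):
    """Toy random-walk descent for min f(x) over a convex set.

    Proposes uniform random points in a ball of the given radius around
    the current iterate and accepts only feasible, non-increasing moves.
    """
    rng = random.Random(seed)
    x = list(x0)
    n = len(x)
    for _ in range(steps):
        # Uniform random direction via normalized Gaussians,
        # with radius scaled so proposals are uniform in the ball.
        d = [rng.gauss(0, 1) for _ in range(n)]
        norm = math.sqrt(sum(di * di for di in d))
        r = radius * rng.random() ** (1.0 / n)
        y = [xi + r * di / norm for xi, di in zip(x, d)]
        if in_set(y) and f(y) <= f(x):
            x = y
    return x

# Example: minimize ||x - (0.5, 0.5)||^2 over the unit box.
f = lambda x: (x[0] - 0.5) ** 2 + (x[1] - 0.5) ** 2
in_box = lambda x: all(0.0 <= xi <= 1.0 for xi in x)
x = random_walk_minimize(f, in_box, [0.9, 0.1])
```

Unlike the paper's algorithm, this sketch has no complexity guarantee; it only illustrates that membership queries plus random sampling suffice to make progress on a convex objective.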
Condition-Based Complexity Of Convex Optimization In Conic Linear Form Via The Ellipsoid Algorithm
, 1998
Abstract

Cited by 38 (17 self)
A convex optimization problem in conic linear form is an optimization problem of the form CP(d): maximize c^T ...
A Framework for Exploiting Task and Data-Parallelism on Distributed Memory Multicomputers
 IEEE Transactions on Parallel and Distributed Systems
, 1997
Abstract

Cited by 31 (0 self)
Distributed memory multicomputers offer significant advantages over shared memory multiprocessors in terms of cost and scalability. Unfortunately, utilizing all the available computational power in these machines involves a tremendous programming effort on the part of users, which creates a need for sophisticated compiler and runtime support for distributed memory machines. In this paper, we explore a new compiler optimization for regular scientific applications: the simultaneous exploitation of task and data parallelism. Our optimization is implemented as part of the PARADIGM HPF compiler framework we have developed. The intuitive idea behind the optimization is to use task parallelism to control the degree of data parallelism of individual tasks. The reason this improves performance is that data parallelism provides diminishing returns as the number of processors is increased. By controlling the number of processors used for each data-parallel task in an application and by concurrently executing these tasks, we make program execution more efficient and, therefore, faster. A practical implementation of a task- and data-parallel scheme of execution for an application on a distributed memory multicomputer also involves data redistribution, which causes an overhead. However, as our experimental results show, this overhead is not a problem; execution of a program using task and data parallelism together can be significantly faster than its execution using data parallelism alone. This makes our proposed optimization practical and extremely useful.
A Simple Polynomial-time Rescaling Algorithm for Solving Linear Programs
 Proceedings of STOC’04
, 2004
Abstract

Cited by 30 (4 self)
The perceptron algorithm, developed mainly in the machine learning literature, is a simple greedy method for finding a feasible solution to a linear program (alternatively, for learning a threshold function). In spite of its exponential worst-case complexity, it is often quite useful, in part due to its noise tolerance and also its overall simplicity. In this paper, we show that a randomized version of the perceptron algorithm with periodic rescaling runs in polynomial time. The resulting algorithm for linear programming has an elementary description and analysis.
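The greedy step the paper builds on can be sketched in a few lines. This is the classic, un-rescaled perceptron applied to the feasibility problem "find x with Ax > 0", not the randomized rescaling algorithm itself; the function name and iteration budget are illustrative choices.

```python
import numpy as np

def perceptron_feasibility(A, max_iters=10000):
    """Classic perceptron: seek x with A @ x > 0 for every row of A.

    Repeatedly finds a violated constraint (a row a with a . x <= 0)
    and takes a greedy correction step in that row's direction.
    """
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(max_iters):
        margins = A @ x
        violated = np.flatnonzero(margins <= 0)
        if violated.size == 0:
            return x  # strictly feasible point found
        a = A[violated[0]]
        x = x + a / np.linalg.norm(a)  # greedy correction step
    return None  # iteration budget exhausted

# Example: x = (1, 1) is strictly feasible here, so the loop terminates.
A = np.array([[1.0, 0.2], [0.3, 1.0], [1.0, 1.0]])
x = perceptron_feasibility(A)
```

The number of iterations needed by this classic version scales with the inverse square of the "margin" of the feasible region, which is what makes it exponential in the worst case; the paper's rescaling periodically enlarges that margin.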
Fast algorithms for approximate semidefinite programming using the multiplicative weights update method
 In FOCS
, 2005
Abstract

Cited by 29 (6 self)
Semidefinite programming (SDP) relaxations appear in many recent approximation algorithms, but the only general technique for solving such SDP relaxations is via interior point methods. We use a Lagrangian-relaxation based technique (modified from the papers of Plotkin, Shmoys, and Tardos (PST), and Klein and Lu) to derive faster algorithms for approximately solving several families of SDP relaxations. The algorithms are based upon some improvements to the PST ideas, which lead to new results even for their framework, as well as improvements in approximate eigenvalue computations by using random sampling.
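The multiplicative weights update method named in the title can be illustrated in its simplest setting, prediction with expert advice, rather than the SDP setting of the paper. The sketch below is a hypothetical toy (names and parameters are ours): weights are multiplied by (1 - eta)^loss each round, and the standard guarantee is that the algorithm's total loss is at most (1 + eta) times the best expert's loss plus ln(n)/eta.

```python
import numpy as np

def mwu(loss_rows, eta=0.1):
    """Multiplicative weights update over n 'experts'.

    loss_rows: T x n array with entries in [0, 1]; loss_rows[t, i] is
    expert i's loss at round t. Returns the total expected loss of the
    weighted strategy.
    """
    T, n = loss_rows.shape
    w = np.ones(n)
    total = 0.0
    for t in range(T):
        p = w / w.sum()                    # current distribution over experts
        total += p @ loss_rows[t]          # expected loss this round
        w *= (1.0 - eta) ** loss_rows[t]   # penalize lossy experts
    return total

# Toy data: expert 2 incurs roughly one tenth of everyone else's loss.
rng = np.random.default_rng(0)
losses = rng.random((200, 5))
losses[:, 2] *= 0.1
alg_loss = mwu(losses)
best = losses.sum(axis=0).min()
```

In the SDP application, the "experts" become constraints (or a matrix-valued analogue), and the approximate eigenvalue computations mentioned in the abstract supply the per-round losses.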
Approximation Algorithms for Steiner and Directed Multicuts
 Journal of Algorithms
, 1996
Abstract

Cited by 26 (1 self)
In this paper we consider the Steiner multicut problem. This is a generalization of the minimum multicut problem where, instead of separating node pairs, the goal is to find a minimum-weight set of edges that separates all given sets of nodes. A set is considered separated if it is not contained in a single connected component. We show an O(log^3(kt)) approximation algorithm for the Steiner multicut problem, where k is the number of sets and t is the maximum cardinality of a set. This improves the O(t log k) bound that easily follows from the previously known multicut results. We also consider an extension of multicuts to the directed case, namely the problem of finding a minimum-weight set of edges whose removal ensures that none of the strongly connected components includes one of the prespecified k node pairs. In this paper we describe an O(log^2 k) approximation algorithm for this directed multicut problem. If k ≤ n, this represents an improvement over the O(log n log ...