Results 1–10 of 1,977
An Optimal Local Approximation Algorithm for Max-Min Linear Programs
"... In a max-min LP, the objective is to maximise ω subject to Ax ≤ 1, Cx ≥ ω1, and x ≥ 0 for nonnegative matrices A and C. We present a local algorithm (constant-time distributed algorithm) for approximating max-min LPs. The approximation ratio of our algorithm is the best possible for any local algori ..."
Cited by 5 (5 self)
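The max-min LP in the snippet above can be made concrete by treating ω as one more decision variable: maximising ω subject to Ax ≤ 1 and Cx − ω1 ≥ 0 is an ordinary LP. A minimal sketch using `scipy.optimize.linprog` on a hypothetical two-variable instance (the matrices A and C below are illustrative, not taken from the paper):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical toy instance with nonnegative A and C, as in the abstract.
A = np.array([[1.0, 1.0]])        # one packing constraint: x1 + x2 <= 1
C = np.array([[1.0, 0.0],
              [0.0, 1.0]])        # covering rows: x1 >= w and x2 >= w

m, n = A.shape
k = C.shape[0]

# Decision vector is (x_1, ..., x_n, w); linprog minimises, so use -w.
c = np.zeros(n + 1)
c[-1] = -1.0

# Inequalities: A x <= 1  and  -C x + w*1 <= 0  (i.e. C x >= w*1).
A_ub = np.block([
    [A, np.zeros((m, 1))],
    [-C, np.ones((k, 1))],
])
b_ub = np.concatenate([np.ones(m), np.zeros(k)])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
x, w = res.x[:n], res.x[-1]
# For this instance the optimum is x = (0.5, 0.5), w = 0.5.
```

A centralized solver like this is the baseline the paper's local (constant-time distributed) algorithm approximates.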
Approximating max-min linear programs with local algorithms
In Proc. 22nd IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2008
"... Abstract. A local algorithm is a distributed algorithm where each node must operate solely based on the information that was available at system startup within a constant-size neighbourhood of the node. We study the applicability of local algorithms to max-min LPs where the objective is to maximise ..."
Cited by 10 (10 self)
Multicommodity max-flow min-cut theorems and their use in designing approximation algorithms
J. ACM, 1999
"... In this paper, we establish max-flow min-cut theorems for several important classes of multicommodity flow problems. In particular, we show that for any n-node multicommodity flow problem with uniform demands, the max-flow for the problem is within an O(log n) factor of the upper bound implied by the min-cut. The result (which is existentially optimal) establishes an important analogue of the famous 1-commodity max-flow min-cut theorem for problems with multiple commodities. The result also has substantial applications to the field of approximation algorithms. For example, we use the flow result ..."
Cited by 357 (6 self)
Local approximability of max-min and min-max linear programs
Theory of Computing Systems, 2011
"... Abstract. In a max-min LP, the objective is to maximise ω subject to Ax ≤ 1, Cx ≥ ω1, and x ≥ 0. In a min-max LP, the objective is to minimise ρ subject to Ax ≤ ρ1, Cx ≥ 1, and x ≥ 0. The matrices A and C are nonnegative and sparse: each row a_i of A has at most ∆_I positive elements, and each row c_k of C has at most ∆_K positive elements. We study the approximability of max-min LPs and min-max LPs in a distributed setting; in particular, we focus on local algorithms (constant-time distributed algorithms). We show that for any ∆_I ≥ 2, ∆_K ≥ 2, and ε > 0 there exists a local algorithm ..."
Cited by 3 (3 self)
Just Relax: Convex Programming Methods for Identifying Sparse Signals in Noise
2006
"... This paper studies a difficult and fundamental problem that arises throughout electrical engineering, applied mathematics, and statistics. Suppose that one forms a short linear combination of elementary signals drawn from a large, fixed collection. Given an observation of the linear combination that has been contaminated with additive noise, the goal is to identify which elementary signals participated and to approximate their coefficients. Although many algorithms have been proposed, there is little theory which guarantees that these algorithms can accurately and efficiently solve the problem ..."
Cited by 483 (2 self)
Energy Conserving Routing in Wireless Ad-hoc Networks
2000
"... An ad-hoc network of wireless static nodes is considered as it arises in a rapidly deployed, sensor-based, monitoring system. Information is generated in certain nodes and needs to reach a set of designated gateway nodes. Each node may adjust its power within a certain range that determines the set ... with node capacities and the algorithms converge to the optimal solution. When there are multiple power levels then the achievable lifetime is close to the optimal (that is computed by linear programming) most of the time. It turns out that in order to maximize the lifetime, the traffic should be routed ..."
Cited by 622 (2 self)
Graphical models, exponential families, and variational inference
2008
"... The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables, and building large-scale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fiel ... likelihoods, marginal probabilities and most probable configurations. We describe how a wide variety of algorithms — among them sum-product, cluster variational methods, expectation-propagation, mean field methods, max-product and linear programming relaxation, as well as conic programming relaxations — can ..."
Cited by 819 (28 self)
Benchmarking Least Squares Support Vector Machine Classifiers
Neural Processing Letters, 2001
"... In Support Vector Machines (SVMs), the solution of the classification problem is characterized by a (convex) quadratic programming (QP) problem. In a modified version of SVMs, called Least Squares SVM classifiers (LS-SVMs), a least squares cost function is proposed so as to obtain a linear set of eq ... stage by gradually pruning the support value spectrum and optimizing the hyperparameters during the sparse approximation procedure. In this paper, twenty public domain benchmark datasets are used to evaluate the test set performance of LS-SVM classifiers with linear, polynomial and radial basis function ..."
Cited by 476 (46 self)
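The "linear set of equations" the snippet refers to is, in the standard Suykens formulation, a single (n+1)×(n+1) linear system in the bias b and support values α, replacing the SVM's QP. A minimal sketch assuming a linear kernel and made-up toy data (the helper names `lssvm_train`/`lssvm_predict` are ours, not from the paper):

```python
import numpy as np

def lssvm_train(X, y, gamma=1.0):
    """Solve the LS-SVM classifier's linear system (Suykens form):
        [ 0    y^T              ] [b]     [0]
        [ y    Omega + I/gamma  ] [alpha] [1]
    where Omega_kl = y_k * y_l * K(x_k, x_l); linear kernel assumed here."""
    n = len(y)
    K = X @ X.T                                    # linear kernel matrix
    Omega = (y[:, None] * y[None, :]) * K
    top = np.concatenate([[0.0], y])               # first row of the system
    bottom = np.hstack([y[:, None], Omega + np.eye(n) / gamma])
    M = np.vstack([top, bottom])
    rhs = np.concatenate([[0.0], np.ones(n)])
    sol = np.linalg.solve(M, rhs)
    return sol[0], sol[1:]                         # bias b, support values alpha

def lssvm_predict(X_train, y, alpha, b, X):
    # Decision function: sign( sum_k alpha_k y_k K(x, x_k) + b )
    return np.sign((X @ X_train.T) @ (alpha * y) + b)

# Toy 1-D data, linearly separable.
X = np.array([[-1.0], [-0.5], [0.5], [1.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
b, alpha = lssvm_train(X, y, gamma=1.0)
preds = lssvm_predict(X, y, alpha, b, X)
```

Note that, unlike in a standard SVM, every α_k is typically nonzero, which is why the snippet mentions pruning the support value spectrum to recover sparseness.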
Robust convex optimization
Mathematics of Operations Research, 1998
"... We study convex optimization problems for which the data is not specified exactly and it is only known to belong to a given uncertainty set U, yet the constraints must hold for all possible values of the data from U. The ensuing optimization problem is called robust optimization. In this paper we lay the foundation of robust convex optimization. In the main part of the paper we show that if U is an ellipsoidal uncertainty set, then for some of the most important generic convex optimization problems (linear programming, quadratically constrained programming, semidefinite programming and others ..."
Cited by 416 (21 self)
Local approximation algorithms for a class of 0/1 max-min linear programs
Manuscript, 2008
"... Abstract — We study the applicability of distributed, local algorithms to 0/1 max-min LPs where the objective is to maximise min_k Σ_v c_kv x_v subject to Σ_v a_iv x_v ≤ 1 for each i and x_v ≥ 0 for each v. Here c_kv ∈ {0, 1}, a_iv ∈ {0, 1}, and the support sets V_i = {v : a_iv > 0} and V_k = {v : c_kv > 0} hav ..."
Cited by 3 (3 self)