Results 1–10 of 47
Potential Function Methods for Approximately Solving Linear Programming Problems: Theory and Practice
, 2001
Abstract

Cited by 139 (4 self)
After several decades of sustained research and testing, linear programming has evolved into a remarkably reliable, accurate and useful tool for handling industrial optimization problems. Yet, large problems arising from several concrete applications routinely defeat the very best linear programming codes, running on the fastest computing hardware. Moreover, this is a trend that may well continue and intensify, as problem sizes escalate and the need for fast algorithms becomes more stringent. Traditionally, the focus in optimization algorithms, and in particular, in algorithms for linear programming, has been to solve problems "to optimality." In concrete implementations, this has always meant the solution of problems to some finite accuracy (for example, eight digits). An alternative approach would be to explicitly, and rigorously, trade off accuracy for speed. One motivating factor is that in many practical applications, quickly obtaining a partially accurate solution is much preferable to obtaining a very accurate solution very slowly. A secondary (and independent) consideration is that the input data in many practical applications has limited accuracy to begin with. During the last ten years, a new body of research has emerged, which seeks to develop provably good approximation algorithms for classes of linear programming problems. This work both has roots in fundamental areas of mathematical programming and is also framed in the context of the modern theory of algorithms. The result of this work has been a family of algorithms with solid theoretical foundations and with growing experimental success. In this manuscript we will study these algorithms, starting with some of the very earliest examples, and through the latest theoretical and computational developments.
Bayesian compressive sensing via belief propagation
 IEEE Trans. Signal Processing
, 2010
Abstract

Cited by 124 (19 self)
Compressive sensing (CS) is an emerging field based on the revelation that a small collection of linear projections of a sparse signal contains enough information for stable, sub-Nyquist signal acquisition. When a statistical characterization of the signal is available, Bayesian inference can complement conventional CS methods based on linear programming or greedy algorithms. We perform approximate Bayesian inference using belief propagation (BP) decoding, which represents the CS encoding matrix as a graphical model. Fast encoding and decoding is provided using sparse encoding matrices, which also improve BP convergence by reducing the presence of loops in the graph. To decode a length-N signal containing K large coefficients, our CS-BP decoding algorithm uses O(K log(N)) measurements and O(N log²(N)) computation. Finally, sparse encoding matrices and the CS-BP decoding algorithm can be modified to support a variety of signal models and measurement noise.
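The abstract contrasts BP decoding with the conventional greedy alternatives it mentions. As a toy illustration of the greedy route (not the paper's CS-BP decoder), the sketch below recovers a 1-sparse length-16 signal from only 5 linear projections with a single matching-pursuit step; the ±1 sensing matrix built from bit patterns is an assumption chosen so the example is deterministic:

```python
# Toy greedy recovery of a 1-sparse signal from m < N projections: one
# matching-pursuit step, i.e. one of the "greedy algorithms" the abstract
# contrasts with BP decoding. The +/-1 sensing matrix built from bit
# patterns is an assumption chosen so the example is deterministic.
N, m = 16, 5

def column(j):
    # column j encodes the bits of j as a +/-1 vector of length m
    return [1.0 if (j >> i) & 1 else -1.0 for i in range(m)]

Phi = [[column(j)[i] for j in range(N)] for i in range(m)]

x = [0.0] * N
x[5] = 3.0                                   # the single large coefficient
y = [sum(Phi[i][j] * x[j] for j in range(N)) for i in range(m)]   # measurements

# decode: pick the column most correlated with y, then least-squares fit it
corr = [sum(Phi[i][j] * y[i] for i in range(m)) for j in range(N)]
j_hat = max(range(N), key=lambda j: abs(corr[j]))
coef = corr[j_hat] / m                       # each column has squared norm m
```

With these bit-pattern columns no two columns are parallel, so the correlation is uniquely maximized at the true support and the coefficient is recovered exactly; the paper's point is that BP on sparse encoding matrices achieves this kind of recovery with provable measurement and time bounds.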
An Exact Solution to the Transistor Sizing Problem for CMOS Circuits Using Convex Optimization
 IEEE Transactions on Computer-Aided Design
, 1993
Abstract

Cited by 107 (19 self)
Given the MOS circuit topology, the delay can be controlled by varying the sizes of transistors in the circuit. Here, the size of a transistor is measured in terms of its channel width, since the channel lengths in a digital circuit are generally uniform. Roughly speaking, the sizes of certain transistors can be increased to reduce the circuit delay at the expense of additional chip area.
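The delay-versus-width tradeoff the snippet describes can be seen in a minimal numeric sketch, under an assumed two-stage Elmore-style delay model (the constants Rd, c, r, C_L are illustrative, not from the paper, and the paper itself solves the full problem by convex optimization):

```python
import math

# Illustrative two-stage Elmore-style model (constants are assumptions, not
# taken from the paper): a fixed driver with resistance Rd charges the gate's
# input capacitance c*w, and the gate, with resistance r/w, drives a load C_L:
#     delay(w) = Rd*c*w + r*C_L/w
Rd, c, r, C_L = 2.0, 1.0, 8.0, 4.0
delay = lambda w: Rd * c * w + r * C_L / w

# the delay is convex in the width w, so the optimum has a closed form
w_star = math.sqrt(r * C_L / (Rd * c))       # = 4.0 with these constants

# numeric confirmation by ternary search over the convex function
lo, hi = 0.1, 100.0
for _ in range(200):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if delay(m1) < delay(m2):
        hi = m2
    else:
        lo = m1
w_numeric = (lo + hi) / 2
```

Widening beyond w_star stops paying off because the gate's own input capacitance loads its driver, which is exactly the tension that makes sizing a nontrivial optimization over all transistors at once.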
Continuation and Path Following
, 1992
Abstract

Cited by 86 (6 self)
CONTENTS: 1. Introduction; 2. The Basics of Predictor-Corrector Path Following; 3. Aspects of Implementations; 4. Applications; 5. Piecewise-Linear Methods; 6. Complexity; 7. Available Software; References. 1. Introduction. Continuation, embedding or homotopy methods have long served as useful theoretical tools in modern mathematics. Their use can be traced back at least to such venerated works as those of Poincaré (1881–1886), Klein (1882–1883) and Bernstein (1910). Leray and Schauder (1934) refined the tool and presented it as a global result in topology, viz., the homotopy invariance of degree. The use of deformations to solve nonlinear systems of equations may be traced back at least to Lahaye (1934). The classical embedding methods were the ...
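The predictor-corrector scheme the survey covers can be sketched on a scalar homotopy: take an Euler predictor step along the solution curve, then a few Newton corrector steps at the new parameter value. The particular H below is an assumed toy example, not one from the survey:

```python
# Minimal predictor-corrector trace of a scalar homotopy:
#     H(x, lam) = x**3 + x - lam,  traced from lam = 0 to lam = 2.
H  = lambda x, lam: x**3 + x - lam
Hx = lambda x, lam: 3 * x**2 + 1             # partial dH/dx (never zero here)

x, lam, dlam = 0.0, 0.0, 0.1                 # start at the known root H(0, 0) = 0
for _ in range(20):                          # 20 steps of 0.1 reach lam = 2
    x += dlam / Hx(x, lam)                   # Euler predictor: x'(lam) = 1/H_x
    lam += dlam
    for _ in range(3):                       # Newton corrector at fixed lam
        x -= H(x, lam) / Hx(x, lam)
# at lam = 2 the exact root is x = 1 (since 1**3 + 1 = 2)
```

Because the predictor lands close to the curve, a handful of Newton corrections per step suffices; step-length control and higher-order predictors are among the implementation aspects the survey discusses.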
HOMOTOPY CONTINUATION METHODS FOR NONLINEAR COMPLEMENTARITY PROBLEMS
, 1991
Abstract

Cited by 36 (3 self)
A complementarity problem with a continuous mapping f from R^n into itself can be written as the system of equations F(x, y) = 0 and (x, y) ≥ 0. Here F is the mapping from R^{2n} into itself defined by F(x, y) = (x_1 y_1, x_2 y_2, ..., x_n y_n, y − f(x)). Under the assumption that the mapping f is a P_0-function, we study various aspects of homotopy continuation methods that trace a trajectory consisting of solutions of the family of systems of equations F(x, y) = t(a, b) and (x, y) ≥ 0 until the parameter t > 0 attains 0. Here (a, b) denotes a 2n-dimensional constant positive vector. We establish the existence of a trajectory which leads to a solution of the problem, and then present a numerical method for tracing the trajectory. We also discuss the global and local convergence of the method.
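For n = 1 and an assumed toy instance f(x) = x − 0.5 with a = 1, b = 0, the homotopy system F(x, y) = t(a, b) collapses to one scalar equation, and the trajectory can be traced with plain Newton solves at a decreasing sequence of t values:

```python
# Toy n = 1 instance (assumed for illustration): f(x) = x - 0.5, a = 1, b = 0,
# so F(x, y) = t*(a, b) becomes  x*y = t,  y = x - 0.5,  i.e. the single
# equation g(x) = x*(x - 0.5) - t = 0 traced as t -> 0.
def newton(g, dg, x, iters=20):
    for _ in range(iters):
        x -= g(x) / dg(x)
    return x

t, x = 1.0, 1.5
while t > 1e-10:
    x = newton(lambda z: z * (z - 0.5) - t, lambda z: 2 * z - 0.5, x)
    t *= 0.5                                 # drive the homotopy parameter to 0
y = x - 0.5
# the trajectory ends at the complementary solution x = 0.5, y = 0
```

Each solve is warm-started from the previous point on the trajectory, which is the same interior-path idea the paper develops in R^{2n} under the P_0 assumption.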
New efficient attacks on statistical disclosure control mechanisms
 In CRYPTO
, 2008
Abstract

Cited by 30 (6 self)
The goal of a statistical database is to provide statistics about a population while simultaneously protecting the privacy of the individual records in the database. The tension between privacy and usability of statistical databases has attracted much attention in the statistics, theoretical computer science, security, and database communities in recent years. A line of research initiated by Dinur and Nissim investigates, for a particular type of queries, lower bounds on the distortion needed in order to prevent gross violations of privacy. The first result in the current paper simplifies and sharpens the Dinur and Nissim result. The Dinur–Nissim style results are strong because they demonstrate insecurity of all low-distortion privacy mechanisms. The attacks have an all-or-nothing flavor: letting n denote the size of the database, Ω(n) queries are made before anything is learned, at which point Θ(n) secret bits are revealed. Restricting attention to a wide and realistic subset of possible low-distortion mechanisms, our second result is a more acute attack, requiring only a fixed number of queries for each bit revealed.
A Truncated Primal-Infeasible Dual-Feasible Network Interior Point Method
, 1994
Abstract

Cited by 29 (3 self)
In this paper we introduce the truncated primal-infeasible dual-feasible interior point algorithm for linear programming and describe an implementation of this algorithm for solving the minimum cost network flow problem. In each iteration, the linear system that determines the search direction is computed inexactly, and the norm of the resulting residual vector is used in the stopping criteria of the iterative solver employed for the solution of the system. In the implementation, a preconditioned conjugate gradient method is used as the iterative solver. The details of the implementation are described and the code, pdnet, is tested on a large set of standard minimum cost network flow test problems. Computational results indicate that the implementation is competitive with state-of-the-art network flow codes. Key Words. Interior point method, linear programming, network flows, primal-infeasible dual-feasible, truncated Newton method, conjugate gradient, maximum flow, experimental test...
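The inexact inner solves the abstract describes can be sketched with a Jacobi-preconditioned conjugate gradient that stops as soon as the residual norm meets a tolerance. This is a generic PCG sketch of the idea, not the pdnet implementation (which uses network-specific preconditioners):

```python
# Jacobi-preconditioned conjugate gradient with a "truncated" stopping rule:
# iterate only until the residual norm falls below tol, so the outer interior
# point iteration receives an inexact search direction with a known residual.
def pcg(A, b, tol=1e-8, max_iter=200):
    n = len(b)
    x = [0.0] * n
    r = b[:]                                        # residual b - A*x with x = 0
    Minv = [1.0 / A[i][i] for i in range(n)]        # Jacobi preconditioner
    z = [Minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        if sum(ri * ri for ri in r) ** 0.5 < tol:   # truncation test
            break
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]    # small SPD system
b = [1.0, 2.0, 3.0]
sol = pcg(A, b)
```

Loosening tol early in the outer iteration and tightening it near convergence is the usual truncated-Newton compromise between per-iteration cost and direction quality.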
Smoothed analysis of Renegar’s condition number for linear programming
, 2003
Abstract

Cited by 23 (5 self)
We perform a smoothed analysis of Renegar’s condition number for linear programming. In particular, we show that for every n-by-d matrix Ā, n-vector b̄ and d-vector c̄ satisfying ‖(Ā, b̄, c̄)‖_F ≤ 1 and every σ ≤ 1/√(dn), the expectation of the logarithm of C(A, b, c) is O(log(nd/σ)), where A, b and c are Gaussian perturbations of Ā, b̄ and c̄ of variance σ². From this bound, we obtain a smoothed analysis of Renegar’s interior point algorithm. By combining this with the smoothed analysis of finite termination of Spielman and Teng (Math. Prog. Ser. B, 2003), we show that the smoothed complexity of linear programming is O(n³ log(nd/σ)).
Smoothed Analysis of Termination of Linear Programming Algorithms
Abstract

Cited by 23 (3 self)
We perform a smoothed analysis of a termination phase for linear programming algorithms. By combining this analysis with the smoothed analysis of Renegar’s condition number by Dunagan, Spielman and Teng
PAC Learning Intersections of Halfspaces with Membership Queries
 ALGORITHMICA
, 1998
Abstract

Cited by 21 (1 self)
A randomized learning algorithm Polly is presented that efficiently learns intersections of s halfspaces in n dimensions, in time polynomial in both s and n. The learning protocol is the "PAC" (probably approximately correct) model of Valiant, augmented with membership queries. In particular, Polly receives a set S of m = poly(n, s, 1/ε, 1/δ) randomly generated points from an arbitrary distribution over the unit hypercube, and is told exactly which points are contained in, and which points are not contained in, the convex polyhedron P defined by the halfspaces. Polly may also obtain the same information about points of its own choosing. It is shown that after poly(n, s, 1/ε, 1/δ, log(1/d)) time, the probability that Polly fails to output a collection of s halfspaces with classification error at most ε is at most δ. Here, d is the minimum distance between the boundary of the target and those examples in S that are not lying on the boundary. The parameter log(1/d) can be ...