Results 1 - 10 of 284
Low-Dimensional Linear Programming with Violations
- In Proc. 43rd Annu. IEEE Sympos. Found. Comput. Sci.
, 2002
"... Two decades ago, Megiddo and Dyer showed that linear programming in 2 and 3 dimensions (and subsequently, any constant number of dimensions) can be solved in linear time. In this paper, we consider linear programming with at most k violations: finding a point inside all but at most k of n given half ..."
Cited by 43 (3 self)
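The "at most k violations" notion above can be made concrete: given half-planes a·x + b·y ≤ c, count how many constraints a candidate point breaks. A minimal Python sketch with hypothetical helper names — brute force over candidate points only, nothing like the paper's efficient algorithm:

```python
def violations(point, halfplanes):
    """Count the half-plane constraints a*x + b*y <= c that the point breaks."""
    x, y = point
    return sum(1 for (a, b, c) in halfplanes if a * x + b * y > c)

def feasible_with_k_violations(candidates, halfplanes, k):
    """Return the first candidate violating at most k constraints, else None."""
    for p in candidates:
        if violations(p, halfplanes) <= k:
            return p
    return None

# Unit square plus one extra constraint x + y <= 0.5.
halfplanes = [(1, 0, 1), (0, 1, 1), (-1, 0, 0), (0, -1, 0), (1, 1, 0.5)]
print(violations((0.4, 0.4), halfplanes))  # → 1 (only x + y <= 0.5 fails)
```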
The First Annual Large Dense Linear System Survey
- Int. Rept. Univ. California, Berkeley CA
, 1991
"... In the March 24, 1991 issue of NA Digest, I submitted a questionnaire asking who was solving large dense linear systems of equations. Based on the responses, nearly all large dense linear systems today arise from either the benchmarking of supercomputers or applications involving the influence of a ..."
Cited by 9 (2 self)
Linear Programming and Fast Parallel Approximability
"... In this work, we demonstrate how to transform any feasible Linear Program with non-negative coefficients to a Packing/Covering Linear Program, with the same optimal solutions. Packing/Covering Linear Programs can be near-optimally solved in NC [LN93]. This reduction is a step towards characterizi ..."
Cited by 1 (0 self)
Solving and analyzing side-chain positioning problems using linear and integer programming
- BIOINFORMATICS
, 2005
"... Motivation: Side-chain positioning is a central component of homology modeling and protein design. In a common formulation of the problem, the backbone is fixed, side-chain conformations come from a rotamer library, and a pairwise energy function is optimized. It is NP-complete to find even a reasonable approximate solution to this problem. We seek to put this hardness result into practical context. Results: We present an integer linear programming (ILP) formulation of side-chain positioning that allows us to tackle large problem sizes. We relax the integrality constraint to give a polynomial-time ..."
Cited by 37 (3 self)
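The objective this abstract describes — one rotamer per residue, minimizing self plus pairwise energies — can be stated as a small brute-force sketch (hypothetical data layout; exponential in the number of residues, which is exactly why the paper resorts to ILP):

```python
from itertools import product

def best_rotamers(self_E, pair_E):
    """Pick one rotamer per residue minimizing total energy.

    self_E[i][r]         : self energy of rotamer r at residue i
    pair_E[(i, j)][r][s] : pairwise energy of rotamer r at i with s at j (i < j)
    """
    n = len(self_E)
    best_energy, best_choice = float("inf"), None
    for choice in product(*(range(len(se)) for se in self_E)):
        e = sum(self_E[i][choice[i]] for i in range(n))
        e += sum(pair_E[(i, j)][choice[i]][choice[j]] for (i, j) in pair_E)
        if e < best_energy:
            best_energy, best_choice = e, choice
    return best_choice, best_energy

# Two residues, two rotamers each.
self_E = [[1.0, 0.5], [0.2, 0.8]]
pair_E = {(0, 1): [[0.0, 1.0], [1.0, 0.0]]}
choice, energy = best_rotamers(self_E, pair_E)
print(choice, energy)  # → (0, 0) 1.2
```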
NEAR-ISOMETRIC LINEAR EMBEDDINGS OF MANIFOLDS
"... We propose a new method for linear dimensionality reduction of manifold-modeled data. Given a training set X of Q points belonging to a manifold M ⊂ R^N, we construct a linear operator P: R^N → R^M that approximately preserves the norms of all (Q choose 2) pairwise difference vectors (or secants) of X. We design the matrix P via a trace-norm minimization that can be efficiently solved as a semi-definite program (SDP). When X comprises a sufficiently dense sampling of M, we prove that the optimal matrix P preserves all pairs of secants over M. We numerically demonstrate the considerable gains using ..."
Cited by 3 (2 self)
algorithm for linear programming problems
"... Abstract—The simplex method is perhaps the most widely used method for solving linear programming (LP) problems. The computation time of simplex type algorithms depends on the basis inverse that occurs in each iteration. Parallelizing simplex type algorithms is one of the most challenging problems. ..."
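As a reference point for the abstract above, a minimal dense-tableau sketch of the (serial) primal simplex method with Dantzig pivoting — assuming maximization with Ax ≤ b, x ≥ 0 and b ≥ 0 so the slack variables give a starting basis. Function names are illustrative; this is not the parallel variant the paper concerns:

```python
def simplex(c, A, b):
    """Maximize c·x subject to Ax <= b, x >= 0 (requires b >= 0)."""
    m, n = len(A), len(c)
    # Tableau: m constraint rows + objective row;
    # columns = n originals, m slacks, right-hand side.
    T = [A[i][:] + [1.0 if j == i else 0.0 for j in range(m)] + [float(b[i])]
         for i in range(m)]
    T.append([-float(ci) for ci in c] + [0.0] * (m + 1))
    basis = list(range(n, n + m))  # slacks form the initial basis
    while True:
        col = min(range(n + m), key=lambda j: T[-1][j])  # Dantzig rule
        if T[-1][col] >= -1e-9:
            break                   # optimal: no negative reduced cost
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m) if T[i][col] > 1e-9]
        if not ratios:
            raise ValueError("LP is unbounded")
        _, row = min(ratios)        # leaving variable via ratio test
        basis[row] = col
        p = T[row][col]
        T[row] = [v / p for v in T[row]]
        for i in range(m + 1):
            if i != row and T[i][col] != 0.0:
                f = T[i][col]
                T[i] = [v - f * w for v, w in zip(T[i], T[row])]
    x = [0.0] * n
    for i, bi in enumerate(basis):
        if bi < n:
            x[bi] = T[i][-1]
    return x, T[-1][-1]

# max 3x + 5y  s.t.  x <= 4,  2y <= 12,  3x + 2y <= 18
x, z = simplex([3, 5], [[1, 0], [0, 2], [3, 2]], [4, 12, 18])
print(x, z)  # → [2.0, 6.0] 36.0
```

Each iteration's dominant cost is the row updates on the tableau — the "basis inverse" work the abstract refers to, and the part that parallel simplex variants distribute.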
Matching the universal barrier without paying the costs: Solving linear programs with Õ(√rank) linear system solves
- CoRR
"... In this paper we present a new algorithm for solving linear programs that requires only Õ(√rank(A) · L) iterations, where A is the constraint matrix of a linear program with m constraints and n variables and L is the bit complexity of the linear program. Each iteration of our method consists of solving Õ(1) linear systems and additional nearly linear time computation. Our method improves upon the previous best iteration bound by a factor of Ω̃((m/rank(A))^(1/4)) for methods with polynomial time computable iterations and by Ω̃((m/rank(A))^(1/2)) for methods which solve at most Õ(1) linear systems ..."
Cited by 4 (1 self)
Nearly Optimal Vector Quantization via Linear Programming (Extended Abstract)
- In Proceedings of the IEEE Data Compression Conference
, 1992
"... We present new vector quantization algorithms based on the theory developed in [LiV]. The new approach is to formulate a vector quantization problem as a 0-1 integer linear program. We first solve its relaxed linear program by linear programming techniques. Then we transform the linear program ..."
Cited by 3 (2 self)
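For a fixed codebook, the 0-1 assignment structure inside such a formulation (minimize Σ d_ij·x_ij subject to Σ_j x_ij = 1) decouples per input vector, so the integral optimum is simply nearest-codeword assignment. A hedged sketch with hypothetical names — the paper's actual contribution, designing the codebook itself via LP relaxation, is not reproduced here:

```python
def nearest_codeword_assignment(points, codebook):
    """Optimal 0-1 assignment for a fixed codebook: each point takes the
    codeword minimizing squared distance (the per-point ILP optimum)."""
    def d2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return [min(range(len(codebook)), key=lambda j: d2(p, codebook[j]))
            for p in points]

points = [(0.1, 0.0), (0.9, 1.0), (0.2, 0.1)]
codebook = [(0.0, 0.0), (1.0, 1.0)]
print(nearest_codeword_assignment(points, codebook))  # → [0, 1, 0]
```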
Solving “Large” Dense Matrix Problems on Multi-Core Processors and GPUs
, 2009
"... Few realize that, for large matrices, many dense matrix computations achieve nearly the same performance when the matrices are stored on disk as when they are stored in a very large main memory. Similarly, few realize that, given the right programming abstractions, coding Out-of-Core (OOC) implement ..."
Cited by 3 (0 self)
A Convex Approach for Learning Near-Isometric Linear Embeddings
, 2012
"... We propose a novel framework for the deterministic construction of linear, near-isometric embeddings of a finite set of data points. Given a set of training points X ⊂ RN, we consider the secant set S(X) that consists of all pairwise difference vectors of X, normalized to lie on the unit sphere. We ..."
"... with Max-norm constraints (NuMax) to solve the SDP. Second, we develop a greedy, approximate version of NuMax based on the column generation method commonly used to solve large-scale linear programs. We demonstrate that our framework is useful for a number of applications in machine learning and signal ..."
Cited by 8 (2 self)