Results 1–10 of 520
Loopy belief propagation for approximate inference: An empirical study
In: Proceedings of Uncertainty in AI, 1999
Cited by 676 (15 self)
Abstract: Recently, researchers have demonstrated that "loopy belief propagation" (the use of Pearl's polytree algorithm in a Bayesian network with loops) can perform well in the context of error-correcting codes. The most dramatic instance of this is the near Shannon-limit performance ...
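As a minimal sketch of the technique the abstract names, the following runs sum-product message passing on the smallest graph with a loop: a 3-cycle of binary variables. The model, potentials, and synchronous update schedule are illustrative assumptions, not the paper's experiments; a brute-force marginal is included for comparison.

```python
import itertools
import math

# Hypothetical toy model: a 3-variable binary MRF whose graph is a single
# cycle, the simplest setting where Pearl's polytree algorithm is run on a
# graph with a loop ("loopy" belief propagation).
n = 3
edges = [(0, 1), (1, 2), (2, 0)]
phi = [[1.0, 2.0], [2.0, 1.0], [1.5, 1.5]]   # unary potentials phi_i(x_i)

def psi(a, b):
    """Pairwise potential: mildly favors agreement along each edge."""
    return 2.0 if a == b else 1.0

neighbors = {i: [] for i in range(n)}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

# msgs[(i, j)][x_j] is the message from variable i to variable j.
msgs = {(i, j): [1.0, 1.0] for i in range(n) for j in neighbors[i]}

for _ in range(50):                           # iterate to (hoped-for) convergence
    new = {}
    for (i, j) in msgs:
        m = [sum(phi[i][xi] * psi(xi, xj)
                 * math.prod(msgs[(k, i)][xi] for k in neighbors[i] if k != j)
                 for xi in (0, 1))
             for xj in (0, 1)]
        z = sum(m)
        new[(i, j)] = [v / z for v in m]
    msgs = new

def belief(i):
    """Approximate marginal of variable i from incoming messages."""
    b = [phi[i][x] * math.prod(msgs[(k, i)][x] for k in neighbors[i]) for x in (0, 1)]
    z = sum(b)
    return [v / z for v in b]

def exact_marginal(i):
    """Brute force over all 2^n joint states, for comparison."""
    p = [0.0, 0.0]
    for xs in itertools.product((0, 1), repeat=n):
        w = (math.prod(phi[v][xs[v]] for v in range(n))
             * math.prod(psi(xs[a], xs[b]) for (a, b) in edges))
        p[xs[i]] += w
    z = sum(p)
    return [v / z for v in p]
```

With this weak attractive coupling the messages converge, and the loopy belief tracks the exact marginal closely, which is the kind of empirical behavior the study examines.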
Near neighbor search in large metric spaces
In: Proceedings of the 21st International Conference on Very Large Data Bases, 1995
Cited by 216 (0 self)
Abstract: Given user data, one often wants to find approximate matches in a large database. A good example of such a task is finding images similar to a given image in a large collection of images. We focus on the important and technically difficult case where each data element is high dimensional or, more generally, is represented by a point in a large metric space, and distance calculations are computationally expensive. In this paper we introduce a data structure to solve this problem called a GNAT (Geometric Near-neighbor Access Tree). It is based on the philosophy that the data structure should act as a ...
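GNAT itself partitions the space around multiple split points; as a minimal sketch of the underlying philosophy only (precomputed distances plus the triangle inequality pruning expensive distance calls), the following assumes Euclidean points and a single pivot rather than the paper's actual tree structure.

```python
import math
import random

def dist(a, b):
    """Euclidean distance; stands in for any expensive metric."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class PivotIndex:
    """Illustrative pivot-based index (a much simplified cousin of GNAT):
    precompute d(pivot, x) for every x, then at query time use the
    triangle-inequality lower bound |d(q, p) - d(p, x)| <= d(q, x)
    to skip candidates without computing d(q, x)."""
    def __init__(self, points):
        self.points = points
        self.pivot = points[0]
        self.to_pivot = [dist(self.pivot, x) for x in points]

    def nearest(self, q):
        dq = dist(q, self.pivot)
        best, best_d, used = None, float("inf"), 1   # 1 = the pivot distance
        for x, dpx in zip(self.points, self.to_pivot):
            if abs(dq - dpx) >= best_d:              # lower bound prunes x
                continue
            d = dist(q, x)
            used += 1
            if d < best_d:
                best, best_d = x, d
        return best, best_d, used
```

Because the pruning test uses a valid lower bound on the true distance, the result is always the exact nearest neighbor; only the number of distance evaluations changes.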
Efficient Search for Approximate Nearest Neighbor in High Dimensional Spaces
1998
Cited by 215 (9 self)
Abstract: We address the problem of designing data structures that allow efficient search for approximate nearest neighbors. More specifically, given a database consisting of a set of vectors in some high dimensional Euclidean space, we want to construct a space-efficient data structure that would allow us to ...
A Rank Minimization Heuristic with Application to Minimum Order System Approximation
2001
Cited by 274 (10 self)
Abstract: Several problems arising in control system analysis and design, such as reduced order controller synthesis, involve minimizing the rank of a matrix variable subject to linear matrix inequality (LMI) constraints. Except in some special cases, solving this rank minimization problem (globally) is ve ... generalization of the trace heuristic that applies to general nonsymmetric, even nonsquare, matrices, and reduces to the trace heuristic when the matrix is positive semidefinite. The heuristic is to replace the (nonconvex) rank objective with the sum of the singular values of the matrix, which is the dual ...
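The heuristic described in the abstract admits a compact statement. Writing $\mathcal{C}$ for the LMI-feasible set (a notational assumption for this sketch):

```latex
\underbrace{\min_{X \in \mathcal{C}}\ \operatorname{rank}(X)}_{\text{nonconvex}}
\quad\leadsto\quad
\underbrace{\min_{X \in \mathcal{C}}\ \|X\|_{*}}_{\text{convex}},
\qquad
\|X\|_{*} \;=\; \sum_{i} \sigma_{i}(X),
```

and for symmetric positive semidefinite $X$ one has $\|X\|_{*} = \operatorname{tr}(X)$, which recovers the trace heuristic as a special case.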
For most large underdetermined systems of equations, the minimal ℓ1-norm near-solution approximates the sparsest near-solution
Comm. Pure Appl. Math, 2004
Cited by 122 (1 self)
Abstract: We consider inexact linear equations y ≈ Φα where y is a given vector in R^n, Φ is a given n by m matrix, and we wish to find an α0,ɛ which is sparse and gives an approximate solution, obeying ‖y − Φα0,ɛ‖2 ≤ ɛ. In general this requires combinatorial optimization and so is considered intractable. On the other hand, the ℓ1 minimization problem min ‖α‖1 subject to ‖y − Φα‖2 ≤ ɛ is convex, and is considered tractable. We show that for most Φ the solution α̂1,ɛ = α̂1,ɛ(y, Φ) of this problem is quite generally a good approximation for α̂0,ɛ. We suppose that the columns of Φ are normalized to unit ℓ2 norm ...
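The program min ‖α‖1 subject to ‖y − Φα‖2 ≤ ɛ needs a convex solver in general. As a minimal illustration of why the ℓ1 norm promotes sparse near-solutions, one can take the hypothetical special case Φ = I in penalized form (an assumption made purely for this sketch, not the paper's setting), where the coordinates decouple and the minimizer is coordinatewise soft-thresholding.

```python
def soft_threshold(y, lam):
    """Closed-form minimizer of 0.5*(y_i - a_i)^2 + lam*|a_i| per coordinate:
    shrink toward zero by lam, and set exactly to zero inside [-lam, lam].
    This exact zeroing is the mechanism by which l1 produces sparsity."""
    return [max(abs(v) - lam, 0.0) * (1.0 if v >= 0 else -1.0) for v in y]
```

Small coordinates are mapped to exactly zero rather than merely shrunk, so the recovered α is sparse, in contrast to an ℓ2 penalty, which shrinks every coordinate but zeroes none.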
Approximate Bregman near neighbors in sublinear time: Beyond the triangle inequality
CoRR
Cited by 3 (1 self)
Abstract: Bregman divergences are important distance measures that are used extensively in data-driven applications such as computer vision, text mining, and speech processing, and are a key focus of interest in machine learning. Answering nearest neighbor (NN) queries under these measures is very important ... that the square root of a Bregman divergence does satisfy µ-defectiveness. This allows us to then utilize both properties in an efficient search data structure that follows the general two-stage paradigm of a ring-tree decomposition followed by a quad-tree search used in previous near-neighbor algorithms ...
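A Bregman divergence is generated by a convex function F via D_F(x, y) = F(x) − F(y) − ⟨∇F(y), x − y⟩. The sketch below (illustrative only, not the paper's data structure) shows two standard generators and checks that the divergence is asymmetric, which is precisely why triangle-inequality-based search machinery does not directly apply.

```python
import math

def bregman(F, gradF, x, y):
    """D_F(x, y) = F(x) - F(y) - <grad F(y), x - y> for convex generator F."""
    return F(x) - F(y) - sum(g * (a - b) for g, a, b in zip(gradF(y), x, y))

# Generator for half squared Euclidean distance: F(x) = 0.5 * sum(x_i^2),
# whose Bregman divergence is 0.5 * ||x - y||^2 (the symmetric special case).
sq = lambda x: 0.5 * sum(v * v for v in x)
sq_grad = lambda x: list(x)

# Generator for generalized KL divergence: F(x) = sum(x_i * log x_i),
# whose Bregman divergence is sum(x_i * log(x_i / y_i) - x_i + y_i).
ent = lambda x: sum(v * math.log(v) for v in x)
ent_grad = lambda x: [math.log(v) + 1.0 for v in x]
```

D_F(x, x) = 0 and D_F(x, y) ≥ 0 hold for any convex F, but symmetry and the triangle inequality generally fail, as the KL generator demonstrates.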
Chapter 3: Approximate Nearest Neighbors
Abstract: The main topic of this chapter is the following general algorithmic problem, called nearest neighbor search: Given a set P of n points in some metric space (X, ρ), we would like to construct a data structure such that, for any point x ∈ X (called a query point), we can quickly find a nearest neighbor ...
All-norms and all-Lp-norms approximation algorithms
2007
Cited by 4 (1 self)
Abstract: In many optimization problems, a solution can be viewed as ascribing a "cost" to each client, and the goal is to optimize some aggregation of the per-client costs. We often optimize some Lp-norm (or some other symmetric convex function or norm) of the vector of costs, though different appl ... analysis techniques to give similar results for the more general submodular set cover, and prove some results for the so-called pipelined set cover problem. We then go on to examine approximation algorithms in the "all-norms" and the "all-Lp-norms" frameworks more broadly, and present algorithms ...
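The tension motivating the "all-norms" framework (different Lp-norms prefer different solutions) is easy to see on toy data. The two per-client cost vectors below are illustrative assumptions: one spreads cost evenly, the other concentrates it on a single client.

```python
def lp_norm(costs, p):
    """Lp-norm aggregation of a per-client cost vector; p may be float('inf')."""
    if p == float("inf"):
        return max(abs(c) for c in costs)
    return sum(abs(c) ** p for c in costs) ** (1.0 / p)

balanced = [4.0, 4.0, 4.0]   # every client pays the same
skewed   = [1.0, 1.0, 9.0]   # lower total, but one client pays heavily
```

Under L1 (total cost) the skewed solution wins, while under L-infinity (worst client) the balanced one wins, so no single fixed p captures both fairness criteria, which is what an all-norms guarantee addresses.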
Nonlinear Methods of Approximation
2002
Cited by 71 (9 self)
Abstract: Our main interest in this paper is nonlinear approximation. The basic idea behind nonlinear approximation is that the elements used in the approximation do not come from a fixed linear space but are allowed to depend on the function being approximated. While the scope of this paper is mostly theoretic ... , we want to understand the properties (usually smoothness) of the function which govern its rate of approximation in some given norm (or metric). We are also interested in stable algorithms for finding good or near best approximations using m terms. Some of our earlier work has introduced and analyzed ...
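The canonical example of m-term approximation is keeping the m largest-magnitude coefficients of a function's expansion in an orthonormal basis: the retained index set depends on the function itself, which is exactly what makes the method nonlinear rather than a projection onto a fixed linear space. A minimal sketch, with illustrative coefficients:

```python
def best_m_term(coeffs, m):
    """Best m-term approximation in an orthonormal basis: zero out all but
    the m largest-magnitude coefficients (the kept index set is chosen per
    input, hence 'nonlinear' approximation)."""
    keep = set(sorted(range(len(coeffs)), key=lambda i: -abs(coeffs[i]))[:m])
    return [c if i in keep else 0.0 for i, c in enumerate(coeffs)]

def l2_error(c, a):
    """In an orthonormal basis, the L2 error is the l2 distance of coefficients."""
    return sum((x - y) ** 2 for x, y in zip(c, a)) ** 0.5
```

By Parseval, this choice minimizes the L2 error among all approximations using m basis elements, and the error is the l2 norm of the discarded coefficients.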
A geometric approach to lower bounds for approximate near-neighbor search and partial match
In: Proc. 49th IEEE Symposium on Foundations of Computer Science (FOCS), 2008
Cited by 16 (2 self)
Abstract: This work investigates a geometric approach to proving cell probe lower bounds for data structure problems. We consider the approximate nearest neighbor search problem on the Boolean hypercube ({0, 1}^d, ‖·‖1) with d = Θ(log n). We show that any (randomized) data structure for the problem that a ... bound holds for the near neighbor problem, where the algorithm knows in advance a good approximation to the distance to the nearest neighbor. Additionally, it is an average case lower bound for the natural distribution for the problem. Our approach also gives the same bound for (2 − 1)approximation ...
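The problem setting can be made concrete with a linear scan for the decision version of (c, r)-approximate near neighbor on the Boolean hypercube under ‖·‖1 (Hamming distance). This baseline is illustrative only; the paper is about cell probe lower bounds for data structures, not about this trivial scan.

```python
def hamming(a, b):
    """l1 distance between two points of the Boolean hypercube {0,1}^d."""
    return sum(x != y for x, y in zip(a, b))

def c_approx_near(points, q, r, c):
    """(c, r)-approximate near neighbor, decision/promise version:
    if some point lies within distance r of q, return a point within c*r;
    when no point is within c*r, returning None is a valid answer."""
    for p in points:
        if hamming(q, p) <= c * r:
            return p
    return None
```

The lower-bound question is how few memory probes such a query can be answered with once the points are preprocessed into a bounded-size data structure, which this exhaustive scan deliberately ignores.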