Results 1 – 10 of 572,021
Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions, 2008
"... In this article, we give an overview of efficient algorithms for the approximate and exact nearest neighbor problem. The goal is to preprocess a dataset of objects (e.g., images) so that later, given a new query object, one can quickly return the dataset object that is most similar to the query. ..."
Cited by 459 (7 self)
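The core idea surveyed in that article can be illustrated with a minimal locality-sensitive hashing sketch. This is a generic random-hyperplane variant, not the paper's exact construction (which also covers Euclidean LSH via p-stable distributions); all function names and parameters here are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def build_tables(data, n_tables=8, n_bits=12):
    """Hash each vector into n_tables hash tables via random hyperplanes.

    Nearby vectors (in angle) tend to land in the same bucket, so a
    query only needs to inspect a few buckets instead of the full set.
    """
    dim = data.shape[1]
    planes = [rng.standard_normal((n_bits, dim)) for _ in range(n_tables)]
    tables = []
    for P in planes:
        table = {}
        keys = data @ P.T > 0          # (n, n_bits) sign pattern per point
        for i, bits in enumerate(keys):
            table.setdefault(bits.tobytes(), []).append(i)
        tables.append(table)
    return planes, tables

def query(q, data, planes, tables):
    """Collect candidates sharing a bucket with q, then rank them exactly."""
    cand = set()
    for P, table in zip(planes, tables):
        bits = P @ q > 0
        cand.update(table.get(bits.tobytes(), []))
    if not cand:
        return None
    cand = list(cand)
    dists = np.linalg.norm(data[cand] - q, axis=1)
    return cand[int(np.argmin(dists))]
```

Multiple tables trade memory for recall: a true neighbor missed by one table's buckets is usually caught by another.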
Near-optimal sensor placements in Gaussian processes, In ICML, 2005
"... When monitoring spatial phenomena, which can often be modeled as Gaussian processes (GPs), choosing sensor locations is a fundamental task. There are several common strategies to address this task, for example, geometry or disk models, placing sensors at the points of highest entropy (variance) in t ..."
Cited by 342 (34 self)
... approximation guarantees, exploiting the submodularity of the objective function. We demonstrate the advantages of our approach towards optimizing mutual information in a very extensive empirical study on two real-world data sets.
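The greedy scheme that submodularity makes near-optimal can be sketched with a toy stand-in objective. The paper's objective is mutual information under a GP; here a simple coverage function (also monotone submodular) plays its role, and all names are illustrative:

```python
def greedy_select(candidates, covered_by, k):
    """Pick k sensors greedily, each maximizing its marginal gain.

    covered_by maps each candidate sensor to the set of locations it
    covers -- a stand-in for the submodular objective.  For monotone
    submodular objectives, this greedy rule is guaranteed to achieve
    at least a (1 - 1/e) fraction of the optimal value.
    """
    chosen, covered = [], set()
    for _ in range(k):
        best = max(candidates, key=lambda s: len(covered_by[s] - covered))
        if not covered_by[best] - covered:
            break                       # no sensor adds anything new
        chosen.append(best)
        covered |= covered_by[best]
        candidates = [s for s in candidates if s != best]
    return chosen, covered
```

Swapping the coverage gain for a mutual-information gain recovers the shape of the paper's placement algorithm, though computing that gain under a GP is the substantive part.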
Genetic Programming, 1997
"... Introduction Genetic programming is a domain-independent problem-solving approach in which computer programs are evolved to solve, or approximately solve, problems. Genetic programming is based on the Darwinian principle of reproduction and survival of the fittest and analogs of naturally occurring ..."
Cited by 1055 (12 self)
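A bare-bones version of the evolve-programs idea can be sketched as symbolic regression over expression trees. This is a deliberately stripped-down toy (truncation selection, mutation only; real GP systems rely heavily on subtree crossover), and every name here is illustrative:

```python
import random
random.seed(0)

OPS = {'+': lambda a, b: a + b, '-': lambda a, b: a - b, '*': lambda a, b: a * b}

def rand_tree(depth=3):
    """Random expression tree over x and a couple of constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', 1.0, 2.0])
    op = random.choice(list(OPS))
    return (op, rand_tree(depth - 1), rand_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, tuple):
        return OPS[tree[0]](evaluate(tree[1], x), evaluate(tree[2], x))
    return tree                          # numeric constant

def fitness(tree, xs, ys):
    """Sum of squared errors: lower is fitter."""
    return sum((evaluate(tree, x) - y) ** 2 for x, y in zip(xs, ys))

def mutate(tree):
    """Replace a random subtree with a fresh random one."""
    if not isinstance(tree, tuple) or random.random() < 0.3:
        return rand_tree(2)
    parts = list(tree)
    i = random.choice([1, 2])
    parts[i] = mutate(parts[i])
    return tuple(parts)

def evolve(xs, ys, pop_size=60, gens=40):
    """Survival of the fittest: keep the best quarter, refill by mutation."""
    pop = [rand_tree() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda t: fitness(t, xs, ys))
        survivors = pop[:pop_size // 4]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=lambda t: fitness(t, xs, ys))
```

With targets sampled from, say, x² + x, the population drifts toward trees like `('+', 'x', ('*', 'x', 'x'))`, whose fitness is exactly zero.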
Lambertian Reflectance and Linear Subspaces, 2000
"... We prove that the set of all reflectance functions (the mapping from surface normals to intensities) produced by Lambertian objects under distant, isotropic lighting lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wi ..."
Cited by 526 (20 self)
... the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce nonnegative lighting functions. Finally, we show a simple way to enforce non ...
Robust convex optimization, Mathematics of Operations Research, 1998
"... We study convex optimization problems for which the data is not specified exactly and it is only known to belong to a given uncertainty set U, yet the constraints must hold for all possible values of the data from U. The ensuing optimization problem is called robust optimization. In this paper we la ..."
Cited by 416 (21 self)
... the corresponding robust convex program is either exactly, or approximately, a tractable problem which lends itself to efficient algorithms such as polynomial-time interior point methods.
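A small example of why robust counterparts can stay tractable: for box uncertainty a ∈ [â − δ, â + δ], the semi-infinite constraint aᵀx ≤ b for all a in the box collapses to the single convex constraint âᵀx + δᵀ|x| ≤ b, since the worst case is attained componentwise at aᵢ = âᵢ + δᵢ·sign(xᵢ). A sketch of that reduction (function name is illustrative):

```python
import numpy as np

def robust_feasible(x, a_hat, delta, b):
    """Check a'x <= b for every a in the box [a_hat - delta, a_hat + delta].

    Worst case per coordinate: a_i = a_hat_i + delta_i * sign(x_i),
    so the infinite family of constraints reduces to one inequality.
    """
    return float(a_hat @ x + delta @ np.abs(x)) <= b
```

The |x| terms can in turn be modeled with auxiliary variables in an LP, which is why this robust counterpart is itself a tractable convex program.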
A sparse sampling algorithm for near-optimal planning in large Markov decision processes, Machine Learning, 1999
"... An issue that is critical for the application of Markov decision processes (MDPs) to realistic problems is how the complexity of planning scales with the size of the MDP. In stochastic environments with very large or even infinite state spaces, traditional planning and reinforcement learning algorith ..."
Cited by 239 (7 self)
... algorithms are often inapplicable, since their running time typically scales linearly with the state space size in the worst case. In this paper we present a new algorithm that, given only a generative model (simulator) for an arbitrary MDP, performs near-optimal planning with a running time that has ...
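The generative-model idea can be sketched as a recursive lookahead that never enumerates the state space: at each node, draw a fixed number of samples per action from the simulator and recurse to a fixed depth. This is a simplified sketch of that style of planner, with illustrative names and without the paper's sample-size analysis:

```python
def sparse_sample_value(sim, state, actions, depth, width, gamma=0.9):
    """Estimate V(state) using only a generative model `sim`.

    sim(state, action) -> (next_state, reward).  Each node draws
    `width` samples per action and recurses to `depth`, so the cost
    is (len(actions) * width) ** depth -- exponential in the horizon
    but independent of the number of states in the MDP.
    """
    if depth == 0:
        return 0.0
    best = float('-inf')
    for a in actions:
        total = 0.0
        for _ in range(width):
            s2, r = sim(state, a)
            total += r + gamma * sparse_sample_value(
                sim, s2, actions, depth - 1, width, gamma)
        best = max(best, total / width)
    return best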
Near-optimal network design with selfish agents, 2003
"... We introduce a simple network design game that models how independent selfish agents can build or maintain a large network. In our game every agent has a specific connectivity requirement, i.e. each agent has a set of terminals and wants to build a network in which his terminals are connected. Possi ..."
Cited by 151 (19 self)
... as cheap as the optimal network, and give a polynomial-time algorithm to find a (1+ε)-approximate Nash equilibrium that does not cost much more. For the general connection game we prove that there is a 3-approximate Nash equilibrium that is as cheap as the optimal network, and give an algorithm to find a (4 ...
Near-optimal nonmyopic value of information in graphical models, In Annual Conference on Uncertainty in Artificial Intelligence, 2005
"... A fundamental issue in real-world systems, such as sensor networks, is the selection of observations which most effectively reduce uncertainty. More specifically, we address the long-standing problem of nonmyopically selecting the most informative subset of variables in a graphical model. We present ..."
Cited by 146 (25 self)
... present the first efficient randomized algorithm providing a constant-factor (1 − 1/e − ε) approximation guarantee for any ε > 0 with high confidence. The algorithm leverages the theory of submodular functions, in combination with a polynomial bound on sample complexity. We furthermore prove ...
Splitters and near-optimal derandomization
"... We present a fairly general method for finding deterministic constructions obeying what we call k-restrictions; this yields structures of size not much larger than the probabilistic bound. The structures constructed by our method include (n, k)-universal sets (a collection of binary vectors of lengt ..."
Cited by 60 (1 self)
... of length n such that for any subset of size k of the indices, all 2^k configurations appear) and families of perfect hash functions. The near-optimal constructions of these objects imply the very efficient derandomization of algorithms in learning, of fixed-subgraph finding algorithms, and of near-optimal ...