Results 1 - 10 of 266

Near-oracle performance of greedy block-sparse estimation techniques from noisy measurements

by Zvika Ben-haim, Yonina C. Eldar - Signal Process., 2010. [Online]. Available: http://arxiv.org/pdf/1009.0906
"... Abstract—This paper examines the ability of greedy algorithms to estimate a block sparse parameter vector from noisy measurements. In particular, block sparse versions of the orthogonal matching pursuit and thresholding algorithms are analyzed under both adversarial and Gaussian noise models. In the ..."
Cited by 18 (2 self)
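
For concreteness, here is a minimal sketch of a block variant of orthogonal matching pursuit of the kind analyzed above; the fixed block layout, the least-squares refit, and the stopping rule (a fixed number of blocks) are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def block_omp(A, y, block_size, n_blocks):
        """Greedy block-sparse recovery: pick whole blocks of columns of A."""
        n = A.shape[1]
        blocks = [np.arange(i, i + block_size) for i in range(0, n, block_size)]
        support = []
        x = np.zeros(n)
        residual = y.copy()
        for _ in range(n_blocks):
            # Score each block by the energy of its correlation with the residual.
            scores = [np.linalg.norm(A[:, b].T @ residual) for b in blocks]
            support.append(blocks[int(np.argmax(scores))])
            cols = np.concatenate(support)
            # Least-squares refit on all selected blocks, then update the residual.
            coef, *_ = np.linalg.lstsq(A[:, cols], y, rcond=None)
            x[:] = 0.0
            x[cols] = coef
            residual = y - A @ x
        return x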

Inducing Features of Random Fields

by Stephen Della Pietra, Vincent Della Pietra, John Lafferty - IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE , 1997
"... We present a technique for constructing random fields from a set of training samples. The learning paradigm builds increasingly complex fields by allowing potential functions, or features, that are supported by increasingly large subgraphs. Each feature has a weight that is trained by minimizing the ..."
Abstract - Cited by 670 (10 self) - Add to MetaCart
the Kullback-Leibler divergence between the model and the empirical distribution of the training data. A greedy algorithm determines how features are incrementally added to the field and an iterative scaling algorithm is used to estimate the optimal values of the weights. The random field models and techniques
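
A toy illustration of the induction loop described here: features are added greedily by likelihood gain over a small discrete domain. The domain, candidate features, and data are invented for illustration, and plain gradient ascent stands in for the paper's iterative scaling when fitting weights.

    import numpy as np

    domain = np.arange(8)                       # tiny discrete state space
    data = np.array([0, 1, 1, 2, 7, 7, 7, 7])   # empirical sample
    candidates = [lambda x: (x >= 4).astype(float),
                  lambda x: (x % 2 == 0).astype(float),
                  lambda x: (x == 7).astype(float)]

    def model_probs(feats, w):
        """p(x) proportional to exp(sum_i w_i f_i(x)) over the domain."""
        scores = sum(wi * f(domain) for wi, f in zip(w, feats)) if feats else np.zeros(len(domain))
        p = np.exp(scores - scores.max())
        return p / p.sum()

    active, weights = [], []
    for _ in range(2):                          # greedily add two features
        best = None
        for f in candidates:
            if any(f is g for g in active):
                continue
            feats, w = active + [f], np.array(weights + [0.0])
            for _ in range(200):                # fit weights by gradient ascent
                p = model_probs(feats, w)
                # Exponential-family gradient: empirical mean minus model mean.
                grad = np.array([fi(data).mean() - (p * fi(domain)).sum() for fi in feats])
                w = w + 0.5 * grad
            ll = np.log(model_probs(feats, w)[data]).mean()
            if best is None or ll > best[0]:
                best = (ll, f, list(w))
        active.append(best[1])
        weights = best[2]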

NESTED SPARSE BAYESIAN LEARNING FOR BLOCK-SPARSE SIGNALS WITH INTRA-BLOCK CORRELATION

by Ranjitha Prasad Ch
"... In this work, we address the recovery of block sparse vectors with intra-block correlation, i.e., the recovery of vectors in which the correlated nonzero entries are constrained to lie in a few clusters, from noisy underdetermined linear measurements. Among Bayesian sparse recovery techniques, the c ..."
Abstract - Add to MetaCart
, the cluster Sparse Bayesian Learning (SBL) is an efficient tool for block-sparse vector recovery, with intra-block correlation. However, this technique uses a heuristic method to estimate the intra-block correlation. In this paper, we propose the Nested SBL (NSBL) algorithm, which we derive using a novel
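
The measurement model in question is easy to make concrete. Below is a small sketch that generates a block-sparse vector whose nonzero clusters are correlated (AR(1) correlation within a block is an assumption made here) and observes it through noisy underdetermined linear measurements; it illustrates the problem setup, not the NSBL algorithm itself.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, block, rho, sigma = 100, 40, 5, 0.9, 0.05    # m < n: underdetermined

    x = np.zeros(n)
    for start in rng.choice(np.arange(0, n, block), size=3, replace=False):
        # AR(1)-correlated entries within each active block.
        vals = np.zeros(block)
        vals[0] = rng.standard_normal()
        for t in range(1, block):
            vals[t] = rho * vals[t - 1] + np.sqrt(1 - rho**2) * rng.standard_normal()
        x[start:start + block] = vals

    A = rng.standard_normal((m, n)) / np.sqrt(m)
    y = A @ x + sigma * rng.standard_normal(m)         # noisy measurements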

Sparse Greedy Gaussian Process Regression

by Alex J. Smola, Peter Bartlett - Advances in Neural Information Processing Systems 13 , 2001
"... We present a simple sparse greedy technique to approximate the maximum a posteriori estimate of Gaussian Processes with much improved scaling behaviour in the sample size m. In particular, computational requirements are O(n m), storage is O(nm), the cost for prediction is O(n) and the cost to comput ..."
Cited by 131 (1 self)
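
A rough sketch of the idea: pick a subset of training points greedily, refitting an exact GP on the subset each round. The selection rule below (largest absolute residual under the current subset model) is a simplification standing in for the paper's greedy criterion, and the kernel, noise level, and data are assumptions.

    import numpy as np

    def rbf(X, Z, ls=1.0):
        d = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d / ls**2)

    def sparse_greedy_gp(X, y, n_subset, noise=0.1):
        idx = [int(np.argmax(np.abs(y)))]          # seed with the largest target
        for _ in range(n_subset - 1):
            Kss = rbf(X[idx], X[idx]) + noise * np.eye(len(idx))
            alpha = np.linalg.solve(Kss, y[idx])
            pred = rbf(X, X[idx]) @ alpha          # predictions from current subset
            resid = np.abs(y - pred)
            resid[idx] = -np.inf                   # never re-select chosen points
            idx.append(int(np.argmax(resid)))
        Kss = rbf(X[idx], X[idx]) + noise * np.eye(len(idx))
        alpha = np.linalg.solve(Kss, y[idx])
        return lambda Xq: rbf(Xq, X[idx]) @ alpha  # approximate posterior mean

    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, (200, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(200)
    predict = sparse_greedy_gp(X, y, n_subset=20)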

A Parallel, Block Greedy Method for Sparse Inverse Covariance Estimation for Ultra-high Dimensions

by Prabhanjan Kambadur
"... Discovering the graph structure of a Gaus-sian Markov Random Field is an important problem in application areas such as com-putational biology and atmospheric sciences. This task, which translates to estimating the sparsity pattern of the inverse covariance ma-trix, has been extensively studied in t ..."
Abstract - Add to MetaCart
in the lit-erature. However, the existing approaches are unable to handle ultra-high dimensional datasets and there is a crucial need to de-velop methods that are both highly scal-able and memory-efficient. In this paper, we present GINCO, a blocked greedy method for sparse inverse covariance matrix
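
A single-threaded sketch of greedy support selection for a sparse precision matrix, in the spirit of the abstract: score each currently-zero off-diagonal entry by the likelihood gradient, admit the best one, and refit the active entries. GINCO's blocked, parallel scheme is considerably more involved than this.

    import numpy as np

    def is_pd(M):
        try:
            np.linalg.cholesky(M)
            return True
        except np.linalg.LinAlgError:
            return False

    def greedy_precision_sketch(S, n_edges, n_refit=50, step=0.1):
        p = S.shape[0]
        Theta = np.diag(1.0 / np.diag(S))        # start from a diagonal estimate
        mask = np.eye(p, dtype=bool)             # active (allowed nonzero) pattern
        for _ in range(n_edges):
            # Gradient of the negative log-likelihood is S - inv(Theta);
            # the largest inactive entry is the most promising edge to add.
            G = S - np.linalg.inv(Theta)
            score = np.where(mask, 0.0, np.abs(G))
            i, j = np.unravel_index(np.argmax(score), score.shape)
            mask[i, j] = mask[j, i] = True
            for _ in range(n_refit):             # refit the active entries
                G = np.linalg.inv(Theta) - S     # ascent direction on the loglik
                step_mat = np.where(mask, step * G, 0.0)
                if is_pd(Theta + step_mat):      # keep the estimate positive definite
                    Theta = Theta + step_mat
                else:
                    step *= 0.5
        return Theta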

Greedy spectral embedding

by Marie Ouimet, et al.
"... Spectral dimensionality reduction methods and spectral clustering methods require computation of the principal eigenvectors of an n × n matrix where n is the number of examples. Following up on previously proposed techniques to speed-up kernel methods by focusing on a subset of m examples, we study ..."
Cited by 20 (2 self)
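
One way to make the subset idea concrete: pivoted incomplete Cholesky greedily picks the example with the largest residual kernel diagonal, and a Nyström-style embedding is then built from the rank-m factor. The kernel and data below are assumptions, and this is only one of several selection criteria in the literature.

    import numpy as np

    def greedy_subset_embedding(K, m, k):
        n = K.shape[0]
        diag_resid = np.diag(K).astype(float).copy()
        L = np.zeros((n, m))
        for t in range(m):
            i = int(np.argmax(diag_resid))       # greedy pivot: largest residual
            L[:, t] = (K[:, i] - L @ L[i]) / np.sqrt(diag_resid[i])
            diag_resid -= L[:, t] ** 2
            diag_resid[i] = 0.0
        # Embed all n points from the rank-m approximation K ~= L L^T.
        U, s, _ = np.linalg.svd(L, full_matrices=False)
        return U[:, :k] * s[:k]                  # top-k embedding coordinates

    rng = np.random.default_rng(2)
    X = rng.standard_normal((300, 5))
    K = np.exp(-0.5 * ((X[:, None] - X[None, :]) ** 2).sum(-1))
    emb = greedy_subset_embedding(K, m=30, k=2)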

Greedy importance sampling

by Finnegan Southey, Dale Schuurmans, Ali Ghodsi - In Proceedings NIPS-12 , 1999
"... Greedy importance sampling is an unbiased estimation technique that re-duces the variance of standard importance sampling by explicitly search-ing for modes in the estimation objective. Previous work has demon-strated the feasibility of implementing this method and proved that the technique is unbia ..."
Abstract - Cited by 8 (3 self) - Add to MetaCart
Greedy importance sampling is an unbiased estimation technique that re-duces the variance of standard importance sampling by explicitly search-ing for modes in the estimation objective. Previous work has demon-strated the feasibility of implementing this method and proved that the technique
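
A toy discrete version of the scheme sketched above, with everything (target, proposal, neighborhood structure) invented for illustration: each draw from the proposal is followed by a deterministic hill-climb on the target, the whole search path is kept, and every collected point y is reweighted by p(y) / Pr(y appears in some path), which is what preserves unbiasedness.

    import numpy as np

    rng = np.random.default_rng(3)
    n_states = 10
    p = np.array([1, 1, 1, 1, 1, 1, 1, 5, 20, 5], dtype=float); p /= p.sum()
    q = np.full(n_states, 1.0 / n_states)              # flat proposal
    f = np.arange(n_states, dtype=float)               # estimate E_p[f]

    def path(x):
        """Deterministic greedy climb to a local maximum of p over neighbors."""
        pts = [x]
        while True:
            nbrs = [y for y in (x - 1, x + 1) if 0 <= y < n_states]
            best = max(nbrs, key=lambda y: p[y])
            if p[best] <= p[x]:
                return pts
            x = best
            pts.append(x)

    # Probability under q that state y is visited by the path of a q-draw.
    visit_prob = np.zeros(n_states)
    for x in range(n_states):
        for y in path(x):
            visit_prob[y] += q[x]

    draws = rng.integers(0, n_states, size=20000)      # draws from the flat q
    est = np.mean([sum(p[y] / visit_prob[y] * f[y] for y in path(x)) for x in draws])
    print(est, (p * f).sum())                          # estimate vs. exact E_p[f]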

Greedy importance sampling

by unknown authors
"... Abstract I present a simple variation of importance sampling that explicitly search-es for important regions in the target distribution. I prove that the technique yields unbiased estimates, and show empirically it can reduce thevariance of standard Monte Carlo estimators. This is achieved by concen ..."
Abstract - Add to MetaCart
Abstract I present a simple variation of importance sampling that explicitly search-es for important regions in the target distribution. I prove that the technique yields unbiased estimates, and show empirically it can reduce thevariance of standard Monte Carlo estimators. This is achieved

Combining stochastic and greedy search in hybrid estimation

by Lars Blackmore - In Proceedings of the 20th National Conference on Artificial Intelligence (AAAI-05), 2005
"... Techniques for robot monitoring and diagnosis have been developed that perform state estimation using probabilistic hybrid discrete/continuous models. Exact inference in hybrid dynamic systems is, in general, intractable. Approximate algorithms are based on either 1) greedy search, as in the case of ..."
Cited by 5 (3 self)