Results 1 - 10 of 574

Robust Faces Manifold Modeling: Most Expressive Vs. Most Sparse Criterion

by Xiaoyang Tan, Lishan Qiao, Wenjuan Gao, Jun Liu
"... Robust face image modeling under uncontrolled condi-tions is crucial for the current face recognition systems in practice. One approach is to seek a compact representation of the given image set which encodes the intrinsic lower dimensional manifold of them. Among others, Local Lin-ear Embedding (LL ..."
Abstract - Cited by 1 (0 self)
… In this paper, we introduce the Sparse Locally Linear Embedding (SLLE) to address these issues. By replacing the most-expressive type criterion in modeling local patches in LLE with a most-sparse one, SLLE essentially finds and models more discriminative patches. This gives higher model flexibility
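
The contrast here is in how each data point is reconstructed from its local patch. As a reference point, below is a minimal sketch of the standard LLE weight computation (the least-squares, "most expressive" step that SLLE swaps for a sparsest-solution fit); the neighborhood size k, the regularizer, and the toy data are illustrative assumptions, not code from the paper.

```python
import numpy as np

def lle_weights(X, k=5, reg=1e-3):
    """Standard LLE step: reconstruct each point from its k nearest
    neighbors by least squares (the 'most expressive' criterion that
    SLLE replaces with a sparsest-solution criterion)."""
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        # k nearest neighbors of point i (excluding itself)
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]
        # Local Gram matrix of the centered neighbors
        Z = X[nbrs] - X[i]
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(k)   # regularize for stability
        w = np.linalg.solve(C, np.ones(k))   # solve C w = 1
        W[i, nbrs] = w / w.sum()             # weights sum to one
    return W

X = np.random.rand(100, 3)
W = lle_weights(X)
print(np.allclose(W.sum(axis=1), 1.0))      # True: rows are affine weights
```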

Efficient erasure correcting codes

by Michael G. Luby, Michael Mitzenmacher, M. Amin Shokrollahi, Daniel A. Spielman - IEEE Transactions on Information Theory, 2001
"... We introduce a simple erasure recovery algorithm for codes derived from cascades of sparse bipartite graphs and analyze the algorithm by analyzing a corresponding discrete-time random process. As a result, we obtain a simple criterion involving the fractions of nodes of different degrees on both si ..."
Abstract - Cited by 360 (26 self)
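
The recovery algorithm described here is, at its core, a peeling process: repeatedly find a check node with exactly one erased neighbor and recover that symbol by XOR. A minimal sketch on a hand-built bipartite graph (the tiny graph and bit values are toy assumptions; real Tornado-style codes cascade such graphs):

```python
def peel(checks, check_vals, msg):
    """checks: list of neighbor-index lists; check_vals: XOR of each
    check's neighbors; msg: list of bits with None marking erasures."""
    progress = True
    while progress:
        progress = False
        for nbrs, cval in zip(checks, check_vals):
            erased = [j for j in nbrs if msg[j] is None]
            if len(erased) == 1:                  # degree-one check: solvable
                known_xor = 0
                for j in nbrs:
                    if msg[j] is not None:
                        known_xor ^= msg[j]
                msg[erased[0]] = cval ^ known_xor  # recover the erased bit
                progress = True
    return msg

# Toy example: 4 message bits, 3 sparse checks.
bits = [1, 0, 1, 1]
checks = [[0, 1], [1, 2, 3], [0, 3]]
check_vals = [bits[0] ^ bits[1], bits[1] ^ bits[2] ^ bits[3], bits[0] ^ bits[3]]
received = [1, None, None, 1]                      # bits 1 and 2 erased
print(peel(checks, check_vals, received))          # [1, 0, 1, 1]
```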

A Sparse Signal Reconstruction Perspective for Source Localization With Sensor Arrays

by Dmitry Malioutov, Müjdat Çetin, Alan S. Willsky, 2005
"... We present a source localization method based on a sparse representation of sensor measurements with an overcomplete basis composed of samples from the array manifold. We enforce sparsity by imposing penalties based on the 1-norm. A number of recent theoretical results on sparsifying properties of ..."
Abstract - Cited by 231 (6 self)
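
Malioutov et al. solve the resulting ℓ1-penalized problem via second-order cone programming; as a generic illustration of enforcing sparsity through a 1-norm penalty, here is a minimal iterative soft-thresholding (ISTA) sketch for min ½‖Ax − y‖² + λ‖x‖₁. The overcomplete dictionary A, the value of λ, and the two-source setup are toy assumptions.

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Iterative soft-thresholding for min 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - y)              # gradient of the smooth term
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 60))          # overcomplete dictionary stand-in
x_true = np.zeros(60)
x_true[[5, 17]] = [1.0, -0.7]              # two active "sources"
y = A @ x_true
x_hat = ista(A, y, lam=0.1)
print(np.flatnonzero(np.abs(x_hat) > 0.05))  # should recover indices {5, 17}
```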

Monaural sound source separation by nonnegative matrix factorization with temporal continuity and sparseness criteria

by Tuomas Virtanen - IEEE Trans. on Audio, Speech, and Language Processing, 2007
"... Abstract—An unsupervised learning algorithm for the separation of sound sources in one-channel music signals is presented. The algorithm is based on factorizing the magnitude spectrogram of an input signal into a sum of components, each of which has a fixed magnitude spectrum and a time-varying gain ..."
Abstract - Cited by 189 (30 self)
… of pitched musical sounds. The sparseness criterion did not produce significant improvements. Index Terms—Acoustic signal analysis, audio source separation, blind source separation, music, nonnegative matrix factorization, sparse coding, unsupervised learning.
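
Virtanen's cost function augments the factorization with temporal-continuity and sparseness terms; the core step is standard NMF of the magnitude spectrogram. A minimal sketch of the plain Euclidean multiplicative updates (the random "spectrogram" and component count are placeholders, not the paper's data or exact cost):

```python
import numpy as np

def nmf(V, rank, n_iter=200, eps=1e-9):
    """Plain Euclidean NMF, V ~= W @ H, via Lee-Seung multiplicative updates.
    Virtanen's method augments this cost with temporal-continuity and
    sparseness penalties on the gains H."""
    rng = np.random.default_rng(0)
    W = rng.random((V.shape[0], rank))     # spectra (freq x component)
    H = rng.random((rank, V.shape[1]))     # time-varying gains
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Magnitude-spectrogram stand-in: 257 frequency bins, 100 frames.
V = np.abs(np.random.default_rng(1).standard_normal((257, 100)))
W, H = nmf(V, rank=4)
print(np.linalg.norm(V - W @ H) / np.linalg.norm(V))  # relative reconstruction error
```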

Sparse Greedy Gaussian Process Regression

by Alex J. Smola, Peter Bartlett - Advances in Neural Information Processing Systems 13, 2001
"... We present a simple sparse greedy technique to approximate the maximum a posteriori estimate of Gaussian Processes with much improved scaling behaviour in the sample size m. In particular, computational requirements are O(n m), storage is O(nm), the cost for prediction is O(n) and the cost to comput ..."
Abstract - Cited by 131 (1 self)
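
A rough sketch of the idea, under stated assumptions: grow the set of basis points greedily, scoring a handful of random candidates per step by how much they lower the regularized MAP objective, and predict using only the selected subset. The RBF kernel, candidate count, and noise level below are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls**2)

def sparse_greedy_gp(X, y, n_basis=10, n_cand=20, noise=0.1, rng=None):
    """Greedy subset-of-regressors sketch: grow a basis set S, each step
    keeping the random candidate that most decreases the MAP objective
        Q(a) = 0.5 a^T (K_Sn K_nS + s^2 K_SS) a - a^T K_Sn y."""
    rng = rng or np.random.default_rng(0)
    n = len(X)
    S = []

    def solve(S):
        K_Sn = rbf(X[S], X)
        A = K_Sn @ K_Sn.T + noise**2 * rbf(X[S], X[S])
        b = K_Sn @ y
        a = np.linalg.solve(A + 1e-10 * np.eye(len(S)), b)
        return a, 0.5 * a @ A @ a - a @ b            # coefficients, objective

    for _ in range(n_basis):
        cands = rng.choice([i for i in range(n) if i not in S],
                           size=min(n_cand, n - len(S)), replace=False)
        S.append(min(cands, key=lambda i: solve(S + [i])[1]))

    alpha, _ = solve(S)
    return S, lambda Xq: rbf(Xq, X[S]) @ alpha        # O(|S|) per test point

X = np.linspace(0, 6, 200)[:, None]
y = np.sin(X[:, 0]) + 0.1 * np.random.default_rng(1).standard_normal(200)
S, predict = sparse_greedy_gp(X, y)
print(np.abs(predict(X) - np.sin(X[:, 0])).mean())    # small residual, 10 basis points
```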

Subspace Information Criterion for Sparse Regressors

by Koji Tsuda, Masashi Sugiyama, Klaus-Robert Müller, 2001
"... : Non-quadratic regularizers, in particular the ` 1 norm regularizer can yield sparse solutions that generalize well. In this work we propose the Generalized Subspace Information Criterion (GSIC) that allows to predict the generalization error for this useful family of regularizers. We show that und ..."
Abstract

Theoretical results on sparse representations of multiple-measurement vectors

by Jie Chen, Xiaoming Huo - IEEE Trans. Signal Process., 2006
"... Abstract — Multiple measurement vector (MMV) is a relatively new problem in sparse representations. Efficient methods have been proposed. Considering many theoretical results that are available in a simple case – single measure vector (SMV) – the theoretical analysis regarding MMV is lacking. In th ..."
Abstract - Cited by 147 (2 self)
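
A common baseline for the MMV problem is simultaneous orthogonal matching pursuit (S-OMP), which selects the dictionary atom best correlated with the residuals of all measurement vectors at once. A minimal sketch assuming a shared support and a toy Gaussian dictionary (S-OMP is a standard MMV method, not necessarily the algorithm analyzed in this paper):

```python
import numpy as np

def somp(A, Y, k):
    """Simultaneous OMP: recover X with at most k nonzero ROWS such that
    A @ X ~= Y, i.e. all measurement vectors share one sparse support."""
    support, R = [], Y.copy()
    for _ in range(k):
        # Atom most correlated with the residual across ALL columns of Y
        scores = np.linalg.norm(A.T @ R, axis=1)
        support.append(int(np.argmax(scores)))
        X_S, *_ = np.linalg.lstsq(A[:, support], Y, rcond=None)
        R = Y - A[:, support] @ X_S              # joint residual update
    X = np.zeros((A.shape[1], Y.shape[1]))
    X[support] = X_S
    return X, sorted(support)

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 80))
X_true = np.zeros((80, 5))                        # 5 vectors, shared support
X_true[[3, 12, 40]] = rng.standard_normal((3, 5))
Y = A @ X_true
X_hat, support = somp(A, Y, k=3)
print(support)                                    # expected: [3, 12, 40]
```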

Sparse feature learning for deep belief networks

by Y-Lan Boureau, Yann LeCun - Advances in Neural Information Processing Systems (NIPS 2007), 2007
"... Unsupervised learning algorithms aim to discover the structure hidden in the data, and to learn representations that are more suitable as input to a supervised machine than the raw input. Many unsupervised methods are based on reconstructing the input from the representation, while constraining the ..."
Abstract - Cited by 130 (14 self)
… the representation to have certain desirable properties (e.g., low dimension, sparsity). Others are based on approximating density by stochastically reconstructing the input from the representation. We describe a novel and efficient algorithm to learn sparse representations, and compare it theoretically
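
A minimal sketch of this family of methods: an encoder/decoder pair trained to reconstruct the input while an L1 penalty keeps the code sparse. This is a generic sparse autoencoder written for illustration, not the model proposed in the paper; the layer sizes, λ, and learning rate are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 20))                 # toy data, rows are samples
d, h, lam, lr = 20, 40, 0.05, 0.01        # input dim, code dim, L1 weight, step
We = 0.1 * rng.standard_normal((h, d))    # encoder weights
Wd = 0.1 * rng.standard_normal((d, h))    # decoder weights

for step in range(2000):
    x = X[rng.integers(len(X))]            # one sample, plain SGD
    z = We @ x                             # linear pre-activation
    c = np.maximum(z, 0.0)                 # ReLU keeps codes nonnegative
    r = Wd @ c - x                         # reconstruction error
    # loss = ||r||^2 + lam * ||c||_1 ; backprop written out by hand
    dc = 2 * Wd.T @ r + lam * np.sign(c)   # gradient w.r.t. the code
    dz = dc * (z > 0)                      # through the ReLU
    Wd -= lr * np.outer(2 * r, c)
    We -= lr * np.outer(dz, x)

codes = np.maximum(We @ X.T, 0.0)
print((codes < 1e-6).mean())               # fraction of inactive code units
```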

Functional data analysis for sparse longitudinal data.

by Fang Yao, Hans-Georg Müller, Jane-Ling Wang - Journal of the American Statistical Association, 2005
"... We propose a nonparametric method to perform functional principal components analysis for the case of sparse longitudinal data. The method aims at irregularly spaced longitudinal data, where the number of repeated measurements available per subject is small. In contrast, classical functional data a ..."
Abstract - Cited by 123 (24 self)
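
A heavily simplified sketch of the conditioning idea for sparse designs: pool cross-products across subjects to estimate the covariance surface, eigendecompose it, then predict each subject's principal-component scores from its few observations by best linear prediction. Binned estimates stand in for the paper's local-linear smoothers, the noise variance is assumed known, score scale depends on the grid discretization, and all data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 50)
phi1 = np.sqrt(2) * np.sin(np.pi * grid)          # true eigenfunctions
phi2 = np.sqrt(2) * np.sin(2 * np.pi * grid)

# Sparse longitudinal data: 3-5 irregular observations per subject.
subjects = []
for _ in range(200):
    idx = np.sort(rng.choice(50, size=rng.integers(3, 6), replace=False))
    xi = rng.standard_normal(2) * [2.0, 1.0]      # true FPC scores
    y = xi[0] * phi1[idx] + xi[1] * phi2[idx] + 0.2 * rng.standard_normal(len(idx))
    subjects.append((idx, y))

# Pooled covariance surface from off-diagonal cross-products
# (off-diagonal only, avoiding measurement-error inflation on the diagonal).
C = np.zeros((50, 50)); W = np.zeros((50, 50))
for idx, y in subjects:
    for a, ta in enumerate(idx):
        for b, tb in enumerate(idx):
            if ta != tb:
                C[ta, tb] += y[a] * y[b]; W[ta, tb] += 1
C = np.where(W > 0, C / np.maximum(W, 1), 0.0)

lam, Phi = np.linalg.eigh(C)                       # eigenfunctions on the grid
lam, Phi = lam[::-1][:2], Phi[:, ::-1][:, :2]

# Best linear predictor of subject 0's scores given its sparse observations.
sigma2 = 0.04                                      # noise variance assumed known
idx, y = subjects[0]
Phi_i = Phi[idx]
Sigma_i = Phi_i @ np.diag(lam) @ Phi_i.T + sigma2 * np.eye(len(idx))
scores = np.diag(lam) @ Phi_i.T @ np.linalg.solve(Sigma_i, y)
print(scores)                                      # estimated scores for subject 0
```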

Bayesian Information Criterion in a sparse linear

by Piotr Szulc
"... ar ..."
Abstract - Add to MetaCart
Abstract not found