Results 1–10 of 224
Sparse principal component analysis and iterative thresholding, The Annals of Statistics 41
Kronecker Compressive Sensing
Abstract

Cited by 38 (2 self)
Compressive sensing (CS) is an emerging approach for acquisition of signals having a sparse or compressible representation in some basis. While the CS literature has mostly focused on problems involving 1D signals and 2D images, many important applications involve signals that are multidimensional; in this case, CS works best with representations that encapsulate the structure of such signals in every dimension. We propose the use of Kronecker product matrices in CS for two purposes. First, we can use such matrices as sparsifying bases that jointly model the different types of structure present in the signal. Second, the measurement matrices used in distributed settings can be easily expressed as Kronecker product matrices. The Kronecker product formulation in these two settings enables the derivation of analytical bounds for sparse approximation of multidimensional signals and CS recovery performance as well as a means to evaluate novel distributed measurement schemes.
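The computational payoff of the Kronecker formulation is that a measurement operator Φ1 ⊗ Φ2 never needs to be formed explicitly: the standard identity (Φ1 ⊗ Φ2) vec(X) = vec(Φ2 X Φ1ᵀ) lets each dimension be measured separately. A minimal NumPy sketch of this identity (illustrative check, not the paper's code; all names and sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n1, n2 = 32, 32          # signal dimensions
m1, m2 = 12, 12          # measurements taken along each dimension
X = rng.standard_normal((n2, n1))

Phi1 = rng.standard_normal((m1, n1)) / np.sqrt(m1)
Phi2 = rng.standard_normal((m2, n2)) / np.sqrt(m2)

# Explicit (large) Kronecker-product operator ...
y_big = np.kron(Phi1, Phi2) @ X.flatten(order="F")
# ... versus the equivalent pair of small per-dimension products:
# (Phi1 kron Phi2) vec(X) = vec(Phi2 @ X @ Phi1.T)  (column-major vec)
y_fast = (Phi2 @ X @ Phi1.T).flatten(order="F")

assert np.allclose(y_big, y_fast)
```

The small-product form costs O(m2·n2·n1 + m2·n1·m1) operations instead of the O(m1·m2·n1·n2) of the dense Kronecker matrix, which is what makes distributed and multidimensional measurement schemes practical at scale.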
Sparsity constrained nonlinear optimization: Optimality conditions and algorithms, arXiv preprint arXiv:1203.4580
, 2012
Abstract

Cited by 33 (9 self)
This paper treats the problem of minimizing a general continuously differentiable function subject to sparsity constraints. We present and analyze several different optimality criteria which are based on the notions of stationarity and coordinatewise optimality. These conditions are then used to derive three numerical algorithms aimed at finding points satisfying the resulting optimality criteria: the iterative hard thresholding method and the greedy and partial sparse-simplex methods. The first algorithm is essentially a gradient projection method, while the remaining two algorithms are of coordinate descent type. The theoretical convergence of these methods and their relations to the derived optimality conditions are studied. The algorithms and results are illustrated by several numerical examples.
Hyperspectral Image Classification Using Dictionary-Based . . .
 IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
, 2011
Abstract

Cited by 29 (5 self)
A new sparsity-based algorithm for the classification of hyperspectral imagery is proposed in this paper. The proposed algorithm relies on the observation that a hyperspectral pixel can be sparsely represented by a linear combination of a few training samples from a structured dictionary. The sparse representation of an unknown pixel is expressed as a sparse vector whose nonzero entries correspond to the weights of the selected training samples. The sparse vector is recovered by solving a sparsity-constrained optimization problem, and it can directly determine the class label of the test sample. Two different approaches are proposed to incorporate the contextual information into the sparse recovery optimization problem in order to improve the classification performance. In the first approach, an explicit smoothing constraint is imposed on the problem formulation by forcing the vector Laplacian of the reconstructed image to become zero. In this approach, the reconstructed pixel of interest has similar spectral characteristics to its four nearest neighbors. The second approach is via a joint sparsity model where hyperspectral pixels in a small neighborhood around the test pixel are simultaneously represented by linear combinations of a few common training samples, which are weighted with a different set of coefficients for each pixel. The proposed sparsity-based algorithm is applied to several real hyperspectral images for classification. Experimental results show that our algorithm outperforms the classical supervised classifier, support vector machines, in most cases.
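The core classification rule the abstract describes, recover a sparse code over a dictionary of training samples, then assign the class whose atoms best reconstruct the pixel, can be sketched with a generic greedy solver (orthogonal matching pursuit). This is the standard sparse-representation classification recipe, not necessarily the paper's exact solver; the toy dictionary and labels below are invented for illustration:

```python
import numpy as np

def omp(D, y, s):
    """Greedy orthogonal matching pursuit: s-sparse code of y over dictionary D."""
    support, r = [], y.copy()
    for _ in range(s):
        support.append(int(np.argmax(np.abs(D.T @ r))))   # most correlated atom
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        r = y - D[:, support] @ coef                      # update residual
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Toy dictionary: columns 0-4 are "class 0" training samples, 5-9 are "class 1".
rng = np.random.default_rng(2)
D = rng.standard_normal((30, 10))
D /= np.linalg.norm(D, axis=0)
labels = np.array([0] * 5 + [1] * 5)

pixel = 1.3 * D[:, 7] + 0.01 * rng.standard_normal(30)    # close to a class-1 atom

x = omp(D, pixel, s=2)
# Class-wise residuals: reconstruct using only each class's atoms and coefficients.
residuals = [np.linalg.norm(pixel - D[:, labels == c] @ x[labels == c]) for c in (0, 1)]
predicted = int(np.argmin(residuals))
```

The joint sparsity variant in the abstract would apply the same idea to a neighborhood of pixels at once, forcing them to share a common support while keeping per-pixel coefficients.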
A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity
, 2011
Abstract

Cited by 21 (8 self)
The richness of natural images makes the quest for optimal representations in image processing and computer vision challenging. The latter observation has not prevented the design of image representations, which trade off between efficiency and complexity, while achieving accurate rendering of smooth regions as well as reproducing faithful contours and textures. The most recent ones, proposed in the past decade, share a hybrid heritage highlighting the multiscale and oriented nature of edges and patterns in images. This paper presents a panorama of the aforementioned literature on decompositions in multiscale, multi-orientation bases or dictionaries. They typically exhibit redundancy to improve sparsity in the transformed domain and sometimes its invariance with respect to simple geometric deformations (translation, rotation). Oriented multiscale dictionaries extend traditional wavelet processing and may offer rotation invariance. Highly redundant dictionaries require specific algorithms to simplify the search for an efficient (sparse) representation. We also discuss the extension of multiscale geometric decompositions to non-Euclidean domains such as the sphere or arbitrary meshed surfaces. The etymology of panorama suggests an overview, based on a choice of partially overlapping “pictures”.
Large-scale image classification with trace-norm regularization
 IEEE Conference on Computer Vision & Pattern Recognition (CVPR)
, 2012
Abstract

Cited by 19 (2 self)
With the advent of larger image classification datasets such as ImageNet, designing scalable and efficient multiclass classification algorithms is now an important challenge. We introduce a new scalable learning algorithm for large-scale multiclass image classification, based on the multinomial logistic loss and the trace-norm regularization penalty. Reframing the challenging nonsmooth optimization problem into a surrogate infinite-dimensional optimization problem with a regular ℓ1-regularization penalty, we propose a simple and provably efficient accelerated coordinate descent algorithm. Furthermore, we show how to perform efficient matrix computations in the compressed domain for quantized dense visual features, scaling up to 100,000s of examples, 1,000s-dimensional features, and 100s of categories. Promising experimental results on the “Fungus”, “Ungulate”, and “Vehicles” subsets of ImageNet are presented, where we show that our approach performs significantly better than state-of-the-art approaches for Fisher vectors with 16 Gaussians.
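The trace norm (nuclear norm) penalty at the heart of this line of work admits a closed-form proximal operator: soft-thresholding of the singular values, which is why it induces low-rank classifier matrices. A minimal NumPy sketch of that operator (a generic building block for trace-norm-regularized solvers, not the paper's accelerated algorithm):

```python
import numpy as np

def prox_trace_norm(W, tau):
    """Proximal operator of tau * ||W||_*: soft-threshold the singular values.

    Singular values below tau are zeroed, so the result has (weakly) lower rank.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt   # column-wise rescaling of U

rng = np.random.default_rng(3)
W = rng.standard_normal((6, 4))   # e.g. a (features x classes) weight matrix
W_shrunk = prox_trace_norm(W, tau=0.5)
```

In a proximal-gradient scheme for multinomial logistic loss plus trace-norm penalty, this operator would be applied to the weight matrix after each gradient step.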
A Sparsity-Based Model of Bounded Rationality
, 2011
Abstract

Cited by 17 (1 self)
This paper proposes a model in which the decision maker builds an optimally simplified representation of the world which is “sparse,” i.e., uses few parameters that are nonzero. Sparsity is formulated so as to lead to well-behaved, convex maximization problems. The agent’s choice of a representation of the world features a quadratic proxy for the benefits of thinking and a linear formulation for the costs of thinking. The agent then picks the optimal action given his representation of the world. This model yields a tractable procedure, which embeds the traditional rational agent as a particular case, and can be used for analyzing classic economic questions under bounded rationality. For instance, the paper studies how boundedly rational agents select a consumption bundle while paying imperfect attention to prices, and how frictionless firms set prices optimally in response. This leads to a novel mechanism for price rigidity. The model is also used to examine boundedly rational intertemporal consumption problems and portfolio choice with imperfect understanding of returns.
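The mechanism by which a quadratic benefit proxy plus a linear thinking cost yields sparse attention can be illustrated with a stylized one-line computation. This is not the paper's exact formulation; the importance weights Lambda, the cost kappa, and the objective below are a hypothetical stand-in chosen so the closed-form solution is easy to verify:

```python
import numpy as np

Lambda = np.array([5.0, 1.0, 0.3, 0.05])   # importance of each world parameter
kappa = 1.0                                 # linear cost per unit of attention

# Stylized problem: choose attention weights m_i in [0, 1] to minimize
#   sum_i (1 - m_i)^2 * Lambda_i  +  kappa * m_i
# (quadratic loss from inattention, linear cost of thinking).
# First-order condition gives m_i = 1 - kappa / (2 * Lambda_i), clipped to [0, 1]:
m = np.clip(1.0 - kappa / (2.0 * Lambda), 0.0, 1.0)
# Parameters with Lambda_i <= kappa / 2 receive exactly zero attention: sparsity.
```

The clipping at zero is what produces genuine sparsity rather than merely shrunken attention, mirroring how the paper's formulation embeds the fully rational agent as the kappa → 0 limit.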
Description of the minimizers of least squares regularized with ℓ0-norm. Uniqueness of the global minimizer
 SIAM J. IMAGING SCIENCES
, 2013
Statistical Compressive Sensing of Gaussian Mixture Models
, 2010
"... A new framework of compressive sensing (CS), namely statistical compressive sensing (SCS), that aims at efficiently sampling a collection of signals that follow a statistical distribution and achieving accurate reconstruction on average, is introduced. For signals following a Gaussian distribution, ..."
Abstract

Cited by 14 (3 self)
A new framework of compressive sensing (CS), namely statistical compressive sensing (SCS), is introduced; it aims at efficiently sampling a collection of signals that follow a statistical distribution and achieving accurate reconstruction on average. For signals following a Gaussian distribution, with Gaussian or Bernoulli sensing matrices of O(k) measurements (considerably smaller than the O(k log(N/k)) required by conventional CS, where N is the signal dimension), and with an optimal decoder implemented with linear filtering (significantly faster than the pursuit decoders applied in conventional CS), the error of SCS is shown to be tightly upper bounded by a constant times the best k-term approximation error, with overwhelming probability. The failure probability is also significantly smaller than that of conventional CS. Stronger yet simpler results further show that for any sensing matrix, the error of Gaussian SCS is upper bounded by a constant times the best k-term approximation with probability one, and the bound constant can be efficiently calculated. For signals following Gaussian mixture models, SCS with a piecewise linear decoder is introduced and shown to produce better results for real images than conventional CS based on sparse models. Index Terms — Compressive sensing, Gaussian mixture models
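The "optimal decoder implemented with linear filtering" for a Gaussian signal is the conditional mean, which in the noiseless case reduces to a single linear solve: x̂ = Σ Φᵀ (Φ Σ Φᵀ)⁻¹ y. A toy NumPy sketch under an assumed fast-decaying covariance spectrum (the covariance construction and sizes are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)
N, m = 64, 16   # signal dimension and number of measurements

# Gaussian signal model with a fast-decaying spectrum (compressible on average).
U, _ = np.linalg.qr(rng.standard_normal((N, N)))
eigvals = 1.0 / np.arange(1, N + 1) ** 2
Sigma = U @ np.diag(eigvals) @ U.T

x = U @ (np.sqrt(eigvals) * rng.standard_normal(N))   # sample x ~ N(0, Sigma)
Phi = rng.standard_normal((m, N)) / np.sqrt(m)        # Gaussian sensing matrix
y = Phi @ x

# Noiseless MMSE decoder: pure linear filtering, no iterative pursuit needed.
x_hat = Sigma @ Phi.T @ np.linalg.solve(Phi @ Sigma @ Phi.T, y)
```

Note the decoder is a fixed linear map once Φ and Σ are known, which is the source of the speed advantage over pursuit decoders; its estimate reproduces the measurements exactly (Φx̂ = y) while filling in the unobserved components according to the prior covariance.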