Results 1–10 of 26
Hyperspectral Image Classification Using Dictionary-Based …
IEEE Transactions on Geoscience and Remote Sensing, 2011
Cited by 29 (5 self)
A new sparsity-based algorithm for the classification of hyperspectral imagery is proposed in this paper. The proposed algorithm relies on the observation that a hyperspectral pixel can be sparsely represented by a linear combination of a few training samples from a structured dictionary. The sparse representation of an unknown pixel is expressed as a sparse vector whose nonzero entries correspond to the weights of the selected training samples. The sparse vector is recovered by solving a sparsity-constrained optimization problem, and it can directly determine the class label of the test sample. Two different approaches are proposed to incorporate the contextual information into the sparse recovery optimization problem in order to improve the classification performance. In the first approach, an explicit smoothing constraint is imposed on the problem formulation by forcing the vector Laplacian of the reconstructed image to become zero. In this approach, the reconstructed pixel of interest has similar spectral characteristics to its four nearest neighbors. The second approach is via a joint sparsity model where hyperspectral pixels in a small neighborhood around the test pixel are simultaneously represented by linear combinations of a few common training samples, which are weighted with a different set of coefficients for each pixel. The proposed sparsity-based algorithm is applied to several real hyperspectral images for classification. Experimental results show that our algorithm outperforms the classical supervised classifier, support vector machines, in most cases.
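The residual-based labeling step this abstract describes can be sketched in a few lines. This is a minimal illustration on hypothetical data: a greedy orthogonal matching pursuit stands in for the paper's sparsity-constrained solver, and `src_label` simply picks the class whose training atoms reconstruct the pixel with the smallest residual.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y with k columns of D."""
    residual, support = y.astype(float).copy(), []
    coef = np.zeros(0)
    for _ in range(k):
        corr = np.abs(D.T @ residual)
        corr[support] = -1.0                  # never re-select an atom
        support.append(int(np.argmax(corr)))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

def src_label(D, labels, y, k=3):
    """Assign the class whose own training atoms best reconstruct the pixel."""
    x = omp(D, y, k)
    classes = np.unique(labels)
    errors = [np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
              for c in classes]
    return classes[int(np.argmin(errors))]
```

The contextual variants (the Laplacian smoothing constraint and the joint sparsity model) modify the recovery problem itself and are not shown here.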
Simultaneous joint sparsity model for target detection in hyperspectral imagery
IEEE Geoscience and Remote Sensing Letters, 2011
Cited by 7 (2 self)
Abstract—This letter proposes a simultaneous joint sparsity model for target detection in hyperspectral imagery (HSI). The key innovative idea here is that hyperspectral pixels within a small neighborhood in the test image can be simultaneously represented by a linear combination of a few common training samples but weighted with a different set of coefficients for each pixel. The joint sparsity model automatically incorporates the interpixel correlation within the HSI by assuming that neighboring pixels usually consist of similar materials. The sparse representations of the neighboring pixels are obtained by simultaneously decomposing the pixels over a given dictionary consisting of training samples of both the target and background classes. The recovered sparse coefficient vectors are then directly used for determining the labels of the test pixels. Simulation results show that the proposed algorithm outperforms the classical hyperspectral target detection algorithms, such as the popular spectral matched filters, matched subspace detectors, and adaptive subspace detectors, as well as binary classifiers such as support vector machines. Index Terms—Hyperspectral imagery, joint sparsity model, simultaneous orthogonal matching pursuit, sparse representation, target detection.
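The simultaneous decomposition step can be sketched with simultaneous orthogonal matching pursuit (SOMP): all pixels in the neighborhood are forced to share one support, but each keeps its own weights. A toy sketch on hypothetical data, not the letter's full detector:

```python
import numpy as np

def somp(D, Y, k):
    """Simultaneous OMP sketch: the columns of Y (pixels in a neighborhood)
    share one support of k atoms, each column with its own coefficients."""
    residual, support = Y.astype(float).copy(), []
    C = np.zeros((0, Y.shape[1]))
    for _ in range(k):
        corr = np.abs(D.T @ residual).sum(axis=1)   # aggregate over pixels
        corr[support] = -1.0                        # never re-select an atom
        support.append(int(np.argmax(corr)))
        C, *_ = np.linalg.lstsq(D[:, support], Y, rcond=None)
        residual = Y - D[:, support] @ C
    return support, C
```

In the letter's setting the dictionary would contain target and background training samples, and the recovered coefficients (or class-wise residuals) would decide the labels.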
One-shot Learning Gesture Recognition from RGB-D Data Using Bag of Features
Cited by 5 (0 self)
For one-shot learning gesture recognition, two important challenges are: how to extract distinctive features and how to learn a discriminative model from only one training sample per gesture class. For feature extraction, a new spatiotemporal feature representation called 3D enhanced motion scale-invariant feature transform (3D EMoSIFT) is proposed, which fuses RGB-D data. Compared with other features, the new feature set is invariant to scale and rotation, and has more compact and richer visual representations. For learning a discriminative model, all features extracted from training samples are clustered with the k-means algorithm to learn a visual codebook. Then, unlike the traditional bag-of-features (BoF) models that use vector quantization (VQ) to map each feature onto a single visual codeword, a sparse coding method named simultaneous orthogonal matching pursuit (SOMP) is applied, so that each feature can be represented by a linear combination of a small number of codewords. Compared with VQ, SOMP leads to a much lower reconstruction error and achieves better performance. The proposed approach has been evaluated on the ChaLearn gesture database, and the result has been ranked among the top-performing techniques in the ChaLearn gesture challenge (round 2).
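The codebook-learning step the abstract mentions can be sketched with a plain Lloyd's k-means over the extracted descriptors. This is an illustrative toy (the function name and the synthetic descriptors are hypothetical, and the paper's 3D EMoSIFT features are not reproduced):

```python
import numpy as np

def kmeans_codebook(F, k, iters=20, seed=0):
    """Lloyd's k-means on the descriptor rows of F; returns a (k, dim)
    codebook and the codeword index assigned to each descriptor."""
    rng = np.random.default_rng(seed)
    codebook = F[rng.choice(len(F), size=k, replace=False)].copy()
    for _ in range(iters):
        # squared distance of every descriptor to every codeword
        d2 = ((F[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        assign = d2.argmin(axis=1)
        for j in range(k):
            members = F[assign == j]
            if len(members):            # keep the old codeword if a cluster empties
                codebook[j] = members.mean(axis=0)
    return codebook, assign
```

VQ would then map each descriptor to its single `argmin` codeword, whereas SOMP instead expresses it as a sparse combination of several codewords over this same codebook.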
Spatial-aware dictionary learning for hyperspectral image classification
IEEE Transactions on Geoscience and Remote Sensing, 2015
Cited by 3 (0 self)
Abstract—This paper presents a structured dictionary-based model for hyperspectral data that incorporates both spectral and contextual characteristics of a spectral sample, with the goal of hyperspectral image classification. The idea is to partition the pixels of a hyperspectral image into a number of spatial neighborhoods called contextual groups and to model each pixel with a linear combination of a few dictionary elements learned from the data. Since pixels inside a contextual group are often made up of the same materials, their linear combinations are constrained to use common elements from the dictionary. To this end, dictionary learning is carried out with a joint sparse regularizer to induce a common sparsity pattern in the sparse coefficients of each contextual group. The sparse coefficients are then used for classification using a linear SVM. Experimental results on a number of real hyperspectral images confirm the effectiveness of the proposed representation for hyperspectral image classification. Moreover, experiments with simulated multispectral data show that the proposed model is capable of finding representations that may effectively be used for classification of multispectral-resolution samples. Index Terms—Classification, hyperspectral imagery, dictionary learning, probabilistic joint sparse model, linear support vector machines.
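The joint sparse regularizer can be illustrated by the proximal operator of the ℓ2,1 norm: it shrinks whole rows of a group's coefficient matrix, so every pixel in a contextual group keeps or drops a dictionary element together. This is a sketch of the regularizer's effect only, not the paper's learning algorithm:

```python
import numpy as np

def row_soft_threshold(X, tau):
    """Proximal operator of tau * ||X||_{2,1}: each row (one dictionary
    element, columns = pixels of a group) is shrunk or zeroed as a unit,
    inducing a common sparsity pattern across the group."""
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale
```

Rows with small joint energy vanish entirely, which is exactly the shared-support behavior the abstract describes.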
Local Sparse Coding for Image Classification and Retrieval
Cited by 2 (0 self)
The success of sparse representations in image modeling and recovery has motivated their use in computer vision applications. Object recognition has been effectively performed by aggregating sparse codes of local features in an image at multiple spatial scales. Though sparse coding guarantees a high-fidelity representation, it does not exploit the dependence between the local features. By incorporating suitable locality constraints, sparse coding can be regularized to obtain similar codes for similar features. In this paper, we develop an algorithm to design dictionaries for local sparse coding of image descriptors and perform object recognition using the learned dictionaries. Furthermore, we propose to perform kernel local sparse coding in order to exploit the nonlinear similarity of features, and describe an algorithm to learn dictionaries when the Radial Basis Function (RBF) kernel is used. In addition, we develop a supervised local sparse coding approach for image retrieval using sub-image heterogeneous features. Simulation results for object recognition demonstrate that the two proposed algorithms achieve higher classification accuracies in comparison to other sparse coding based approaches. By performing image retrieval on the Microsoft Research Cam
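One simple way to realize the locality constraint described above is to encode each descriptor over only its nearest codewords, so that similar features automatically receive similar codes. A minimal sketch under that assumption (`local_code` and the codebook are hypothetical; the paper's dictionary-learning and kernel variants are not shown):

```python
import numpy as np

def local_code(codebook, f, n_neighbors=3):
    """Locality-constrained coding sketch: descriptor f is reconstructed
    using only its n_neighbors nearest codewords (rows of codebook)."""
    d2 = ((codebook - f) ** 2).sum(axis=1)        # distance to every codeword
    nearest = np.argsort(d2)[:n_neighbors]        # restrict the support locally
    coef, *_ = np.linalg.lstsq(codebook[nearest].T, f, rcond=None)
    code = np.zeros(len(codebook))
    code[nearest] = coef
    return code
```

Because two nearby descriptors select overlapping neighbor sets, their codes overlap as well, which is the regularization effect the paragraph describes.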
Parallel Selective Algorithms for Nonconvex Big Data Optimization
IEEE Transactions on Signal Processing, 2015
Cited by 1 (0 self)
Abstract—We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a (block) separable nonsmooth, convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. Our framework is very flexible and includes both fully parallel Jacobi schemes and Gauss–Seidel (i.e., sequential) ones, as well as virtually all possibilities “in between” with only a subset of variables updated at each iteration. Our theoretical convergence results improve on existing ones, and numerical results on LASSO, logistic regression, and some nonconvex quadratic problems show that the new method consistently outperforms existing algorithms. Index Terms—Parallel optimization, variable selection, distributed methods, Jacobi method, LASSO, sparse solution.
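For the LASSO instance, the fully parallel (Jacobi) end of the spectrum reduces to a proximal-gradient step applied to all coordinates simultaneously from the previous iterate. A minimal ISTA-like sketch of that simplest setting, not the paper's general selective scheme:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding: proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def jacobi_lasso(A, b, lam, iters=200, step=None):
    """Fully parallel proximal gradient for
    min 0.5*||Ax - b||^2 + lam*||x||_1: every coordinate is updated
    at once using the gradient at the previous iterate."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft(x - step * (A.T @ (A @ x - b)), step * lam)
    return x
```

Updating only a subset of coordinates per iteration, as the framework allows, would interpolate between this Jacobi extreme and a sequential Gauss–Seidel sweep.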
Direct Optimization of the Dictionary Learning Problem
Cited by 1 (0 self)
Abstract—A novel way of solving the dictionary learning problem is proposed in this paper. It is based on a so-called direct optimization, as it avoids the usual technique of alternately optimizing the coefficients of a sparse decomposition and optimizing the dictionary atoms. The algorithm we advocate simply performs a joint proximal gradient descent step over the dictionary atoms and the coefficient matrix. As such, we denote the algorithm a one-step block-coordinate proximal gradient descent, and we show that it can be applied to a broader class of nonconvex optimization problems than dictionary learning. After deriving the algorithm, we also provide in-depth discussions on how the step sizes of the proximal gradient descent are chosen. In addition, we uncover the connection between our direct approach and the alternating optimization method for dictionary learning. The main advantage of our novel algorithm is that, as suggested by our simulation study, it is far more efficient than alternating optimization algorithms. Index Terms—dictionary learning, nonconvex proximal methods, one-step block-coordinate descent.
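The one-step idea can be caricatured in a few lines: both the dictionary and the coefficient matrix take a gradient step computed at the same iterate, followed by their proximal maps (soft-thresholding for the codes, atom renormalization for the dictionary). The step sizes and renormalization rule here are illustrative placeholders, not the paper's derived choices:

```python
import numpy as np

def joint_prox_step(Y, D, X, lr_d, lr_x, lam):
    """One joint proximal-gradient step on (D, X) for
    min 0.5*||Y - D X||_F^2 + lam*||X||_1: unlike alternating
    minimization, both blocks move using gradients at the current point."""
    R = D @ X - Y                        # shared residual
    D_new = D - lr_d * (R @ X.T)         # gradient step on the dictionary
    X_new = X - lr_x * (D.T @ R)         # gradient step on the codes
    X_new = np.sign(X_new) * np.maximum(np.abs(X_new) - lr_x * lam, 0.0)
    D_new /= np.maximum(np.linalg.norm(D_new, axis=0), 1e-12)  # unit-norm atoms
    return D_new, X_new
```

Iterating this single fused step replaces the inner solves of an alternating scheme, which is where the reported efficiency gain comes from.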
Improved MEG/EEG source localization with reweighted mixed-norms
Abstract—MEG/EEG source imaging allows for the noninvasive analysis of brain activity with high temporal and good spatial resolution. As the bioelectromagnetic inverse problem is ill-posed, a priori information is required to find a unique source estimate. For the analysis of evoked brain activity, spatial sparsity of the neuronal activation can be assumed. Due to their convexity, ℓ1-norm-based constraints are often used for this, which however lead to source estimates biased in amplitude and often suboptimal in terms of source selection. As an alternative, nonconvex regularization functionals such as ℓp-quasi-norms with 0 < p < 1 can be used. In this work, we present a MEG/EEG inverse solver based on an ℓ2,0.5-quasi-norm penalty promoting spatial sparsity as well as temporal stationarity of the brain activity. For solving the resulting nonconvex optimization problem, we propose the iterative reweighted Mixed Norm Estimate, which is based on reweighted convex optimization and combines a block coordinate descent scheme with an active set strategy to solve each surrogate problem efficiently. We provide empirical evidence, based on simulations and analysis of MEG data, that the proposed method outperforms the standard Mixed Norm Estimate in terms of active source identification and amplitude bias. Keywords—MEG; EEG; bioelectromagnetic inverse problem; structured sparsity; iterative reweighted optimization algorithm
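The reweighting idea behind such nonconvex penalties can be sketched with a classic iteratively reweighted least-squares loop: each outer pass solves a weighted ridge problem, then refreshes the weights from the current estimate so small coefficients are penalized more and driven to zero. This is an IRLS stand-in for the reweighted mixed-norm idea on a generic underdetermined system, not the paper's block coordinate descent / active set solver:

```python
import numpy as np

def reweighted_sparse(A, b, iters=50, lam=1e-6, eps=1e-3):
    """IRLS sketch: repeat min ||Ax - b||^2 + lam * sum(x_i^2 / (w_i + eps)),
    then set w_i = |x_i|, imitating a nonconvex l_p (p < 1) penalty."""
    m, n = A.shape
    w = np.ones(n)
    x = np.zeros(n)
    for _ in range(iters):
        W = np.diag(w + eps)
        # closed form of the weighted ridge via the push-through identity
        x = W @ A.T @ np.linalg.solve(A @ W @ A.T + lam * np.eye(m), b)
        w = np.abs(x)                 # large coefficients are penalized less
    return x
```

In the paper's setting the unknowns are rows of a source time-course matrix and the surrogates are weighted mixed-norm problems, but the reweighting mechanism is analogous.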