Results 1 - 5 of 5
Discriminative Recurrent Sparse AutoEncoders
, 2013
Abstract

Cited by 4 (2 self)
We present the discriminative recurrent sparse autoencoder model, comprising a recurrent encoder of rectified linear units, unrolled for a fixed number of iterations, and connected to two linear decoders that reconstruct the input and predict its supervised classification. Training via backpropagation-through-time initially minimizes an unsupervised sparse reconstruction error; the loss function is then augmented with a discriminative term on the supervised classification. The depth implicit in the temporally-unrolled form allows the system to exhibit far more representational power, while keeping the number of trainable parameters fixed. From an initially unstructured network the hidden units differentiate into categorical-units, each of which represents an input prototype with a well-defined class; and part-units representing deformations of these prototypes. The learned organization of the recurrent encoder is hierarchical: part-units are driven directly by the input, whereas the activity of categorical-units builds up over time through interactions with the part-units. Even using a small number of hidden units per layer, discriminative recurrent sparse autoencoders achieve excellent performance on MNIST.
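The architecture described above can be sketched as a minimal numpy forward pass: an input projection and a recurrent weight matrix are shared across all unrolled iterations (so depth grows with the iteration count while the parameter count stays fixed), and two linear decoders read out a reconstruction and class scores. All names, dimensions, and initializations below are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: input size, hidden units, classes, unroll steps.
n_in, n_hid, n_cls, T = 784, 100, 10, 11

# Encoder parameters, shared across all T iterations.
E = rng.normal(scale=0.01, size=(n_hid, n_in))   # input projection
S = rng.normal(scale=0.01, size=(n_hid, n_hid))  # recurrent weights
b = np.zeros(n_hid)

# Two linear decoders: one reconstructs the input, one predicts the class.
D_rec = rng.normal(scale=0.01, size=(n_in, n_hid))
D_cls = rng.normal(scale=0.01, size=(n_cls, n_hid))

def encode(x, steps=T):
    """Recurrent encoder of rectified linear units, unrolled `steps` times."""
    z = np.maximum(0.0, E @ x + b)                   # first iteration
    for _ in range(steps - 1):
        z = np.maximum(0.0, E @ x + S @ z + b)       # recurrent refinement
    return z

x = rng.normal(size=n_in)
z = encode(x)
x_hat = D_rec @ z    # reconstruction head
logits = D_cls @ z   # classification head
```

Training would backpropagate through all unrolled iterations, first against a sparse reconstruction loss and then with an added discriminative term, per the abstract.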
Learning Robust Subspace Clustering
, 2013
Abstract

Cited by 1 (0 self)
We propose a low-rank transformation-learning framework to robustify subspace clustering. Many high-dimensional data, such as face images and motion sequences, lie in a union of low-dimensional subspaces. The subspace clustering problem has been extensively studied in the literature to partition such high-dimensional data into clusters corresponding to their underlying low-dimensional subspaces. However, low-dimensional intrinsic structures are often violated for real-world observations, as they can be corrupted by errors or deviate from ideal models. We propose to address this by learning a linear transformation on subspaces using matrix rank, via its convex surrogate the nuclear norm, as the optimization criterion. The learned linear transformation restores a low-rank structure for data from the same subspace and, at the same time, forces a high-rank structure for data from different subspaces. In this way, we reduce variations within the subspaces and increase separations between the subspaces for more accurate subspace clustering. This learned Robust Subspace Clustering framework significantly enhances the performance of existing subspace clustering methods. To exploit the low-rank structures of the transformed subspaces, we further introduce a subspace clustering technique, called Robust Sparse Subspace Clustering, which efficiently combines robust PCA with sparse modeling. We also discuss the online learning of the transformation, and learning of the transformation while simultaneously reducing the data dimensionality. Extensive experiments using public datasets are presented, showing that the proposed approach significantly outperforms state-of-the-art subspace clustering methods.
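The rank-based criterion in this abstract can be illustrated with a small numpy sketch: the nuclear norm (sum of singular values) stands in for rank, and a candidate objective scores a transformation by how low-rank it makes each cluster while keeping the union of clusters high-rank. The toy data, function names, and the exact form of the objective are assumptions for illustration, not the paper's notation.

```python
import numpy as np

rng = np.random.default_rng(0)

def nuclear_norm(M):
    """Nuclear norm: sum of singular values, the convex surrogate of rank."""
    return np.linalg.svd(M, compute_uv=False).sum()

# Toy data: two clusters of 20 points drawn from two different
# one-dimensional subspaces of R^5.
u1, u2 = rng.normal(size=(2, 5))
X1 = np.outer(u1, rng.normal(size=20))  # rank-1: lies on span(u1)
X2 = np.outer(u2, rng.normal(size=20))  # rank-1: lies on span(u2)

def criterion(T, clusters):
    """Illustrative objective: a good transformation T makes each
    transformed cluster low-rank (small `within`) while keeping the
    transformed union of all clusters high-rank (large `between`)."""
    within = sum(nuclear_norm(T @ Xc) for Xc in clusters)
    between = nuclear_norm(T @ np.hstack(clusters))
    return within - between

score = criterion(np.eye(5), [X1, X2])
```

By the triangle inequality for the nuclear norm, `within` is never smaller than `between`, so the score is non-negative and a learned transformation would aim to drive it down.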
Editor: *
, 2013
Abstract
A low-rank transformation learning framework for subspace clustering and classification is here proposed. Many high-dimensional data, such as face images and motion sequences, approximately lie in a union of low-dimensional subspaces. The corresponding subspace clustering problem has been extensively studied in the literature to partition such high-dimensional data into clusters corresponding to their underlying low-dimensional subspaces. However, low-dimensional intrinsic structures are often violated for real-world observations, as they can be corrupted by errors or deviate from ideal models. We propose to address this by learning a linear transformation on subspaces using matrix rank, via its convex surrogate the nuclear norm, as the optimization criterion. The learned linear transformation restores a low-rank structure for data from the same subspace and, at the same time, forces a high-rank structure for data from different subspaces. In this way, we reduce variations within the subspaces and increase separation between the subspaces for a more robust subspace clustering. This proposed learned robust subspace clustering framework significantly enhances the performance of existing subspace clustering methods. Basic theoretical results here presented help to further support the underlying framework. To exploit the ...
LEARNING TRANSFORMATIONS
Abstract
A low-rank transformation learning framework for subspace clustering and classification is here proposed. Many high-dimensional data, such as face images and motion sequences, approximately lie in a union of low-dimensional subspaces. The corresponding subspace clustering problem has been extensively studied in the literature, partitioning such high-dimensional data into clusters corresponding to their underlying low-dimensional subspaces. However, low-dimensional intrinsic structures are often violated for real-world observations, as they can be corrupted by errors or deviate from ideal models. We propose to address this by learning a linear transformation on subspaces using matrix rank, via its convex surrogate the nuclear norm, as the optimization criterion. The learned linear transformation restores a low-rank structure for data from the same subspace and, at the same time, forces a high-rank structure for data from different subspaces. In this way, we reduce variations within the subspaces, and increase separation between the subspaces for improved subspace clustering and classification.
Learning computationally efficient dictionaries and their implementation as fast transforms
Abstract
Dictionary learning is a branch of signal processing and machine learning that aims at finding a frame (called a dictionary) in which some training data admits a sparse representation. The sparser the representation, the better the dictionary. The resulting dictionary is in general a dense matrix, and its manipulation can be computationally costly both at the learning stage and later in the usage of this dictionary, for tasks such as sparse coding. Dictionary learning is thus limited to relatively small-scale problems. In this paper, inspired by usual fast transforms, we consider a general dictionary structure that allows cheaper manipulation, and propose an algorithm to learn such dictionaries – and their fast implementation – over training data. The approach is demonstrated experimentally with the factorization of the Hadamard matrix and with synthetic dictionary learning experiments.
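The Hadamard example mentioned in the abstract can be illustrated concretely: the 2^n x 2^n Hadamard matrix factors into n sparse "butterfly" matrices with only two nonzeros per row, which is exactly the kind of structure that turns a dense dictionary multiply of cost O(N^2) into a fast transform of cost O(N log N). This is the classical fast Walsh-Hadamard factorization, sketched here in numpy; it is not the paper's learning algorithm, only the structure it targets.

```python
import numpy as np
from functools import reduce

# The 2x2 Hadamard "butterfly" building block.
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])

def hadamard_factors(n):
    """Sparse factors whose product is the 2**n x 2**n Hadamard matrix.
    Each factor is I_{2^i} (x) H2 (x) I_{2^(n-1-i)} and has exactly two
    nonzeros per row, so applying all n of them costs O(N log N)
    operations instead of the O(N**2) of the dense matrix."""
    return [
        np.kron(np.kron(np.eye(2 ** i), H2), np.eye(2 ** (n - 1 - i)))
        for i in range(n)
    ]

n = 3
factors = hadamard_factors(n)        # three sparse 8x8 butterfly factors
H = reduce(np.matmul, factors)       # their product
H_dense = reduce(np.kron, [H2] * n)  # the dense Hadamard matrix H2^(x n)
```

The product of the three sparse factors reproduces the dense 8x8 Hadamard matrix exactly; the learning algorithm in the paper seeks factorizations of this flavor for dictionaries fitted to data.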