Results 1–10 of 66
Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization
IEEE Trans. Image Process., 2011
Cited by 59 (11 self)
Abstract—As a powerful statistical image modeling technique, sparse representation has been successfully used in various image restoration applications. The success of sparse representation owes to the development of ℓ1-norm optimization techniques and to the fact that natural images are intrinsically sparse in some domains. The image restoration quality largely depends on whether the employed sparse domain can represent the underlying image well. Considering that image contents can vary significantly across different images, or across different patches within a single image, we propose to learn various sets of bases from a pre-collected dataset of example image patches; then, for a given patch to be processed, one set of bases is adaptively selected to characterize the local sparse domain. We further introduce two adaptive regularization terms into the sparse representation framework. First, a set of autoregressive (AR) models is learned from the dataset of example image patches, and the AR models best fitted to a given patch are adaptively selected to regularize the local image structures. Second, image nonlocal self-similarity is introduced as another regularization term. In addition, the sparsity regularization parameter is adaptively estimated for better image restoration performance. Extensive experiments on image deblurring and super-resolution validate that, by using adaptive sparse domain selection and adaptive regularization, the proposed method achieves much better results than many state-of-the-art algorithms in terms of both PSNR and visual perception.
Index Terms—Deblurring, image restoration (IR), regularization, sparse representation, super-resolution.
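The ℓ1-norm sparse coding this abstract builds on can be sketched in a few lines of NumPy. The function name, parameters, and the choice of ISTA as the solver are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ista_sparse_code(D, y, lam=0.1, n_iters=200):
    """Solve min_x 0.5*||y - D x||^2 + lam*||x||_1 by ISTA (illustrative sketch).

    D: (m, k) set of bases (dictionary), y: (m,) image patch, lam: sparsity weight.
    """
    L = np.linalg.norm(D, 2) ** 2               # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ x - y)                # gradient of the quadratic data term
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x
```

The restored patch is then the reconstruction `D @ x`; the adaptive parts of the paper (basis selection, AR and nonlocal regularizers) would add further terms to this objective.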
Blind compressed sensing
IEEE Trans. Inf. Theory, 2011
Cited by 15 (3 self)
The fundamental principle underlying compressed sensing is that a signal which is sparse under some basis representation can be recovered from a small number of linear measurements. However, prior knowledge of the sparsity basis is essential for the recovery process. This work introduces the concept of blind compressed sensing, which avoids the need to know the sparsity basis in both the sampling and the recovery process. We suggest three possible constraints on the sparsity basis that can be added to the problem in order to guarantee a unique solution. For each constraint, we prove conditions for uniqueness and suggest a simple method to retrieve the solution. We demonstrate through simulations that our methods can achieve results similar to those of standard compressed sensing methods, which rely on prior knowledge of the sparsity basis, as long as the signals are sparse enough. This offers a general sampling and reconstruction system that fits all sparse signals, regardless of the sparsity basis, under the conditions and constraints presented in this work.
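The "fundamental principle" in the first sentence, recovering a sparse signal from few linear measurements when the basis is known, can be illustrated with a standard greedy solver. This is a generic sketch of classical (non-blind) compressed sensing, not of this paper's blind variants:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedy recovery of a k-sparse x from y = A x.

    A: (m, n) sensing matrix with m < n, y: (m,) measurements.
    """
    residual = y.copy()
    support = []
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        # least-squares fit on the selected columns, then update the residual
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x
```

Blind compressed sensing removes the assumption that the sparsifying basis behind `A` is known, replacing it with the structural constraints the abstract mentions.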
Regularized Latent Semantic Indexing
Cited by 15 (2 self)
Topic modeling can boost the performance of information retrieval, but its real-world application is limited by scalability issues. Scaling to larger document collections via parallelization is an active area of research, but most solutions require drastic steps such as vastly reducing the input vocabulary. We introduce Regularized Latent Semantic Indexing (RLSI), a new method designed for parallelization. It is as effective as existing topic models and scales to larger datasets without reducing the input vocabulary. RLSI formalizes topic modeling as the problem of minimizing a quadratic loss function regularized by the ℓ1- and/or ℓ2-norm. This formulation allows the learning process to be decomposed into multiple sub-optimization problems that can be solved in parallel, for example via MapReduce. In particular, we propose placing the ℓ1-norm on topics and the ℓ2-norm on document representations, yielding a model with compact, readable topics that is useful for retrieval. Relevance ranking experiments on three TREC datasets show that RLSI performs better than LSI, PLSI, and LDA, and the improvements are sometimes statistically significant. Experiments on a web dataset containing about 1.6 million documents and 7 million terms demonstrate a similar boost in performance on a larger corpus and vocabulary than in previous studies.
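The regularized factorization described here can be sketched as alternating updates, assuming the objective min 0.5·||D − UV||²_F + λ1·||U||₁ + 0.5·λ2·||V||²_F with the ℓ1-norm on topics U and the ℓ2-norm on document representations V. The ridge/soft-threshold updates below are illustrative, not the paper's exact per-column solver:

```python
import numpy as np

def rlsi_step(D, U, V, lam1=0.5, lam2=0.1):
    """One alternating update for
        min_{U,V} 0.5*||D - U V||_F^2 + lam1*||U||_1 + 0.5*lam2*||V||_F^2.

    D: (terms, docs) term-document matrix, U: (terms, topics), V: (topics, docs).
    Each document's V-update and each term's U-update is independent, which is
    what makes the scheme parallelizable (e.g. via MapReduce).
    """
    k = U.shape[1]
    # V update: ridge regression, closed form
    V = np.linalg.solve(U.T @ U + lam2 * np.eye(k), U.T @ D)
    # U update: one proximal-gradient (soft-thresholding) step
    L = np.linalg.norm(V @ V.T, 2) + 1e-12      # Lipschitz constant of the gradient
    Z = U - ((U @ V - D) @ V.T) / L
    U = np.sign(Z) * np.maximum(np.abs(Z) - lam1 / L, 0.0)
    return U, V
```

Both half-steps are guaranteed not to increase the objective, so iterating them converges to a local solution.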
Phase diagram and approximate message passing for blind calibration and dictionary learning
arXiv:1301.5898, 2013
Cited by 10 (2 self)
Abstract—We consider dictionary learning and blind calibration for signals and matrices created from a random ensemble. We study the mean-squared error in the limit of large signal dimension using the replica method and unveil the appearance of phase transitions delimiting impossible, possible-but-hard, and possible inference regions. We also introduce an approximate message passing algorithm that asymptotically matches the theoretical performance, and show through numerical tests that it performs very well on the calibration problem for tractable system sizes.
Learning Separable Filters
2012
Cited by 10 (2 self)
Abstract. While learned image features can achieve great accuracy on different Computer Vision problems, their use in real-world situations is still very limited because their extraction is typically time-consuming. We therefore propose a method to learn image features that can be extracted very efficiently with separable filters, by looking for low-rank filters. We evaluate our approach on both image categorization and pixel classification tasks and show that we obtain accuracy similar to state-of-the-art methods at a fraction of the computational cost.
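The low-rank-filter idea can be sketched as a rank-1 SVD truncation followed by two 1-D convolutions. The function names and the SVD route are assumptions for illustration, not the paper's learning procedure:

```python
import numpy as np

def separable_approx(F):
    """Best rank-1 (separable) approximation of a 2D filter F via the SVD."""
    U, s, Vt = np.linalg.svd(F)
    v = U[:, 0] * np.sqrt(s[0])      # vertical 1D filter
    h = Vt[0] * np.sqrt(s[0])        # horizontal 1D filter
    return v, h

def conv_separable(img, v, h):
    """Filter rows with h, then columns with v: O(p) work per pixel for a
    p x p filter instead of O(p^2) for the full 2D convolution."""
    tmp = np.apply_along_axis(lambda r: np.convolve(r, h, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, v, mode="same"), 0, tmp)
```

If the filter is exactly rank 1, the two 1-D passes reproduce the 2-D filtering exactly; the paper's contribution is learning feature-extraction filters that admit such low-rank structure.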
Separable Dictionary Learning
Cited by 9 (5 self)
Many techniques in computer vision, machine learning, and statistics rely on the fact that a signal of interest admits a sparse representation over some dictionary. Dictionaries are either available analytically or can be learned from a suitable training set. While analytic dictionaries capture the global structure of a signal and allow a fast implementation, learned dictionaries often perform better in applications because they are more adapted to the considered class of signals. In imagery, unfortunately, the numerical burden of (i) learning a dictionary and (ii) employing the dictionary for reconstruction tasks restricts us to relatively small image patches that capture only local image information. The approach presented in this paper aims at overcoming these drawbacks by imposing a separable structure on the dictionary throughout the learning process. On the one hand, this permits larger patch sizes in the learning phase; on the other hand, the dictionary can be applied efficiently in reconstruction tasks. The learning procedure is based on optimization over a product of spheres, which updates the dictionary as a whole and thus enforces basic dictionary properties such as mutual coherence explicitly during learning. In the special case where no separable structure is enforced, our method competes with state-of-the-art dictionary learning methods such as K-SVD.
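The efficiency of a separable dictionary comes from the standard identity vec(A X Bᵀ) = (B ⊗ A) vec(X): the large Kronecker dictionary never has to be formed explicitly. A small sketch, with sizes and variable names chosen arbitrarily for illustration:

```python
import numpy as np

# Separable dictionary D = kron(B, A) acting on vectorized p x p patches.
rng = np.random.default_rng(1)
p, k = 8, 12                          # patch side length, atoms per sub-dictionary
A = rng.standard_normal((p, k))       # left sub-dictionary
B = rng.standard_normal((p, k))       # right sub-dictionary
X = rng.standard_normal((k, k))       # coefficient matrix for one patch

# Efficient application: two small matrix products, no Kronecker matrix needed.
Y_separable = A @ X @ B.T

# Reference: the explicit (p*p) x (k*k) Kronecker dictionary on vec(X),
# using column-major (Fortran-order) vectorization to match the identity.
Y_kron = (np.kron(B, A) @ X.flatten(order="F")).reshape((p, p), order="F")
```

For realistic patch sizes the explicit `np.kron(B, A)` matrix is what makes unstructured dictionary learning prohibitively expensive, which is the bottleneck this paper targets.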
Sparse Variation Dictionary Learning for Face Recognition with A Single Training Sample Per Person
Cited by 9 (1 self)
Face recognition (FR) with a single training sample per person (STSPP) is a very challenging problem due to the lack of information with which to predict the variations in the query sample. Sparse-representation-based classification has shown interesting results in robust FR; however, its performance deteriorates significantly for FR with STSPP. To address this issue, in this paper we learn a sparse variation dictionary from a generic training set to improve the representation of the query sample under STSPP. Instead of learning from the generic training set independently of the gallery set, the proposed sparse variation dictionary learning (SVDL) method adapts to the gallery set by jointly learning a projection that connects the generic training set with the gallery set. The learnt sparse variation dictionary can be easily integrated into the framework of sparse-representation-based classification, so that various variations in face images, including illumination, expression, occlusion, and pose, can be better handled. Experiments on the large-scale CMU Multi-PIE, FRGC, and LFW databases demonstrate the promising performance of SVDL on FR with STSPP.
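A toy version of the gallery-plus-variation decomposition behind this kind of method, with ridge-regularized least squares standing in for the sparse coding step and all names and sizes hypothetical:

```python
import numpy as np

def classify_with_variation(y, G, V, lam=1e-6):
    """Represent a query as one gallery face plus a shared variation component,
    y ~ G a + V b, then assign the class whose gallery atom best explains y.

    G: (dim, n_classes) one gallery sample per person (the STSPP setting),
    V: (dim, n_var) variation atoms (illumination, expression, ...),
    lam: small ridge term standing in for the sparse coding of SRC-style methods.
    """
    D = np.hstack([G, V])
    coef = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    a, b = coef[: G.shape[1]], coef[G.shape[1]:]
    # per-class residual after removing the shared variation component
    residuals = [np.linalg.norm(y - V @ b - G[:, i] * a[i])
                 for i in range(G.shape[1])]
    return int(np.argmin(residuals))
```

SVDL's actual contribution, learning V jointly with a projection tied to the gallery set, replaces the fixed random V used in this sketch.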
KERNEL DICTIONARY LEARNING
Cited by 8 (5 self)
In this paper, we present dictionary learning methods for sparse and redundant signal representations in a high-dimensional feature space. Using the kernel method, we describe how well-known dictionary learning approaches such as the method of optimal directions (MOD) and K-SVD can be made nonlinear. We analyze these constructions and demonstrate their improved performance through several experiments on classification problems. It is shown that nonlinear dictionary learning approaches can provide better discrimination than their linear counterparts and kernel PCA, especially when the data are corrupted by noise.
Index Terms—Kernel methods, dictionary learning, method of optimal directions, K-SVD.
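The kernel trick here hinges on expressing dictionary atoms as combinations of training samples in feature space, D = Φ(Y)A, so that reconstruction residuals need only kernel evaluations and never an explicit feature map. A sketch under that assumption (the function and its signature are illustrative):

```python
import numpy as np

def feature_space_residual_sq(y, Y, A, x, kernel):
    """||Phi(y) - Phi(Y) A x||^2 evaluated with kernel values only.

    Y: (dim, n_train) training samples as columns,
    A: (n_train, n_atoms) learned coefficient matrix so that D = Phi(Y) A,
    x: (n_atoms,) sparse code, kernel: callable k(a, b) -> float.
    Expanding the squared norm gives
        k(y, y) - 2 z.k(Y, y) + z.K.z   with z = A x and Gram matrix K.
    """
    n = Y.shape[1]
    kYy = np.array([kernel(Y[:, i], y) for i in range(n)])
    K = np.array([[kernel(Y[:, i], Y[:, j]) for j in range(n)] for i in range(n)])
    z = A @ x
    return float(kernel(y, y) - 2.0 * z @ kYy + z @ K @ z)
```

With a nonlinear kernel (e.g. Gaussian), minimizing this residual over sparse `x` and over `A` yields the kernelized MOD/K-SVD variants the abstract describes; with the linear kernel it reduces to ordinary dictionary learning.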
Predicting parameters in deep learning
In Proc. NIPS, 2013
Cited by 7 (0 self)
We demonstrate that there is significant redundancy in the parameterization of several deep learning models. Given only a few weight values for each feature, it is possible to accurately predict the remaining values. Moreover, we show that not only can the parameter values be predicted, but many of them need not be learned at all. We train several different architectures by learning only a small number of weights and predicting the rest. In the best case we are able to predict more than 95% of the weights of a network without any drop in accuracy.
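The redundancy claim can be illustrated with a low-rank stand-in: if a weight matrix is (approximately) a product U·C with a basis U in hand, storing a few entries per column suffices to predict the rest. The sizes, the rank-r model, and the known-basis assumption are all illustrative, not the paper's learned dictionaries:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r, n_obs = 100, 50, 5, 15        # weight matrix n x m, rank r, stored rows/col
U = rng.standard_normal((n, r))        # assumed-known column basis ("dictionary")
W = U @ rng.standard_normal((r, m))    # true low-rank weight matrix

# Store only n_obs of the n rows (15% of the weights) ...
obs = rng.choice(n, size=n_obs, replace=False)
# ... fit each column's coefficients by least squares on the stored rows ...
C, *_ = np.linalg.lstsq(U[obs], W[obs], rcond=None)
# ... and predict every remaining weight.
W_hat = U @ C
```

Under this model the prediction is exact, which mirrors the paper's observation that most weights carry no information beyond what a small learned subset already determines.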