Results 1–9 of 9
Compressive Estimation of Doubly Selective Channels: Exploiting Channel Sparsity to Improve Spectral Efficiency in Multicarrier Transmissions
Abstract

Cited by 24 (1 self)
We consider the estimation of doubly selective wireless channels within pulse-shaping multicarrier systems (which include OFDM systems as a special case). A pilot-assisted channel estimation technique using the methodology of compressed sensing (CS) is proposed. By exploiting a channel’s delay-Doppler sparsity, CS-based channel estimation allows an increase in spectral efficiency through a reduction of the number of pilot symbols that have to be transmitted. We also present an extension of our basic channel estimator that employs a sparsity-improving basis expansion. We propose a framework for optimizing the basis and an iterative approximate basis optimization algorithm. Simulation results using three different CS recovery algorithms demonstrate significant performance gains (in terms of improved estimation accuracy or reduction of the number of pilots) relative to conventional least-squares estimation, as well as substantial advantages of using an optimized basis.
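The core idea of CS-based pilot-aided estimation — recover a sparse channel vector from far fewer pilot observations than unknowns — can be illustrated with a minimal NumPy sketch. This is a toy real-valued setup with a generic random measurement matrix and Orthogonal Matching Pursuit as the recovery algorithm, not the paper's actual pulse-shaping multicarrier model or its specific recovery algorithms; all dimensions are illustrative.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x with y ≈ A x."""
    r = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ r)))   # atom most correlated with the residual
        if j not in support:
            support.append(j)
        xs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ xs            # residual after least-squares fit on support
    x[support] = xs
    return x

rng = np.random.default_rng(0)
n, m, k = 256, 48, 4                          # delay-Doppler taps, pilots, sparsity (toy)
h = np.zeros(n)
h[rng.choice(n, k, replace=False)] = rng.standard_normal(k)  # sparse channel vector
A = rng.standard_normal((m, n)) / np.sqrt(m)  # pilot measurement matrix (generic Gaussian)
y = A @ h                                     # noiseless pilot observations
h_hat = omp(A, y, k)
print(np.linalg.norm(h - h_hat))              # recovery error from m << n pilots
```

With 48 pilots for 256 unknowns, least-squares estimation would be hopelessly underdetermined; the sparsity prior is what makes recovery possible here.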
AN L1 CRITERION FOR DICTIONARY LEARNING BY SUBSPACE IDENTIFICATION
Abstract

Cited by 7 (1 self)
We propose an ℓ1 criterion for dictionary learning for sparse signal representation. Instead of directly searching for the dictionary vectors, our dictionary learning approach identifies vectors that are orthogonal to the subspaces in which the training data concentrate. We study conditions on the coefficients of training data that guarantee that ideal normal vectors deduced from the dictionary are local optima of the criterion. We illustrate the behavior of the criterion on a 2D example, showing that the local minima correspond to ideal normal vectors when the number of training data is sufficient. We conclude by describing an algorithm that can be used to optimize the criterion in higher dimensions. Index Terms — Sparse representation, dictionary learning, non-convex optimization
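The 2D example mentioned in the abstract can be mimicked numerically: place training data on a line (a 1D subspace), sweep candidate unit vectors over the half-circle, and check that the ℓ1 criterion Σ_i |vᵀy_i| is minimised by the normal to that line. This sketch uses a brute-force grid search rather than the paper's optimization algorithm, and the data direction and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = np.array([np.cos(0.3), np.sin(0.3)])     # direction the training data concentrate on
Y = np.outer(rng.standard_normal(200), d)    # 200 training points on the line spanned by d
Y += 0.01 * rng.standard_normal(Y.shape)     # small off-subspace perturbation

def l1_criterion(theta):
    v = np.array([np.cos(theta), np.sin(theta)])  # candidate unit normal vector
    return np.abs(Y @ v).sum()                    # ℓ1 criterion: sum of |v . y_i|

thetas = np.linspace(0, np.pi, 1801)
best = thetas[np.argmin([l1_criterion(t) for t in thetas])]
# the minimiser should be (near) orthogonal to d, i.e. theta ≈ 0.3 + pi/2
print(best)
```

The minimiser recovers the normal vector, from which the dictionary direction is deduced by orthogonality — the indirect search strategy the abstract describes.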
Sparse Source Separation from Orthogonal Mixture
, 2008
Abstract

Cited by 3 (0 self)
This paper addresses source separation from a linear mixture under two assumptions: source sparsity and orthogonality of the mixing matrix. We propose efficient sparse separation via a two-stage process. In the first stage we attempt to recover the sparsity pattern of the sources by exploiting the orthogonality prior. In the second stage, the support is used to reformulate the recovery task as an optimization problem. We then suggest a solution based on alternating minimization. Random simulations are performed to analyze the behavior of the resulting algorithm. The simulations demonstrate convergence of our approach as well as superior recovery rate in comparison with alternative source separation methods and K-SVD, a leading algorithm in dictionary learning. Index Terms — Blind source separation (BSS), complete representations, orthogonal mixture, sparse component analysis (SCA). 1.
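The general shape of such an approach — alternate between enforcing the sparsity pattern of the sources and re-estimating the orthogonal mixing matrix — can be sketched as follows. This is a generic alternating-minimization sketch (hard-thresholding for the support step, an orthogonal Procrustes solve for the mixing-matrix step), not the paper's exact two-stage algorithm; all dimensions and the k-sparse-per-sample source model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n, t, k = 8, 400, 2                               # sources, samples, active sources per sample
S = np.zeros((n, t))
for j in range(t):
    S[rng.choice(n, k, replace=False), j] = rng.standard_normal(k)  # sparse sources
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthogonal mixing matrix
X = Q @ S                                         # observed mixtures

Qh = np.eye(n)                                    # initial mixing-matrix estimate
for _ in range(50):
    Sh = Qh.T @ X                                 # unmix with the current estimate
    idx = np.argsort(np.abs(Sh), axis=0)[:-k]     # indices of the n-k smallest per sample
    np.put_along_axis(Sh, idx, 0.0, axis=0)       # hard-threshold: keep k largest per sample
    U, _, Vt = np.linalg.svd(X @ Sh.T)            # orthogonal Procrustes update of Q
    Qh = U @ Vt                                   # closest orthogonal matrix to X Sh^T
print(np.linalg.norm(X - Qh @ Sh) / np.linalg.norm(X))
```

The orthogonality prior is what makes the Procrustes step a closed-form update; without it, the mixing-matrix step would itself require an iterative solver.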
Basis Identification from Random Sparse Samples
, 2009
Abstract

Cited by 2 (0 self)
This article treats the problem of learning a dictionary providing sparse representations for a given signal class, via ℓ1 minimisation. The problem is to identify a dictionary Φ from a set of training samples Y knowing that Y = ΦX for some coefficient matrix X. Using a characterisation of coefficient matrices X that allow one to recover any basis as a local minimum of an ℓ1 minimisation problem, it is shown that certain types of sparse random coefficient matrices will ensure local identifiability of the basis with high probability. The necessary number of training samples grows, up to a logarithmic factor, linearly with the signal dimension.
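Local identifiability of this kind can be probed numerically: generate Y = ΦX with sparse random X, then check that small random perturbations of Φ (with columns re-normalised) do not decrease the ℓ1 norm of the coefficients needed to represent Y. This sketch uses 1-sparse coefficients, an orthonormal toy basis, and a finite sample of random perturbations, which only probes — not proves — that Φ sits at a local minimum; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
d, N, k = 6, 300, 1                                  # dimension, training samples, sparsity
X = np.zeros((d, N))
for j in range(N):
    X[rng.choice(d, k, replace=False), j] = rng.standard_normal(k)  # sparse coefficients
Phi, _ = np.linalg.qr(rng.standard_normal((d, d)))   # generating basis (orthonormal toy)
Y = Phi @ X                                          # training samples

def l1_cost(B):
    return np.abs(np.linalg.solve(B, Y)).sum()       # ℓ1 norm of coefficients w.r.t. basis B

base = l1_cost(Phi)
worse = 0
for _ in range(100):
    P = Phi + 1e-3 * rng.standard_normal((d, d))     # small random perturbation of the basis
    P /= np.linalg.norm(P, axis=0)                   # re-normalise columns to unit norm
    worse += l1_cost(P) >= base
print(worse)   # how many of 100 random perturbations fail to beat the true basis
```

With sufficiently many sparse training samples, essentially every perturbation should increase the cost, consistent with the article's local-minimum characterisation.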
THEME: Audio, Speech, and Language Processing
Abstract
Speech and sound data modeling and processing. In collaboration with: Institut de recherche en informatique et systèmes aléatoires (IRISA)
Author manuscript, published in "SPARS'09 Signal Processing with Adaptive Sparse Structured Representations (2009)"
DICTIONARY LEARNING WITH SPATIO-SPECTRAL SPARSITY CONSTRAINTS
Abstract
where the entries of the sparse matrix of coefficients ν representing X in the multichannel dictionary Ω = A ⊗ Φ are denoted ν_{k′k}, and N ∈ R^{m×t} is included to account for Gaussian instrumental noise or modeling errors. GMCA further assumes that the dictionary of spatial waveforms Φ is known beforehand, while the spectral components A, also called the mixing matrix in blind source separation (BSS) applications, are learned from the data. The image from the pth channel is represented here as the pth row of X, x_p. The successful use of GMCA in a variety of multichannel data processing applications such as BSS [2] and color image restoration and inpainting [1] motivated research to extend its applicability. In particular, there are instances where one is urged by additional prior knowledge to further constrain the dictionary space; for instance, one may want to enforce equality constraints on some atoms, or the positivity or sparsity of the learned dictionary atoms. Building on GMCA, the purpose of this contribution is to describe a new dictionary learning algorithm for so-called hyperspectral data processing. Hyperspectral imaging systems collect data in a large number (up to several hundred) of contiguous regions of the spectrum, so it makes sense to consider, for instance, that some physical property will show some regularity from one channel to the next. In fact, the proposed algorithm, referred to as hypGMCA, assumes that the multichannel atoms to be learned from the collected data exhibit diversely sparse spatial morphologies as well as diversely sparse spectral signatures in specified dictionaries Φ ∈ R^{t×t′} and Ψ ∈ R^{m×m′} of, respectively, spatial and spectral waveforms. The proposed algorithm is used to learn from the data rank-one multichannel atoms which are diversely sparse [2] in a given larger multichannel dictionary.
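The structural assumption behind such rank-one multichannel atoms — a spectral signature sparse in Ψ times a spatial morphology sparse in Φ — can be verified on synthetic data: build X = (Ψα)(Φβ)ᵀ with sparse α and β, take its best rank-one SVD fit, and check that the analysis coefficients of the two factors are sparse in their respective dictionaries. This sketch uses orthonormal toy dictionaries and illustrative sizes, and is only a sanity check of the model, not the hypGMCA learning algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(4)
m, t = 32, 64                                         # spectral channels, spatial samples
Psi = np.linalg.qr(rng.standard_normal((m, m)))[0]    # spectral dictionary (orthonormal toy)
Phi = np.linalg.qr(rng.standard_normal((t, t)))[0]    # spatial dictionary (orthonormal toy)
alpha = np.zeros(m)
alpha[rng.choice(m, 3, replace=False)] = rng.standard_normal(3)   # sparse spectral coeffs
beta = np.zeros(t)
beta[rng.choice(t, 4, replace=False)] = rng.standard_normal(4)    # sparse spatial coeffs
X = np.outer(Psi @ alpha, Phi @ beta)                 # rank-one multichannel atom a s^T

U, s, Vt = np.linalg.svd(X)                           # best rank-one fit via SVD
a_hat, s_hat = s[0] * U[:, 0], Vt[0]                  # recovered spectral / spatial factors
alpha_hat = Psi.T @ a_hat                             # analysis coeffs (sparse up to sign/scale)
beta_hat = Phi.T @ s_hat
print(np.count_nonzero(np.abs(alpha_hat) > 1e-8),
      np.count_nonzero(np.abs(beta_hat) > 1e-8))      # expect the planted sparsities, 3 and 4
```

For noiseless rank-one data the SVD factors match the planted sparse components up to sign and scale, which is the "diversely sparse in both dictionaries" structure the abstract describes.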
Project-Team METISS: Modélisation et Expérimentation pour le Traitement des Informations et des Signaux (Modeling and Experimentation for Information and Signal Processing)
Activity report 2008