Results 11–20 of 24
On learning discrete graphical models using greedy methods
In Neural Information Processing Systems (NIPS) (currently under review), 2011
Abstract

Cited by 9 (4 self)
In this paper, we address the problem of learning the structure of a pairwise graphical model from samples in a high-dimensional setting. Our first main result studies the sparsistency, or consistency in sparsity pattern recovery, properties of a forward-backward greedy algorithm as applied to general statistical models. As a special case, we then apply this algorithm to learn the structure of a discrete graphical model via neighborhood estimation. As a corollary of our general result, we derive sufficient conditions on the number of samples n, the maximum node degree d, and the problem size p, as well as other conditions on the model parameters, so that the algorithm recovers all the edges with high probability. Our result guarantees graph selection for samples scaling as n = Ω(d² log(p)), in contrast to existing convex-optimization-based algorithms that require a sample complexity of Ω(d³ log(p)). Further, the greedy algorithm only requires a restricted strong convexity condition, which is typically milder than irrepresentability assumptions. We corroborate these results with numerical simulations.
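The forward-backward scheme described above can be illustrated with a small sketch. The code below is my own minimal NumPy version for sparse linear regression; the stopping threshold eps and backward factor nu are arbitrary illustrative choices, not the constants analyzed in the paper.

```python
import numpy as np

def forward_backward_greedy(X, y, eps=1e-3, nu=0.5, max_steps=50):
    """Toy forward-backward greedy support search for least squares."""
    n, p = X.shape

    def refit(S):
        b = np.zeros(p)
        if S:
            b[S] = np.linalg.lstsq(X[:, S], y, rcond=None)[0]
        return b

    def loss(b):
        return 0.5 * np.mean((y - X @ b) ** 2)

    S, beta = [], np.zeros(p)
    for _ in range(max_steps):
        # forward step: add the coordinate whose refit most reduces the loss
        base = loss(beta)
        drops = {j: base - loss(refit(S + [j])) for j in range(p) if j not in S}
        best_j = max(drops, key=drops.get)
        if drops[best_j] < eps:
            break
        S.append(best_j)
        beta = refit(S)
        # backward step: remove coordinates whose deletion barely hurts
        while len(S) > 1:
            incs = {j: loss(refit([k for k in S if k != j])) - loss(beta) for j in S}
            worst = min(incs, key=incs.get)
            if incs[worst] >= nu * drops[best_j]:
                break
            S.remove(worst)
            beta = refit(S)
    return sorted(S), beta
```

On noiseless synthetic data the forward steps pick up the relevant coordinates and the backward pass guards against spurious additions, which is the mechanism behind the sparsistency analysis.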
Dictionary identifiability from few training samples
In Proc. EUSIPCO’08, 2008
Abstract

Cited by 9 (3 self)
This article treats the problem of learning a dictionary providing sparse representations for a given signal class, via ℓ1 minimisation. The problem is to identify a dictionary Φ from a set of training samples Y, knowing that Y = ΦX for some coefficient matrix X. Using a characterisation of the coefficient matrices X that allow recovery of any orthonormal basis (ONB) as a local minimum of an ℓ1 minimisation problem, it is shown that certain types of sparse random coefficient matrices will ensure local identifiability of the ONB with high probability, for a number of training samples that essentially grows linearly with the signal dimension.
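The local-identifiability statement can be checked numerically in a toy setting. The sketch below is my own 2-D illustration (identity basis, Bernoulli-Gaussian coefficients, small random perturbations of the basis); the sparsity level and perturbation scale are arbitrary choices, not the paper's conditions.

```python
import numpy as np

rng = np.random.default_rng(1)
B = np.eye(2)                          # the "ideal" orthonormal basis
X = rng.standard_normal((2, 400))
X[rng.random((2, 400)) < 0.8] = 0.0    # sparse Bernoulli-Gaussian coefficients
Y = B @ X                              # training samples

def l1_cost(basis):
    # l1 norm of the coefficients needed to express Y in `basis`
    return np.abs(np.linalg.solve(basis, Y)).sum()

base = l1_cost(B)
for _ in range(100):
    P = B + 1e-3 * rng.standard_normal((2, 2))
    P /= np.linalg.norm(P, axis=0)     # stay among normalized-column bases
    assert l1_cost(P) >= base - 1e-9   # B behaves like a local minimum
```

With 80% zeros the margin is comfortable; the theory predicts that such a check succeeds with high probability once the coefficients are sparse enough and the number of samples grows with the dimension.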
Some recovery conditions for basis learning by ℓ1-minimization
In 3rd IEEE International Symposium on Communications, Control and Signal Processing (ISCCSP 2008), 2008
Abstract

Cited by 7 (3 self)
Abstract—Many recent works have shown that if a given signal admits a sufficiently sparse representation in a given dictionary, then this representation is recovered by several standard optimization algorithms, in particular the convex ℓ1-minimization approach. Here we investigate the related problem of inferring the dictionary from training data, with an approach where ℓ1-minimization is used as a criterion to select a dictionary. We restrict our analysis to basis learning and identify necessary / sufficient / necessary and sufficient conditions on ideal (not necessarily very sparse) coefficients of the training data in an ideal basis to guarantee that the ideal basis is a strict local optimum of the ℓ1-minimization criterion among (not necessarily orthogonal) bases of normalized vectors. We illustrate these conditions on deterministic as well as toy random models in dimension two and highlight the main challenges left open by these preliminary theoretical results. Index Terms—Sparse representation, dictionary learning, nonconvex optimization, independent component analysis.
Efficient sampling of sparse wideband analog signals
In Proc. Conv. IEEE in Israel (IEEEI), Eilat, 2008
Abstract

Cited by 6 (3 self)
Periodic non-uniform sampling is a known method for sampling spectrally sparse signals below the Nyquist rate. This strategy relies on the implicit assumption that the individual samplers are exposed to the entire frequency range, an assumption that becomes impractical for wideband sparse signals. The current paper proposes an alternative sampling stage that does not require a full-band front end. Instead, signals are captured with an analog front end that consists of a bank of multipliers and low-pass filters whose cutoff is much lower than the Nyquist rate. The problem of recovering the original signal from the low-rate samples can be studied within the framework of compressive sampling. An appropriate parameter selection ensures that the samples uniquely determine the analog input. Moreover, the analog input can be stably reconstructed with digital algorithms. Numerical experiments support the theoretical analysis. Index Terms—Analog-to-digital conversion, compressive sampling, infinite measurement vectors (IMV), multiband sampling.
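A toy discrete analogue of such a front end can be simulated directly. Below is my own random-demodulation-style sketch (sign mixing followed by integrate-and-dump), not the exact architecture of the paper; the grid size, sampling ratio, and tone locations are arbitrary. It checks the "samples uniquely determine the input" claim in the easy case where the active spectral locations are known.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 256, 64                         # Nyquist-rate length vs. low-rate samples
n = np.arange(N)
F = np.exp(2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)  # synthesis (IDFT) matrix
p = rng.choice([-1.0, 1.0], N)         # pseudo-random mixing waveform
S = np.kron(np.eye(M), np.ones(N // M))                   # integrate-and-dump
Phi = S @ np.diag(p) @ F               # overall low-rate sensing matrix (M x N)

support = [5, 90, 200]                 # spectrally sparse input: 3 active tones
s = np.zeros(N, dtype=complex)
s[support] = [1.0, -2.0, 1.5]
y = Phi @ s                            # samples at 1/4 of the Nyquist rate

# given the active locations, the low-rate samples pin down the input exactly
coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
s_hat = np.zeros(N, dtype=complex)
s_hat[support] = coef
assert np.allclose(s_hat, s)
```

Locating the active bands themselves is the compressive-sampling part (e.g. via an MMV solver), which this sketch sidesteps by assuming the support is known.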
On Learning Discrete Graphical Models using Group-Sparse
Abstract

Cited by 6 (2 self)
We study the problem of learning the graph structure associated with general discrete graphical models (each variable can take any of m > 1 values, and the clique factors have maximum size c ≥ 2) from samples, under high-dimensional scaling where the number of variables p could be larger than the number of samples n. We provide a quantitative consistency analysis of a procedure based on node-wise multiclass logistic regression with group-sparse regularization. We first consider general m-ary pairwise models, where each factor depends on at most two variables. We show that when …
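Node-wise neighborhood selection of this kind is easy to sketch in the simplest binary case. The code below is my own simplification: Ising-style ±1 variables and plain ℓ1-regularized (rather than group-sparse, multiclass) logistic regression fitted by ISTA; the penalty, step size, and iteration count are arbitrary.

```python
import numpy as np

def neighborhood(X, node, lam=0.05, lr=1.0, iters=300):
    """Estimate the neighbors of `node` by l1-penalized logistic
    regression of that variable on all the others (ISTA loop)."""
    n, p = X.shape
    y = (X[:, node] + 1) / 2               # map {-1,+1} -> {0,1}
    Z = np.delete(X, node, axis=1)
    w = np.zeros(p - 1)
    for _ in range(iters):
        mu = 1.0 / (1.0 + np.exp(-(Z @ w)))
        w -= lr * (Z.T @ (mu - y) / n)     # gradient step on the logistic loss
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    others = [j for j in range(p) if j != node]
    return {others[j] for j in np.flatnonzero(np.abs(w) > 1e-3)}

# synthetic check: node 0 depends on nodes 1 and 2 only
rng = np.random.default_rng(3)
n, p = 2000, 6
X = rng.choice([-1.0, 1.0], (n, p))
prob1 = 1.0 / (1.0 + np.exp(-2.0 * (1.2 * X[:, 1] - 1.0 * X[:, 2])))
X[:, 0] = np.where(rng.random(n) < prob1, 1.0, -1.0)
nbrs = neighborhood(X, 0)
```

Running this per node and taking the union (or intersection) of the estimated neighborhoods yields a graph estimate; the paper's group-sparse penalty generalizes the soft-threshold here to whole blocks of parameters, one block per candidate edge.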
Distributed Sampling of Signals Linked by Sparse Filtering: Theory and Applications
2009
Abstract

Cited by 2 (1 self)
We study the distributed sampling and centralized reconstruction of two correlated signals, modeled as the input and output of an unknown sparse filtering operation. This is akin to a Slepian-Wolf setup, but in the sampling rather than the lossless-compression case. Two different scenarios are considered: in the case of universal reconstruction, we look for a sensing and recovery mechanism that works for all possible signals, whereas in what we call almost-sure reconstruction, we allow a small (measure-zero) set of unrecoverable signals. We derive achievability bounds on the number of samples needed for both scenarios. Our results show that only in the almost-sure setup can we effectively exploit the signal correlations to achieve gains in sampling efficiency. In addition to the above theoretical analysis, we propose an efficient and robust distributed sampling and reconstruction algorithm based on annihilating filters. Finally, we evaluate the performance of our method in one synthetic scenario and two practical applications: distributed audio sampling in binaural hearing aids and efficient estimation of room impulse responses. The numerical results confirm the effectiveness and robustness of the proposed algorithm in both synthetic and practical setups.
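The signal model itself is easy to state in code. The following is my own toy full-rate illustration (circular convolution and spectral division) of two signals linked by a sparse filter; the paper's actual contribution is doing this from sub-Nyquist samples with annihilating filters, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(4)
N = 128
x = rng.standard_normal(N)                 # reference signal (first sensor)
h = np.zeros(N)
h[[0, 7, 30]] = [1.0, 0.6, -0.4]           # sparse "room response" filter
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))  # circular filtering

# with full-rate access the filter falls out of a spectral division
H = np.fft.fft(y) / np.fft.fft(x)          # assumes x has no spectral nulls
h_hat = np.real(np.fft.ifft(H))
```

The sparsity of h (three taps here) is exactly what the annihilating-filter machinery exploits to get away with far fewer samples than this naive full-rate division requires.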
Reduce and Boost: Recovering Arbitrary Sets of Jointly Sparse Vectors
Abstract
The rapidly developing area of compressed sensing suggests that a sparse vector lying in an arbitrary high-dimensional space can be accurately recovered from only a small set of non-adaptive linear measurements. Under appropriate conditions on the measurement matrix, the entire information about the original sparse vector is captured in the measurements and can be recovered using efficient polynomial-time methods. The vector model has been extended, both theoretically and practically, to a finite set of sparse vectors sharing a common non-zero location set. In this paper, we treat a broader framework in which the goal is to recover a possibly infinite set of jointly sparse vectors. Extending existing recovery methods to this model is difficult due to the infinite structure of the sparse vector set. Instead, we prove that the entire infinite set of sparse vectors can be recovered by solving a single, reduced-size, finite-dimensional problem, corresponding to recovery of a finite set of sparse vectors. We then show that the problem can be further reduced to the basic recovery of a single sparse vector by randomly combining the measurement vectors. Our approach results in exact recovery of both countable and uncountable sets, as it does not rely on discretization or heuristic techniques. To efficiently recover the single sparse vector produced by the last reduction step, we suggest an empirical boosting strategy that improves the recovery ability of any given suboptimal method for recovering a sparse vector. Numerical experiments on random data demonstrate that, when applied to infinite sets, our strategy outperforms discretization techniques in terms of both run time and empirical recovery rate. In the finite model, our boosting algorithm is characterized by fast run time and a superior recovery rate compared with known popular methods.
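The "reduce" step — collapsing a whole set of jointly sparse vectors to one sparse-vector problem by random combination — can be sketched in a few lines. This is my own finite toy version; the dimensions, support, and combining weights are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
m, p, L, support = 30, 60, 5, [4, 17, 42]
A = rng.standard_normal((m, p))            # measurement matrix
X = np.zeros((p, L))
X[support] = rng.standard_normal((3, L))   # joint sparsity: shared rows
Y = A @ X                                  # multiple-measurement vectors

a = rng.standard_normal(L)                 # random combining weights
y_bar = Y @ a                              # one reduced measurement vector
x_bar = X @ a                              # the single sparse vector it encodes
```

The combined vector x_bar has (almost surely) the same support as every column of X, so any single-vector sparse solver applied to (A, y_bar) recovers the common support — the step the paper's boosting strategy then makes more reliable.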
High-Resolution Radar via Compressed Sensing
Abstract
A stylized compressed sensing radar is proposed in which the time-frequency plane is discretized into an N × N grid. Assuming the number of targets K is small (i.e., K ≪ N²), we can transmit a sufficiently “incoherent” pulse and employ the techniques of compressed sensing to reconstruct the target scene. A theoretical upper bound on the sparsity K is presented. Numerical simulations verify that even better performance can be achieved in practice. This novel compressed sensing approach offers great potential for better resolution than classical radar.
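A miniature version of the discretized delay-Doppler picture: the sketch below is my own toy single-target simulation with a random-phase pulse and a brute-force matched-filter search over the N × N grid. Compressed sensing recovery, the paper's actual subject, would replace the exhaustive search when multiple targets overlap; the grid size and target cell here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 64
pulse = np.exp(2j * np.pi * rng.random(N))  # unit-modulus "incoherent" pulse

def shift(s, delay, doppler):
    # circular delay plus Doppler modulation: one cell of the N x N grid
    n = np.arange(N)
    return np.roll(s, delay) * np.exp(2j * np.pi * doppler * n / N)

echo = 0.8 * shift(pulse, 13, 5)            # single target at grid cell (13, 5)

# correlate the echo against every delay-Doppler shift of the pulse
scores = np.array([[np.abs(np.vdot(shift(pulse, d, f), echo))
                    for f in range(N)] for d in range(N)])
d_hat, f_hat = np.unravel_index(np.argmax(scores), scores.shape)
```

The incoherence of the pulse is what keeps the off-grid correlations small relative to the matched cell, which is the same property the compressed sensing analysis quantifies for K > 1 targets.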