Results 21 - 30 of 111
Group-Sparse Signal Denoising: Non-Convex Regularization, Convex Optimization
2014
"... Abstract—Convex optimization with sparsity-promoting con-vex regularization is a standard approach for estimating sparse signals in noise. In order to promote sparsity more strongly than convex regularization, it is also standard practice to employ non-convex optimization. In this paper, we take a t ..."
Abstract
-
Cited by 6 (3 self)
Abstract—Convex optimization with sparsity-promoting convex regularization is a standard approach for estimating sparse signals in noise. In order to promote sparsity more strongly than convex regularization, it is also standard practice to employ non-convex optimization. In this paper, we take a third approach. We utilize a non-convex regularization term chosen such that the total cost function (consisting of data consistency and regularization terms) is convex. Therefore, sparsity is more strongly promoted than in the standard convex formulation, but without sacrificing the attractive aspects of convex optimization (unique minimum, robust algorithms, etc.). We use this idea to improve the recently developed ‘overlapping group shrinkage’ (OGS) algorithm for the denoising of group-sparse signals. The algorithm is applied to the problem of speech enhancement with favorable results in terms of both SNR and perceptual quality.
Index Terms—group sparse model; convex optimization; non-convex optimization; sparse optimization; translation-invariant denoising; denoising; speech enhancement
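As a rough illustration of the central idea (not the paper's OGS algorithm, which operates on overlapping groups), the sketch below denoises each sample with a non-convex logarithmic penalty whose concavity parameter a is capped at 1/λ, the standard condition under which the scalar cost ½(y − x)² + λφ(x; a) remains convex. The penalty choice, parameter names, and toy data are illustrative assumptions.

```python
import numpy as np

def log_penalty_shrink(y, lam, a):
    """Minimize 0.5*(y - x)**2 + lam*phi(x; a) element-wise, where
    phi(x; a) = (1/a)*log(1 + a*|x|) is a non-convex penalty.
    For 0 < a <= 1/lam the per-sample cost stays convex (illustrative condition)."""
    assert 0 < a <= 1.0 / lam, "a <= 1/lam keeps the per-sample cost convex"
    y = np.asarray(y, dtype=float)
    mag = np.abs(y)
    x = np.zeros_like(mag)
    big = mag > lam                      # below the threshold lam the minimizer is 0
    m = mag[big]
    # closed-form minimizer of the convex scalar cost for |y| > lam
    x[big] = (m - 1.0 / a) / 2.0 + np.sqrt(((m + 1.0 / a) / 2.0) ** 2 - lam / a)
    return np.sign(y) * x

# toy usage: sparse signal in noise
rng = np.random.default_rng(0)
clean = np.zeros(64); clean[[5, 20, 40]] = [4.0, -3.0, 5.0]
noisy = clean + 0.5 * rng.standard_normal(64)
denoised = log_penalty_shrink(noisy, lam=1.0, a=1.0)  # a = 1/lam: tightest convex setting
```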
Simultaneous low-pass filtering and total variation denoising
IEEE Trans. Signal Process., 2014
"... This paper seeks to combine linear time-invariant (LTI) filtering and sparsity-based denoising in a principled way in order to effectively filter (denoise) a wider class of signals. LTI filtering is most suitable for signals restricted to a known frequency band, while sparsity-based denoising is su ..."
Abstract
-
Cited by 6 (4 self)
This paper seeks to combine linear time-invariant (LTI) filtering and sparsity-based denoising in a principled way in order to effectively filter (denoise) a wider class of signals. LTI filtering is most suitable for signals restricted to a known frequency band, while sparsity-based denoising is suitable for signals admitting a sparse representation with respect to a known transform. However, some signals cannot be accurately categorized as either band-limited or sparse. This paper addresses the problem of filtering noisy data for the particular case where the underlying signal comprises a low-frequency component and a sparse or sparse-derivative component. A convex optimization approach is presented and two algorithms derived: one based on majorization-minimization (MM), and the other based on the alternating direction method of multipliers (ADMM). It is shown that a particular choice of discrete-time filter, namely zero-phase noncausal recursive filters for finite-length data formulated in terms of banded matrices, makes the algorithms effective and computationally efficient. The efficiency stems from the use of fast algorithms for solving banded systems of linear equations. The method is illustrated using data from a physiological-measurement technique (i.e., near infrared spectroscopic time series imaging) that in many cases yields data that is well-approximated as the sum of low-frequency, sparse or sparse-derivative, and noise components.
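The computational point the abstract emphasizes is that zero-phase filters built from banded matrices reduce every subproblem to a banded linear solve. The fragment below only illustrates that ingredient (a banded Tikhonov-style smoother built from a second-order difference matrix), not the paper's LPF/TVD algorithm; the regularization weight and difference order are assumptions.

```python
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def banded_lowpass(y, lam=25.0):
    """Solve x = argmin ||y - x||^2 + lam*||D x||^2, where D is the banded
    second-order difference matrix; (I + lam*D'D) is banded, so the solve is fast."""
    n = len(y)
    D = sparse.diags([1.0, -2.0, 1.0], [0, 1, 2], shape=(n - 2, n))
    A = (sparse.identity(n) + lam * (D.T @ D)).tocsc()
    return spsolve(A, y)

# usage: smooth a noisy low-frequency trend
t = np.linspace(0, 1, 500)
y = np.sin(2 * np.pi * 2 * t) + 0.3 * np.random.default_rng(1).standard_normal(500)
trend = banded_lowpass(y)
```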
Joint Multicast Beamforming and Antenna Selection
"... Abstract—Multicast beamforming exploits subscriber channel state information at the base station to steer the transmission power towards the subscribers, while minimizing interference to other users and systems. Such functionality has been provisioned in the long-term evolution (LTE) enhanced multim ..."
Abstract
-
Cited by 5 (0 self)
Abstract—Multicast beamforming exploits subscriber channel state information at the base station to steer the transmission power towards the subscribers, while minimizing interference to other users and systems. Such functionality has been provisioned in the long-term evolution (LTE) enhanced multimedia broadcast multicast service (EMBMS). As antennas become smaller and cheaper relative to up-conversion chains, transmit antenna selection at the base station becomes increasingly appealing in this context. This paper addresses the problem of joint multicast beamforming and antenna selection for multiple co-channel multicast groups. Whereas this problem (and even plain multicast beamforming) is NP-hard, it is shown that the mixed ℓ1,∞-norm squared is a prudent group-sparsity inducing convex regularization, in that it naturally yields a suitable semidefinite relaxation, which is further shown to be the Lagrange bi-dual of the original NP-hard problem. Careful simulations indicate that the proposed algorithm significantly reduces the number of antennas required to meet prescribed service levels, at relatively small excess transmission power. Furthermore, its performance is close to that attained by exhaustive search, at far lower complexity. Extensions to max-min-fair, robust, and capacity-achieving designs are also considered.
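To make the regularizer concrete: if the rows of a beamforming matrix W collect the weights that a given antenna contributes across all multicast groups, the mixed ℓ1,∞ norm sums the per-antenna peak magnitudes, so driving a row to zero deselects that antenna. The toy below applies it to a generic least-squares fit in cvxpy rather than the paper's SINR-constrained semidefinite relaxation; the problem data, λ, and selection threshold are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
n_antennas, n_groups = 8, 3
A = rng.standard_normal((20, n_antennas))   # stand-in system matrix (assumption)
B = rng.standard_normal((20, n_groups))     # stand-in targets, one column per group
lam = 2.0

W = cp.Variable((n_antennas, n_groups))
# l1,inf mixed norm: sum over antennas (rows) of the largest |weight| in that row
# (the paper uses the square of this mixed norm; the plain norm is kept here for simplicity)
l1_inf = cp.sum(cp.max(cp.abs(W), axis=1))
objective = cp.Minimize(cp.sum_squares(A @ W - B) + lam * l1_inf)
cp.Problem(objective).solve()

row_peaks = np.max(np.abs(W.value), axis=1)
selected = np.where(row_peaks > 1e-3)[0]    # antennas whose row survived shrinkage
print("selected antennas:", selected)
```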
Sparse Learning-to-Rank via an Efficient Primal-Dual Algorithm
"... Abstract—Learning-to-rank for information retrieval has gained increasing interest in recent years. Inspired by the success of sparse models, we consider the problem of sparse learning-to-rank, where the learned ranking models are constrained to be with only a few non-zero coefficients. We begin by ..."
Abstract
-
Cited by 4 (0 self)
Abstract—Learning-to-rank for information retrieval has gained increasing interest in recent years. Inspired by the success of sparse models, we consider the problem of sparse learning-to-rank, where the learned ranking models are constrained to have only a few non-zero coefficients. We begin by formulating the sparse learning-to-rank problem as a convex optimization problem with a sparsity-inducing ℓ1 constraint. Since the ℓ1 constraint is non-differentiable, the critical issue arising here is how to efficiently solve the optimization problem. To address this issue, we propose a learning algorithm from the primal-dual perspective. Furthermore, we prove that, after at most O(1/ɛ) iterations, the proposed algorithm can guarantee the obtainment of an ɛ-accurate solution. This convergence rate is better than that of the popular sub-gradient descent algorithm, i.e., O(1/ɛ²). Empirical evaluation on several public benchmark datasets demonstrates the effectiveness of the proposed algorithm: (1) Compared to the methods that learn dense models, learning a ranking model with sparsity constraints significantly improves the ranking accuracies. (2) Compared to other methods for sparse learning-to-rank, the proposed algorithm tends to obtain sparser models and has superior performance gain in both ranking accuracies and training time. (3) Compared to several state-of-the-art algorithms, the ranking accuracies of the proposed algorithm are very competitive and stable.
Index Terms—learning-to-rank, sparse models, ranking algorithm, Fenchel duality.
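For orientation only, here is the kind of baseline the abstract compares against: projected sub-gradient descent on a pairwise hinge ranking loss with an ℓ1-ball constraint (the paper's primal-dual method improves the O(1/ɛ²) rate of such schemes to O(1/ɛ)). The loss, pair construction, step size, and radius are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def project_l1_ball(v, z):
    """Euclidean projection of v onto {x : ||x||_1 <= z} (standard sort-based method)."""
    if np.abs(v).sum() <= z:
        return v
    u = np.sort(np.abs(v))[::-1]
    cssv = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(u) + 1) > (cssv - z))[0][-1]
    theta = (cssv[rho] - z) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def rank_subgradient(X_pos, X_neg, z=5.0, iters=500, step=0.1, seed=0):
    """Projected sub-gradient descent on a pairwise hinge loss:
    encourage w.x_pos > w.x_neg + 1 for documents that should rank higher."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X_pos.shape[1])
    for t in range(1, iters + 1):
        i = rng.integers(len(X_pos)); j = rng.integers(len(X_neg))
        margin = w @ X_pos[i] - w @ X_neg[j]
        g = (X_neg[j] - X_pos[i]) if margin < 1.0 else 0.0    # sub-gradient of the hinge
        w = project_l1_ball(w - (step / np.sqrt(t)) * g, z)   # keep ||w||_1 <= z
    return w
```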
Penalty and Shrinkage Functions for Sparse Signal Processing
2013
"... The use of sparsity in signal processing frequently calls for the solution to the minimization problem arg min x ..."
Abstract
-
Cited by 4 (4 self)
The use of sparsity in signal processing frequently calls for the solution to the minimization problem arg min_x …
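The best-known penalty/shrinkage pair of this kind is the ℓ1 penalty and the soft-threshold rule, reproduced below as a point of reference; the firm-threshold variant shown alongside is a common non-convex alternative. Both are standard functions, included here only as context for the kind of pairing the paper catalogs, not as its specific results.

```python
import numpy as np

def soft_threshold(y, lam):
    """Shrinkage rule paired with the l1 penalty lam*|x|:
    argmin_x 0.5*(y - x)**2 + lam*|x|."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def firm_threshold(y, lam, mu):
    """Firm shrinkage (mu > lam): zero below lam, identity above mu,
    linear in between; paired with a non-convex penalty."""
    y = np.asarray(y, dtype=float)
    mag = np.abs(y)
    out = np.where(mag <= lam, 0.0,
          np.where(mag >= mu, mag, mu * (mag - lam) / (mu - lam)))
    return np.sign(y) * out
```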
A Convex Formulation for Learning Scale-Free Networks via Submodular Relaxation
"... A key problem in statistics and machine learning is the determination of network structure from data. We consider the case where the structure of the graph to be reconstructed is known to be scale-free. We show that in such cases it is natural to formulate structured sparsity inducing priors using s ..."
Abstract
-
Cited by 4 (0 self)
A key problem in statistics and machine learning is the determination of network structure from data. We consider the case where the structure of the graph to be reconstructed is known to be scale-free. We show that in such cases it is natural to formulate structured sparsity inducing priors using submodular functions, and we use their Lovász extension to obtain a convex relaxation. For tractable classes such as Gaussian graphical models, this leads to a convex optimization problem that can be efficiently solved. We show that our method results in an improvement in the accuracy of reconstructed networks for synthetic data. We also show how our prior encourages scale-free reconstructions on a bioinformatics dataset.
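The mechanical step this relies on, evaluating the Lovász extension of a submodular set function, reduces to a sort plus a pass over marginal gains. The sketch below applies it to a simple degree-based prior F(E') = Σ_v log(1 + deg_v(E')), one concave-of-degree choice of the kind used to encourage scale-free structure; the specific function and the toy graph are assumptions, not the paper's exact prior.

```python
import numpy as np
from collections import Counter

def lovasz_extension(F, w):
    """Evaluate the Lovasz extension of set function F at weights w >= 0:
    sort coordinates by decreasing weight and accumulate marginal gains."""
    order = np.argsort(-np.asarray(w))
    total, prev, chosen = 0.0, F(frozenset()), set()
    for i in order:
        chosen.add(i)
        cur = F(frozenset(chosen))
        total += w[i] * (cur - prev)
        prev = cur
    return total

# toy: edges of a 4-node graph, indexed 0..4
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]

def degree_prior(edge_set):
    """Submodular prior: sum over nodes of log(1 + degree within edge_set)."""
    deg = Counter()
    for e in edge_set:
        u, v = edges[e]
        deg[u] += 1; deg[v] += 1
    return sum(np.log1p(d) for d in deg.values())

# the convex relaxation penalizes |edge weights| through the Lovasz extension
edge_weights = np.array([0.9, 0.1, 0.4, 0.0, 0.7])
penalty = lovasz_extension(degree_prior, np.abs(edge_weights))
```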
Structured Penalties for Log-linear Language Models
"... Language models can be formalized as loglinear regression models where the input features represent previously observed contexts up to a certain length m. The complexity of existing algorithms to learn the parameters by maximum likelihood scale linearly in nd, where n is the length of the training c ..."
Abstract
-
Cited by 3 (0 self)
Language models can be formalized as log-linear regression models where the input features represent previously observed contexts up to a certain length m. The complexity of existing algorithms to learn the parameters by maximum likelihood scales linearly in nd, where n is the length of the training corpus and d is the number of observed features. We present a model that grows logarithmically in d, making it possible to efficiently leverage longer contexts. We account for the sequential structure of natural language using tree-structured penalized objectives to avoid overfitting and achieve better generalization.
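One standard way to realize such tree-structured penalties is a hierarchical group-lasso in which each context suffix defines a group containing all of its extensions; for nested groups of this form the proximal operator is obtained by applying group shrinkage from the deepest group up to the root. The sketch below shows that composition for a single chain of nested groups; the group layout, weights, and ordering over a full suffix tree are simplifying assumptions, not the paper's exact penalty.

```python
import numpy as np

def group_shrink(w, idx, t):
    """Block soft-thresholding of the coordinates in idx with threshold t."""
    block = w[idx]
    nrm = np.linalg.norm(block)
    w[idx] = 0.0 if nrm <= t else (1.0 - t / nrm) * block
    return w

def nested_group_prox(v, groups, lam):
    """Prox of sum_g lam*||w_g||_2 for nested (tree-path) groups: apply the
    per-group shrinkage from the deepest/smallest group up to the root."""
    w = np.array(v, dtype=float)
    for idx in sorted(groups, key=len):          # smallest (deepest) group first
        w = group_shrink(w, np.asarray(idx), lam)
    return w

# toy: coordinates for contexts of depth 3, 2, 1; deeper contexts nest inside shallower ones
groups = [[4, 5], [2, 3, 4, 5], [0, 1, 2, 3, 4, 5]]
w_hat = nested_group_prox(np.array([2.0, -1.0, 0.5, 0.3, 0.2, -0.1]), groups, lam=0.4)
```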
Learning the structure of mixed graphical models
Journal of Computational and Graphical Statistics, 2014
"... We consider the problem of learning the structure of a pairwise graph-ical model over continuous and discrete variables. We present a new pairwise model for graphical models with both continuous and discrete variables that is amenable to structure learning. In previous work, au-thors have considered ..."
Abstract
-
Cited by 3 (0 self)
We consider the problem of learning the structure of a pairwise graphical model over continuous and discrete variables. We present a new pairwise model for graphical models with both continuous and discrete variables that is amenable to structure learning. In previous work, authors have considered structure learning of Gaussian graphical models and structure learning of discrete models. Our approach is a natural generalization of these two lines of work to the mixed case. The penalization scheme involves a novel symmetric use of the group-lasso norm and follows naturally from a particular parametrization of the model.
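In broad terms, a group-lasso penalty of this kind acts on, for every candidate edge, the full block of parameters coupling the two variables (a scalar for continuous-continuous edges, a vector for continuous-discrete edges, and a matrix for discrete-discrete edges), so the whole block, and hence the edge, is switched on or off together. A minimal sketch of such a penalty and its proximal step follows; the block shapes, weights, and example values are assumptions, not the paper's exact scheme.

```python
import numpy as np

def edge_group_penalty(blocks, lam):
    """Sum of Frobenius norms of edge-parameter blocks: lam * sum_e ||Theta_e||_F."""
    return lam * sum(np.linalg.norm(B) for B in blocks.values())

def edge_group_prox(blocks, t):
    """Group shrinkage per edge block; a block shrunk to zero removes the edge."""
    out = {}
    for edge, B in blocks.items():
        nrm = np.linalg.norm(B)
        out[edge] = np.zeros_like(B) if nrm <= t else (1.0 - t / nrm) * B
    return out

# toy blocks: continuous-continuous edge (scalar), continuous-discrete edge (vector),
# discrete-discrete edge (matrix of interaction parameters)
blocks = {
    ("x1", "x2"): np.array([[0.8]]),
    ("x1", "z1"): np.array([[0.1, -0.05, 0.02]]),
    ("z1", "z2"): np.random.default_rng(3).standard_normal((3, 4)) * 0.05,
}
kept_edges = [e for e, B in edge_group_prox(blocks, t=0.3).items() if np.any(B)]
```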
Hybrid Random/Deterministic Parallel Algorithms for Convex and Nonconvex Big Data Optimization
"... We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a nonsmooth (possibly nonseparable), convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. The main contribution of ..."
Abstract
-
Cited by 3 (2 self)
We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a nonsmooth (possibly nonseparable), convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. The main contribution of this work is a novel parallel, hybrid random/deterministic decomposition scheme wherein, at each iteration, a subset of (block) variables is updated at the same time by minimizing a convex surrogate of the original nonconvex function. To tackle huge-scale problems, the (block) variables to be updated are chosen according to a mixed random and deterministic procedure, which captures the advantages of both pure deterministic and random update-based schemes. Almost sure convergence of the proposed scheme is established. Numerical results show that on huge-scale problems the proposed hybrid random/deterministic algorithm compares favorably to random and deterministic schemes on both convex and nonconvex problems.
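A stripped-down, single-process caricature of the selection rule (not the paper's framework or its convergence analysis) on a LASSO instance: at each iteration a random subset of coordinates is drawn, and within that subset only the coordinates whose proximal-gradient update would move the most, a greedy/deterministic criterion, are actually updated. Problem data, subset size, and the greedy score are assumptions.

```python
import numpy as np

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def hybrid_block_lasso(A, b, lam=0.1, iters=200, sample_frac=0.5, top_frac=0.25, seed=0):
    """min 0.5*||Ax - b||^2 + lam*||x||_1 with mixed random/greedy coordinate selection."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2                     # Lipschitz constant of the gradient
    for _ in range(iters):
        S = rng.choice(n, size=max(1, int(sample_frac * n)), replace=False)  # random draw
        grad = A.T @ (A @ x - b)
        cand = soft(x[S] - grad[S] / L, lam / L)      # per-coordinate prox-gradient step
        moves = np.abs(cand - x[S])
        k = max(1, int(top_frac * len(S)))
        greedy = np.argsort(-moves)[:k]               # deterministic: largest expected moves
        x[S[greedy]] = cand[greedy]
    return x

# usage on a random sparse regression instance (assumption)
rng = np.random.default_rng(4)
A = rng.standard_normal((100, 400))
x_true = np.zeros(400); x_true[rng.choice(400, 10, replace=False)] = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(100)
x_est = hybrid_block_lasso(A, b)
```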