Results 21–30 of 111
Group-Sparse Signal Denoising: Non-Convex Regularization, Convex Optimization
2014
Cited by 6 (3 self)
Abstract—Convex optimization with sparsity-promoting convex regularization is a standard approach for estimating sparse signals in noise. To promote sparsity more strongly than convex regularization allows, it is also standard practice to employ non-convex optimization. In this paper, we take a third approach: we utilize a non-convex regularization term chosen such that the total cost function (consisting of data-consistency and regularization terms) is convex. Sparsity is therefore promoted more strongly than in the standard convex formulation, but without sacrificing the attractive aspects of convex optimization (unique minimum, robust algorithms, etc.). We use this idea to improve the recently developed 'overlapping group shrinkage' (OGS) algorithm for the denoising of group-sparse signals. The algorithm is applied to the problem of speech enhancement, with favorable results in terms of both SNR and perceptual quality.
Index Terms—group-sparse model; convex optimization; non-convex optimization; sparse optimization; translation-invariant denoising; denoising; speech enhancement
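The central idea, non-convex penalties whose associated shrinkage rules avoid the bias of the ℓ1 prox, can be illustrated by comparing soft thresholding with firm thresholding (Gao-Bruce). This is a minimal scalar sketch of the penalty/shrinkage trade-off, not the paper's OGS update:

```python
def soft(x, lam):
    """Soft threshold: prox of the convex penalty lam*|x|.
    Shrinks every surviving value by lam, so large values are biased."""
    s = (x > 0) - (x < 0)
    return s * max(abs(x) - lam, 0.0)

def firm(x, lam, mu):
    """Firm threshold: shrinkage rule of a non-convex penalty (mu > lam).
    Values above mu pass through unshrunk, so large values are unbiased."""
    ax = abs(x)
    s = (x > 0) - (x < 0)
    if ax <= lam:
        return 0.0
    if ax <= mu:
        return s * mu * (ax - lam) / (mu - lam)
    return float(x)

noisy = [0.3, 1.2, 4.0, -2.5]
print([soft(v, 1.0) for v in noisy])       # small values zeroed, large ones shrunk
print([firm(v, 1.0, 3.0) for v in noisy])  # small values zeroed, 4.0 kept exactly
```

Both rules zero small (presumed-noise) values; only the non-convex rule leaves large coefficients untouched, which is the stronger sparsity promotion the abstract refers to.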
Simultaneous low-pass filtering and total variation denoising
IEEE TRANS. SIGNAL PROCESS., 2014
Cited by 6 (4 self)
This paper seeks to combine linear time-invariant (LTI) filtering and sparsity-based denoising in a principled way in order to effectively filter (denoise) a wider class of signals. LTI filtering is most suitable for signals restricted to a known frequency band, while sparsity-based denoising is suitable for signals admitting a sparse representation with respect to a known transform. However, some signals cannot be accurately categorized as either band-limited or sparse. This paper addresses the problem of filtering noisy data in the particular case where the underlying signal comprises a low-frequency component and a sparse or sparse-derivative component. A convex optimization approach is presented and two algorithms derived: one based on majorization-minimization (MM), the other on the alternating direction method of multipliers (ADMM). It is shown that a particular choice of discrete-time filter, namely zero-phase non-causal recursive filters for finite-length data formulated in terms of banded matrices, makes the algorithms effective and computationally efficient. The efficiency stems from the use of fast algorithms for solving banded systems of linear equations. The method is illustrated using data from a physiological measurement technique (near-infrared spectroscopic time-series imaging) that in many cases yields data well approximated as the sum of low-frequency, sparse or sparse-derivative, and noise components.
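The claimed efficiency rests on O(n) solvers for banded linear systems. As an illustrative sketch (not the paper's specific filter matrices), the tridiagonal case can be solved with the classic Thomas algorithm:

```python
def solve_tridiagonal(a, b, c, d):
    """Solve a tridiagonal system Ax = d in O(n) via the Thomas algorithm.
    a: sub-diagonal (a[0] unused), b: main diagonal, c: super-diagonal
    (c[-1] unused). Banded solvers like this are what keep each MM/ADMM
    iteration cheap for finite-length filtering problems."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n  # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8]  has solution x = [1,2,3]
print(solve_tridiagonal([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [4.0, 8.0, 8.0]))
```

General banded systems admit the same idea with wider bands (e.g. LAPACK's banded routines); the cost stays linear in the signal length.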
Joint Multicast Beamforming and Antenna Selection
Cited by 5 (0 self)
Abstract—Multicast beamforming exploits subscriber channel state information at the base station to steer the transmission power towards the subscribers, while minimizing interference to other users and systems. Such functionality has been provisioned in the long-term evolution (LTE) enhanced multimedia broadcast/multicast service (eMBMS). As antennas become smaller and cheaper relative to up-conversion chains, transmit antenna selection at the base station becomes increasingly appealing in this context. This paper addresses the problem of joint multicast beamforming and antenna selection for multiple co-channel multicast groups. Whereas this problem (and even plain multicast beamforming) is NP-hard, it is shown that the mixed ℓ1,∞-norm squared is a prudent group-sparsity-inducing convex regularization, in that it naturally yields a suitable semidefinite relaxation, which is further shown to be the Lagrange bidual of the original NP-hard problem. Careful simulations indicate that the proposed algorithm significantly reduces the number of antennas required to meet prescribed service levels, at relatively small excess transmission power. Furthermore, its performance is close to that attained by exhaustive search, at far lower complexity. Extensions to max-min-fair, robust, and capacity-achieving designs are also considered.
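The mixed ℓ1,∞ norm that drives the antenna selection is simple to state: sum, over groups of coefficients (here, one group per antenna across multicast groups), of the maximum absolute value in each group. A minimal sketch with made-up toy weights:

```python
def mixed_l1_inf(groups):
    """Mixed l1,inf norm: sum over groups of the max absolute value within
    each group. Penalizing it drives entire groups to zero at once, which
    is what 'deselects' an antenna in the joint design."""
    return sum(max(abs(v) for v in g) for g in groups)

# toy beamforming weights: rows = antennas, columns = multicast groups
w = [[0.9, -1.2],   # antenna 0 active
     [0.0,  0.0],   # antenna 1 switched off (whole group zero)
     [0.3,  0.1]]   # antenna 2 active
print(mixed_l1_inf(w))  # only active antennas contribute: 1.2 + 0.0 + 0.3
```

Because a group contributes nothing once its largest entry is zero, minimizing this norm trades transmission power against the number of active antennas, exactly the group-sparsity effect described above.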
Sparse Learning-to-Rank via an Efficient Primal-Dual Algorithm
Cited by 4 (0 self)
Abstract—Learning-to-rank for information retrieval has gained increasing interest in recent years. Inspired by the success of sparse models, we consider the problem of sparse learning-to-rank, where the learned ranking models are constrained to have only a few nonzero coefficients. We begin by formulating the sparse learning-to-rank problem as a convex optimization problem with a sparsity-inducing ℓ1 constraint. Since the ℓ1 constraint is non-differentiable, the critical issue arising here is how to solve the optimization problem efficiently. To address this issue, we propose a learning algorithm from the primal-dual perspective. Furthermore, we prove that, after at most O(1/ε) iterations, the proposed algorithm is guaranteed to obtain an ε-accurate solution. This convergence rate is better than that of the popular subgradient descent algorithm, i.e., O(1/ε²). Empirical evaluation on several public benchmark datasets demonstrates the effectiveness of the proposed algorithm: (1) Compared to methods that learn dense models, learning a ranking model with sparsity constraints significantly improves the ranking accuracies. (2) Compared to other methods for sparse learning-to-rank, the proposed algorithm tends to obtain sparser models and shows superior gains in both ranking accuracy and training time. (3) Compared to several state-of-the-art algorithms, the ranking accuracies of the proposed algorithm are very competitive and stable.
Index Terms—learning-to-rank, sparse models, ranking algorithm, Fenchel duality.
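A basic building block when optimizing under an ℓ1 constraint (regardless of the specific primal-dual scheme, which the abstract does not detail) is Euclidean projection onto the ℓ1 ball. A sketch of the standard sort-based projection:

```python
def project_l1_ball(v, z=1.0):
    """Euclidean projection of v onto the l1 ball {x : ||x||_1 <= z},
    via the classic sort-and-threshold method. Runs in O(n log n)."""
    if sum(abs(x) for x in v) <= z:
        return list(v)  # already feasible
    u = sorted((abs(x) for x in v), reverse=True)
    css, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        css += ui
        t = (css - z) / i
        if ui > t:          # last index where this holds fixes the threshold
            theta = t
    sign = lambda x: (x > 0) - (x < 0)
    return [sign(x) * max(abs(x) - theta, 0.0) for x in v]

print(project_l1_ball([3.0, 1.0], z=2.0))  # lands on the l1 ball of radius 2
```

Each projected iterate satisfies the sparsity budget exactly, which is how an ℓ1-constrained ranking model keeps only a few nonzero coefficients.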
Penalty and Shrinkage Functions for Sparse Signal Processing
2013
Cited by 4 (4 self)
The use of sparsity in signal processing frequently calls for the solution to the minimization problem arg min x …
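Although the abstract is truncated, the penalty/shrinkage correspondence it refers to is the map from a penalty φ to its proximal (shrinkage) function s(y) = arg min over x of ½(y−x)² + λφ(x). A sketch that evaluates this numerically by grid search and recovers soft thresholding for φ = |·|:

```python
def shrink(y, lam, phi, grid=None):
    """Numerically evaluate the shrinkage function paired with a penalty
    phi: argmin_x 0.5*(y - x)**2 + lam*phi(x), by brute-force grid search.
    Illustrative only; closed forms exist for common penalties."""
    if grid is None:
        grid = [i / 1000.0 for i in range(-5000, 5001)]  # [-5, 5] in 0.001 steps
    return min(grid, key=lambda x: 0.5 * (y - x) ** 2 + lam * phi(x))

l1 = abs
print(shrink(2.0, 0.5, l1))  # matches soft thresholding: y - lam = 1.5
print(shrink(0.3, 0.5, l1))  # below the threshold: 0.0
```

Swapping in a different φ (log, atan, ℓp penalties, etc.) yields the corresponding shrinkage curve, which is the design space such papers study.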
A Convex Formulation for Learning Scale-Free Networks via Submodular Relaxation
Cited by 4 (0 self)
A key problem in statistics and machine learning is the determination of network structure from data. We consider the case where the structure of the graph to be reconstructed is known to be scale-free. We show that in such cases it is natural to formulate structured, sparsity-inducing priors using submodular functions, and we use their Lovász extension to obtain a convex relaxation. For tractable classes such as Gaussian graphical models, this leads to a convex optimization problem that can be solved efficiently. We show that our method improves the accuracy of reconstructed networks on synthetic data. We also show how our prior encourages scale-free reconstructions on a bioinformatics dataset.
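The Lovász extension mentioned above turns a set function into a function on real vectors via a sorting formula, and the extension is convex exactly when the set function is submodular. A generic sketch of that mechanism (not the paper's specific scale-free prior):

```python
def lovasz_extension(f, x):
    """Lovász extension of a set function f at x, via the greedy/sorting
    formula: visit coordinates in decreasing order of x and weight each
    x_i by the marginal gain f(S_k) - f(S_{k-1}) of adding i."""
    order = sorted(range(len(x)), key=lambda i: -x[i])
    s = set()
    prev_f = f(frozenset(s))
    val = 0.0
    for i in order:
        s.add(i)
        cur_f = f(frozenset(s))
        val += x[i] * (cur_f - prev_f)
        prev_f = cur_f
    return val

card = lambda S: len(S)            # modular: extension reduces to sum(x)
cover = lambda S: 1 if S else 0    # submodular: extension reduces to max(x)
print(lovasz_extension(card, [0.5, -0.2, 0.3]))
print(lovasz_extension(cover, [0.5, -0.2, 0.3]))
```

Replacing a combinatorial prior on edge sets with this extension is what makes the relaxed structure-learning problem convex and hence tractable.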
Structured Penalties for Log-Linear Language Models
Cited by 3 (0 self)
Language models can be formalized as log-linear regression models where the input features represent previously observed contexts up to a certain length m. The complexity of existing algorithms to learn the parameters by maximum likelihood scales linearly in nd, where n is the length of the training corpus and d is the number of observed features. We present a model that grows logarithmically in d, making it possible to efficiently leverage longer contexts. We account for the sequential structure of natural language using tree-structured penalized objectives to avoid overfitting and achieve better generalization.
Learning the structure of mixed graphical models
JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 2014
Cited by 3 (0 self)
We consider the problem of learning the structure of a pairwise graphical model over continuous and discrete variables. We present a new pairwise model for graphical models with both continuous and discrete variables that is amenable to structure learning. In previous work, authors have considered structure learning of Gaussian graphical models and structure learning of discrete models. Our approach is a natural generalization of these two lines of work to the mixed case. The penalization scheme involves a novel symmetric use of the group-lasso norm and follows naturally from a particular parametrization of the model.
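The group-lasso penalty used here (as in the other group-sparse methods on this page) has a closed-form proximal step: each parameter group is scaled toward zero by its ℓ2 norm, so whole groups of edge parameters vanish together. A minimal sketch of that block shrinkage, not this paper's full estimator:

```python
import math

def group_soft_threshold(groups, lam):
    """Prox of the group-lasso penalty lam * sum_g ||x_g||_2: shrink each
    group's l2 norm by lam, zeroing the entire group when its norm is
    below lam. This is what removes whole edges in structure learning."""
    out = []
    for g in groups:
        norm = math.sqrt(sum(v * v for v in g))
        scale = max(1.0 - lam / norm, 0.0) if norm > 0 else 0.0
        out.append([scale * v for v in g])
    return out

# group 0 has norm 5 (survives, shrunk to norm 4); group 1 has norm 0.5 (zeroed)
print(group_soft_threshold([[3.0, 4.0], [0.3, 0.4]], lam=1.0))
```

In a mixed model, a "group" would gather all parameters tying one pair of variables (e.g. a matrix of indicator interactions), so the penalty decides edge presence symmetrically across variable types.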
Hybrid Random/Deterministic Parallel Algorithms for Convex and Nonconvex Big Data Optimization
Cited by 3 (2 self)
We propose a decomposition framework for the parallel optimization of the sum of a differentiable (possibly nonconvex) function and a nonsmooth (possibly nonseparable) convex one. The latter term is usually employed to enforce structure in the solution, typically sparsity. The main contribution of this work is a novel parallel, hybrid random/deterministic decomposition scheme wherein, at each iteration, a subset of (block) variables is updated simultaneously by minimizing a convex surrogate of the original nonconvex function. To tackle huge-scale problems, the (block) variables to be updated are chosen according to a mixed random and deterministic procedure, which captures the advantages of both purely deterministic and purely random update-based schemes. Almost-sure convergence of the proposed scheme is established. Numerical results show that on huge-scale problems the proposed hybrid random/deterministic algorithm compares favorably to random and deterministic schemes on both convex and nonconvex problems.
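The flavor of a mixed random/deterministic block-selection rule can be shown on a toy separable quadratic: each iteration updates one greedily chosen coordinate (largest gradient, the deterministic part) plus one uniformly random coordinate. This is only an illustrative caricature of the selection idea, not the paper's algorithm or its surrogate minimization:

```python
import random

def hybrid_block_descent(target, iters=50, seed=0):
    """Minimize f(x) = sum_i (x_i - t_i)^2 by block-coordinate updates.
    Each iteration picks one coordinate greedily (max |gradient|) and one
    at random, then minimizes f exactly over the selected block."""
    rng = random.Random(seed)
    x = [0.0] * len(target)
    for _ in range(iters):
        grads = [2.0 * (x[i] - target[i]) for i in range(len(x))]
        greedy = max(range(len(x)), key=lambda i: abs(grads[i]))
        picks = {greedy, rng.randrange(len(x))}
        for i in picks:          # exact minimization over the chosen block
            x[i] = target[i]
        if x == list(target):
            break
    return x

print(hybrid_block_descent([1.0, -2.0, 0.5]))  # reaches the minimizer
```

The greedy pick guarantees steady progress while the random pick keeps selection cheap and parallelizable, which is the trade-off the hybrid scheme above is designed to exploit at scale.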