Results 1 - 7 of 7
A primal-dual algorithmic framework for constrained convex minimization
, 2014
Abstract

Cited by 3 (2 self)
We present a primal-dual algorithmic framework to obtain approximate solutions to a prototypical constrained convex optimization problem, and rigorously characterize how common structural assumptions affect the numerical efficiency. Our main analysis technique provides a fresh perspective on Nesterov's excessive gap technique in a structured fashion and unifies it with smoothing and primal-dual methods. For instance, through the choices of a dual smoothing strategy and a center point, our framework subsumes decomposition algorithms, the augmented Lagrangian method, and the alternating direction method of multipliers as special cases, and provides optimal convergence rates on both the primal objective residual and the primal feasibility gap of the iterates in all these cases.
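The abstract names the augmented Lagrangian method as one special case of the framework. As a point of reference, here is a minimal sketch of that classical method (not the paper's framework itself) on the toy problem min ½‖x‖² subject to Ax = b, where the x-subproblem has a closed form; the problem sizes, penalty ρ, and iteration count are illustrative choices, not values from the paper.

```python
import numpy as np

# Classical augmented Lagrangian method on: min 0.5*||x||^2  s.t.  A x = b.
# All sizes and parameters are arbitrary illustrative choices.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
b = rng.standard_normal(3)

rho = 1.0
y = np.zeros(3)                      # Lagrange multipliers
for _ in range(500):
    # x-update: minimize the augmented Lagrangian in x (closed form here)
    x = np.linalg.solve(np.eye(6) + rho * A.T @ A, rho * A.T @ b - A.T @ y)
    # dual ascent on the multipliers, stepping along the feasibility gap
    y += rho * (A @ x - b)

x_star = A.T @ np.linalg.solve(A @ A.T, b)   # minimum-norm solution of Ax=b
print(np.linalg.norm(A @ x - b))             # primal feasibility gap
```

The dual update drives the feasibility gap Ax − b to zero, and x converges to the minimum-norm solution; the paper's framework recovers schemes of this shape through particular dual smoothing and center-point choices.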
Constrained convex minimization via model-based excessive gap
 in Advances in Neural Information Processing Systems (NIPS)
, 2014
Abstract

Cited by 3 (2 self)
We introduce a model-based excessive gap technique to analyze first-order primal-dual methods for constrained convex minimization. As a result, we construct first-order primal-dual methods with optimal convergence rates on the primal objective residual and the primal feasibility gap of their iterates separately. Through a dual smoothing and prox-center selection strategy, our framework subsumes the augmented Lagrangian, alternating direction, and dual fast-gradient methods as special cases, where our rates apply.
Computational Methods for Underdetermined Convolutive Speech Localization and Separation via Model-based Sparse Component Analysis
Abstract
In this paper, the problem of speech source localization and separation from recordings of convolutive underdetermined mixtures is studied. The problem is cast as recovering the spatio-spectral speech information embedded in the compressed measurements of the acoustic field acquired by a microphone array. A model-based sparse component analysis framework is formulated for sparse reconstruction of the speech spectra in a reverberant acoustic environment, resulting in joint localization and separation of the individual sources. We compare and contrast the computational approaches to model-based sparse recovery exploiting spatial sparsity as well as the spectral structures underlying spectrographic representations of speech signals. In this context, we explore identification of the sparsity structures in the auditory and acoustic representation spaces. The auditory structures are formulated upon the principles of structural grouping based on proximity, autoregressive correlation, and harmonicity of the spectral coefficients, and they are incorporated for sparse reconstruction. The acoustic structures are formulated upon the image model of multipath propagation and are exploited to characterize the compressive measurement matrix associated with microphone array recordings. Three approaches to sparse recovery, relying on combinatorial optimization, convex relaxation, and Bayesian methods, are studied and evaluated in thorough experiments. The sparse Bayesian learning …
Forward-Backward Greedy Algorithms for Atomic Norm Regularization
, 2014
Abstract
In many signal processing applications, one aims to reconstruct a signal that has a simple representation with respect to a certain basis or frame. Fundamental elements of the basis known as “atoms” allow us to define “atomic norms” that can be used to construct convex regularizers for the reconstruction problem. Efficient algorithms are available to solve the reconstruction problems in certain special cases, but an approach that works well for general atomic norms remains to be found. This paper describes an optimization algorithm called CoGEnT, which produces solutions with succinct atomic representations for reconstruction problems, generally formulated with atomic-norm constraints. CoGEnT combines a greedy selection scheme based on the conditional gradient approach with a backward (or “truncation”) step that exploits the quadratic nature of the objective to reduce the basis size. We establish convergence properties and validate the algorithm via extensive numerical experiments on a suite of signal processing applications. Our algorithm and analysis are also novel in that they allow for inexact forward steps. In practice, CoGEnT significantly outperforms the basic conditional gradient method, and indeed many methods that are tailored to specific applications, when the truncation steps are defined appropriately. We also introduce several novel applications that are enabled by the atomic-norm framework, including tensor completion, moment problems in signal processing, and graph deconvolution.
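The forward step of the greedy scheme described above is a conditional gradient (Frank-Wolfe) iteration over the atomic set. A minimal sketch of that forward step alone, assuming the atoms are the signed coordinate vectors (so the atomic norm is the ℓ1 norm); the backward truncation step that distinguishes CoGEnT is omitted here, and the problem sizes and radius tau are arbitrary.

```python
import numpy as np

# Plain conditional gradient for: min 0.5*||A x - b||^2  s.t.  ||x||_1 <= tau.
# The l1 ball's atoms are +/- tau * e_i, so the linear subproblem is a
# coordinate argmax. Sizes and tau are illustrative, not from the paper.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 50]] = [2.0, -1.5]
b = A @ x_true
tau = np.abs(x_true).sum()

obj = lambda z: 0.5 * np.linalg.norm(A @ z - b) ** 2
x = np.zeros(100)
obj0 = obj(x)
for _ in range(500):
    g = A.T @ (A @ x - b)
    i = np.argmax(np.abs(g))               # greedy atom selection
    s = np.zeros(100)
    s[i] = -tau * np.sign(g[i])            # best atom of the l1 ball
    d = s - x
    Ad = A @ d
    # exact line search for the quadratic objective, clipped to [0, 1]
    gamma = min(1.0, max(0.0, -(g @ d) / (Ad @ Ad + 1e-12)))
    x += gamma * d
print(obj0, obj(x))
```

Each iterate stays feasible because it is a convex combination of feasible points; the truncation step in CoGEnT would additionally prune atoms from the active set to keep the representation succinct.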
An optimal first-order primal-dual gap reduction framework for constrained convex optimization
Convex Optimization for Big Data
, 2014
Abstract
This article reviews recent advances in convex optimization algorithms for Big Data, which aim to reduce the computational, storage, and communications bottlenecks. We provide an overview of this emerging field, describe contemporary approximation techniques like first-order methods and randomization for scalability, and survey the important role of parallel and distributed computation. The new Big Data algorithms are based on surprisingly simple principles and attain staggering accelerations even on classical problems.

Convex optimization in the wake of Big Data

Convexity in signal processing dates back to the dawn of the field, with problems like least-squares being ubiquitous across nearly all subareas. However, the importance of convex formulations and optimization has increased even more dramatically in the last decade due to the rise of new theory for structured sparsity and rank minimization, and successful statistical learning models like support vector machines. These formulations are now employed in a wide variety of signal processing applications including compressive sensing, medical imaging, geophysics, and bioinformatics [1–4]. There are several important reasons for this explosion of interest, with two of the most obvious ones being the existence of efficient algorithms for computing globally optimal solutions and the ability to use convex geometry to prove useful properties about the solution [1, 2]. A unified convex formulation also transfers useful knowledge across different disciplines, such as sampling and computation, that focus on different aspects of the same underlying mathematical problem [5]. However, the renewed popularity of convex optimization places convex algorithms under tremendous pressure to accommodate increasingly large data sets and to solve problems in unprecedented dimensions. Internet, text, and imaging problems (among a myriad of other examples) no longer produce data sizes from megabytes to gigabytes, but rather from terabytes to exabytes. Despite
Yorktown Heights, NY
Abstract
We introduce a new convex formulation for stable principal component pursuit (SPCP) to decompose noisy signals into low-rank and sparse representations. For numerical solutions of our SPCP formulation, we first develop a convex variational framework and then accelerate it with quasi-Newton methods. We show, via synthetic and real data experiments, that our approach offers advantages over the classical SPCP formulations in scalability and practical parameter selection.
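For context on the low-rank plus sparse decomposition the abstract describes, here is a hedged baseline, not the paper's new formulation or its quasi-Newton acceleration: exact alternating minimization on a penalized SPCP-style model min ‖L‖* + λ‖S‖₁ + (μ/2)‖L + S − M‖²_F, whose block updates are singular-value thresholding (for L) and entrywise soft thresholding (for S). All sizes and parameters are illustrative choices.

```python
import numpy as np

# Alternating minimization for a penalized low-rank + sparse decomposition.
# Build a synthetic M = (rank-2 matrix) + (sparse spikes), then split it.
rng = np.random.default_rng(2)
n = 30
L_true = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))   # rank 2
S_true = np.zeros((n, n))
S_true.flat[rng.choice(n * n, 30, replace=False)] = 5 * rng.standard_normal(30)
M = L_true + S_true

lam, mu = 1.0 / np.sqrt(n), 0.5
L = np.zeros((n, n))
S = np.zeros((n, n))
for _ in range(100):
    # L-update: singular-value thresholding of M - S at level 1/mu
    U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
    L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
    # S-update: entrywise soft thresholding of M - L at level lam/mu
    R = M - L
    S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
print(np.linalg.matrix_rank(L), np.count_nonzero(S))
```

Each block update has a closed form, and the objective decreases monotonically; the classical SPCP formulations the abstract compares against constrain the residual ‖L + S − M‖_F instead of penalizing it.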