Results 1–10 of 120
Consistency of the group lasso and multiple kernel learning
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2007
Abstract

Cited by 158 (26 self)
We consider the least-square regression problem with regularization by a block ℓ1-norm, i.e., a sum of Euclidean norms over spaces of dimensions larger than one. This problem, referred to as the group Lasso, extends the usual regularization by the ℓ1-norm where all spaces have dimension one, where it is commonly referred to as the Lasso. In this paper, we study the asymptotic model consistency of the group Lasso. We derive necessary and sufficient conditions for the consistency of the group Lasso under practical assumptions, such as model misspecification. When the linear predictors and Euclidean norms are replaced by functions and reproducing kernel Hilbert norms, the problem is usually referred to as multiple kernel learning and is commonly used for learning from heterogeneous data sources and for nonlinear variable selection. Using tools from functional analysis, and in particular covariance operators, we extend the consistency results to this infinite-dimensional case and also propose an adaptive scheme to obtain a consistent model estimate, even when the necessary condition required for the non-adaptive scheme is not satisfied.
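The block-norm penalty described above has a simple closed-form proximal step, block soft-thresholding, which either rescales a whole group or zeroes it jointly. A minimal pure-Python sketch, not code from the paper; the vectors and λ values below are illustrative:

```python
import math

def group_soft_threshold(v, lam):
    """Proximal operator of lam * ||v||_2 for one group:
    shrinks the whole block toward zero, and zeroes it out
    jointly when its Euclidean norm falls below lam."""
    norm = math.sqrt(sum(x * x for x in v))
    if norm <= lam:
        return [0.0] * len(v)
    scale = 1.0 - lam / norm
    return [scale * x for x in v]
```

Applied group by group inside a proximal-gradient loop, this step is what makes the group Lasso select or discard whole blocks of variables at once.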
An interior-point method for large-scale ℓ1-regularized logistic regression
 Journal of Machine Learning Research
, 2007
Abstract

Cited by 158 (5 self)
Logistic regression with ℓ1 regularization has been proposed as a promising method for feature selection in classification problems. In this paper we describe an efficient interior-point method for solving large-scale ℓ1-regularized logistic regression problems. Small problems with up to a thousand or so features and examples can be solved in seconds on a PC; medium-sized problems, with tens of thousands of features and examples, can be solved in tens of seconds (assuming some sparsity in the data). A variation on the basic method that uses a preconditioned conjugate gradient method to compute the search step can solve very large problems, with a million features and examples (e.g., the 20 Newsgroups data set), in a few minutes, on a PC. Using warm-start techniques, a good approximation of the entire regularization path can be computed much more efficiently than by solving a family of problems independently.
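The paper's solver is an interior-point method; as a hedged, much simpler illustration of the same objective, here is a proximal-gradient (ISTA) sketch of ℓ1-regularized logistic regression in pure Python. The step size, iteration count, and toy data are assumptions, not values from the paper:

```python
import math

def soft(x, t):
    """Scalar soft-thresholding operator."""
    return math.copysign(max(abs(x) - t, 0.0), x)

def l1_logreg(X, y, lam, step=0.1, iters=2000):
    """Proximal-gradient solver for
    (1/m) * sum_i log(1 + exp(-y_i <x_i, w>)) + lam * ||w||_1."""
    m, n = len(X), len(X[0])
    w = [0.0] * n
    for _ in range(iters):
        grad = [0.0] * n
        for xi, yi in zip(X, y):
            z = yi * sum(a * b for a, b in zip(xi, w))
            c = -yi / (1.0 + math.exp(z))  # derivative of the logistic loss
            for j in range(n):
                grad[j] += c * xi[j] / m
        w = [soft(w[j] - step * grad[j], step * lam) for j in range(n)]
    return w
```

On data where one feature is informative and another carries no signal, the ℓ1 term drives the useless weight exactly to zero, which is the feature-selection effect the abstract refers to.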
A Sparse Signal Reconstruction Perspective for Source Localization With Sensor Arrays
 M.S. thesis, Mass. Inst. Technol.
, 2003
Abstract

Cited by 118 (5 self)
We present a source localization method based on a sparse representation of sensor measurements with an overcomplete basis composed of samples from the array manifold. We enforce sparsity by imposing penalties based on the ℓ1-norm. A number of recent theoretical results on the sparsifying properties of ℓ1 penalties justify this choice. Explicitly enforcing the sparsity of the representation is motivated by a desire to obtain a sharp estimate of the spatial spectrum that exhibits super-resolution. We propose to use the singular value decomposition (SVD) of the data matrix to summarize multiple time or frequency samples. Our formulation leads to an optimization problem, which we solve efficiently in a second-order cone (SOC) programming framework by an interior-point implementation. We propose a grid refinement method to mitigate the effects of limiting estimates to a grid of spatial locations and introduce an automatic selection criterion for the regularization parameter involved in our approach. We demonstrate the effectiveness of the method on simulated data by plots of spatial spectra and by comparing the estimator variance to the Cramér–Rao bound (CRB). We observe that our approach has a number of advantages over other source localization techniques, including increased resolution and improved robustness to noise, to limitations in data quantity, and to correlation of the sources, as well as not requiring an accurate initialization. Index Terms—Direction-of-arrival estimation, overcomplete representation, sensor array processing, source localization, sparse representation, super-resolution.
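The paper solves a second-order cone program over a complex array manifold; as a rough real-valued toy of the same idea, here is sparse recovery over an overcomplete grid of basis vectors via iterative soft-thresholding. The dictionary, λ, and step size are illustrative assumptions, and a single snapshot stands in for the SVD-summarized data:

```python
import math

def soft(x, t):
    """Scalar soft-thresholding operator."""
    return math.copysign(max(abs(x) - t, 0.0), x)

def ista(A, b, lam, step, iters=3000):
    """Minimize 0.5*||A s - b||^2 + lam*||s||_1 by iterative
    soft-thresholding; the columns of A play the role of
    sampled array-manifold (steering) vectors on a grid."""
    m, n = len(A), len(A[0])
    s = [0.0] * n
    for _ in range(iters):
        # residual r = A s - b and gradient g = A^T r
        r = [sum(A[i][j] * s[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        s = [soft(s[j] - step * g[j], step * lam) for j in range(n)]
    return s
```

With a measurement generated from a single dictionary column, the recovered coefficient vector concentrates on that column even when a correlated extra column is present, which is the sharp-spectrum behavior the abstract describes.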
Robust Portfolio Selection Problems
 Mathematics of Operations Research
, 2001
Abstract

Cited by 100 (8 self)
In this paper we show how to formulate and solve robust portfolio selection problems. The objective of these robust formulations is to systematically combat the sensitivity of the optimal portfolio to statistical and modeling errors in the estimates of the relevant market parameters. We introduce "uncertainty structures" for the market parameters and show that the robust portfolio selection problems corresponding to these uncertainty structures can be reformulated as second-order cone programs and, therefore, the computational effort required to solve them is comparable to that required for solving convex quadratic programs. Moreover, we show that these uncertainty structures correspond to confidence regions associated with the statistical procedures used to estimate the market parameters. We demonstrate a simple recipe for efficiently computing robust portfolios given raw market data and a desired level of confidence.
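One standard uncertainty structure of this kind is an ellipsoid around the estimated mean returns, for which the worst-case expected return has a closed form that is exactly a second-order cone expression. A pure-Python sketch under that assumed structure (the paper's actual uncertainty structures may differ; all numbers are illustrative):

```python
import math

def worst_case_return(mu_hat, S_half, w, kappa):
    """Worst-case expected return of portfolio w when the mean
    vector ranges over the ellipsoid
        { mu_hat + kappa * S_half @ u : ||u||_2 <= 1 }.
    The minimum over the ellipsoid is
        mu_hat . w  -  kappa * ||S_half^T w||_2,
    a second-order cone expression in w."""
    base = sum(m * x for m, x in zip(mu_hat, w))
    n = len(w)
    Stw = [sum(S_half[i][j] * w[i] for i in range(n)) for j in range(n)]
    return base - kappa * math.sqrt(sum(v * v for v in Stw))
```

Maximizing this quantity over the portfolio simplex is then a second-order cone program, which is why the robust problem stays about as cheap as a convex quadratic program.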
Complete search in continuous global optimization and constraint satisfaction
 Acta Numerica 13
, 2004
"... A chapter for ..."
Uncertain convex programs: Randomized solutions and confidence levels
 MATH. PROGRAM., SER. A (2004)
, 2004
Abstract

Cited by 63 (6 self)
Many engineering problems can be cast as optimization problems subject to convex constraints that are parameterized by an uncertainty or ‘instance’ parameter. Two main approaches are generally available to tackle constrained optimization problems in the presence of uncertainty: robust optimization and chance-constrained optimization. Robust optimization is a deterministic paradigm in which one seeks a solution that simultaneously satisfies all possible constraint instances. In chance-constrained optimization a probability distribution is instead assumed on the uncertain parameters, and the constraints are enforced up to a pre-specified level of probability. Unfortunately, however, both approaches lead to computationally intractable problem formulations. In this paper, we consider an alternative ‘randomized’ or ‘scenario’ approach for dealing with uncertainty in optimization, based on constraint sampling. In particular, we study the constrained optimization problem obtained by taking into account only a finite set of N constraints, chosen at random among the possible constraint instances of the uncertain problem. We show that the resulting randomized solution fails to satisfy only a small portion of the original constraints, provided that a sufficient number of samples is drawn. Our key result is an efficient and explicit bound on the measure (probability or volume) of the original constraints that are possibly violated by the randomized solution. This volume rapidly decreases to zero as N is increased.
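A one-dimensional caricature of the scenario approach: sample N instances of the constraint x ≥ a, keep only those, and then estimate how much of the original constraint mass the resulting solution violates. Everything here (the distribution, the trial counts) is an illustrative assumption, not the paper's explicit bound:

```python
import random

def scenario_solution(samples):
    """Scenario relaxation of  min x  s.t.  x >= a  for every
    instance a: keep only the sampled constraints, so the
    optimum is simply the largest sampled a."""
    return max(samples)

def violation_estimate(x_star, draw, trials=100_000, rng=None):
    """Monte Carlo estimate of the probability mass of
    original constraints violated by the scenario solution."""
    rng = rng or random.Random(0)
    return sum(draw(rng) > x_star for _ in range(trials)) / trials
```

Drawing more scenarios pushes the scenario optimum toward the robust one, so the estimated violated mass shrinks, which is the qualitative content of the paper's N-dependent bound.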
An interior-point method for large-scale ℓ1-regularized logistic regression
 JOURNAL OF MACHINE LEARNING RESEARCH
, 2007
Abstract

Cited by 56 (4 self)
Recently, a lot of attention has been paid to ℓ1-regularization-based methods for sparse signal reconstruction (e.g., basis pursuit denoising and compressed sensing) and feature selection (e.g., the Lasso algorithm) in signal processing, statistics, and related fields. These problems can be cast as ℓ1-regularized least-squares programs (LSPs), which can be reformulated as convex quadratic programs and then solved by several standard methods, such as interior-point methods, at least for small and medium-sized problems. In this paper, we describe a specialized interior-point method for solving large-scale ℓ1-regularized LSPs that uses the preconditioned conjugate gradients algorithm to compute the search direction. The interior-point method can solve large sparse problems, with a million variables and observations, in a few tens of minutes on a PC. It can efficiently solve large dense problems that arise in sparse signal recovery with orthogonal transforms by exploiting fast algorithms for these transforms. The method is illustrated on a magnetic resonance imaging data set.
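The inner step of such a truncated-Newton interior-point method is a symmetric positive-definite linear solve, which the paper handles with preconditioned conjugate gradients. A plain, unpreconditioned CG sketch in pure Python (the test matrix is illustrative, not from the paper):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Plain conjugate gradients for A x = b with A symmetric
    positive definite; the kind of inner solve used (with a
    preconditioner) to compute an interior-point search direction."""
    n = len(b)
    x = [0.0] * n
    r = b[:]           # residual b - A x for x = 0
    p = r[:]
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

Because each iteration only needs matrix-vector products, the same loop works when A is applied implicitly via a fast transform, which is how the dense orthogonal-transform case stays tractable.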
Grasp analysis as linear matrix inequality problems
 IEEE Transactions on Robotics and Automation
, 2000
Optimization over state feedback policies for robust control with constraints
, 2005
Abstract

Cited by 33 (4 self)
This paper is concerned with the optimal control of linear discrete-time systems, which are subject to unknown but bounded state disturbances and mixed constraints on the state and input. It is shown that the class of admissible affine state feedback control policies with memory of prior states is equivalent to the class of admissible feedback policies that are affine functions of the past disturbance sequence. This result implies that a broad class of constrained finite horizon robust and optimal control problems, where the optimization is over affine state feedback policies, can be solved in a computationally efficient fashion using convex optimization methods without having to introduce any conservatism in the problem formulation. This equivalence result is used to design a robust receding horizon control (RHC) state feedback policy such that the closed-loop system is input-to-state stable (ISS) and the constraints are satisfied for all time and for all allowable disturbance sequences. The cost that is chosen to be minimized in the associated finite horizon optimal control problem is a quadratic function in the disturbance-free state and input sequences. It is shown that the value of the receding horizon control law can be calculated at each sample instant using a single, tractable and convex quadratic program (QP) if the disturbance set is polytopic or given by a 1-norm or ∞-norm bound, or a second-order cone program (SOCP) if the disturbance set is ellipsoidal or given by a 2-norm bound.
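The equivalence between state feedback and disturbance feedback can be checked on a scalar toy system x_{t+1} = x_t + u_t + w_t: the state feedback u_t = k·x_t can be rewritten as an affine function of the past disturbances. A pure-Python sketch (the system, gain, and disturbances below are assumptions for illustration, not the paper's setup):

```python
def simulate_state_fb(x0, k, w):
    """Apply the state feedback u_t = k * x_t to the scalar
    system x_{t+1} = x_t + u_t + w_t."""
    x, xs, us = x0, [x0], []
    for wt in w:
        u = k * x
        us.append(u)
        x = x + u + wt
        xs.append(x)
    return xs, us

def disturbance_fb_policy(x0, k, w):
    """Equivalent affine disturbance-feedback policy:
    since x_t = (1+k)**t * x0 + sum_j (1+k)**(t-1-j) * w_j,
    the same inputs are u_t = k*(1+k)**t * x0
                            + sum_{j<t} k*(1+k)**(t-1-j) * w_j."""
    us = []
    for t in range(len(w)):
        u = k * (1 + k) ** t * x0
        u += sum(k * (1 + k) ** (t - 1 - j) * w[j] for j in range(t))
        us.append(u)
    return us
```

The payoff of the reparameterization is that the inputs become affine in the disturbances with fixed coefficients, so the robust constraints are convex in the decision variables (v, M), whereas optimizing directly over state-feedback gains is not convex.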
Low-authority controller design via convex optimization
 AIAA Journal of Guidance, Control, and Dynamics
, 1999
Abstract

Cited by 30 (14 self)
In this paper we address the problem of low-authority controller (LAC) design. The premise is that the actuators have limited authority, and hence cannot significantly shift the eigenvalues of the system. As a result, the closed-loop eigenvalues can be well approximated analytically using perturbation theory. These analytical approximations may suffice to predict the behavior of the closed-loop system in practical cases, and will provide at least a very strong rationale for the first step in the design iteration loop. We will show that LAC design can be cast as convex optimization problems that can be solved efficiently in practice using interior-point methods. Also, we will show that by optimizing the ℓ1 norm of the feedback gains, we can arrive at sparse designs, i.e., designs in which only a small number of the control gains are nonzero. Thus, in effect, we can also solve actuator/sensor placement or controller architecture design problems. Keywords: Low-authority control, actuator/sensor placement, linear operator perturbation theory, convex optimization, second-order cone programming, semidefinite programming, linear matrix inequality.