Results 1–10 of 64
Regression Shrinkage and Selection Via the Lasso
 JOURNAL OF THE ROYAL STATISTICAL SOCIETY, SERIES B
, 1994
"... We propose a new method for estimation in linear models. The "lasso" minimizes the residual sum of squares subject to the sum of the absolute value of the coefficients being less than a constant. Because of the nature of this constraint it tends to produce some coefficients that are exactl ..."
Cited by 4212 (49 self)
an interesting relationship with recent work in adaptive function estimation by Donoho and Johnstone. The lasso idea is quite general and can be applied in a variety of statistical models: extensions to generalized regression models and tree-based models are briefly described.
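The lasso's tendency to produce coefficients that are exactly zero is easiest to see in the orthonormal-design special case, where each lasso coefficient is simply the soft-thresholded OLS coefficient. A minimal sketch (assuming standardized, orthonormal predictors and the Lagrangian form of the constraint; the data values are illustrative):

```python
def soft_threshold(b, lam):
    """Soft-thresholding operator S(b, lam) = sign(b) * max(|b| - lam, 0).

    For an orthonormal design this is exactly the lasso estimate obtained
    from the OLS coefficient b at penalty level lam, which is why some
    coefficients come out exactly zero rather than merely small.
    """
    if b > lam:
        return b - lam
    if b < -lam:
        return b + lam
    return 0.0

# OLS coefficients: large ones are shrunk toward zero, small ones zeroed out.
ols = [2.5, -0.3, 1.1, 0.05]
lasso = [soft_threshold(b, 0.5) for b in ols]
print(lasso)
```

This contrasts with ridge regression, whose shrinkage is proportional and never sets a coefficient exactly to zero.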
The Gamma Lasso
"... faculty.chicagobooth.edu/matt.taddy This article describes a very fast algorithm for obtaining continuous regularization paths corresponding to cost functions spanning the range of concavity between L0 and L1 norms. The ‘gamma lasso’ heuristic does L1 (lasso) penalized regression estimation on a g ..."
A Component Lasso
, 2013
"... We propose a new sparse regression method called the component lasso, based on a simple idea. The method uses the connected-components structure of the sample covariance matrix to split the problem into smaller ones. It then applies the lasso to each subproblem separately, obtaining a coefficient v ..."
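The splitting step the snippet describes amounts to a connected-components computation on the sample covariance matrix. A sketch of that step only (the threshold `tol` and plain BFS are illustrative choices, not the paper's exact procedure):

```python
from collections import deque

def covariance_blocks(cov, tol=1e-8):
    """Group variable indices into connected components of the graph whose
    edges link variables i, j with |cov[i][j]| > tol. Each returned block
    can then be fit by a separate, smaller lasso problem."""
    p = len(cov)
    seen = [False] * p
    blocks = []
    for start in range(p):
        if seen[start]:
            continue
        comp, queue = [], deque([start])
        seen[start] = True
        while queue:
            i = queue.popleft()
            comp.append(i)
            for j in range(p):
                if not seen[j] and abs(cov[i][j]) > tol:
                    seen[j] = True
                    queue.append(j)
        blocks.append(sorted(comp))
    return blocks

# A block-diagonal covariance splits into two independent subproblems.
cov = [[1.0, 0.4, 0.0],
       [0.4, 1.0, 0.0],
       [0.0, 0.0, 1.0]]
print(covariance_blocks(cov))   # → [[0, 1], [2]]
```

Solving the lasso separately within each block cannot mix variables across blocks, which is what makes the decomposition exact when the covariance truly is block-diagonal.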
Variable Selection for Outliers with the LASSO
"... Suppose one or more observations in a multivariate data set have been flagged as outliers, e.g. by using a robust covariance estimator. Then it is quite natural to wonder which variables contribute the most to this outlyingness, especially if the dimension of the data is rather high. A straightforwa ..."
straightforward idea is to check the coefficients of the univariate direction for which the standardized distance between the projected outlier and a projected multivariate location estimate is maximal. However, this strategy comes with a few drawbacks. The coefficients, for instance, depend highly
Robust regression shrinkage and consistent variable selection via the LAD-Lasso
 JOURNAL OF BUSINESS AND ECONOMIC STATISTICS
, 2005
"... The least absolute deviation (LAD) regression is a useful method for robust regression, and the least absolute shrinkage and selection operator (lasso) is a popular choice for shrinkage estimation and variable selection. In this article we combine these two classical ideas to produce the LAD-la ..."
Cited by 58 (7 self)
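The combined criterion is just the LAD loss plus the lasso penalty. A minimal sketch of evaluating it, with an illustrative one-dimensional grid search rather than the paper's estimation algorithm (all data values are made up):

```python
def lad_lasso_objective(beta, X, y, lam):
    """LAD-lasso criterion: sum_i |y_i - x_i' beta| + lam * sum_j |beta_j|.

    The absolute-value loss bounds each observation's influence, so a
    gross outlier cannot dominate the fit the way it does under squared error.
    """
    fit = sum(abs(yi - sum(b * xij for b, xij in zip(beta, xi)))
              for xi, yi in zip(X, y))
    return fit + lam * sum(abs(b) for b in beta)

# Toy data with one gross outlier in the response.
X = [[1.0], [2.0], [3.0], [4.0]]
y = [1.1, 1.9, 3.2, 40.0]   # last response is an outlier
best = min((round(b / 100, 2) for b in range(-500, 501)),
           key=lambda b: lad_lasso_objective([b], X, y, lam=1.0))
print(best)   # slope near 1, largely ignoring the outlier
```

Under squared-error loss the same data would pull the slope far above 1 toward the outlier.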
An Ordered Lasso and Sparse Time-Lagged Regression
"... We consider a regression scenario where it is natural to impose an order constraint on the coefficients. We propose an order-constrained version of ℓ1-regularized regression (lasso) for this problem, and show how to solve it efficiently using the well-known Pool Adjacent Violators Algorithm as its ..."
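The Pool Adjacent Violators Algorithm mentioned in the snippet is simple enough to sketch directly. This is the standard unweighted, non-decreasing version, not the authors' full ordered-lasso solver:

```python
def pava(y):
    """Pool Adjacent Violators: returns the non-decreasing sequence
    minimizing squared error to y, by repeatedly pooling adjacent
    blocks whose means violate the order constraint."""
    blocks = []                       # each block is [sum, count]
    for v in y:
        blocks.append([v, 1])
        # Pool while the previous block's mean exceeds the new one's
        # (cross-multiplied to avoid division inside the loop).
        while len(blocks) > 1 and \
                blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)       # each pooled block gets its mean
    return fit

print(pava([1, 3, 2, 4]))   # → [1.0, 2.5, 2.5, 4.0]
```

The single left-to-right pass with backward pooling runs in linear time, which is what makes PAVA attractive as the inner engine of an order-constrained lasso.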
Variable Selection Incorporating Prior Constraint Information into Lasso
, 705
"... We propose a variable selection procedure incorporating prior constraint information into the lasso. The proposed procedure combines the sample and prior information, and selects significant variables for responses in a narrower region where the true parameters lie. It increases the efficiency to choo ..."
information can also be used for other modified lasso procedures. Some examples are used to illustrate the idea of incorporating prior constraint information in variable selection procedures.
The LASSO risk: asymptotic results and real world examples
"... We consider the problem of learning a coefficient vector x0 ∈ R^N from noisy linear observations y = Ax0 + w ∈ R^n. In many contexts (ranging from model selection to image processing) it is desirable to construct a sparse estimator x̂. In this case, a popular approach consists in solving an ℓ1-penali ..."
Cited by 2 (1 self)
is the first rigorous derivation of an explicit formula for the asymptotic mean square error of the LASSO for random instances. The proof technique is based on the analysis of AMP (approximate message passing), a recently developed efficient algorithm inspired by ideas from graphical models. Through simulations on real data matrices
LASSO ISOtone for High Dimensional Additive Isotonic Regression
, 2010
"... Additive isotonic regression attempts to determine the relationship between a multidimensional observation variable and a response, under the constraint that the estimate is the additive sum of univariate component effects that are monotonically increasing. In this article, we present a new method ..."
Cited by 1 (0 self)
for such regression called LASSO Isotone (LISO). LISO adapts ideas from sparse linear modelling to additive isotonic regression. Thus, it is viable in many situations with high-dimensional predictor variables, where selection of significant versus insignificant variables is required. We suggest an algorithm
Strong Rules for Discarding Predictors in Lasso-type Problems
"... “God knows the last thing we need is another algorithm for the lasso” Stephen Boyd, Sept 28, 2010. This is not quite a talk about algorithms for the lasso, but about ideas for speeding up existing algorithms. It also reveals interesting aspects of convex statistical problems. Top 7 reasons why this Lasso/L ..."
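As a sketch of the screening idea behind this work: the basic (global) strong rule discards predictor j whenever |x_j'y| < 2λ − λ_max, where λ_max = max_j |x_j'y| is the smallest penalty at which all coefficients are zero. A minimal illustration, assuming standardized predictors (the inner-product values are made up):

```python
def strong_rule_screen(xty, lam):
    """Global strong rule for the lasso at penalty lam: given the inner
    products x_j'y (standardized X), keep predictor j only when
    |x_j'y| >= 2*lam - lam_max. The rule can occasionally discard an
    active predictor, so the kept set should be verified against the
    KKT conditions after fitting."""
    lam_max = max(abs(c) for c in xty)   # smallest lam giving the all-zero fit
    return [j for j, c in enumerate(xty) if abs(c) >= 2 * lam - lam_max]

xty = [5.0, 0.5, 3.0, 0.1]
print(strong_rule_screen(xty, lam=4.0))   # → [0, 2]: keep j with |x_j'y| >= 3.0
```

Fitting the lasso only on the surviving predictors is what yields the speedups the talk describes, since the discarded coordinates never enter the optimization.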