@techreport{Zhao04boostedlasso,
  author      = {Peng Zhao and Bin Yu},
  title       = {Boosted Lasso},
  institution = {},
  year        = {2004}
}

Abstract

In this paper, we propose the Boosted Lasso (BLasso) algorithm, which connects the Boosting algorithm with the Lasso method. BLasso is derived as a coordinate descent method with a fixed step size applied to the general Lasso loss function (an L1-penalized convex loss). It consists of both a forward step and a backward step. The forward step is similar to Boosting and Forward Stagewise Fitting, but the backward step is new and makes the Boosting path approximate the Lasso path. When the number of base learners is finite and the Hessian of the loss function is bounded, the BLasso path is shown to converge to the Lasso path as the step size goes to zero. For cases with a large number of base learners, our simulations show that, because BLasso approximates the Lasso path, its model estimates are sparser than those of Forward Stagewise Fitting, with equivalent or better prediction performance when the true model is sparse and there are more predictors than observations. In addition, we extend BLasso to minimize a general convex loss penalized by a general convex function. Since BLasso relies only on differences, not derivatives, this extension yields a simple off-the-shelf algorithm for tracing the solution paths of regularization problems.
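The forward/backward structure described above can be illustrated with a small sketch. This is not the authors' reference implementation; it assumes squared-error loss with the coordinates as base learners, and the function name `blasso`, the step size `eps`, and the tolerance `xi` are illustrative choices. A forward step takes the best fixed-size coordinate move; a backward step shrinks an active coefficient toward zero whenever that lowers the current L1-penalized loss.

```python
import numpy as np

def blasso(X, y, eps=0.05, n_steps=2000, xi=1e-10):
    """Sketch of the BLasso path for squared-error loss (an
    illustrative assumption; the paper treats general convex losses)."""
    n, p = X.shape
    loss = lambda b: 0.5 * np.sum((y - X @ b) ** 2)
    beta = np.zeros(p)

    # Initial forward step: best single coordinate move of size eps.
    cand = [(loss(np.eye(p)[j] * s * eps), j, s)
            for j in range(p) for s in (-1.0, 1.0)]
    L_new, j, s = min(cand)
    beta[j] = s * eps
    lam = (loss(np.zeros(p)) - L_new) / eps  # current regularization level
    path = [(lam, beta.copy())]

    for _ in range(n_steps):
        L_cur = loss(beta)

        # Backward step: shrink an active coordinate toward zero if the
        # penalized loss L + lam * ||beta||_1 drops by more than xi
        # (the L1 penalty falls by lam * eps, so compare loss changes).
        take_backward = False
        active = np.nonzero(beta)[0]
        if active.size:
            back = []
            for j in active:
                b = beta.copy()
                b[j] -= np.sign(b[j]) * eps
                back.append((loss(b), b))
            Lb, bb = min(back, key=lambda t: t[0])
            take_backward = (Lb - L_cur) < lam * eps - xi

        if take_backward:
            beta = bb
        else:
            # Forward step: best coordinate move of size eps in either sign.
            fwd = []
            for j in range(p):
                for s in (-1.0, 1.0):
                    b = beta.copy()
                    b[j] += s * eps
                    fwd.append((loss(b), b))
            Lf, bf = min(fwd, key=lambda t: t[0])
            if L_cur - Lf <= 0:
                break  # no forward move improves the fit; path complete
            beta = bf
            # Relax lam once the forward gain no longer pays for a step.
            lam = min(lam, (L_cur - Lf) / eps)

        path.append((lam, beta.copy()))
        if lam <= 0:
            break
    return path
```

Because every decision uses only loss differences at perturbed coefficient vectors, never gradients, the same loop structure carries over to the general convex-loss, convex-penalty extension mentioned in the abstract.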