Results 1–10 of 12
Asymptotics for Lasso-type estimators
, 2000
Cited by 138 (3 self)
Abstract:
In this paper, we consider the asymptotic behaviour of regression estimators that minimize the residual sum of squares plus a penalty proportional to ...
Sparsity and smoothness via the fused lasso
 Journal of the Royal Statistical Society Series B
, 2005
Cited by 132 (12 self)
Abstract:
The lasso (Tibshirani 1996) penalizes a least squares regression by the sum of the absolute values (L1 norm) of the coefficients. The form of this penalty encourages sparse solutions, that is, having many coefficients equal to zero. Here we propose the “fused lasso”, a generalization of the lasso designed for problems with features that can be ordered in some meaningful way. The fused lasso penalizes both the L1 norm of the coefficients and their successive differences. Thus it encourages both sparsity ...
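The penalty this abstract describes is easy to write down directly. The sketch below (my own illustration, not code from the paper; function and variable names are made up) evaluates the fused-lasso objective: residual sum of squares plus an L1 penalty on the coefficients plus an L1 penalty on their successive differences. Note that a locally constant coefficient profile pays a smaller fusion penalty than an oscillating one.

```python
import numpy as np

def fused_lasso_objective(y, X, beta, lam1, lam2):
    """Least squares loss plus the two fused-lasso penalty terms:
    lam1 * sum(|beta_j|) encourages sparse coefficients;
    lam2 * sum(|beta_{j+1} - beta_j|) encourages locally constant profiles."""
    rss = np.sum((y - X @ beta) ** 2)
    sparsity = lam1 * np.sum(np.abs(beta))
    fusion = lam2 * np.sum(np.abs(np.diff(beta)))
    return rss + sparsity + fusion

# Two candidate coefficient vectors with the same L1 norm and the same fit:
beta_flat = np.array([1.0, 1.0, 1.0, 0.0, 0.0])    # one jump -> fusion penalty 1
beta_wiggly = np.array([1.0, 0.0, 1.0, 0.0, 1.0])  # four jumps -> fusion penalty 4
X = np.eye(5)
y = np.ones(5)
print(fused_lasso_objective(y, X, beta_flat, 0.1, 0.1))
print(fused_lasso_objective(y, X, beta_wiggly, 0.1, 0.1))
```

The piecewise-constant vector attains a strictly smaller objective, which is exactly the behaviour ("local constancy") the penalty is designed to reward.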
Consistency of trace norm minimization
 Journal of Machine Learning Research
, 2008
Cited by 40 (6 self)
Abstract:
Regularization by the sum of singular values, also referred to as the trace norm, is a popular technique for estimating low rank rectangular matrices. In this paper, we extend some of the consistency results of the Lasso to provide necessary and sufficient conditions for rank consistency of trace norm minimization with the square loss. We also provide an adaptive version that is rank consistent even when the necessary condition for the non-adaptive version is not fulfilled.
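The trace norm mentioned above (also called the nuclear norm) is simply the sum of a matrix's singular values. A minimal numpy sketch, purely illustrative and not the paper's estimator, with all names my own:

```python
import numpy as np

def trace_norm(M):
    """Sum of singular values of a rectangular matrix (trace/nuclear norm)."""
    return np.linalg.svd(M, compute_uv=False).sum()

# A rank-1 matrix u @ v has a single nonzero singular value ||u|| * ||v||:
u = np.array([[1.0], [2.0]])
v = np.array([[3.0, 0.0, 4.0]])
M = u @ v  # 2x3 matrix of rank 1
print(trace_norm(M))  # equals sqrt(1 + 4) * sqrt(9 + 16) = 5 * sqrt(5)
```

Because it is the L1 norm of the singular-value vector, penalizing it drives some singular values to exactly zero, i.e. it encourages low rank, the matrix analogue of the Lasso's sparsity.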
Extremal Quantile Regression
 Annals of Statistics
, 2005
Cited by 15 (2 self)
Abstract:
Quantile regression is an important tool for estimation of conditional quantiles of a response Y given a vector of covariates X. It can be used to measure the effect of covariates not only in the center of a distribution, but also in the upper and lower tails. This paper develops a theory of quantile regression in the tails. Specifically, it obtains the large sample properties of extremal (extreme order and intermediate order) quantile regression estimators for the linear quantile regression model with the tails restricted to the domain of minimum attraction and closed under tail equivalence across regressor values. This modelling setup combines restrictions of extreme value theory with leading homoscedastic and heteroscedastic linear specifications of regression analysis. In large samples, extreme order regression quantiles converge weakly to argmin functionals of stochastic integrals of Poisson processes that depend on regressors, while intermediate regression quantiles and their functionals converge to normal vectors with variance matrices dependent on the tail parameters and the regressor design.
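The quantile regression estimators discussed above are built on the Koenker-Bassett "check" loss. As a small illustrative sketch (names and data are mine, not from the paper): minimizing the check loss over a constant recovers the corresponding sample quantile, which is the intercept-only special case of the linear model.

```python
import numpy as np

def check_loss(u, tau):
    """Check function rho_tau(u) = u * (tau - 1{u < 0}) used in quantile regression."""
    return u * (tau - (u < 0).astype(float))

# Minimizing the summed check loss over a constant recovers the sample quantile:
rng = np.random.default_rng(0)
y = rng.normal(size=1001)
tau = 0.9
grid = np.sort(y)  # the minimizer is attained at an order statistic
objective = [check_loss(y - c, tau).sum() for c in grid]
best = grid[int(np.argmin(objective))]
print(best, np.quantile(y, tau))
```

For tau = 0.5 the check loss reduces to (half) the absolute value, so the minimizer is the median; the paper's concern is the behaviour of such estimators as tau approaches 0 or 1.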
Epi-Convergence in Distribution and Stochastic Equi-Semicontinuity
, 1997
Cited by 12 (2 self)
Abstract:
Epi-convergence in distribution is a useful tool in establishing limiting distributions of "argmin" estimators; however, it is not always easy to find the epi-limit of a given sequence of objective functions. In this paper, we define the notion of stochastic equi-lower-semicontinuity of a sequence of random objective functions. It is shown that epi-convergence in distribution and finite-dimensional convergence in distribution (to a given limit) of a sequence of random objective functions are equivalent under this condition.
Key words and phrases: argmin estimators, convergence in distribution, epi-convergence, equi-semicontinuity.
AMS 1991 subject classifications: Primary 62F12, 60F05; Secondary 62E20, 60F17.
Running head: Stochastic equi-semicontinuity.
1 Introduction. Many statistical estimators are defined as the minimizer (or maximizer) of some objective function; common examples include maximum likelihood estimation and M-estimation. Since any maximization problem can be re-exp...
Asymptotic theory for M-estimators over a convex kernel
, 1997
Cited by 2 (1 self)
Abstract:
We study the convergence in distribution of M-estimators over a convex kernel. Under convexity, the limit distribution of M-estimators can be obtained under minimal assumptions. We consider the case when the limit is arbitrary, not necessarily normal. If some Taylor expansions hold, the limit distribution is stable. As an application, we examine the limit distribution of M-estimators for the multivariate linear regression model. We obtain the distributional convergence of M-estimators for the multivariate linear regression model for a wide range of sequences of regressors and different types of conditions on the sequence of errors.
1. Introduction. There exists an extensive literature on estimators which are defined as the minimizer of a certain stochastic process. For example, a maximum likelihood estimator $\hat\theta_n$ is a value satisfying
$$\sum_{j=1}^{n} g(X_j; \hat\theta_n) = \inf_{\theta \in \Theta} \sum_{j=1}^{n} g(X_j; \theta),$$
where $e^{-g(x;\theta)}$, $\theta \in \Theta$, is a family of densities. Huber (1964) cons...
Asymptotics for L_1 regression estimators under general conditions
 Scandinavian Journal of Statistics
, 1997
Cited by 1 (0 self)
Abstract:
It is well-known that $L_1$ estimators of regression parameters are asymptotically normal if the distribution function has a positive derivative at 0. In this paper, we derive the asymptotic distributions under more general conditions on the behaviour of the distribution function near 0. Second-order or weak Bahadur-Kiefer representations are also derived.
1 Introduction. Consider the linear regression model
$$Y_i = \beta_0 + \beta_1 x_{1i} + \cdots + \beta_p x_{pi} + \varepsilon_i \qquad (1)$$
where $\beta_0, \beta_1, \ldots, \beta_p$ are unknown parameters and $\{\varepsilon_i\}$ are unobservable independent, identically distributed (i.i.d.) random variables each with median 0. For simplicity, we will assume that the $x_{ki}$'s are non-random although the results will typically hold for random $x_{ki}$'s. We will consider the asymptotic behaviour of $L_1$ estimators of $\beta = (\beta_0, \ldots, \beta_p)$; that is, $\hat\beta_0, \hat\beta_1, \ldots, \hat\beta_p$ minimize the objective function $g_n(\phi)$ ...
Sparsity and smoothness via the fused lasso
 Journal of the Royal Statistical Society Series B, 67, Part 1, pp. 91–108
, 2003
Abstract:
Summary. The lasso penalizes a least squares regression by the sum of the absolute values (L1-norm) of the coefficients. The form of this penalty encourages sparse solutions (with many coefficients equal to 0). We propose the ‘fused lasso’, a generalization that is designed for problems with features that can be ordered in some meaningful way. The fused lasso penalizes the L1-norm of both the coefficients and their successive differences. Thus it encourages sparsity of the coefficients and also sparsity of their differences, i.e. local constancy of the coefficient profile. The fused lasso is especially useful when the number of features p is much greater than N, the sample size. The technique is also extended to the ‘hinge’ loss function that underlies the support vector classifier. We illustrate the methods on examples from protein mass spectroscopy and gene expression data.
Variable Selection Incorporating Prior Constraint Information into Lasso
, 705
Abstract:
We propose a variable selection procedure incorporating prior constraint information into the lasso. The proposed procedure combines the sample and prior information, and selects significant variables for responses in a narrower region where the true parameters lie. It increases the efficiency of choosing the true model correctly. The proposed procedure can be executed by many constrained quadratic programming methods, and the initial estimator can be found by least squares or Monte Carlo methods. The proposed procedure also enjoys good theoretical properties. Moreover, it can be used not only for linear models but also for generalized linear models (GLM), Cox models, quantile regression models and many others with the help of Wang and Leng (2007)'s LSA, which recasts these models as approximations of linear models. The idea of combining sample and prior constraint information can also be used for other modified lasso procedures. Some examples illustrate the idea of incorporating prior constraint information in variable selection procedures.
On the Consistency of Approximate Maximizing Estimator Sequences in the Case of Quasi-concave Functions
, 2007
Abstract:
This paper demonstrates consistency for estimators obtained by approximately maximizing a sequence of stochastic quasi-concave functions on $R^P$ that converges in probability pointwise to a non-stochastic function. In the scalar parameter case, all that is necessary for consistency is that the parameter value of interest is the unique maximizer of the limiting function. However, in the vector parameter case, certain further conditions on the limiting function are necessary to establish consistency. The paper also discusses the relation of these results to existing results on the consistency of estimators obtained by approximately maximizing concave functions and to the concepts of hypo-convergence and epi-convergence.