Results 1-10 of 55
Generalized Likelihood Ratio Statistics And Wilks Phenomenon
, 2000
Abstract

Cited by 78 (20 self)
We introduce generalized likelihood statistics to overcome the drawbacks of nonparametric maximum likelihood ratio statistics. A new Wilks phenomenon is unveiled. We demonstrate that a class of generalized likelihood statistics based on appropriate nonparametric estimators is asymptotically distribution-free and follows ...
Effective dimension reduction methods for tumor classification using gene expression data
 Bioinformatics
, 2003
Abstract

Cited by 35 (2 self)
Motivation: One particular application of microarray data is to uncover the molecular variation among cancers. One feature of microarray studies is that the number n of samples collected is relatively small compared to the number p of genes per sample, which is usually in the thousands. In statistical terms, this very large number of predictors relative to the small number of samples or observations makes the classification problem difficult. An efficient way to solve this problem is to use dimension reduction statistical techniques in conjunction with nonparametric discriminant procedures. Results: We view the classification problem as a regression problem with few observations and many predictor variables. We use an adaptive dimension reduction method for generalized semiparametric regression models that allows us to solve the 'curse of dimensionality' problem arising in the context of expression data. The predictive performance of the resulting classification rule is illustrated on two well-known data sets in the microarray literature: the leukemia data, which is known to contain classes that are easily separable, and the colon data set. Availability: Software that implements the procedures on which this paper focuses is freely available at
Efficient Estimation and Inferences for Varying-Coefficient Models
 Journal of the American Statistical Association
, 1999
Abstract

Cited by 32 (15 self)
This paper deals with statistical inference based on the varying-coefficient models proposed by Hastie and Tibshirani (1993). Local polynomial regression techniques are used to estimate the coefficient functions, and the asymptotic normality of the resulting estimators is established. Standard error formulas for the estimated coefficients are derived and empirically tested. A goodness-of-fit test, based on a nonparametric maximum likelihood ratio type of test, is also proposed to detect whether certain coefficient functions in a varying-coefficient model are constant or whether any covariates are statistically significant in the model. The null distribution of the test is estimated by a conditional bootstrap method. Our estimation techniques involve solving hundreds of local likelihood equations. To reduce the computational burden, a one-step Newton-Raphson estimator is proposed and implemented. We show that the resulting one-step procedure can save computational cost in an order ...
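The local polynomial estimation step described in this abstract can be sketched in a few lines. The snippet below is an illustrative toy, not the paper's procedure: it computes a kernel-weighted least-squares estimate of a single coefficient function a(u) in the model y = a(u)*x + e at one point, on synthetic data (all names, bandwidths, and settings here are hypothetical).

```python
import math
import random

def epanechnikov(t):
    """Epanechnikov kernel weight."""
    return 0.75 * (1.0 - t * t) if abs(t) < 1.0 else 0.0

def varying_coef_at(u0, u, x, y, h):
    """Kernel-weighted least-squares estimate of a(u0) in y = a(u)*x + e."""
    s_xy = s_xx = 0.0
    for ui, xi, yi in zip(u, x, y):
        w = epanechnikov((ui - u0) / h)
        s_xy += w * xi * yi
        s_xx += w * xi * xi
    return s_xy / s_xx

random.seed(0)
n = 400
u = [random.random() for _ in range(n)]
x = [random.gauss(0, 1) for _ in range(n)]
# synthetic truth: a(u) = sin(2*pi*u), so a(0.25) = 1
y = [math.sin(2 * math.pi * ui) * xi + 0.1 * random.gauss(0, 1)
     for ui, xi in zip(u, x)]

est = varying_coef_at(0.25, u, x, y, h=0.1)
```

In the paper's likelihood setting the local fit is found iteratively rather than in closed form, which is what motivates the one-step Newton-Raphson shortcut.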
Structure adaptive approach for dimension reduction
 Ann. Stat
, 2001
Abstract

Cited by 20 (3 self)
We propose a new method of effective dimension reduction for a multi-index model, based on iterative improvement of the family of average derivative estimates. The procedure is computationally straightforward and does not require any prior information about the structure of the underlying model. We show that when the effective dimension m of the index space does not exceed 3, this space can be estimated at the rate n^{-1/2} under rather mild assumptions on the model.
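The basic ingredient of this approach, the average derivative estimate, can be sketched as follows. This is a simplified one-pass illustration (no iterative structural adaptation as in the paper), with two predictors and synthetic single-index data; it averages local-linear gradient estimates and reads off the index direction. All names and settings are hypothetical.

```python
import math
import random

def epan(t):
    return 0.75 * (1.0 - t * t) if abs(t) < 1.0 else 0.0

def local_gradient(x0, xs, ys, h):
    """Local linear fit around x0 (2 predictors); returns the estimated gradient."""
    # Normal equations for minimizing sum_i w_i (y_i - b0 - b1*d1 - b2*d2)^2
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for xi, yi in zip(xs, ys):
        d = [1.0, xi[0] - x0[0], xi[1] - x0[1]]
        w = epan(d[1] / h) * epan(d[2] / h)
        for r in range(3):
            for c in range(3):
                A[r][c] += w * d[r] * d[c]
            b[r] += w * d[r] * yi
    # Solve the 3x3 system by Gaussian elimination with partial pivoting
    for k in range(3):
        p = max(range(k, 3), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, 3):
            f = A[r][k] / A[k][k]
            for c in range(k, 3):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    sol = [0.0, 0.0, 0.0]
    for k in (2, 1, 0):
        sol[k] = (b[k] - sum(A[k][c] * sol[c] for c in range(k + 1, 3))) / A[k][k]
    return sol[1], sol[2]

random.seed(1)
beta = (1.0, 2.0)  # hypothetical true index direction
n = 400
xs = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(n)]
ys = [(beta[0] * x1 + beta[1] * x2) ** 3 + 0.05 * random.gauss(0, 1)
      for x1, x2 in xs]

g1 = g2 = 0.0
for x0 in xs[:100]:           # average local gradients over a subsample
    d1, d2 = local_gradient(x0, xs, ys, h=0.5)
    g1 += d1
    g2 += d2
norm = math.hypot(g1, g2)
direction = (g1 / norm, g2 / norm)   # should align with beta / |beta|
```

Since every gradient of g(beta'x) points along beta, averaging the local estimates recovers the index direction; the paper's iterative scheme then reweights the local fits toward this estimated space.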
New estimation and model selection procedures for semiparametric modeling in longitudinal data analysis
 J. Am. Statist. Ass
, 2004
Abstract

Cited by 20 (7 self)
Semiparametric regression models are very useful for longitudinal data analysis. The complexity of semiparametric models and the structure of longitudinal data pose new challenges to the parametric inference and model selection that frequently arise in longitudinal data analysis. In this article, two new approaches are proposed for estimating the regression coefficients in a semiparametric model. The asymptotic normality of the resulting estimators is established. An innovative class of variable selection procedures is proposed to select significant variables in semiparametric models. The proposed procedures are distinguished from others in that they simultaneously select significant variables and estimate unknown parameters. Rates of convergence of the resulting estimators are established. With a proper choice of regularization parameters and penalty functions, the proposed variable selection procedures are shown to perform as well as an oracle estimator. A robust standard error formula is derived using a sandwich formula and is empirically tested. Local polynomial regression techniques are used to estimate the baseline function in the semiparametric model.
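The sandwich standard error mentioned in this abstract has a simple closed form in the most elementary case, a no-intercept linear regression. The sketch below illustrates only that special case, not the authors' semiparametric estimator; the names and the synthetic heteroskedastic data are hypothetical.

```python
import math
import random

def sandwich_ols(x, y):
    """Slope of y = b*x + e with a heteroskedasticity-robust (sandwich) SE.

    'Bread' A = sum x_i^2, 'meat' B = sum x_i^2 r_i^2; Var(b_hat) = B / A^2.
    """
    A = sum(xi * xi for xi in x)
    b_hat = sum(xi * yi for xi, yi in zip(x, y)) / A
    B = sum((xi * (yi - b_hat * xi)) ** 2 for xi, yi in zip(x, y))
    return b_hat, math.sqrt(B) / A

random.seed(2)
x = [random.gauss(0, 1) for _ in range(1000)]
# heteroskedastic noise: standard deviation grows with |x|
y = [2.0 * xi + abs(xi) * random.gauss(0, 1) for xi in x]
b_hat, se = sandwich_ols(x, y)
```

The point of the sandwich form is that B is estimated from the observed residuals, so the variance estimate stays valid even when the error variance is misspecified, which is why the abstract calls the resulting formula robust.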
Variable selection in semiparametric regression modeling. Available at http://www.stat.psu.edu/~rli/research/varyselTR.pdf
, 2005
Abstract

Cited by 13 (3 self)
In this paper, we are concerned with how to select significant variables in semiparametric modeling. Variable selection for semiparametric regression models consists of two components: model selection for the nonparametric components and selection of significant variables for the parametric portion. Thus, semiparametric variable selection is much more challenging than parametric variable selection (e.g., for linear and generalized linear models), because traditional variable selection procedures, including stepwise regression and best subset selection, now require separate model selection for the nonparametric components of each submodel. This leads to a very heavy computational burden. In this paper, we propose a class of variable selection procedures for semiparametric regression models using nonconcave penalized likelihood. We establish the rate of convergence of the resulting estimate. With proper choices of penalty functions and regularization parameters, we show the asymptotic normality of the resulting estimate and further demonstrate that the proposed procedures perform as well as an oracle procedure. A semiparametric generalized likelihood ratio test is proposed to select significant variables in the nonparametric component. We investigate the asymptotic behavior of the proposed test and demonstrate that its limiting null distribution follows a chi-square distribution that is independent of the nuisance parameters. Extensive Monte Carlo simulation studies are conducted to examine the finite sample performance of the proposed variable selection procedures.
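A widely used member of the nonconcave penalty family referenced here is the SCAD penalty of Fan and Li (2001). The function below transcribes its standard closed form (with the conventional choice a = 3.7), purely as an illustration of the penalty's shape.

```python
def scad_penalty(t, lam, a=3.7):
    """SCAD penalty evaluated at |t|.

    Linear (like the lasso) near zero, quadratic in a middle band, and
    constant for large |t|, so large coefficients are not shrunk.
    """
    t = abs(t)
    if t <= lam:
        return lam * t
    if t <= a * lam:
        return (2 * a * lam * t - t * t - lam * lam) / (2 * (a - 1))
    return (a + 1) * lam * lam / 2.0
```

The flat tail is what yields the oracle property the abstract mentions: unlike an L1 penalty, SCAD applies no shrinkage to coefficients that are clearly nonzero.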
Spline adaptation in extended linear models
 Statistical Science
, 2002
Abstract

Cited by 12 (2 self)
Abstract. In many statistical applications, nonparametric modeling can provide insight into features of a dataset that are not obtainable by other means. One successful approach involves the use of (univariate or multivariate) spline spaces. As a class, these methods have inherited much from classical tools for parametric modeling. For example, stepwise variable selection with spline basis terms is a simple scheme for locating knots (breakpoints) in regions where the data exhibit strong local features. Similarly, candidate knot configurations (generated by this or some other search technique) are routinely evaluated with traditional selection criteria like AIC or BIC. In short, strategies typically applied in parametric model selection have proved useful in constructing flexible, low-dimensional models for nonparametric problems. Until recently, greedy, stepwise procedures were most frequently suggested in the literature. Research into Bayesian variable selection, however, has given rise to a number of new spline-based methods that primarily rely on some form of Markov chain Monte Carlo to identify promising knot locations. In this paper, we consider various alternatives to greedy, deterministic schemes, and present a Bayesian framework for studying adaptation in the context of an extended linear model (ELM). Our major test cases are Logspline density estimation and (bivariate) Triogram regression models. We selected these because they illustrate a number of computational and methodological issues concerning model adaptation that arise in ELMs.
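The knot-evaluation strategy this abstract describes (fit candidate knot configurations, score them with a criterion such as BIC) can be sketched in miniature with a linear truncated-power basis. This is a toy version on synthetic data, comparing a no-knot fit against single-knot candidates; all names and settings are hypothetical.

```python
import math
import random

def lstsq(X, y):
    """Solve the normal equations X'X b = X'y by Gaussian elimination."""
    p, n = len(X[0]), len(y)
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(p)]
         for r in range(p)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(p)]
    for k in range(p):
        piv = max(range(k, p), key=lambda r: abs(A[r][k]))
        A[k], A[piv] = A[piv], A[k]
        b[k], b[piv] = b[piv], b[k]
        for r in range(k + 1, p):
            f = A[r][k] / A[k][k]
            for c in range(k, p):
                A[r][c] -= f * A[k][c]
            b[r] -= f * b[k]
    beta = [0.0] * p
    for k in range(p - 1, -1, -1):
        beta[k] = (b[k] - sum(A[k][c] * beta[c]
                              for c in range(k + 1, p))) / A[k][k]
    return beta

def bic(X, y):
    """BIC = n*log(RSS/n) + p*log(n) for a least-squares fit."""
    beta = lstsq(X, y)
    n = len(y)
    rss = sum((yi - sum(b * xc for b, xc in zip(beta, row))) ** 2
              for row, yi in zip(X, y))
    return n * math.log(rss / n) + len(X[0]) * math.log(n)

random.seed(3)
xs = [i / 200 for i in range(200)]
# synthetic truth: piecewise-linear with a kink at 0.5
ys = [xi + 3.0 * max(xi - 0.5, 0.0) + 0.05 * random.gauss(0, 1) for xi in xs]

def design(knots):
    """Truncated-power linear spline basis: [1, x, (x - k)_+ ...]."""
    return [[1.0, xi] + [max(xi - k, 0.0) for k in knots] for xi in xs]

scores = {(): bic(design([]), ys)}
for k in (0.25, 0.5, 0.75):
    scores[(k,)] = bic(design([k]), ys)
best = min(scores, key=scores.get)   # the knot at the true kink should win
```

A greedy stepwise search just repeats this scoring loop, adding or deleting one knot at a time; the Bayesian alternatives in the paper instead explore knot configurations by MCMC.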
Nonparametric Estimation Via Local Estimating Equations, With Applications To Nutrition Calibration
 Journal of the American Statistical Association
, 1997
Abstract

Cited by 11 (3 self)
Estimating equations have recently found wide popularity in parametric problems, yielding consistent estimators with asymptotically valid inferences obtained via the sandwich formula. Motivated by a problem in nutritional epidemiology, we use estimating equations to derive nonparametric estimators of a "parameter" depending on a predictor. The nonparametric component is estimated via local polynomials with loess or kernel weighting; asymptotic theory is derived for the latter. In keeping with the estimating equation paradigm, variances of the nonparametric function estimate are estimated using the sandwich method, in an automatic fashion, without the need, typical in the literature, to derive asymptotic formulas and plug in an estimate of a density function. The same philosophy is used in estimating the bias of the nonparametric function; i.e., we use an empirical method without deriving asymptotic theory on a case-by-case basis. The methods are applied to a series of examples. The appli...
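A minimal sketch of the local estimating-equation idea, under assumptions not taken from the paper: a kernel-weighted Huber-type score solved by bisection at a single point, on synthetic data containing a gross outlier. All names and settings are hypothetical.

```python
import math
import random

def huber_psi(r, c=1.345):
    """Bounded (Huber) score: one example psi for an estimating equation."""
    return max(-c, min(c, r))

def local_m_estimate(x0, xs, ys, h, c=1.345, iters=60):
    """Solve sum_i K_h(x_i - x0) * psi(y_i - theta) = 0 for theta by bisection."""
    ws = [max(0.0, 0.75 * (1.0 - ((xi - x0) / h) ** 2)) for xi in xs]

    def score(theta):
        return sum(w * huber_psi(yi - theta, c) for w, yi in zip(ws, ys))

    lo, hi = min(ys), max(ys)    # score is decreasing in theta, so bisect
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if score(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

random.seed(5)
xs = [i / 100 for i in range(100)]
ys = [math.sin(3 * xi) + 0.05 * random.gauss(0, 1) for xi in xs]
ys[50] += 10.0                   # gross outlier right at the target point
est = local_m_estimate(0.5, xs, ys, h=0.2)   # true value is sin(1.5)
```

With a squared-error score this reduces to ordinary kernel regression; swapping in a different psi changes the target "parameter" without changing the machinery, which is the paradigm the abstract describes.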
Statistical Estimation in Varying-Coefficient Models
Abstract

Cited by 11 (3 self)
Varying-coefficient models are a useful extension of classical linear models. They arise naturally when one wishes to examine how regression coefficients change over different groups characterized by certain covariates, such as age. The appeal of these models is that the coefficient functions can easily be estimated via simple local regression. This yields a simple one-step estimation procedure. We show that such a one-step method cannot be optimal when different coefficient functions admit different degrees of smoothness. This drawback can be repaired by using our proposed two-step estimation procedure. The asymptotic mean-squared error for the two-step procedure is obtained and is shown to achieve the optimal rate of convergence. A few simulation studies show that the gain from the two-step procedure can be quite substantial. The methodology is illustrated by an application to an environmental dataset.
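The two-step idea can be illustrated schematically, though not with the authors' exact procedure: compute deliberately undersmoothed first-stage estimates of a coefficient function on a grid, then re-smooth them with a second, larger bandwidth chosen for that coefficient. Synthetic data; all names and bandwidths are hypothetical.

```python
import math
import random

def epan(t):
    return 0.75 * (1.0 - t * t) if abs(t) < 1.0 else 0.0

def raw_estimate(u0, u, x, y, h):
    """First-stage kernel estimate of a(u0) in y = a(u)*x + e."""
    num = den = 0.0
    for ui, xi, yi in zip(u, x, y):
        w = epan((ui - u0) / h)
        num += w * xi * yi
        den += w * xi * xi
    return num / den

random.seed(4)
n = 500
u = [random.random() for _ in range(n)]
x = [random.gauss(0, 1) for _ in range(n)]
a = lambda t: 1.0 + t            # hypothetical smooth coefficient function
y = [a(ui) * xi + 0.2 * random.gauss(0, 1) for ui, xi in zip(u, x)]

grid = [i / 50 for i in range(10, 41)]
# Step 1: deliberately undersmoothed raw estimates on the grid
raw = [raw_estimate(g, u, x, y, h=0.05) for g in grid]

# Step 2: re-smooth the raw curve with a larger, coefficient-specific bandwidth
def second_step(u0, h2):
    num = den = 0.0
    for g, r in zip(grid, raw):
        w = epan((g - u0) / h2)
        num += w * r
        den += w
    return num / den

est = second_step(0.5, h2=0.2)   # true a(0.5) = 1.5
```

The second bandwidth can be tuned separately for each coefficient function, which is how the two-step procedure sidesteps the single shared bandwidth that limits the one-step method.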