Results 1–10 of 21
Improving Regression Estimation: Averaging Methods for Variance Reduction with Extensions to General Convex Measure Optimization
, 1993
Predictive Model Selection
 Journal of the Royal Statistical Society, Ser. B
, 1995
Abstract

Cited by 61 (4 self)
In this article we propose three criteria that can be used to address model selection. These emphasize observables rather than parameters and are based on a certain Bayesian predictive density. They have a unifying basis that is simple and interpretable, are free of asymptotic definitions, and allow the incorporation of prior information. Moreover, two of these criteria are readily calibrated.
Nonparametric Estimation Via Local Estimating Equations, With Applications To Nutrition Calibration
 Journal of the American Statistical Association
, 1997
Abstract

Cited by 11 (3 self)
Estimating equations have found wide popularity recently in parametric problems, yielding consistent estimators with asymptotically valid inferences obtained via the sandwich formula. Motivated by a problem in nutritional epidemiology, we use estimating equations to derive nonparametric estimators of a "parameter" depending on a predictor. The nonparametric component is estimated via local polynomials with loess or kernel weighting; asymptotic theory is derived for the latter. In keeping with the estimating equation paradigm, variances of the nonparametric function estimate are estimated using the sandwich method, in an automatic fashion, without the need, typical in the literature, to derive asymptotic formulae and plug in an estimate of a density function. The same philosophy is used in estimating the bias of the nonparametric function, i.e., we use an empirical method without deriving asymptotic theory on a case-by-case basis. The methods are applied to a series of examples. The appli...
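The sandwich variance idea the abstract leans on is easiest to see in its simplest parametric instance, the heteroskedasticity-robust (HC0) covariance for least squares. The following numpy sketch illustrates that instance only, not the authors' local-polynomial estimator:

```python
import numpy as np

def sandwich_cov(X, y):
    """OLS fit with the sandwich (HC0) covariance estimate:
    (X'X)^{-1} X' diag(r_i^2) X (X'X)^{-1}, where the 'bread' is the
    inverse Hessian and the 'meat' is built from squared residuals."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    bread = np.linalg.inv(X.T @ X)
    meat = X.T @ (X * (r ** 2)[:, None])
    return beta, bread @ meat @ bread
```

No asymptotic density formula is needed: the residuals themselves supply the "meat", which is the automatic flavor of variance estimation the abstract describes.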
The Catline for Deep Regression
 J. MULTIVARIATE ANALYSIS
, 1998
Abstract

Cited by 11 (3 self)
Motivated by the notion of regression depth (Rousseeuw and Hubert 1996) we introduce the catline, a new method for simple linear regression. At any bivariate data set Z_n = {(x_i, y_i); i = 1, ..., n} its regression depth is at least n/3. This lower bound is attained for data lying on a convex or concave curve, whereas for perfectly linear data the catline attains a depth of n. We construct an O(n log n) algorithm for the catline, so it can be computed fast in practice. The catline is Fisher-consistent at any linear model y = βx + α + e in which the error distribution satisfies med(e|x) = 0, which encompasses skewed and/or heteroscedastic errors. The breakdown value of the catline is 1/3, and its influence function is bounded. At the bivariate Gaussian distribution its asymptotic relative efficiency compared to the L1 line is 79.3% for the slope, and 88.9% for the intercept. The finite-sample relative efficiencies are in close agreement with these values. This combination of...
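The regression depth of a candidate line can be computed directly in the simple-regression case. This sketch follows the candidate-vertical-split characterization of depth (our reading of the Rousseeuw–Hubert definition); it is not the paper's O(n log n) catline algorithm:

```python
import numpy as np

def regression_depth(slope, intercept, x, y):
    """Regression depth of the line y = slope*x + intercept for bivariate
    data: the minimum, over vertical splits of the x-axis, of the number
    of points whose residual sign 'supports' the line on the wrong side.
    Zero residuals count as both nonnegative and nonpositive."""
    r = (y - (slope * x + intercept))[np.argsort(x)]
    n = len(r)
    best = n
    for k in range(n + 1):          # split just before the k-th ordered point
        left, right = r[:k], r[k:]
        a = np.sum(left >= 0) + np.sum(right <= 0)
        b = np.sum(left <= 0) + np.sum(right >= 0)
        best = min(best, a, b)
    return int(best)
```

Consistent with the abstract, a line through perfectly collinear data gets depth n, while a line lying entirely above the data gets depth 0 (a nonfit).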
Estimating residual variance in nonparametric regression using least squares, Biometrika 92: 821–830
, 2005
Abstract

Cited by 8 (4 self)
We propose a new estimator for the error variance in a nonparametric regression model. We estimate the error variance as the intercept in a simple linear regression model with squared differences of paired observations as the dependent variable and squared distances between the paired covariates as the regressor. Our method can be applied to nonparametric regression models with multivariate functions defined on arbitrary subsets of normed spaces, possibly observed on unequally spaced or clustered design points. No ordering is required for our method. We develop methods for selecting the bandwidth. For the special case of a one-dimensional domain with equally spaced design points, we show that our method reaches an asymptotically optimal rate which is not achieved by some existing methods. We conduct extensive simulations to evaluate the finite-sample performance of our method and compare it with existing methods. We illustrate our method using a real data set.
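The core of the estimator is a plain least-squares fit on paired differences. A minimal one-dimensional sketch, where the pairing rule (all pairs within a fixed bandwidth h) is an illustrative choice, not the paper's bandwidth selector:

```python
import numpy as np

def difference_based_variance(x, y, h):
    """Estimate the residual variance as the intercept of a regression of
    half-squared response differences on squared covariate distances,
    using only pairs of observations closer than the bandwidth h.
    For close pairs, E[(y_i - y_j)^2]/2 = sigma^2 + smooth-trend term,
    and the trend term vanishes as the distance goes to zero."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    d2, s2 = [], []
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            dist2 = (x[i] - x[j]) ** 2
            if dist2 <= h * h:
                d2.append(dist2)
                s2.append(0.5 * (y[i] - y[j]) ** 2)
    A = np.column_stack([np.ones(len(d2)), d2])
    coef, *_ = np.linalg.lstsq(A, np.array(s2), rcond=None)
    return coef[0]  # intercept estimates sigma^2
```

Note that no ordering of the design points is used, matching the abstract's claim; only pairwise distances enter.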
Nonparametric transformations for both sides of a regression model
 J. Royal Statist. Soc. B
, 1995
Abstract

Cited by 2 (2 self)
One way to model heteroscedasticity and skewness of the error distribution in regression is to transform both sides (TBS) of the regression equation. If it is possible to transform the regression equation to result in normally distributed errors, then one can obtain more efficient parameter estimates and valid prediction intervals. One problem with this approach is that the choice of transformation is usually restricted to the shifted/power family. Often there is no scientific basis for this model and the limited flexibility of this parametric family may miss important features of the distribution. A more comprehensive approach is to estimate the transformation using nonparametric methods based on maximizing a penalized likelihood function. By expressing the likelihood in terms of the log derivative transformation we are able to derive an approximate maximum penalized likelihood estimate that has the form of a spline function. A fast algorithm for computing this estimate is introduced and applied to two data sets where a TBS model has been found to work well. The results based on estimating a spline transformation give some insight into the sensitivity of prediction intervals to the choice of the transformation. Some results are also given concerning the existence and uniqueness of the transformation spline estimate.
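For contrast with the spline transformation the abstract proposes, the parametric power-family TBS baseline it generalizes can be fit by a profile-likelihood grid search over the Box-Cox parameter. This sketch assumes a known mean function f(x) and is illustrative only:

```python
import numpy as np

def boxcox(z, lam):
    """Box-Cox power transform; the lam -> 0 limit is the log."""
    z = np.asarray(z, float)
    if abs(lam) < 1e-8:
        return np.log(z)
    return (z ** lam - 1.0) / lam

def tbs_profile_loglik(y, fx, lam):
    """Profile log-likelihood of the TBS model h_lam(y) = h_lam(f(x)) + e,
    e ~ N(0, s^2), with s^2 profiled out; the (lam - 1) * sum(log y)
    term is the Jacobian of the transformation of y."""
    r = boxcox(y, lam) - boxcox(fx, lam)
    s2 = np.mean(r ** 2)
    return -0.5 * len(y) * np.log(s2) + (lam - 1.0) * np.sum(np.log(y))

def tbs_fit_lambda(y, fx, grid=None):
    """Grid-search maximizer of the profile log-likelihood."""
    if grid is None:
        grid = np.linspace(-1.0, 2.0, 61)
    ll = [tbs_profile_loglik(y, fx, lam) for lam in grid]
    return float(grid[int(np.argmax(ll))])
```

With multiplicative lognormal errors, the fitted power is near zero, i.e., the log transform; the paper's point is that a spline transformation is not confined to this one-parameter family.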
Standard error computations for uncertainty quantification in inverse problems: asymptotic theory vs. bootstrapping
, 2010
Abstract

Cited by 2 (0 self)
We computationally investigate two approaches for uncertainty quantification in inverse problems for nonlinear parameter-dependent dynamical systems. We compare the bootstrapping and asymptotic theory approaches for problems involving data with several noise forms and levels. We consider both constant-variance absolute-error data and relative-error data, which produces nonconstant variance, in our parameter estimation formulations. We compare and contrast parameter estimates, standard errors, confidence intervals, and computational times for both bootstrapping and asymptotic theory methods. Key words: uncertainty quantification, parameter estimation, nonlinear dynamic models, bootstrapping, asymptotic theory standard errors, ordinary least squares vs. generalized least squares, computational examples.
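The comparison described can be reproduced in miniature for ordinary least squares (standing in for the paper's nonlinear dynamical-systems estimation):

```python
import numpy as np

def ols_fit(x, y):
    """Straight-line OLS estimate with asymptotic-theory standard errors
    from the covariance sigma^2 * (X'X)^{-1}."""
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(x) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))

def bootstrap_se(x, y, B=500, seed=0):
    """Standard errors by resampling (x, y) pairs with replacement and
    refitting B times; the spread of the refits estimates the SEs."""
    rng = np.random.default_rng(seed)
    n = len(x)
    ests = np.empty((B, 2))
    for b in range(B):
        idx = rng.integers(0, n, n)
        ests[b] = ols_fit(x[idx], y[idx])[0]
    return ests.std(axis=0, ddof=1)
```

For well-specified constant-variance data the two sets of standard errors agree closely; the bootstrap costs B refits, which is the computational-time trade-off the abstract examines.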
Bayesian Predictive Simultaneous Variable and Transformation Selection in the Linear Model
Abstract

Cited by 1 (0 self)
In this paper, we propose two variable and transformation selection procedures on the predictor variables in the linear model. The first procedure is a simultaneous variable and transformation selection procedure. For data sets with many predictors, a stepwise variable selection procedure is also presented. The procedures are based on Bayesian model selection criteria introduced by Ibrahim and Laud (1994) and Laud and Ibrahim (1995). Several examples are given to illustrate the methodology.
Meta-Modeling Of A Cluster Tool Simulator
, 2000
Abstract
We develop regression spline metamodels to predict the output of a cluster tool simulator. By proper selection of the predictor variables, it is often possible to find simple spline metamodels that predict the cluster tool output with very little error. The statistical metamodels have two uses. First, they can be used as a rapidly computed replacement of the cluster tool simulator in factory-level simulations. Second, their component functions can be plotted for visualization of the factors influencing cluster tool throughput. By examining metamodel components, we were able to quantify several interesting characteristics of cluster tools. In particular, we found an extreme, perhaps chaotic, sensitivity of average tool cycle times to small perturbations in individual chamber processing times. The methodology presented in this paper can be used to quantify the domains of applicability for different cluster tool modeling techniques. The indication from this study is that the accur...
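A regression-spline metamodel of this kind can be sketched with a truncated-power cubic basis; the knot placement and the scalar input standing in for the cluster-tool predictors are illustrative choices:

```python
import numpy as np

def spline_basis(x, knots):
    """Truncated-power basis for a cubic regression spline:
    1, x, x^2, x^3, and (x - k)_+^3 for each interior knot k."""
    x = np.asarray(x, float)
    cols = [np.ones_like(x), x, x ** 2, x ** 3]
    cols += [np.clip(x - k, 0.0, None) ** 3 for k in knots]
    return np.column_stack(cols)

def fit_metamodel(x, y, knots):
    """Least-squares regression spline; returns a cheap predictor that
    can replace an expensive simulator call."""
    B = spline_basis(x, knots)
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return lambda xnew: spline_basis(xnew, knots) @ coef
```

Once fitted, the predictor is a handful of polynomial evaluations, which is what makes it usable inside factory-level simulations; plotting it against each input gives the component-function visualization mentioned above.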
A NOTE ON COMPUTING ROBUST REGRESSION ESTIMATES VIA ITERATIVELY REWEIGHTED LEAST SQUARES
Abstract
robust regression estimates using iteratively reweighted least squares and the nonlinear regression procedure NLIN. We show that, while the estimates are asymptotically correct, the resulting standard errors are not. We also discuss computation of the estimates.
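A minimal IRLS sketch, assuming Huber weights and a MAD scale estimate (the note's specific NLIN setup is not reproduced). Consistent with the note's point, it returns only the estimates: the standard errors a weighted-least-squares routine would report for the final reweighted fit are not asymptotically correct.

```python
import numpy as np

def huber_irls(X, y, c=1.345, tol=1e-8, max_iter=100):
    """Robust regression estimate via iteratively reweighted least
    squares with Huber weights w(u) = min(1, c/|u|) applied to
    residuals standardized by a MAD scale estimate."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS start
    for _ in range(max_iter):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745  # MAD scale
        if s == 0:
            break
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / np.maximum(u, 1e-12))
        sw = np.sqrt(w)
        beta_new = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

A gross outlier is progressively downweighted across iterations, so the fit recovers the trend of the clean points even when the OLS start is badly pulled.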