Results 11–20 of 480
Two-Step Estimation of Functional Linear Models with Applications to Longitudinal Data
 Journal of the Royal Statistical Society, Series B
, 2000
Abstract

Cited by 41 (5 self)
Functional linear models are useful in longitudinal data analysis. They include many classical and recently proposed statistical models for longitudinal data and other functional data. Recently, smoothing spline and kernel methods have been proposed for estimating their coefficient functions nonparametrically, but these methods are either intensive in computation or inefficient in performance. To overcome these drawbacks, in this paper, a simple and powerful two-step alternative is proposed. In particular, the implementation of the proposed approach via local polynomial smoothing is discussed. Methods for estimating standard deviations of estimated coefficient functions are also proposed. Some asymptotic results for the local polynomial estimators are established. Two longitudinal data sets, one of which involves time-dependent covariates, are used to demonstrate the proposed approach. Simulation studies show that our two-step approach improves the kernel method proposed in Hoover et al. ...
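As a rough illustration of the two-step idea (simulated data; the model, bandwidth, and kernel below are our own illustrative assumptions, not the paper's settings), step one fits a raw least-squares coefficient at each time point, and step two smooths those raw estimates over time with a local polynomial:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated balanced longitudinal data: n subjects share a common time
# grid, and the covariate effect varies over time as beta(t) = 4t(1 - t).
n, T = 100, 25
t_grid = np.linspace(0.0, 1.0, T)
beta_true = 4.0 * t_grid * (1.0 - t_grid)
x = rng.normal(size=n)                          # scalar covariate per subject
y = np.outer(x, beta_true) + rng.normal(size=(n, T))

# Step 1: raw estimates -- a separate least-squares slope at each time
# point, ignoring within-subject correlation entirely.
beta_raw = (x @ y) / (x @ x)                    # shape (T,)

# Step 2: smooth the raw estimates over time with a local linear
# (degree-1 local polynomial) kernel smoother.
def local_linear(t0, t, z, h):
    u = (t - t0) / h
    w = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)  # Epanechnikov
    X = np.column_stack([np.ones_like(t), t - t0])
    coef = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return coef[0]                              # fitted value at t0

h = 0.15                                        # illustrative bandwidth
beta_smooth = np.array([local_linear(t0, t_grid, beta_raw, h) for t0 in t_grid])

rmse_raw = float(np.sqrt(np.mean((beta_raw - beta_true) ** 2)))
rmse_smooth = float(np.sqrt(np.mean((beta_smooth - beta_true) ** 2)))
```

Smoothing the noisy pointwise slopes across time is what makes the second step cheap: only the one-dimensional raw curve is smoothed, not the full data set.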
Nonparametric Function Estimation for Clustered Data When the Predictor is Measured Without/With Error
 Journal of the American Statistical Association
, 1999
Abstract

Cited by 31 (6 self)
We consider local polynomial kernel regression with a single covariate for clustered data using estimating equations. We assume that at most m < ∞ observations are available on each cluster. In the case of random regressors, with no measurement error in the predictor, we show that it is generally the best strategy to ignore entirely the correlation structure within each cluster, and instead to pretend that all observations are independent. In the further special case of longitudinal data on individuals with fixed common observation times, we show that equivalent to the pooled data approach is the strategy of fitting separate nonparametric regressions at each observation time and constructing an optimal weighted average. We also consider what happens when the predictor is measured with error. Using the SIMEX approach to correct for measurement error, we construct an asymptotic theory for both the pooled and weighted average estimators. Surprisingly, for the same amount of smoothing, t...
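A minimal sketch of the "working independence" strategy the abstract describes: pool the clustered observations and run an ordinary local linear smoother as if they were iid (simulated data; all settings here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

# Clustered data: n clusters of m observations each; a shared random
# effect makes observations within a cluster correlated.
n, m = 150, 4
x = rng.uniform(0.0, 1.0, size=(n, m))
g = lambda t: np.sin(2 * np.pi * t)             # true mean function
b = rng.normal(scale=0.5, size=(n, 1))          # cluster-level random effect
y = g(x) + b + 0.3 * rng.normal(size=(n, m))

# Working independence: pool every (x, y) pair and run an ordinary local
# linear kernel regression, pretending all n * m points are independent.
xp, yp = x.ravel(), y.ravel()

def local_linear(t0, t, z, h):
    u = (t - t0) / h
    w = np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)  # Epanechnikov
    X = np.column_stack([np.ones_like(t), t - t0])
    coef = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))
    return coef[0]

grid = np.linspace(0.05, 0.95, 19)
g_hat = np.array([local_linear(t0, xp, yp, h=0.1) for t0 in grid])
max_err = float(np.max(np.abs(g_hat - g(grid))))
```

The estimator never looks at the cluster structure, which is exactly the point of the abstract's first result.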
Estimating Functions for Discretely Sampled Diffusion-Type Models. Chapter of the Handbook of Financial Econometrics, Aït-Sahalia and Hansen, eds. http://home.uchicago.edu/~lhansen/handbook.htm
 in Festschrift for Lucien Le Cam: Research Papers in Probability and Statistics
, 2004
Abstract

Cited by 26 (9 self)
Estimating functions provide a general framework for finding estimators and studying their properties in many different kinds of statistical models, including stochastic process models. An estimating function is a function of the data as well as of the parameter to be estimated. An estimator is obtained by equating the estimating function to zero and solving the resulting
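The general recipe can be sketched in a few lines: choose a function of data and parameter with mean zero at the true parameter, then solve G(theta) = 0 numerically (the exponential-score example below is our own illustration, not from the chapter):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(2)
data = rng.exponential(scale=2.0, size=500)     # true parameter theta = 2

# An estimating function: E[G(theta_0; data)] = 0 at the true parameter.
# Here G is the exponential-model score,
# G(theta) = sum_i (x_i / theta^2 - 1 / theta).
def G(theta):
    return float(np.sum(data / theta ** 2 - 1.0 / theta))

# The estimator is obtained by equating G to zero and solving; for this
# particular G the root is simply the sample mean.
theta_hat = brentq(G, 0.1, 50.0)
```

The same scaffolding works for martingale estimating functions in diffusion models: only the definition of G changes.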
Mandatory Disclosure and Operational Risk: Evidence from Hedge Fund Registration
Abstract

Cited by 25 (2 self)
Mandatory disclosure is a regulatory tool intended to allow market participants to assess manager risks. We use the recent controversial SEC filing to study the value of disclosure. By examining Form ADVs filed by major hedge funds in February 2006, we are able to quantify operational risk. Leverage and ownership structures suggest that lenders and equity investors were already aware of operational risk. However, operational risk does not mediate the flow-performance relationship, suggesting that investors either lack this information, or they do not regard it as material. These findings suggest that any consideration of disclosure should take into account the endogenous production of information within the industry, and the marginal benefit of disclosure on different investment clienteles. JEL Classification: G2, K2
Functional adaptive model estimation
 J. Amer
, 2005
Abstract

Cited by 24 (7 self)
In this article we are interested in modeling the relationship between a scalar, Y, and a functional predictor, X(t). We introduce a highly flexible approach called "Functional Adaptive Model Estimation" (FAME) which extends generalized linear models (GLM), generalized additive models (GAM) and projection pursuit regression (PPR) to handle functional predictors. The FAME approach can model any of the standard exponential family of response distributions that are assumed for GLM or GAM while maintaining the flexibility of PPR. For example, standard linear or logistic regression with functional predictors, as well as far more complicated models, can easily be applied using this approach. A functional principal components decomposition of the predictor functions is used to aid visualization of the relationship between X(t) and Y. We also show how the FAME procedure can be extended to deal with multiple functional and standard finite dimensional predictors, possibly with missing data. The FAME approach is illustrated on simulated data as well as on the prediction of arthritis based on bone shape. We end with a discussion of the relationships between standard regression approaches, their extensions to functional data and FAME.
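A hedged sketch of the ingredients FAME builds on, in the simplest Gaussian/identity-link case: functional principal component scores from an SVD, followed by a regression of the scalar response on those scores (simulated data; this is not the authors' full algorithm, which adds PPR-style ridge functions):

```python
import numpy as np

rng = np.random.default_rng(3)

# n predictor curves on a common grid, built from two smooth components.
n, T = 300, 50
t = np.linspace(0.0, 1.0, T)
phi = np.stack([np.sqrt(2) * np.sin(np.pi * t),
                np.sqrt(2) * np.sin(2 * np.pi * t)])   # 2 basis curves
scores_true = rng.normal(size=(n, 2)) * np.array([2.0, 1.0])
X = scores_true @ phi + 0.1 * rng.normal(size=(n, T))

# Scalar response driven by the curves through a linear functional.
y = 1.5 * scores_true[:, 0] - 0.8 * scores_true[:, 1] \
    + 0.2 * rng.normal(size=n)

# Functional principal components via the SVD of the centered data
# matrix; keep the leading K component scores.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
K = 2
pc_scores = U[:, :K] * s[:K]                    # n x K score matrix

# Gaussian/identity-link special case: ordinary least squares of the
# response on the FPC scores (a GLM would swap in another link here).
Z = np.column_stack([np.ones(n), pc_scores])
coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
y_hat = Z @ coef
r2 = float(1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2))
```

Plotting the response against the first two FPC scores is the visualization device the abstract mentions.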
Semiparametric Bayesian Analysis Of Survival Data
 Journal of the American Statistical Association
, 1996
Abstract

Cited by 24 (0 self)
this paper are motivated and aimed at analyzing some common types of survival data from different medical studies. We will center our attention on the following topics.
A note on the efficiency of sandwich covariance matrix estimation
 Journal of the American Statistical Association
, 2001
Abstract

Cited by 24 (1 self)
The sandwich estimator, also known as the robust covariance matrix estimator, heteroskedasticity-consistent covariance matrix estimator, or empirical covariance matrix estimator, has achieved increasing use in the econometric literature as well as with the growing popularity of generalized estimating equations. Its virtue is that it provides consistent estimates of the covariance matrix for parameter estimates even when the fitted parametric model fails to hold, or is not even specified. Surprisingly though, there has been little discussion of the properties of the sandwich method other than consistency. We investigate the sandwich estimator in quasi-likelihood models asymptotically, and in the linear case analytically. Under certain circumstances we show that when the quasi-likelihood model is correct, the sandwich estimate is often far more variable than the usual parametric variance estimate. The increased variance is a fixed feature of the method, and the price one pays to obtain consistency even when the parametric model fails or when there is heteroskedasticity. We show that the additional variability directly affects the coverage probability of confidence intervals constructed from sandwich variance estimates. In fact, the use of sandwich variance estimates combined ...
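A small numerical illustration of the sandwich (Huber-White) estimator for ordinary least squares under heteroskedasticity; the data-generating process below is our own assumption, chosen so the two variance estimates visibly disagree:

```python
import numpy as np

rng = np.random.default_rng(4)

# Linear model with heteroskedastic errors: Var(eps) grows with |x|, so
# the classical OLS variance formula is wrong while the sandwich is not.
n = 1000
x = rng.uniform(-1.0, 1.0, size=n)
X = np.column_stack([np.ones(n), x])
eps = rng.normal(size=n) * (0.5 + np.abs(x))
y = 1.0 + 2.0 * x + eps

beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta

# Sandwich = A^{-1} B A^{-1}, with bread A = X'X and meat
# B = sum_i e_i^2 x_i x_i'  (the Huber-White / HC0 form).
A_inv = np.linalg.inv(X.T @ X)
B = (X * resid[:, None] ** 2).T @ X
V_sandwich = A_inv @ B @ A_inv

# Model-based estimate: assumes a constant error variance.
sigma2 = resid @ resid / (n - 2)
V_classical = sigma2 * A_inv

se_sandwich = np.sqrt(np.diag(V_sandwich))
se_classical = np.sqrt(np.diag(V_classical))
```

The abstract's warning is about the reverse situation: when the parametric model is correct, the sandwich standard errors are themselves noisier than the model-based ones.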
Empirical Bayes approaches to mixture problems and wavelet regression
, 1998
Abstract

Cited by 23 (2 self)
We consider model selection in a hierarchical Bayes formulation of the sparse normal linear model in which individual variables have, independently, an unknown prior probability of being included in the model. The focus is on orthogonal designs, which are of particular importance in nonparametric regression via wavelet shrinkage. Empirical Bayes estimates of hyperparameters are easily obtained via the EM algorithm, and this approach is contrasted with a recent conditional likelihood proposal. Our model selection approach yields a straightforward method for data dependent threshold selection in wavelet regression. Performance on standard test sets and data examples is encouraging, especially if a translation invariant form of the estimator is used. Since the method produces separate threshold estimates on each wavelet resolution level, it also comfortably handles stationary correlated error structures.
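A minimal empirical Bayes sketch in the spirit of the abstract: a spike-and-slab prior for sparse normal means, with the mixing weight estimated by EM (we hold the slab variance tau^2 fixed for brevity, which the paper does not, and use posterior-mean shrinkage rather than the posterior median):

```python
import numpy as np

rng = np.random.default_rng(5)

# Sparse normal means: x_i = theta_i + N(0, 1); theta_i is exactly zero
# with probability 1 - w and N(0, tau^2) with probability w.
n, w_true, tau = 2000, 0.1, 3.0
is_signal = rng.random(n) < w_true
theta = np.where(is_signal, rng.normal(scale=tau, size=n), 0.0)
x = theta + rng.normal(size=n)

def norm_pdf(z, var):
    return np.exp(-z ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

# EM for the mixing weight w (the slab variance tau^2 is held fixed).
w = 0.5
for _ in range(200):
    # E-step: posterior probability that each observation is a signal;
    # marginally, signals look N(0, 1 + tau^2) and nulls look N(0, 1).
    num = w * norm_pdf(x, 1.0 + tau ** 2)
    gamma = num / (num + (1.0 - w) * norm_pdf(x, 1.0))
    # M-step: the updated weight is the average inclusion probability.
    w = float(gamma.mean())

# Posterior-mean shrinkage: coefficients with small gamma are pulled
# essentially to zero, giving a data-dependent soft threshold.
theta_hat = gamma * (tau ** 2 / (1.0 + tau ** 2)) * x
```

In the wavelet setting, running this per resolution level yields the level-dependent thresholds the abstract describes.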