Results 1–10 of 11
Generalized Likelihood Ratio Statistics And Wilks Phenomenon
, 2000
"... this paper. We introduce the generalized likelihood statistics to overcome the drawbacks of nonparametric maximum likelihood ratio statistics. New Wilks phenomenon is unveiled. We demonstrate that a class of the generalized likelihood statistics based on some appropriate nonparametric estimators are ..."
Abstract

Cited by 78 (22 self)
… this paper. We introduce the generalized likelihood statistics to overcome the drawbacks of nonparametric maximum likelihood ratio statistics. A new Wilks phenomenon is unveiled. We demonstrate that a class of the generalized likelihood statistics based on some appropriate nonparametric estimators are asymptotically distribution free and follow …
Consistent Specification Testing With Nuisance Parameters Present Only Under The Alternative
, 1995
"... . The nonparametric and the nuisance parameter approaches to consistently testing statistical models are both attempts to estimate topological measures of distance between a parametric and a nonparametric fit, and neither dominates in experiments. This topological unification allows us to greatly ex ..."
Abstract

Cited by 55 (10 self)
The nonparametric and the nuisance parameter approaches to consistently testing statistical models are both attempts to estimate topological measures of distance between a parametric and a nonparametric fit, and neither dominates in experiments. This topological unification allows us to greatly extend the nuisance parameter approach. How and why the nuisance parameter approach works, and how it can be extended, bears closely on recent developments in artificial neural networks. Statistical content is provided by viewing specification tests with nuisance parameters as tests of hypotheses about Banach-valued random elements and applying the Banach Central Limit Theorem and Law of the Iterated Logarithm, leading to simple procedures that can be used as a guide to when computationally more elaborate procedures may be warranted.

1. Introduction. In testing whether or not a parametric statistical model is correctly specified, there are a number of apparently distinct approaches one might take. T…
Test of significance when data are curves
 Journal of the American Statistical Association
, 1998
"... With modern technology, massive data can easily be collected in a form of multiple sets of curves. New statistical challenge includes testing whether there is any statistically significant difference among these sets of curves. In this paper, we propose some new tests for comparing two groups of cur ..."
Abstract

Cited by 32 (1 self)
With modern technology, massive data can easily be collected in the form of multiple sets of curves. A new statistical challenge is testing whether there is any statistically significant difference among these sets of curves. In this paper, we propose some new tests for comparing two groups of curves based on the adaptive Neyman test and the wavelet thresholding techniques introduced in Fan (1996). We demonstrate that these tests inherit the properties outlined in Fan (1996) and that they are simple and powerful for detecting differences between two sets of curves. We then further generalize the idea to compare multiple sets of curves, resulting in an adaptive high-dimensional analysis of variance, called HANOVA. These newly developed techniques are illustrated using a dataset on pizza commercials, where the observations are curves, and an analysis of cornea topography in ophthalmology, where images of individuals are observed. A simulation example is also presented to illustrate the power of the adaptive Neyman test.
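For concreteness, the adaptive Neyman statistic of Fan (1996) that these tests build on can be sketched in a few lines. This is a minimal sketch, not the paper's full procedure: `coefs` is a hypothetical name for the standardized Fourier (or wavelet) coefficients of the difference between the two group mean curves, assumed approximately standard normal under the null.

```python
import numpy as np

def adaptive_neyman(coefs):
    """Adaptive Neyman statistic in the form given in Fan (1996).

    coefs: standardized coefficients X_1, ..., X_n, approximately N(0, 1)
    under the null of no difference between the two groups of curves.
    The truncation point m is chosen adaptively by maximization.
    """
    x = np.asarray(coefs, dtype=float)
    n = x.size
    m = np.arange(1, n + 1)
    # Partial sums of (X_i^2 - 1), standardized by sqrt(2m); maximize over m.
    t_star = np.max(np.cumsum(x**2 - 1) / np.sqrt(2 * m))
    # Normalization toward an extreme-value limit.
    lln = np.log(np.log(n))
    return (np.sqrt(2 * lln) * t_star
            - (2 * lln + 0.5 * np.log(lln) - 0.5 * np.log(4 * np.pi)))
```

Large coefficients inflate the partial sums and hence the statistic, which is what makes the test powerful against smooth differences concentrated in low frequencies.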
Goodness-of-Fit Tests for Parametric Regression Models
 Journal of the American Statistical Association
, 2001
"... Several new tests are proposed for examining the adequacy of a family of parametric models against large nonparametric alternatives. These tests formally check if the bias vector of residuals from parametric ts is negligible by using the adaptive Neyman test and other methods. The testing procedures ..."
Abstract

Cited by 19 (5 self)
Several new tests are proposed for examining the adequacy of a family of parametric models against large nonparametric alternatives. These tests formally check whether the bias vector of residuals from parametric fits is negligible, using the adaptive Neyman test and other methods. The testing procedures formalize the traditional model diagnostic tools based on residual plots. We examine the rates of contiguous alternatives that can be detected consistently by the adaptive Neyman test. Applications of the procedures to partially linear models are thoroughly discussed. Our simulation studies show that the new testing procedures are indeed powerful and omnibus. The power of the proposed tests is comparable to that of the F-test even in situations where the F-test is known to be suitable, and can be far greater in other situations. An application to testing linear models versus additive models is discussed.
Sieved empirical likelihood ratio tests for nonparametric functions
 Ann. Statist
, 2004
"... Generalized likelihood ratio statistics have been proposed in Fan, Zhang and Zhang [Ann. Statist. 29 (2001) 153–193] as a generally applicable method for testing nonparametric hypotheses about nonparametric functions. The likelihood ratio statistics are constructed based on the assumption that the d ..."
Abstract

Cited by 12 (1 self)
Generalized likelihood ratio statistics have been proposed in Fan, Zhang and Zhang [Ann. Statist. 29 (2001) 153–193] as a generally applicable method for testing nonparametric hypotheses about nonparametric functions. The likelihood ratio statistics are constructed based on the assumption that the distributions of stochastic errors are in a certain parametric family. We extend their work to the case where the error distribution is completely unspecified via newly proposed sieve empirical likelihood ratio (SELR) tests. The approach is also applied to test conditional estimating equations on the distributions of stochastic errors. It is shown that the proposed SELR statistics follow asymptotically rescaled χ²-distributions, with the scale constants and the degrees of freedom being independent of the nuisance parameters. This demonstrates that the Wilks phenomenon observed in Fan, Zhang and Zhang [Ann. Statist. 29 (2001) 153–193] continues to hold under more relaxed models and a larger class of techniques. The asymptotic power of the proposed test is also derived, which achieves the optimal rate for nonparametric hypothesis testing. The proposed approach has two advantages over the generalized likelihood ratio method: it requires one only to specify some conditional estimating equations rather than the entire distribution of the stochastic error, and the procedure adapts automatically to the unknown error distribution including heteroscedasticity. A simulation study is conducted to evaluate our proposed procedure empirically.
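The empirical likelihood machinery underlying such tests can be illustrated in its simplest form: Owen's empirical likelihood ratio for a mean. This is only the core Lagrange-multiplier computation that exhibits the Wilks-type χ² limit, not the sieved construction of the paper; the un-safeguarded Newton solver is an assumption for illustration.

```python
import numpy as np

def el_log_ratio(x, mu0, tol=1e-10, max_iter=100):
    """-2 log empirical likelihood ratio for H0: E[X] = mu0.

    Maximizes prod n*p_i subject to sum p_i (x_i - mu0) = 0; the solution
    is p_i = 1 / (n * (1 + lam * (x_i - mu0))), with lam solving
        sum_i (x_i - mu0) / (1 + lam * (x_i - mu0)) = 0,
    found here by plain Newton iteration (no step safeguards).
    Under H0 the statistic is asymptotically chi-squared with 1 df.
    """
    z = np.asarray(x, dtype=float) - mu0
    if not (z.min() < 0 < z.max()):
        raise ValueError("mu0 must lie inside the convex hull of the data")
    lam = 0.0
    for _ in range(max_iter):
        denom = 1.0 + lam * z
        grad = np.sum(z / denom)           # equation to be zeroed in lam
        hess = -np.sum((z / denom) ** 2)   # strictly negative: stable Newton
        step = grad / hess
        lam -= step
        if abs(step) < tol:
            break
    return 2.0 * np.sum(np.log1p(lam * z))
```

When the sample mean already equals mu0, the multiplier is zero and the statistic vanishes, mirroring the parametric likelihood ratio.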
The Penalty in Data Driven Neyman’s Tests
, 2000
"... Abstract Data driven Neyman’s tests are based on two elements: Neyman’s smooth tests in finite dimensional submodels and a selection rule to choose the “right ” submodel. As selection rule usually (a modification of) Schwarz’s rule is applied. In this paper we consider data driven Neyman’s tests wit ..."
Abstract

Cited by 2 (0 self)
Data-driven Neyman's tests are based on two elements: Neyman's smooth tests in finite-dimensional submodels and a selection rule to choose the "right" submodel. As the selection rule, usually (a modification of) Schwarz's rule is applied. In this paper we consider data-driven Neyman's tests with selection rules allowing penalties other than the one in Schwarz's rule. It is shown that the nice properties of consistency against very large classes of alternatives, and the deeper result of asymptotic optimality in the sense of vanishing shortcoming, continue to hold for other penalties as well, including the one corresponding to Akaike's selection rule.

Keywords and phrases: goodness-of-fit, model selection, Schwarz's criterion, Akaike's criterion, penalty, data-driven test, consistency, vanishing shortcoming, intermediate efficiency, moderate deviations.
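The two penalties under discussion can be made concrete in a few lines. A sketch, assuming the vector `b` already holds the scaled sample Fourier coefficients (so that each b_j² contributes one component of Neyman's statistic) and `n` is the sample size; both names are illustrative.

```python
import numpy as np

def data_driven_neyman(b, n, penalty="schwarz"):
    """Data-driven Neyman statistic with a selectable penalty.

    Chooses the submodel dimension S maximizing
        sum_{j<=m} b_j^2 - penalty(m),
    with penalty(m) = m * log(n) for Schwarz's rule and 2 * m for
    Akaike's, then returns (T_S, S) with T_S = sum_{j<=S} b_j^2.
    """
    b = np.asarray(b, dtype=float)
    cum = np.cumsum(b**2)              # candidate Neyman statistics T_m
    m = np.arange(1, b.size + 1)
    pen = m * np.log(n) if penalty == "schwarz" else 2.0 * m
    s = int(np.argmax(cum - pen)) + 1  # first maximizer of the criterion
    return cum[s - 1], s
```

The lighter Akaike penalty tends to select larger submodels than Schwarz's rule for the same data, which is exactly the trade-off the paper's consistency and shortcoming results quantify.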
Testing Goodness-of-Fit Based on a Roughness Measure
 Journal of the American Statistical Association
, 1997
"... This article was part of the author's doctoral dissertation under the supervision of Jianqing Fan at the University of North Carolina; the author is very grateful for his generous guidance and encouragements. The author also thanks E. Carlstein, A.R. Gallant, M.R. Leadbetter, P.K. Sen, J.S. Simonoff ..."
Abstract

Cited by 1 (0 self)
This article was part of the author's doctoral dissertation under the supervision of Jianqing Fan at the University of North Carolina; the author is very grateful for his generous guidance and encouragement. The author also thanks E. Carlstein, A.R. Gallant, M.R. Leadbetter, P.K. Sen, J.S. Simonoff, and Y.K.N. Truong for useful comments on the first version of this article. Special thanks go to a referee and an associate editor for their constructive comments, which led to significant improvements in the presentation. … underlying density and its expected value under the null hypothesis. Konakov, Lauter, and Liero (1996) extended the Bickel-Rosenblatt (BR) test to the more general null hypothesis that the underlying density lies in a parametric class. Other recent work includes that of Bickel and Ritov (1992), Bowman (1992), Kim (1992), and Landsman and Rom (1995). This article considers the goodness-of-fit problem based on comparing the first derivatives of the underlying density and the hypothesized density, estimated by the kernel method. As pointed out by Müller (1992), "comparison of derivatives sometimes can pinpoint difference more sharply than just comparison of the functions themselves" (see also Chen 1994). The setting considered herein can be described more precisely as follows. Suppose that …
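The kernel estimate of the density derivative that this comparison rests on is short to write down. A sketch with a Gaussian kernel, where the bandwidth `h` is assumed to be chosen elsewhere:

```python
import numpy as np

def kde_derivative(x_grid, data, h):
    """Gaussian-kernel estimate of the density derivative f'(x):

        fhat'(x) = (1 / (n h^2)) * sum_i K'((x - X_i) / h),
        K'(u) = -u * phi(u),  phi = standard normal density.
    """
    x_grid = np.asarray(x_grid, dtype=float)
    data = np.asarray(data, dtype=float)
    u = (x_grid[:, None] - data[None, :]) / h   # (grid, sample) differences
    kprime = -u * np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return kprime.sum(axis=1) / (data.size * h**2)
```

A roughness-type test statistic can then be formed by numerically integrating the squared difference between this estimate and the derivative of the hypothesized density over the grid.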
A Basis Approach to Goodness-of-Fit Testing in Recurrent Event Models
"... A class of tests for the hypothesis that the baseline hazard function in Cox’s proportional hazards model and for a general recurrent event model belongs to a parametric family C ≡ {λ0(·; ξ) : ξ ∈ Ξ} is proposed. Finite properties of the tests are examined via simulations, while asymptotic propertie ..."
Abstract

Cited by 1 (1 self)
A class of tests is proposed for the hypothesis that the baseline hazard function in Cox's proportional hazards model, and in a general recurrent event model, belongs to a parametric family C ≡ {λ0(·; ξ) : ξ ∈ Ξ}. Finite-sample properties of the tests are examined via simulations, while asymptotic properties under a contiguous sequence of local alternatives are studied theoretically. An application of the tests to the general recurrent event model, which is an extended minimal repair model admitting covariates, is demonstrated. In addition, two real data sets are used to illustrate the applicability of the proposed tests.

Key Words: Counting process; goodness-of-fit test; minimal repair model; Neyman's test; nonhomogeneous Poisson process; repairable system; score test.
GOODNESS OF FIT FOR LATTICE PROCESSES
"... Abstract. The paper discusses tests for the correct speci…cation of a model when data is observed in a ddimensional lattice, extending previous work when the data is collected in the real line. As it happens with the latter type of data, the asymptotic distribution of the tests are functionals of a ..."
Abstract
The paper discusses tests for the correct specification of a model when data are observed on a d-dimensional lattice, extending previous work where the data are collected on the real line. As with the latter type of data, the asymptotic distributions of the tests are functionals of a Gaussian sheet process, say B(·), defined on [0, π]^d. Because it is not easy to find a time transformation h(·) such that B(h(·)) becomes the standard Brownian sheet, a consequence is that the critical values are difficult, if at all possible, to obtain. So, to overcome this implementation problem, we propose to employ a bootstrap approach, showing its validity in our context.

JEL Classification: C21, C23.
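The bootstrap recipe alluded to here, replacing intractable asymptotic critical values by resampled ones, has a familiar generic shape. A sketch of the plain i.i.d. version only (the paper's scheme for lattice data is more delicate); `statistic` and `data` are hypothetical placeholders.

```python
import numpy as np

def bootstrap_critical_value(data, statistic, level=0.05, n_boot=500, rng=None):
    """Upper-tail bootstrap critical value for a test statistic whose
    null distribution is analytically intractable.

    Recomputes the statistic on n_boot resamples drawn with replacement
    and returns the empirical (1 - level)-quantile.
    """
    rng = np.random.default_rng(rng)
    data = np.asarray(data)
    n = data.size
    stats = np.empty(n_boot)
    for b in range(n_boot):
        resample = data[rng.integers(0, n, size=n)]  # i.i.d. resampling
        stats[b] = statistic(resample)
    return np.quantile(stats, 1.0 - level)
```

The test then rejects when the observed statistic exceeds this resampled quantile; the paper's contribution is proving that an analogous scheme remains valid for the Gaussian-sheet limits arising from lattice data.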
Testing in functional data analysis using quadratic forms
, 2008
"... Tests of hypotheses associated with the functional linear model are investigated under smoothness assumptions. The tests considered are those which use a quadraticform test statistic calculated on a highdimensional discrete model that is obtained by Fourier transformation. Asymptotic performance b ..."
Abstract
Tests of hypotheses associated with the functional linear model are investigated under smoothness assumptions. The tests considered are those which use a quadratic-form test statistic calculated on a high-dimensional discrete model that is obtained by Fourier transformation. Asymptotic performance bounds for this class of tests are deduced under rates-of-testing theory, and explicit formulas are given that characterize the performance of many such tests. Examples are discussed, including an optimal class of tests based on quadratic forms, and recommendations are made for the use of the tests in practice. Among other insights, the results describe the impact of model dimension on performance, which is a particular concern in functional data analysis.

KEY WORDS: functional data analysis; quadratic forms; high-dimensional testing; rates of testing; Fourier decomposition