Results 1-10 of 40
Stock Prices and Volume
, 1990
Abstract

Cited by 109 (9 self)
We undertake a comprehensive investigation of price and volume comovement using daily New York Stock Exchange data from 1928 to 1987. We adjust the data to take into account well-known calendar effects and long-run trends. To describe the process, we use a semi-nonparametric estimate of the joint density of current price change and volume conditional on past price changes and volume. Four empirical regularities are found: 1) there is a positive correlation between conditional volatility and volume; 2) large price movements are followed by high volume; 3) conditioning on lagged volume substantially attenuates the "leverage" effect; and 4) after conditioning on lagged volume, there is a positive risk/return relation.
Nonparametric Density Estimation and Tests of Continuous Time Interest Rate Models
 Review of Financial Studies
, 1998
Abstract

Cited by 64 (2 self)
A number of recent papers have used nonparametric density estimation or nonparametric regression to study the instantaneous spot interest rate and to test term structure models. However, little is known about the performance of these methods when applied to persistent time series, such as U.S. interest rates. This paper uses the Vasicek [1977] model to study the performance of kernel density estimates of the ergodic distribution of the instantaneous spot rate. The model's tractability allows me to analyze the MISE of the kernel estimate as a function of persistence, variance of the ergodic distribution, span of the data, sampling frequency, and kernel bandwidth. Our principal result is that persistence has an important impact on optimal bandwidth selection and on finite sample performance. We also find that sampling the data more frequently has little effect on estimator quality. We also examine one of Aït-Sahalia's [1996a] new nonparametric tests of parametric continuous-time Markov models of the instantaneous spot interest rate. The test is based on the distance between parametric and nonparametric (kernel) estimates of the ergodic distribution of the interest rate process. We find that the test rejects too often when using asymptotic critical values and 22 years of data. The reason for the high rejection rate is probably that the asymptotic distribution of the test does not depend on persistence, while the finite sample performance of the estimator does. After critical values are adjusted for size, the test has low power in distinguishing between the Vasicek and Cox-Ingersoll-Ross models when compared with a conditional moment based specification test.
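The setup the abstract describes can be sketched numerically. The snippet below is an illustrative sketch, not the paper's code (all parameter values are invented): it simulates a Vasicek/Ornstein-Uhlenbeck short rate via its exact AR(1) discretization, forms a Gaussian kernel density estimate of the ergodic distribution, and compares it with the true stationary density N(theta, sigma^2/(2*kappa)).

```python
import numpy as np

rng = np.random.default_rng(0)
kappa, theta, sigma = 0.5, 0.06, 0.02     # illustrative mean reversion, long-run mean, volatility
dt, n = 1.0 / 250, 250 * 22               # daily sampling over a 22-year span

# Exact AR(1) discretization of the Vasicek (Ornstein-Uhlenbeck) dynamics
phi = np.exp(-kappa * dt)
cond_sd = sigma * np.sqrt((1.0 - phi**2) / (2.0 * kappa))
r = np.empty(n)
r[0] = theta
for t in range(1, n):
    r[t] = theta + phi * (r[t - 1] - theta) + cond_sd * rng.standard_normal()

# Gaussian kernel density estimate of the ergodic distribution
h = 1.06 * r.std() * n ** (-0.2)          # Silverman rule-of-thumb bandwidth
grid = np.linspace(r.min(), r.max(), 200)
kde = np.exp(-0.5 * ((grid[:, None] - r[None, :]) / h) ** 2).mean(axis=1) \
      / (h * np.sqrt(2.0 * np.pi))

# True ergodic density is N(theta, sigma^2 / (2 * kappa))
erg_sd = sigma / np.sqrt(2.0 * kappa)
true_pdf = np.exp(-0.5 * ((grid - theta) / erg_sd) ** 2) / (erg_sd * np.sqrt(2.0 * np.pi))
dx = grid[1] - grid[0]
ise = ((kde - true_pdf) ** 2).sum() * dx  # integrated squared error on the grid
```

Averaging `ise` over many simulated paths would approximate the MISE the paper analyzes; with a persistent path (small `kappa`), the kernel estimate can sit visibly far from the true density even over a long span.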
Information Theoretic Approaches to Inference in Moment Condition Models
 Econometrica
, 1998
Abstract

Cited by 61 (2 self)
One-step efficient GMM estimation has been developed in the recent papers of Back and Brown (1990), Imbens (1993), and Qin and Lawless (1994). These papers emphasized methods that correspond to using Owen's (1988) method of empirical likelihood to reweight the data so that the reweighted sample obeys all the moment restrictions at the parameter estimates. In this paper we consider an alternative KLIC-motivated weighting and show how it and similar discrete reweightings define a class of unconstrained optimization problems which includes GMM as a special case. Such KLIC-motivated reweightings introduce M auxiliary 'tilting' parameters, where M is the number of moments; parameter and overidentification hypotheses can be recast in terms of these tilting parameters. Such tests, when appropriately conditioned on the estimates of the original parameters, are often startlingly more effective than their conventional counterparts. This is apparently due to the local ancillarity of the original parameters for the tilting parameters.
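As a concrete illustration of a KLIC-motivated reweighting (a hedged sketch on invented data, not the authors' implementation): exponential-tilting weights w_i proportional to exp(lam * g_i) can be computed by Newton's method, choosing the tilting parameter lam so the reweighted sample satisfies the moment restriction exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=1.1, size=500)   # invented data; true mean is 1.1
g = x - 1.0                                # single moment condition E[x - 1] = 0

# Newton iterations solving K'(lam) = 0, where K(lam) = log((1/n) sum exp(lam*g))
lam = 0.0
for _ in range(50):
    w = np.exp(lam * g)
    w /= w.sum()                           # normalized tilting weights
    grad = np.sum(w * g)                   # K'(lam): the reweighted moment
    hess = np.sum(w * g**2) - grad**2      # K''(lam): reweighted variance (> 0)
    lam -= grad / hess

w = np.exp(lam * g)
w /= w.sum()                               # final weights: sum(w * g) is ~0
```

With M moments, lam becomes an M-vector and the same Newton step uses the reweighted moment vector and covariance matrix; hypotheses about the original parameters can then be recast as hypotheses about the tilting parameters, as in the paper.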
Consistent Specification Testing With Nuisance Parameters Present Only Under The Alternative
, 1995
Abstract

Cited by 55 (10 self)
The nonparametric and the nuisance parameter approaches to consistently testing statistical models are both attempts to estimate topological measures of distance between a parametric and a nonparametric fit, and neither dominates in experiments. This topological unification allows us to greatly extend the nuisance parameter approach. How and why the nuisance parameter approach works, and how it can be extended, bears closely on recent developments in artificial neural networks. Statistical content is provided by viewing specification tests with nuisance parameters as tests of hypotheses about Banach-valued random elements and applying the Banach Central Limit Theorem and Law of the Iterated Logarithm, leading to simple procedures that can be used as a guide to when computationally more elaborate procedures may be warranted.
1. Introduction
In testing whether or not a parametric statistical model is correctly specified, there are a number of apparently distinct approaches one might take. T...
Large Scale Conditional Covariance Matrix Modeling, Estimation and Testing,” University of California at San Diego working paper
, 1994
Abstract

Cited by 20 (2 self)
A new representation of the diagonal Vech model is given using the Hadamard product. Sufficient conditions on the parameter matrices are provided to ensure the positive definiteness of the covariance matrices from the new representation. Based on this, some new and simple models are discussed. A set of diagnostic tests for multivariate ARCH models is proposed. The tests are able to detect various model misspecifications by examining the orthogonality of the squared normalized residuals. A small Monte Carlo study is carried out to check the small sample performance of the tests. An empirical example is also given as guidance for model estimation and selection in the multivariate framework. For the specific data set considered, it is found that the simple one- and two-parameter models and the constant conditional correlation model perform fairly well.
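The representation and the positive definiteness argument can be sketched as follows (illustrative parameter values, not the paper's specific models): in the Hadamard form H_t = Omega + A∘(e e') + B∘H_{t-1}, if Omega, A, and B are positive semidefinite then every term is a Hadamard product of PSD matrices, and by the Schur product theorem each H_t stays PSD.

```python
import numpy as np

rng = np.random.default_rng(2)
k = 3

def rand_psd_unit_diag(r, k):
    """Random PSD matrix rescaled to unit diagonal (correlation-like)."""
    m = r.standard_normal((k, k))
    s = m @ m.T
    d = np.sqrt(np.diag(s))
    return s / np.outer(d, d)

# Illustrative PSD parameter matrices; the diagonal scales 1e-4, 0.1, 0.8
# keep the variance recursion stable (A_ii + B_ii = 0.9 < 1)
Omega = 1e-4 * rand_psd_unit_diag(rng, k)
A = 0.1 * rand_psd_unit_diag(rng, k)
B = 0.8 * rand_psd_unit_diag(rng, k)

H = 1e-3 * np.eye(k)
for _ in range(100):
    e = np.linalg.cholesky(H) @ rng.standard_normal(k)   # shock with covariance H
    H = Omega + A * np.outer(e, e) + B * H               # '*' is the Hadamard product
```

Each update is a sum of Hadamard products of PSD matrices, so H remains positive semidefinite at every step; with Omega strictly positive definite it stays positive definite, which is what the Cholesky factorization above relies on.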
Working During School and Academic Performance
, 2000
Abstract

Cited by 20 (3 self)
In this paper, we utilize unique new data in an attempt to examine the extent to which the endogeneity of hours may bias estimates of the effect of employment on academic performance. The data are obtained directly from the administrative records of Berea College. Located in central Kentucky, this liberal arts institution operates under a mission of providing an education to those who "have great promise, but limited economic resources." As part of this mission, all students who attend Berea receive full tuition scholarships. Part of the cost of schooling is defrayed through a mandatory work-study program. Although all students must work at least a minimum of ten hours a week, variation in hours worked arises because students can often choose to earn extra income by working additional hours. We wish to note in advance that, given the unique nature of Berea College, it is our belief that our results should be viewed cautiously. Nonetheless, as will be described throughout the paper, the institutional details of the Berea College labor program and the detailed nature of our administrative data ...
See Bound et al. (1995) for a discussion of the potential problems that can arise in instrumental variables estimation when the correlation between instruments and the endogenous explanatory variable is weak.
INVERSE PROBABILITY WEIGHTED ESTIMATION FOR GENERAL MISSING DATA PROBLEMS
Abstract

Cited by 13 (1 self)
I study inverse probability weighted M-estimation under a general missing data scheme. Examples include M-estimation with missing data due to a censored survival time, propensity score estimation of the average treatment effect in the linear exponential family, and variable probability sampling with observed retention frequencies. I extend an important result known to hold in special cases: estimating the selection probabilities is generally more efficient than using the known selection probabilities in estimation. For the treatment effect case, the setup allows a general characterization of a "double robustness" result due to Scharfstein, Rotnitzky, and Robins (1999).
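A minimal sketch of the idea for the simplest case, estimating a mean under missing-at-random selection (simulated data and a simple logit selection model; all names and values are illustrative, not the paper's): weighting complete cases by the inverse of the estimated selection probability removes the bias that the complete-case average suffers from.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
z = rng.standard_normal(n)                    # always-observed covariate
y = 2.0 + z + rng.standard_normal(n)          # outcome; true mean is 2.0
p_true = 1.0 / (1.0 + np.exp(-(0.5 + z)))     # selection probability depends on z
s = (rng.random(n) < p_true).astype(float)    # s = 1 when y is observed

# Estimate the selection probabilities by logit maximum likelihood (Newton)
X = np.column_stack([np.ones(n), z])
b = np.zeros(2)
for _ in range(25):
    q = 1.0 / (1.0 + np.exp(-X @ b))
    b += np.linalg.solve(X.T @ (X * (q * (1 - q))[:, None]), X.T @ (s - q))

p_hat = 1.0 / (1.0 + np.exp(-X @ b))
ipw_mean = np.mean(s * y / p_hat)             # inverse probability weighted estimate
naive_mean = y[s == 1].mean()                 # complete-case average (biased here)
```

Here `naive_mean` is biased upward because high-z (hence high-y) units are observed more often, while `ipw_mean` recovers the population mean; using the estimated `p_hat` rather than `p_true` is, per the efficiency result the abstract extends, generally the better choice.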
Testing Distributional Assumptions: A GMM Approach. Université de Montréal
, 2005
Abstract

Cited by 9 (0 self)
In this paper, we consider testing distributional assumptions. Special cases that we consider are members of the Pearson family, such as the normal, Student, gamma, beta, and uniform distributions. The test statistics we consider are based on a set of moment conditions. This set coincides with the first moment conditions derived by Hansen and Scheinkman (1995) when one considers a continuous time model. By testing moment conditions, we treat in detail the parameter uncertainty problem that arises when the considered variable is not observed directly but depends on estimators of unknown parameters. In particular, we derive moment tests that are robust against parameter uncertainty. We also consider the case where the variable of interest is serially correlated with unknown dependence, adopting a HAC approach for this purpose. This paper extends Bontemps and Meddahi (2005), who considered this approach for the normal case. Finite sample properties of our tests when the variable of interest is Student distributed are derived through a comprehensive Monte Carlo study. An empirical application to a Student-GARCH model is presented. Keywords: Pearson distributions; Hansen-Scheinkman moment conditions; parameter uncertainty; serial correlation; HAC.
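To make the moment-condition idea concrete for the normal case (a hedged sketch in the spirit of such moment tests, not the paper's code): under N(0,1) every Hermite polynomial has mean zero, so studentized sample means of He_3 and He_4 give a simple GMM-style statistic that is asymptotically chi-squared with 2 degrees of freedom.

```python
import numpy as np

rng = np.random.default_rng(4)
z = rng.standard_normal(2000)          # data to test; H0 (normality) is true here

# Probabilists' Hermite polynomials: E[He_k(Z)] = 0 under N(0,1)
he3 = z**3 - 3.0 * z                   # picks up skewness
he4 = z**4 - 6.0 * z**2 + 3.0          # picks up excess kurtosis

# He_3 and He_4 are uncorrelated under normality, so the studentized
# sample means can simply be summed into a chi-squared statistic
stat = sum(len(z) * m.mean() ** 2 / m.var() for m in (he3, he4))
reject = stat > 5.99                   # chi2(2) critical value at the 5% level
```

When z is standardized with estimated rather than known parameters, the moment variances must be corrected for parameter uncertainty; handling that correction, and serial correlation via HAC, is exactly what the paper works out.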
Neural network test and nonparametric kernel test for neglected nonlinearity in regression models
 Studies in Nonlinear Dynamics and Econometrics
, 2000
Abstract

Cited by 4 (0 self)
We consider two conditional moment tests for neglected nonlinearity in regression models and examine their finite sample performance. The two tests are the nonparametric kernel test of Li and Wang (1998) and Zheng (1996) and the neural network test of White (1989). We examine the asymptotic test, the naive bootstrap test, and the wild bootstrap test for weakly dependent time series and for independent data.
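A minimal sketch of the kernel test's core quantity (illustrative simulated data; a Zheng-type statistic up to studentization, not the authors' exact implementation): under a correctly specified linear model, kernel-weighted cross-products of OLS residuals at nearby covariate values are centered at zero, while neglected nonlinearity makes them systematically positive.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 300
x = rng.uniform(-2.0, 2.0, n)
y = 1.0 + x + 0.5 * x**2 + rng.standard_normal(n)   # true DGP is nonlinear

# OLS fit of the (misspecified) linear null model
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta                                    # residuals carry the x^2 term

# Kernel-weighted cross-products of residuals (diagonal left out)
h = 1.06 * x.std() * n ** (-0.2)
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
np.fill_diagonal(K, 0.0)
T = (np.outer(e, e) * K).sum() / (n * (n - 1) * h)  # > 0 signals neglected nonlinearity
```

In practice T is studentized and referred to a normal (or bootstrap) critical value; the naive and wild bootstrap versions the abstract mentions resample this statistic to improve its finite sample behavior.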
Semiparametric Testing of the Link Function in Models For Binary Outcomes
, 1994
Abstract

Cited by 2 (1 self)
This work stands in the larger context of tests of parametric models against semiparametric alternatives. A semiparametric statistic proposed by Horowitz and Härdle (1994), which we call the HH statistic, can be used to construct a test for unknown deviations from a hypothesized link function in a parametric single index model; the test rejects for large values of the statistic. Here we study empirically the finite sample performance of this statistic for the important special case of logistic regression for binary data. We show that its asymptotic distribution is not a good approximation to the finite sample distribution even for sample sizes of several thousand observations. For the chosen examples, the value of the test statistic in finite samples tends to be smaller than would be expected under the asymptotic approximation, thus reducing the power of the test based on asymptotic critical values. Moreover, the finite sample variance of the statistic depends on the bandwidth, wh...