Results 1-10 of 12
Limited information estimators and exogeneity tests for simultaneous probit models
, 1988
Abstract

Cited by 387 (0 self)
A two-step maximum likelihood procedure is proposed for estimating simultaneous probit models and is compared to alternative limited information estimators. Conditions under which each estimator attains the Cramér-Rao lower bound are obtained. Simple tests for exogeneity based on the new two-step estimator are proposed and are shown to be asymptotically equivalent to one another and to have the same local asymptotic power as classical tests based on the limited information maximum likelihood estimator. Finite sample comparisons between the new and alternative estimators are presented based on some Monte Carlo evidence. The performance of the proposed tests for exogeneity is also assessed.
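The two-step idea can be illustrated with a simple control-function sketch: regress the endogenous regressor on the instruments, then include the first-stage residual in the probit and test its coefficient. The simulated design, variable names, and estimator details below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Simulate a probit with one endogenous regressor y2 (errors u, v correlated).
rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=(n, 2))                          # instruments
uv = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n)
u, v = uv[:, 0], uv[:, 1]
y2 = z @ np.array([1.0, -1.0]) + v                   # endogenous regressor
y1 = (0.5 + y2 + u > 0).astype(float)                # observed binary outcome

# Step 1: OLS first stage; keep the residuals.
Z = np.column_stack([np.ones(n), z])
pi_hat, *_ = np.linalg.lstsq(Z, y2, rcond=None)
vhat = y2 - Z @ pi_hat

# Step 2: probit of y1 on (1, y2, vhat). A Wald test that the coefficient
# on vhat is zero serves as a simple exogeneity test in this setup.
X = np.column_stack([np.ones(n), y2, vhat])

def negloglik(b):
    idx = X @ b
    return -np.sum(y1 * norm.logcdf(idx) + (1 - y1) * norm.logcdf(-idx))

fit = minimize(negloglik, np.zeros(3), method="BFGS")
rho_coef = fit.x[2]   # nonzero here because u and v are correlated by design
```

Since cov(u, v) = 0.5 in the simulation, the residual coefficient comes out clearly nonzero, which is the signal the exogeneity test looks for.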
An econometric analysis of residential electric appliance holdings and consumption
 Econometrica
, 1984
Cited by 281 (2 self)
Consistent Specification Testing With Nuisance Parameters Present Only Under The Alternative
, 1995
Abstract

Cited by 83 (13 self)
The nonparametric and the nuisance parameter approaches to consistently testing statistical models are both attempts to estimate topological measures of distance between a parametric and a nonparametric fit, and neither dominates in experiments. This topological unification allows us to greatly extend the nuisance parameter approach. How and why the nuisance parameter approach works, and how it can be extended, bears closely on recent developments in artificial neural networks. Statistical content is provided by viewing specification tests with nuisance parameters as tests of hypotheses about Banach-valued random elements and applying the Banach Central Limit Theorem and Law of the Iterated Logarithm, leading to simple procedures that can be used as a guide to when computationally more elaborate procedures may be warranted.
Consistent Specification Testing Via Nonparametric Series Regression
 Econometrica
, 1995
Abstract

Cited by 65 (3 self)
This paper proposes two consistent one-sided specification tests for parametric regression models: one based on the sample covariance between the residual from the parametric model and the discrepancy between the parametric and nonparametric fitted values; the other based on the difference in sums of squared residuals between the parametric and nonparametric models. We estimate the nonparametric model by series regression.
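The first test idea can be sketched numerically: fit a parametric (here linear) model and a series (here cubic polynomial) regression, then look at the sample covariance between the parametric residuals and the fitted-value discrepancy. The design and the raw (unstandardized) statistic below are illustrative assumptions, not the paper's exact test.

```python
import numpy as np

# Data from a nonlinear regression; the linear model is misspecified.
rng = np.random.default_rng(1)
n = 1000
x = rng.uniform(-1, 1, n)
y = np.sin(2 * x) + 0.1 * rng.normal(size=n)

# Parametric (linear) fit and its residuals.
Xp = np.column_stack([np.ones(n), x])
bp, *_ = np.linalg.lstsq(Xp, y, rcond=None)
resid = y - Xp @ bp

# Series regression on polynomial terms 1, x, x^2, x^3.
Xs = np.column_stack([x**k for k in range(4)])
bs, *_ = np.linalg.lstsq(Xs, y, rcond=None)
gap = Xs @ bs - Xp @ bp          # discrepancy between the two fitted values

cov_stat = np.mean(resid * gap)  # large positive values flag misspecification
```

Because the linear regressors are contained in the series terms, this sample covariance equals the mean squared fitted-value gap, so it is nonnegative and clearly positive here where the linear model misses the curvature.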
Testing Slope Homogeneity in Large Panels
, 2007
Abstract

Cited by 18 (2 self)
This paper proposes a standardized version of Swamy's test of slope homogeneity for panel data models where the cross-section dimension (N) could be large relative to the time-series dimension (T). The proposed test, denoted by ∆̃, exploits the cross-section dispersion of individual slopes weighted by their relative precision. In the case of models with strictly exogenous regressors, but with non-normally distributed errors, the test is shown to have a standard normal distribution as (N, T) → ∞ jointly, such that √N/T² → 0. When the errors are normally distributed, a mean-variance bias-adjusted version of the test is shown to be normally distributed irrespective of the relative expansion rates of N and T. The test is also applied to stationary dynamic models, and shown to be valid asymptotically so long as N/T → κ as (N, T) → ∞ jointly, where 0 ≤ κ < ∞. Using Monte Carlo experiments, it is shown that the test has the correct size and satisfactory power in panels with strictly exogenous regressors for various combinations of N and T. Similar results are also obtained for dynamic panels, but only if the autoregressive coefficient is not too close to unity and so long as T ≥ N.
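A stripped-down version of the idea can be coded directly: estimate a slope per cross-section unit, measure the precision-weighted dispersion of those slopes around the pooled estimate, and standardize. The single-regressor design and the exact standardization below are simplifying assumptions, not the paper's bias-adjusted statistic.

```python
import numpy as np

def delta_tilde(y, x):
    """Swamy-type standardized dispersion statistic for y_it = b_i * x_it + e_it.
    y, x have shape (N, T); returns a roughly N(0,1) statistic under homogeneity."""
    N, T = y.shape
    b_i = (x * y).sum(axis=1) / (x * x).sum(axis=1)    # unit-by-unit OLS slopes
    e = y - b_i[:, None] * x
    s2_i = (e * e).sum(axis=1) / (T - 1)               # unit error variances
    w = (x * x).sum(axis=1) / s2_i                     # precision weights
    b_pool = (w * b_i).sum() / w.sum()                 # weighted pooled slope
    S = (w * (b_i - b_pool) ** 2).sum()                # dispersion, ~chi2_{N-1} under H0
    k = 1                                              # one slope coefficient per unit
    return np.sqrt(N) * (S / N - k) / np.sqrt(2 * k)

rng = np.random.default_rng(2)
N, T = 50, 100
x = rng.normal(size=(N, T))
y_null = 1.0 * x + rng.normal(size=(N, T))                    # homogeneous slopes
y_alt = (1.0 + rng.normal(size=(N, 1))) * x + rng.normal(size=(N, T))
stat_null = delta_tilde(y_null, x)   # should be moderate under homogeneity
stat_alt = delta_tilde(y_alt, x)     # should be very large under heterogeneity
```

The statistic stays near zero when slopes are common and explodes when they differ, which is the dispersion logic the ∆̃ test formalizes.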
The elasticity of substitution: evidence from a UK firm-level data set, Bank of England Working Paper no.
, 2008
Abstract

Cited by 8 (2 self)
Using a panel of UK firms spanning three decades, we provide estimates of the long-run elasticity of substitution between capital and labour, the (negative of the) elasticity of capital and investment with respect to the user cost. The parameter is estimated with long-period differenced data and pooled mean group panel methods. The robust result is that it lies below 0.5, confirming previous results obtained using aggregate UK data, and consistent with some recent results using US data. The estimated returns to scale exceed unity, but when constant returns are imposed the estimated elasticity of substitution is substantially unchanged.
Sustainable pioneering advantage? Profit implications of market entry order. Marketing Science 22(3): 371–392
, 2003
Abstract

Cited by 7 (0 self)
There is strong theoretical and empirical evidence supporting the idea that “first-to-market” leads to an enduring market share advantage. In sharp contrast to these findings, we find that at the business unit level being first-to-market leads, on average, to a long-term profit disadvantage. This result holds for a sample of consumer goods as well as a sample of industrial goods and leads to questions about the validity of first-mover advantage, in and of itself, as a strategy to achieve superior performance. We replicate the typical demand-side pioneering advantage but find an even greater average cost disadvantage, which is the source of the pioneering profit disadvantage. In an extended analysis, we show that first-to-market leads to an initial profit advantage, which, depending on the sample or profit measure, lasts for about 12 to 14 years before turning into a disadvantage. Moreover, we show that pioneers differentially benefit from a lack of consumer learning, a strong market position, and patent protection. These three moderating factors together can actually help pioneers achieve a sustainable profit advantage over later entrants. Finally, we find strong support for the theoretical argument that the entry order decision should be treated as endogenous in empirical estimation.
Macroeconomic relativity: government spending, private investment and unemployment
 in the USA 1948-1998’, Structural Change and Economic Dynamics
, 1999
Abstract

Cited by 5 (4 self)
A new approach to time series modelling is used to explore how government spending and private capital investment may have influenced the unemployment rate in the USA between 1948 and 1988. The resulting model suggests strongly that the investigation of dynamic relationships between purely relative measures of the major macroeconomic variables can help in understanding changes in economic behaviour. It also allows for an initial investigation of the post-1988 period and an analysis of possible reasons for the differences in the investment-unemployment behaviour of the US economy before and after 1988. KEY WORDS: nonlinear time series analysis; data-based mechanistic modelling; investment and unemployment; relativistic economic variables.
The Hausman test statistic can be negative even asymptotically, unpublished manuscript
, 2007
Abstract

Cited by 4 (0 self)
ABSTRACT. We show that under the alternative hypothesis the Hausman chi-square test statistic can be negative not only in small samples but even asymptotically. Therefore in large samples such a result is only compatible with the alternative and should be interpreted accordingly. Applying a known insight from finite samples, this can only occur if the different estimation precisions (often the residual variance estimates) under the null and the alternative both enter the test statistic. In finite samples, using the absolute value of the test statistic is a remedy that does not alter the null distribution and is thus admissible. Even for positive test statistics the relevant covariance matrix difference should be routinely checked for positive semidefiniteness, because we also show that otherwise test results may be misleading. Of course the preferable solution still is to impose the same nuisance parameter (i.e., residual variance) estimate under the null and alternative hypotheses, if the model context permits that with relative ease. We complement the likelihood-based exposition by a formal proof in an omitted-variable context, we present simulation evidence for the test of panel random effects, and we illustrate the problems with a panel homogeneity test.
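The mechanism is easy to see numerically: the Hausman statistic is a quadratic form in the inverse of a covariance matrix difference, and when that difference is not positive semidefinite the statistic can come out negative. The matrices below are made-up numbers chosen to exhibit the effect, not taken from the paper.

```python
import numpy as np

# q: difference between the two estimators; V1, V0: their estimated covariance
# matrices under the alternative and the null (using different variance
# estimates, so V1 - V0 need not be positive semidefinite).
q = np.array([0.1, -0.3])
V1 = np.array([[1.0, 0.2], [0.2, 0.5]])
V0 = np.array([[0.7, 0.2], [0.2, 0.9]])

D = V1 - V0                        # here diag(0.3, -0.4): eigenvalues of mixed sign
H = q @ np.linalg.inv(D) @ q       # the "chi-square" Hausman statistic
eigs = np.linalg.eigvalsh(D)       # checking psd-ness would flag the problem
```

With these numbers H = 0.01/0.3 + 0.09/(-0.4) ≈ -0.192, a negative "chi-square" value, and the eigenvalue check on D reveals why before the statistic is even computed.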
Econometrica
Abstract
This paper examines the consequences and detection of model misspecification when using maximum likelihood techniques for estimation and inference. The quasi-maximum likelihood estimator (QMLE) converges to a well-defined limit, and may or may not be consistent for particular parameters of interest. Standard tests (Wald, Lagrange Multiplier, or Likelihood Ratio) are invalid in the presence of misspecification, but more general statistics are given which allow inferences to be drawn robustly. The properties of the QMLE and the information matrix are exploited to yield several useful tests for model misspecification.
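The robust inference machinery rests on the "sandwich" covariance A⁻¹BA⁻¹, where A is the mean Hessian and B the mean outer product of scores; under correct specification B = -A (the information matrix equality), and their divergence is what misspecification tests exploit. The example below, a normal-mean QMLE fit to skewed exponential data, is an illustrative sketch under assumed notation, not the paper's own setup.

```python
import numpy as np

# Fit the mean of a N(mu, 1) model by QMLE to exponential data: the model is
# deliberately misspecified, but mu_hat is still consistent for the true mean.
rng = np.random.default_rng(3)
x = rng.exponential(scale=2.0, size=5000)   # true mean 2, true variance 4

mu_hat = x.mean()                 # QMLE of mu (maximizes -(x - mu)^2 / 2)
score = x - mu_hat                # per-observation score
A = -1.0                          # per-observation Hessian of the log-likelihood
B = np.mean(score**2)             # ~ Var(x) = 4; equals -A only if well specified

sandwich_var = (1 / A) * B * (1 / A) / x.size   # robust variance of mu_hat
naive_var = (-1 / A) / x.size                   # valid only under correct spec
```

Here B ≈ 4 while -A = 1, so the naive variance understates the sampling variability by roughly a factor of four; the gap between B and -A is exactly the failure of the information matrix equality that the misspecification tests detect.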