Emerging Equity Market Volatility
, 1997
Abstract

Cited by 164 (28 self)
Understanding volatility in emerging capital markets is important for determining the cost of capital and for evaluating direct investment and asset allocation decisions. We provide an approach that allows the relative importance of world and local information to change through time in both the expected returns and conditional variance processes. Our time-series and cross-sectional models analyze the reasons that volatility is different across emerging markets, particularly with respect to the timing of capital market reforms. We find that capital market liberalizations often increase the correlation between local market returns and the world market but do not drive up local market volatility.
Robust Inference with Multiway Clustering
, 2006
Abstract

Cited by 141 (4 self)
In this paper we propose a new variance estimator for OLS as well as for nonlinear estimators such as logit, probit and GMM. This variance estimator enables cluster-robust inference when there is two-way or multi-way clustering that is non-nested. The variance estimator extends the standard cluster-robust variance estimator or sandwich estimator for one-way clustering (e.g. Liang and Zeger (1986), Arellano (1987)) and relies on similar relatively weak distributional assumptions. Our method is easily implemented in statistical packages, such as Stata and SAS, that already offer cluster-robust standard errors when there is one-way clustering. The method is demonstrated by a Monte Carlo analysis for a two-way random effects model; a Monte Carlo analysis of a placebo law that extends the state-year effects example of Bertrand et al. (2004) to two dimensions; and by application to two studies in the empirical public/labor literature where two-way clustering is present.
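The additive decomposition behind the estimator can be sketched in a few lines (this is an illustration, not the authors' code): the two-way cluster-robust variance is the one-way sandwich for the first dimension, plus the one for the second, minus the one for their intersection. The firm/year clusters and all numbers below are hypothetical simulated data.

```python
import numpy as np

def cluster_vcov(X, resid, clusters):
    """One-way cluster-robust (sandwich) variance for OLS."""
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(clusters):
        score = X[clusters == g].T @ resid[clusters == g]  # summed score in cluster g
        meat += np.outer(score, score)
    return bread @ meat @ bread

def twoway_cluster_vcov(X, resid, c1, c2):
    """Two-way clustering: V(c1) + V(c2) - V(c1 intersect c2)."""
    inter = np.array([f"{a}_{b}" for a, b in zip(c1, c2)])
    return (cluster_vcov(X, resid, c1)
            + cluster_vcov(X, resid, c2)
            - cluster_vcov(X, resid, inter))

# Toy example with hypothetical firm and year clusters.
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
firms = rng.integers(0, 10, n)
years = rng.integers(0, 5, n)
V = twoway_cluster_vcov(X, resid, firms, years)
se = np.sqrt(np.diag(V))
```

In practice one would use a packaged implementation (Stata's clustering options, or similar features in statsmodels) rather than hand-rolling this; the sketch only exposes the three-sandwich decomposition the abstract describes.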
Capital-Skill Complementarity and Inequality: A Macroeconomic Analysis
, 1997
Abstract

Cited by 120 (5 self)
There have been striking postwar changes in the supply and price of skilled labor relative to unskilled labor. The relative quantity of skilled labor has increased substantially, and the skill premium, which is the wage of skilled labor relative to unskilled labor, has grown significantly since 1980. Many studies have found that it is difficult to account for the increase in the skill premium on the basis of observable variables and have concluded that latent "skill-biased technological change" is the main factor responsible for the increase. This paper develops a framework that provides a simple, explicit economic mechanism for understanding skill-biased technological change in terms of observable variables and uses the framework to evaluate the fraction of variation in the skill premium that can be accounted for by changes in observed factor quantities. We use a version of the neoclassical growth model in which the key feature of the aggregate technology is capital-skill complementar...
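The capital-skill complementarity mechanism can be sketched with a nested CES production function in which capital substitutes more easily for unskilled than for skilled labor, so that capital deepening raises the skill premium. The functional form is the standard nested-CES illustration; the parameter values below are purely illustrative, not the paper's estimates.

```python
import numpy as np

# Nested CES: unskilled labor u combines with a capital/skilled composite.
# sigma > rho means capital is more substitutable with unskilled labor
# than with skilled labor (capital-skill complementarity).
sigma, rho = 0.4, -0.5      # illustrative curvature parameters
mu, lam = 0.4, 0.5          # illustrative share parameters

def output(k, s, u):
    inner = lam * k**rho + (1 - lam) * s**rho
    return (mu * u**sigma + (1 - mu) * inner**(sigma / rho))**(1 / sigma)

def skill_premium(k, s, u, h=1e-6):
    """Ratio of marginal products (skilled wage / unskilled wage),
    computed by central finite differences."""
    mp_s = (output(k, s + h, u) - output(k, s - h, u)) / (2 * h)
    mp_u = (output(k, s, u + h) - output(k, s, u - h)) / (2 * h)
    return mp_s / mp_u

low_k = skill_premium(k=1.0, s=1.0, u=1.0)
high_k = skill_premium(k=3.0, s=1.0, u=1.0)   # more capital, same labor inputs
```

With these parameters, tripling the capital stock while holding both labor inputs fixed raises the skill premium, which is the qualitative mechanism the abstract describes.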
Is default event risk priced in corporate bonds? Working paper
, 2002
Abstract

Cited by 91 (1 self)
We identify and estimate the sources of risk that cause corporate bonds to earn an excess return over default-free bonds. In particular, we estimate the risk premium associated with a default event. Default is modelled using a jump process with stochastic intensity. For a large set of firms, we model the default intensity of each firm as a function of common and firm-specific factors. In the model, corporate bond excess returns can be due to risk premia on factors driving the intensities and due to a risk premium on the default jump risk. The model is estimated using data on corporate bond prices for 104 US firms and historical default rate data. We find significant risk premia on the factors that drive intensities. However, these risk premia cannot fully explain the size of corporate bond excess returns. Next, we estimate the size of the default jump risk premium, correcting for possible tax and liquidity effects. The estimates show that this event risk premium is a significant and economically important determinant of excess corporate bond returns.
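A minimal simulation sketch of this modelling style: default arrives as a jump with stochastic intensity, here an illustrative mean-reverting common factor plus a firm-specific constant. This is a generic intensity specification for illustration, not the paper's estimated model, and all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(42)
dt, T, n_paths = 1 / 252, 5.0, 20000
n_steps = int(T / dt)

# Illustrative intensity: lambda_t = common factor x_t + firm-specific constant.
kappa, theta, sigma, lam_firm = 0.5, 0.02, 0.01, 0.01

x = np.full(n_paths, theta)                 # common factor, started at its mean
alive = np.ones(n_paths, dtype=bool)
default_time = np.full(n_paths, np.inf)

for step in range(n_steps):
    lam = np.maximum(x + lam_firm, 0.0)     # intensity, floored at zero
    # Default arrives over [t, t+dt] with probability ~ lambda * dt.
    hit = alive & (rng.random(n_paths) < lam * dt)
    default_time[hit] = (step + 1) * dt
    alive &= ~hit
    # Euler step for the mean-reverting common factor.
    x += kappa * (theta - x) * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths)

surv_5y = alive.mean()   # Monte Carlo 5-year survival probability
```

With an average intensity around 0.03 per year, the simulated 5-year survival probability lands near exp(-0.15), which is the usual survival-probability identity for intensity models.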
How often to sample a continuous-time process in the presence of market microstructure noise
 Review of Financial Studies
, 2005
Abstract

Cited by 90 (13 self)
In theory, the sum of squares of log returns sampled at high frequency estimates their variance. When market microstructure noise is present but unaccounted for, however, we show that the optimal sampling frequency is finite, and we derive its closed-form expression. But even with optimal sampling, using say 5-minute returns when transactions are recorded every second, a vast amount of data is discarded, in contradiction to basic statistical principles. We demonstrate that modeling the noise and using all the data is a better solution, even if one misspecifies the noise distribution. So the answer is: sample as often as possible. Over the past few years, price data sampled at very high frequency have become increasingly available, in the form of the Olsen dataset of currency exchange rates or the TAQ database of NYSE stocks. If such data were not affected by market microstructure noise, the realized volatility of the process (i.e., the average sum of squares of log-returns sampled at high frequency) would estimate the returns' variance, as is well known. In fact, sampling as often as possible would theoretically produce in the limit a perfect estimate of that variance. We start by asking whether it remains optimal to sample the price process at very high frequency in the presence of market microstructure noise, consistently with the basic statistical principle that, ceteris paribus, more data are preferred to less. We first show that, if noise is present but unaccounted for, then the optimal sampling frequency is finite ...
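The bias driving this trade-off is easy to reproduce: with i.i.d. additive noise, realized variance computed from every tick is dominated by a noise term of roughly 2n times the noise variance, while sparser sampling shrinks that bias at the cost of discarding data. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.2          # annualized volatility of the efficient price
T = 1 / 252          # one trading day
n = 23400            # one observation per second over 6.5 hours
noise_sd = 5e-4      # assumed i.i.d. microstructure noise standard deviation

# Efficient log-price: Brownian motion; observed price adds i.i.d. noise.
efficient = np.cumsum(np.r_[0.0, sigma * np.sqrt(T / n) * rng.normal(size=n)])
observed = efficient + noise_sd * rng.normal(size=n + 1)

def realized_var(p, step):
    """Sum of squared log-returns sampled every `step` ticks."""
    r = np.diff(p[::step])
    return np.sum(r**2)

iv = sigma**2 * T                    # true integrated variance for the day
rv_1s = realized_var(observed, 1)    # every tick: swamped by noise (~ iv + 2n*noise_sd^2)
rv_5m = realized_var(observed, 300)  # 5-minute sampling: far less noise bias
```

At one-second sampling the noise term (2 * 23400 * noise_sd^2, about 0.012 here) dwarfs the integrated variance (about 1.6e-4), which is why naive tick-by-tick realized volatility fails and why the optimal frequency is finite when the noise is unmodelled.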
The bootstrap
 In Handbook of Econometrics
, 2001
Abstract

Cited by 75 (1 self)
The bootstrap is a method for estimating the distribution of an estimator or test statistic by resampling one's data. It amounts to treating the data as if they were the population for the purpose of evaluating the distribution of interest. Under mild regularity conditions, the bootstrap yields an approximation to the distribution of an estimator or test statistic that is at least as accurate as the ...
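The resampling idea can be sketched in a few lines, here estimating the standard error of a sample mean. The data and the `bootstrap_se` helper are illustrative, not from the chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.exponential(scale=2.0, size=100)   # any observed sample

def bootstrap_se(x, stat, n_boot=2000, seed=1):
    """Standard error of stat(x) by resampling x with replacement,
    treating the observed sample as if it were the population."""
    r = np.random.default_rng(seed)
    reps = np.array([stat(r.choice(x, size=len(x), replace=True))
                     for _ in range(n_boot)])
    return reps.std(ddof=1)

se_mean = bootstrap_se(data, np.mean)
```

For the sample mean the bootstrap standard error essentially reproduces the plug-in formula s/sqrt(n); the method earns its keep for statistics whose sampling distributions have no simple closed form.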
Robust permanent income and pricing
 Review of Economic Studies
, 1999
Abstract

Cited by 66 (14 self)
and Nancy Stokey for useful criticisms of earlier drafts. We are grateful to Wen-Fang Liu for excellent research assistance. We thank two referees of an earlier draft for comments that prompted an extensive reorientation of our research. Robust Permanent Income and Pricing. "... I suppose there exists an extremely powerful, and, if I may so speak, malignant being, whose whole endeavors are directed toward deceiving me." Rene Descartes, Meditations, II.
Semi-supervised Learning of Classifiers: Theory, Algorithms and Their Application to Human-Computer Interaction
 IEEE Transactions on Pattern Analysis and Machine Intelligence
, 2004
Abstract

Cited by 60 (15 self)
Automatic classification is one of the basic tasks required in any pattern recognition and human-computer interaction application. In this paper we discuss training probabilistic classifiers with labeled and unlabeled data. We provide a new analysis that shows under what conditions unlabeled data can be used in learning to improve classification performance. We also show that if the conditions are violated, using unlabeled data can be detrimental to classification performance. We discuss the implications of this analysis for a specific type of probabilistic classifier, Bayesian networks, and propose a new structure learning algorithm that can utilize unlabeled data to improve classification. Finally, we show how the resulting algorithms are successfully employed in two applications related to human-computer interaction and pattern recognition: facial expression recognition and face detection.
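The labeled-plus-unlabeled training setup can be sketched with EM for a one-dimensional, two-class Gaussian model, the correctly specified regime in which this kind of analysis predicts unlabeled data should help. The model, data, and parameter values are all illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(7)
# Two classes with unit-variance Gaussian class-conditionals at means 0 and 2.
y_lab = np.array([0] * 5 + [1] * 5)                 # a handful of labeled points
x_lab = rng.normal(loc=2.0 * y_lab, scale=1.0)
y_hidden = rng.integers(0, 2, 500)                  # labels the learner never sees
x_unlab = rng.normal(loc=2.0 * y_hidden, scale=1.0)

# EM for the class means: labeled points keep hard assignments,
# unlabeled points contribute soft (posterior) assignments.
mu = np.array([x_lab[y_lab == 0].mean(), x_lab[y_lab == 1].mean()])
for _ in range(50):
    # E-step: class posteriors for unlabeled data (equal priors, unit variance).
    d0 = np.exp(-0.5 * (x_unlab - mu[0])**2)
    d1 = np.exp(-0.5 * (x_unlab - mu[1])**2)
    w1 = d1 / (d0 + d1)
    # M-step: weighted mean updates combining hard and soft assignments.
    mu[0] = ((x_lab[y_lab == 0].sum() + ((1 - w1) * x_unlab).sum())
             / ((y_lab == 0).sum() + (1 - w1).sum()))
    mu[1] = ((x_lab[y_lab == 1].sum() + (w1 * x_unlab).sum())
             / ((y_lab == 1).sum() + w1.sum()))
```

With the model correctly specified, the 500 unlabeled points sharpen the mean estimates well beyond what 10 labels allow; the paper's point is that under misspecification this same machinery can pull the estimates in the wrong direction.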
Bayesian analysis of DSGE models
 Econometric Reviews
, 2007
Abstract

Cited by 56 (2 self)
This paper reviews Bayesian methods that have been developed in recent years to estimate and evaluate dynamic stochastic general equilibrium (DSGE) models. We consider the estimation of linearized DSGE models, the evaluation of models based on Bayesian model checking, posterior odds comparisons, and comparisons to vector autoregressions, as well as the nonlinear estimation based on a second-order accurate model solution. These methods are applied to data generated from correctly specified and misspecified linearized DSGE models, and a DSGE model that was solved with a second-order perturbation method. (JEL C11, C32, C51, C52)
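The posterior simulators surveyed in this literature typically combine a model-implied likelihood with priors via Markov chain Monte Carlo. A toy random-walk Metropolis sampler, with a simple location model standing in for a DSGE likelihood purely for illustration, looks like:

```python
import numpy as np

rng = np.random.default_rng(3)
# Stand-in "model": y_t = theta + e_t, e_t ~ N(0,1), prior theta ~ N(0,1).
# (A real DSGE application would evaluate the likelihood via the Kalman filter
# on the linearized state-space solution.)
data = rng.normal(loc=0.8, scale=1.0, size=200)

def log_post(theta):
    return -0.5 * theta**2 - 0.5 * np.sum((data - theta)**2)

theta, chain = 0.0, []
for _ in range(5000):
    prop = theta + 0.2 * rng.normal()        # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                          # accept with Metropolis probability
    chain.append(theta)

post_mean = np.mean(chain[1000:])             # posterior mean after burn-in
```

The chain's draws after burn-in approximate the posterior of theta, which here concentrates near the sample mean, shrunk slightly toward the prior.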