Results 1 - 10 of 916
Strictly Proper Scoring Rules, Prediction, and Estimation
, 2007
"... Scoring rules assess the quality of probabilistic forecasts, by assigning a numerical score based on the predictive distribution and on the event or value that materializes. A scoring rule is proper if the forecaster maximizes the expected score for an observation drawn from the distribution F if he ..."
Abstract
-
Cited by 373 (28 self)
Scoring rules assess the quality of probabilistic forecasts, by assigning a numerical score based on the predictive distribution and on the event or value that materializes. A scoring rule is proper if the forecaster maximizes the expected score for an observation drawn from the distribution F if he or she issues the probabilistic forecast F, rather than G ≠ F. It is strictly proper if the maximum is unique. In prediction problems, proper scoring rules encourage the forecaster to make careful assessments and to be honest. In estimation problems, strictly proper scoring rules provide attractive loss and utility functions that can be tailored to the problem at hand. This article reviews and develops the theory of proper scoring rules on general probability spaces, and proposes and discusses examples thereof. Proper scoring rules derive from convex functions and relate to information measures, entropy functions, and Bregman divergences. In the case of categorical variables, we prove a rigorous version of the Savage representation. Examples of scoring rules for probabilistic forecasts in the form of predictive densities include the logarithmic, spherical, pseudospherical, and quadratic scores. The continuous ranked probability score applies to probabilistic forecasts that take the form of predictive cumulative distribution functions. It generalizes the absolute error and forms a special case of a new and very general type of score, the energy score. Like many other scoring rules, the energy score admits a kernel representation in terms of negative definite functions, with links to inequalities of Hoeffding type, in both univariate and multivariate settings. Proper scoring rules for quantile and interval forecasts are also discussed. We relate proper scoring rules to Bayes factors and to cross-validation, and propose a novel form of cross-validation known as random-fold cross-validation. A case study on probabilistic weather forecasts in the North American Pacific Northwest illustrates the importance of propriety. We note optimum score approaches to point and quantile estimation.
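As a concrete illustration of two of the scores named in the abstract, the sketch below evaluates the logarithmic score and the continuous ranked probability score (CRPS) for a Gaussian predictive distribution; the closed-form CRPS expression is the standard one for the normal case, and the function names and test values are ours, not the paper's.

```python
# Minimal sketch: two proper scoring rules for a Gaussian predictive
# distribution N(mu, sigma^2), evaluated at the realized value y.
import numpy as np
from scipy.stats import norm

def log_score(mu, sigma, y):
    # Logarithmic score: log of the predictive density at the observation
    # (positively oriented: larger is better).
    return norm.logpdf(y, loc=mu, scale=sigma)

def crps_normal(mu, sigma, y):
    # CRPS for N(mu, sigma^2) using the known closed form; negatively
    # oriented (smaller is better), it generalizes the absolute error.
    z = (y - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

print(log_score(0.0, 1.0, 0.5), crps_normal(0.0, 1.0, 0.5))
```

Note the opposite orientations: the logarithmic score is a reward to be maximized, matching the abstract's convention, while the CRPS as written is a loss to be minimized.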
Counterfactual decomposition of changes in wage distributions using quantile regression
- Journal of Applied Econometrics
, 2005
"... We propose a method to decompose the changes in the wage distribution over a period of time in several factors contributing to those changes. The method is based on the estimation of marginal wage distributions consistent with a conditional distribution estimated by quantile regression as well as wi ..."
Abstract
-
Cited by 310 (0 self)
We propose a method to decompose the changes in the wage distribution over a period of time into several factors contributing to those changes. The method is based on the estimation of marginal wage distributions consistent with a conditional distribution estimated by quantile regression as well as with any hypothesized distribution for the covariates. Comparing the marginal distributions implied by different distributions for the covariates, one is then able to perform counterfactual exercises. The proposed methodology enables the identification of the sources of the increased wage inequality observed in most countries. Specifically, it decomposes the changes in the wage distribution over a period of time into several factors contributing to those changes, namely by discriminating between changes in the characteristics of the working population and changes in the returns to these characteristics. We apply this methodology to Portuguese data for the period 1986–1995, and find that the observed increase in educational levels contributed decisively towards greater wage inequality. Copyright © 2005 John Wiley & Sons, Ltd.
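A minimal sketch of the simulation idea described above, assuming hypothetical column names (logwage, educ, exper) and using quantile regression from statsmodels; it illustrates the counterfactual construction, not the authors' implementation.

```python
# Sketch: fit conditional quantiles of log wages on one sample, then combine
# the fitted coefficients with covariates drawn from another sample to
# simulate the implied (counterfactual) marginal wage distribution.
import numpy as np
import statsmodels.formula.api as smf

def counterfactual_marginal(df_coef, df_covs, formula="logwage ~ educ + exper",
                            taus=np.linspace(0.02, 0.98, 49), draws_per_tau=100,
                            seed=0):
    fits = {tau: smf.quantreg(formula, df_coef).fit(q=tau) for tau in taus}
    sims = []
    for tau in taus:
        # covariates resampled (with replacement) from the comparison sample
        covs = df_covs.sample(draws_per_tau, replace=True, random_state=seed)
        sims.append(fits[tau].predict(covs).to_numpy())
    return np.concatenate(sims)  # draws from the implied marginal distribution
```

Comparing the marginal simulated from (coefficients of year t, covariates of year t) with the one simulated from (coefficients of year t, covariates of year s) separates changes in returns from changes in the characteristics of the working population, in the spirit of the decomposition described above.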
On the coherence of expected shortfall
- In: Szegö, G. (Ed.), “Beyond VaR” (Special Issue). Journal of Banking & Finance
, 2002
"... Expected Shortfall (ES) in several variants has been proposed as remedy for the deficiencies of Value-at-Risk (VaR) which in general is not a coherent risk measure. In fact, most definitions of ES lead to the same results when applied to continuous loss distributions. Differences may appear when the ..."
Abstract
-
Cited by 217 (8 self)
Expected Shortfall (ES) in several variants has been proposed as a remedy for the deficiencies of Value-at-Risk (VaR), which in general is not a coherent risk measure. In fact, most definitions of ES lead to the same results when applied to continuous loss distributions. Differences may appear when the underlying loss distributions have discontinuities. In this case even the coherence property of ES can be lost unless one takes care of the details in its definition. We compare some of the definitions of Expected Shortfall, pointing out that there is one which is robust in the sense of yielding a coherent risk measure regardless of the underlying distributions. Moreover, this Expected Shortfall can be estimated effectively even in cases where the usual estimators for VaR fail.
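A minimal sketch of the natural estimator alluded to in the last sentence, as we understand it: average the worst floor(n*alpha) of n profit-and-loss outcomes. The sign convention (losses negative, ES reported as a positive number) and the Gaussian test case are our own choices.

```python
# Sketch: tail-average estimator of Expected Shortfall at level alpha.
import numpy as np

def expected_shortfall(x, alpha=0.05):
    x = np.sort(np.asarray(x, dtype=float))    # ascending: worst outcomes first
    k = max(int(np.floor(len(x) * alpha)), 1)  # number of tail observations
    return -x[:k].mean()                       # ES as a positive loss figure

sample = np.random.default_rng(1).normal(size=10_000)
print(expected_shortfall(sample, alpha=0.05))  # roughly 2.06 for a standard normal
```

Averaging over the whole floor(n*alpha) tail, rather than conditioning on exceeding an estimated VaR, is what keeps the quantity well behaved when the loss distribution has an atom at the quantile.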
Large Sample Sieve Estimation of Semi-Nonparametric Models
- Handbook of Econometrics
, 2007
"... Often researchers find parametric models restrictive and sensitive to deviations from the parametric specifications; semi-nonparametric models are more flexible and robust, but lead to other complications such as introducing infinite dimensional parameter spaces that may not be compact. The method o ..."
Abstract
-
Cited by 185 (19 self)
Often researchers find parametric models restrictive and sensitive to deviations from the parametric specifications; semi-nonparametric models are more flexible and robust, but lead to other complications such as introducing infinite dimensional parameter spaces that may not be compact. The method of sieves provides one way to tackle such complexities by optimizing an empirical criterion function over a sequence of approximating parameter spaces, called sieves, which are significantly less complex than the original parameter space. With different choices of criteria and sieves, the method of sieves is very flexible in estimating complicated econometric models. For example, it can simultaneously estimate the parametric and nonparametric components in semi-nonparametric models with or without constraints. It can easily incorporate prior information, often derived from economic theory, such as monotonicity, convexity, additivity, multiplicity, exclusion and non-negativity. This chapter describes estimation of semi-nonparametric econometric models via the method of sieves. We present some general results on the large sample properties of the sieve estimates, including consistency of the sieve extremum estimates, convergence rates of the sieve M-estimates, pointwise normality of series estimates of regression functions, root-n asymptotic normality and efficiency of sieve estimates of smooth functionals of infinite dimensional parameters. Examples are used to illustrate the general results.
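To fix ideas, here is a minimal sketch of a sieve estimator in the simplest setting: univariate nonparametric regression with a polynomial sieve whose dimension grows slowly with the sample size. The basis choice and the growth rate are illustrative assumptions, not recommendations from the chapter.

```python
# Sketch: least-squares estimation over a polynomial sieve for E[y | x] = g(x).
import numpy as np

def sieve_ls(x, y, n_terms=None):
    n = len(x)
    J = n_terms or max(2, int(np.ceil(n ** (1 / 3))))  # slowly growing sieve dimension
    X = np.vander(x, J + 1, increasing=True)           # basis 1, x, x^2, ..., x^J
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)       # optimize the criterion over the sieve
    return lambda xnew: np.vander(np.atleast_1d(xnew), J + 1, increasing=True) @ coef

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 500)
y = np.sin(3 * x) + 0.1 * rng.normal(size=500)
g_hat = sieve_ls(x, y)
print(g_hat(0.5), np.sin(1.5))                         # estimate vs. true value at x = 0.5
```

Swapping the polynomial basis for splines or wavelets, adding a parametric component, or imposing shape constraints points toward the semi-nonparametric variants the abstract describes.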
Semiparametric Analysis of Discrete Response: Asymptotic Properties of Maximum Score Estimation
- Journal of Econometrics
, 1985
"... This paper is concerned with the estimation of the model MED ( y 1 x) = x/3 from a random sample of observations on (sgn y, x). Manski (1975) introduced the maximum score estimator of the normalized parameter vector /3 * = /3/]]8]]. In the present paper, strong consistency is proved. It is also pr ..."
Abstract
-
Cited by 164 (1 self)
This paper is concerned with the estimation of the model med(y | x) = xβ from a random sample of observations on (sgn y, x). Manski (1975) introduced the maximum score estimator of the normalized parameter vector β* = β/‖β‖. In the present paper, strong consistency is proved. It is also proved that the maximum score estimate lies outside any fixed neighborhood of β* with probability that goes to zero at exponential rate.
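A minimal sketch of the maximum score idea under the median restriction med(y | x) = xβ, as we read it: choose the unit-norm direction that maximizes the fraction of observations whose sign of x'b matches the sign of y. The crude random search over the unit sphere below is purely illustrative and is not the computational or theoretical treatment in the paper.

```python
# Sketch: maximum score estimation by scoring candidate directions on the unit sphere.
import numpy as np

def max_score(X, y, n_candidates=2_000, seed=0):
    rng = np.random.default_rng(seed)
    B = rng.normal(size=(n_candidates, X.shape[1]))
    B /= np.linalg.norm(B, axis=1, keepdims=True)      # normalize so ||b|| = 1
    # score of each candidate: fraction of correctly matched signs
    scores = (np.sign(X @ B.T) == np.sign(y)[:, None]).mean(axis=0)
    return B[np.argmax(scores)]

rng = np.random.default_rng(1)
X = rng.normal(size=(1_000, 2))
beta = np.array([1.0, -1.0]) / np.sqrt(2)
y = X @ beta + 0.2 * rng.standard_cauchy(1_000)        # median(y | x) = x'beta
print(max_score(X, y))                                  # should land near beta
```

Because the score is a step function of b, only the normalized direction β* = β/‖β‖ is identified, and the consistency results summarized in the abstract concern that normalized vector.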
A tutorial on MM algorithms
- Amer. Statist
, 2004
"... Most problems in frequentist statistics involve optimization of a function such as a likelihood or a sum of squares. EM algorithms are among the most effective algorithms for maximum likelihood estimation because they consistently drive the likelihood uphill by maximizing a simple surrogate function ..."
Abstract
-
Cited by 154 (6 self)
Most problems in frequentist statistics involve optimization of a function such as a likelihood or a sum of squares. EM algorithms are among the most effective algorithms for maximum likelihood estimation because they consistently drive the likelihood uphill by maximizing a simple surrogate function for the log-likelihood. Iterative optimization of a surrogate function as exemplified by an EM algorithm does not necessarily require missing data. Indeed, every EM algorithm is a special case of the more general class of MM optimization algorithms, which typically exploit convexity rather than missing data in majorizing or minorizing an objective function. In our opinion, MM algorithms deserve to be part of the standard toolkit of professional statisticians. The current article explains the principle behind MM algorithms, suggests some methods for constructing them, and discusses some of their attractive features. We include numerous examples throughout the article to illustrate the concepts described. In addition to surveying previous work on MM algorithms, this article introduces some new material on constrained optimization and standard error estimation. Key words and phrases: constrained optimization, EM algorithm, majorization, minorization, Newton-Raphson.
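As a small illustration in the spirit of the abstract (ours, not taken from the article), the sketch below computes a sample median by MM: each absolute deviation |x_i - t| is majorized by a quadratic that touches it at the current iterate, and minimizing the surrogate yields a weighted mean, so every iteration drives the sum of absolute deviations downhill.

```python
# Sketch: MM iteration for the sample median via quadratic majorization of |x_i - t|.
import numpy as np

def mm_median(x, n_iter=50, eps=1e-8):
    t = x.mean()                               # starting value
    for _ in range(n_iter):
        w = 1.0 / (np.abs(x - t) + eps)        # weights from the current majorizer
        t = np.sum(w * x) / np.sum(w)          # exact minimizer of the surrogate
    return t

x = np.random.default_rng(0).exponential(size=1001)
print(mm_median(x), np.median(x))              # the two should nearly agree
```

The same surrogate-optimization logic, with the surrogate built from missing-data calculations and the direction flipped to ascent, is an EM step; that is the sense in which every EM algorithm is an MM algorithm.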
Is There a Glass Ceiling over Europe? Exploring the Gender Pay Gap across the Wage Distribution
- Industrial and Labor Relations Review 60
, 2007
"... Using harmonized data for the years 1995-2001 from the European Community Household Panel, the authors analyze gender pay gaps by sector across the wage dis tribution in eleven countries. In estimations that control for the effects of individual characteristics at different points of the distributio ..."
Abstract
-
Cited by 135 (3 self)
Using harmonized data for the years 1995-2001 from the European Community Household Panel, the authors analyze gender pay gaps by sector across the wage distribution in eleven countries. In estimations that control for the effects of individual characteristics at different points of the distribution, they calculate the part of the gap attributable to differing returns between men and women. The magnitude of the gender pay gap, thus measured, varied substantially across countries and across the public and private sector wage distributions. The gap typically widened toward the top of the wage distribution (the "glass ceiling" effect), and in a few cases it also widened at the bottom (the "sticky floor" effect). The authors suggest that differences in childcare provision and wage setting institutions across EU countries may partly account for the variation in patterns by country and sector. Although the mean gender wage gap has been extensively studied in the labor economics literature, only relatively recently has attention shifted to investigating the degree to which the gender gap might vary across the wage distribution and why. Albrecht, Bjorklund, and Vroman (2003), using 1998 data for Sweden, showed that the gender wage gap was increasing throughout the wage distribution and accelerating at the top, and they interpreted this as evidence of a glass ceiling in Sweden. De la Rica, Dolado, and
Reciprocally Interlocking Boards of Directors and Executive Compensation
, 1997
"... Support this valuable resource today! ..."