Evaluating Interval Forecasts
International Economic Review, 1997
Cited by 289 (11 self)
Abstract
This paper is intended to address the deficiency by clearly defining what is meant by a "good" interval forecast, and describing how to test if a given interval forecast deserves the label "good". One of the motivations of Engle's (1982) classic paper was to form dynamic interval forecasts around point predictions. The insight was that the intervals should be narrow in tranquil times and wide in volatile times, so that the occurrences of observations outside the interval forecast would be spread out over the sample and not come in clusters. An interval forecast that fails to account for higher-order dynamics may be correct on average (have correct unconditional coverage), but in any given period it will have incorrect conditional coverage characterized by clustered outliers. These concepts will be defined precisely below, and tests for correct conditional coverage are suggested. Chatfield (1993) emphasizes that model misspecification is a much more important source of poor interval forecasting than is simple estimation error. Thus, our testing criterion and the tests of this criterion are model free. In this regard, the approach taken here is similar to the one taken by Diebold and Mariano (1995). This paper can also be seen as establishing a formal framework for the ideas suggested in Granger, White and Kamstra (1989). Recently, financial market participants have shown increasing interest in interval forecasts as measures of uncertainty. Thus, we apply our methods to the interval forecasts provided by J.P. Morgan (1995). Furthermore, the so-called "Value-at-Risk" measures suggested for risk measurement correspond to tail forecasts, i.e., one-sided interval forecasts of portfolio returns. Lopez (1996) evaluates these types of forecasts applying the procedures develo...
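The unconditional-coverage component of the testing idea described in this abstract can be illustrated with a short likelihood-ratio statistic. This is a minimal sketch of the standard binomial LR test, assuming a 0/1 "hit" series of interval violations; the paper's conditional-coverage tests, which additionally detect clustering of violations, are not reproduced here, and the function names are ours.

```python
import math

def _xlogy(x, y):
    # 0 * log(0) convention used in likelihood computations
    return 0.0 if x == 0 else x * math.log(y)

def lr_unconditional_coverage(hits, p):
    """Likelihood-ratio statistic for correct unconditional coverage.

    hits: 0/1 indicators (1 = observation fell outside the interval forecast)
    p:    nominal miss rate of the interval, e.g. 0.05 for a 95% interval

    Under the null that the true miss rate equals p, the statistic is
    asymptotically chi-square with one degree of freedom.
    """
    n1 = sum(hits)                # number of violations
    n0 = len(hits) - n1           # number of non-violations
    pi_hat = n1 / (n0 + n1)       # empirical miss rate
    log_l0 = _xlogy(n0, 1 - p) + _xlogy(n1, p)            # null likelihood
    log_l1 = _xlogy(n0, 1 - pi_hat) + _xlogy(n1, pi_hat)  # unrestricted
    return -2.0 * (log_l0 - log_l1)
```

A hit series with exactly the nominal miss rate yields a statistic of zero; a series whose violations are far more frequent than nominal yields a large statistic, rejecting correct unconditional coverage even before clustering is examined.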
Value-at-Risk Prediction: A Comparison of Alternative Strategies
Journal of Financial Econometrics
Cited by 40 (6 self)
Abstract
Given the growing need for managing financial risk, risk prediction plays an increasing role in banking and finance. In this study we compare the out-of-sample performance of existing methods and some new models for predicting value-at-risk (VaR) in a univariate context. Using more than 30 years of daily return data on the NASDAQ Composite Index, we find that most approaches perform inadequately, although several models are acceptable under current regulatory assessment rules for model adequacy. A hybrid method, combining a heavy-tailed generalized autoregressive conditionally heteroskedastic (GARCH) filter with an extreme value theory-based approach, performs best overall, closely followed by a variant on a filtered historical simulation, and a new model based on heteroskedastic mixture distributions. Conditional autoregressive VaR (CAViaR) models perform inadequately, though an extension to a particular CAViaR model is shown to outperform the others.
Evaluating Value-at-Risk Models with Desk-Level Data
Fourth Joint Central Bank Research Conference, European Central Bank, 9 Nov 2005. Research paper, preliminary version, retrieved 17 Aug 2006 from www.ecb.int/events/pdf/conferences/jcbrconf4/Christoffersen.pdf, 2005
Cited by 35 (1 self)
Abstract
We present new evidence on disaggregated profit and loss and VaR forecasts obtained from a large international commercial bank. Our dataset includes daily P/L generated by four separate business lines within the bank. All four business lines are involved in securities trading and each is observed daily for a period of at least two years. We also collected the corresponding daily, 1-day-ahead VaR forecasts for each business line. Given this rich dataset, we provide an integrated, unifying framework for assessing the accuracy of VaR forecasts. Our approach includes many existing backtesting techniques as special cases. In addition, we describe some new tests which are suggested by our framework. A thorough Monte Carlo comparison of the various methods is conducted to provide guidance as to which of these many tests have the best finite-sample size and power properties.
Evaluating the predictive accuracy of volatility models
Journal of Forecasting, 2001
Cited by 27 (5 self)
Abstract
Statistical loss functions that generally lack economic content are commonly used for evaluating financial volatility forecasts. In this paper, an evaluation framework based on loss functions tailored to a user’s economic interests is proposed. According to these interests, the user specifies the economic events to be forecast, the criterion with which to evaluate these forecasts, and the subsets of the forecasts of particular interest. The volatility forecasts from a model are then transformed into probability forecasts of the relevant events and evaluated using the specified criteria (i.e., a probability scoring rule and calibration tests). An empirical example using exchange rate data illustrates the framework and confirms that the choice of loss function directly affects the forecast evaluation results.
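As a minimal illustration of the ingredients this framework combines, the sketch below converts a volatility forecast into an event probability under a normality assumption (ours, not fixed by the paper) and scores probability forecasts with the Brier score, one common probability scoring rule; the function names are ours.

```python
import math

def event_prob_from_vol(sigma, c):
    """P(|r| > c) for a zero-mean normal return with standard deviation
    sigma. The normality assumption is this sketch's, not the paper's;
    any forecast distribution could be used in its place."""
    z = c / (sigma * math.sqrt(2.0))
    return 1.0 - math.erf(z)

def brier_score(probs, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes.
    Lower is better; a perfectly calibrated and perfectly sharp
    forecaster scores 0."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)
```

A user who cares about moves larger than some threshold c would feed each day's volatility forecast through `event_prob_from_vol`, record whether the event occurred, and compare competing volatility models by their Brier scores on that event series.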
Evaluating the Survey of Professional Forecasters probability distributions of expected inflation based on derived probability forecasts
2005
Cited by 24 (6 self)
Abstract
Regression-based tests of forecast probabilities of particular events of interest are constructed. The event forecast probabilities are derived from the SPF density forecasts of expected inflation and output growth. Tests of the event probabilities supplement statistically based assessments of the forecast densities using the probability integral transform approach. The regression-based tests assess whether the forecast probabilities of particular events are equal to the true probabilities, and whether any systematic divergences between the two are related to variables in the agents' information set at the time the forecasts were made. Forecast encompassing tests are also used to assess the quality of the event probability forecasts.
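The regression-based idea can be sketched as follows: if the forecast probabilities are correct, the forecast error 1{event} − p_t should be mean-zero and uncorrelated with anything known at forecast time. The single-instrument OLS below is an illustrative simplification (inference omitted, names ours); the paper's exact specification may differ.

```python
def calibration_ols(hits, probs, z):
    """Regress e_t = 1{event_t} - p_t on a constant and one instrument
    z_t known when the forecast was made. Under correct calibration both
    coefficients should be (statistically) zero. Returns (alpha, beta);
    standard errors and tests are omitted in this sketch."""
    e = [h - p for h, p in zip(hits, probs)]   # forecast errors
    n = len(e)
    zbar = sum(z) / n
    ebar = sum(e) / n
    szz = sum((zi - zbar) ** 2 for zi in z)
    sze = sum((zi - zbar) * (ei - ebar) for zi, ei in zip(z, e))
    beta = sze / szz                           # slope on the instrument
    alpha = ebar - beta * zbar                 # intercept
    return alpha, beta
```

A nonzero intercept signals miscalibration on average; a nonzero slope signals that the divergence is predictable from information the forecaster already had.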
Bank Capital Requirements for Market Risk: The Internal Models Approach
Federal Reserve Bank of New York Economic Policy Review, 1997
Cited by 24 (3 self)
Abstract
The increased prominence of trading activities at many large banking companies has highlighted bank exposure to market risk—the risk of loss from adverse movements in financial market rates and prices. Recognizing the importance of trading operations, banks have sought ways to measure and to manage the associated risks. At the same time, bank supervisors in the United States and abroad have taken steps to ensure that banks have adequate internal controls and capital resources to address these risks. Prominent among the steps taken by supervisors is the development of formal capital requirements for the market risk exposures arising from banks' trading activities. These market risk capital requirements, which will take full effect in January 1998, depart from earlier capital rules in two notable ways. First, the capital charge is based on the output of a bank's internal risk measurement model rather than on an externally imposed supervisory measure. Second, the capital requirements incorporate qualitative standards for a bank's risk measurement system. This paper presents an overview of the new capital requirements. In the first section, we describe the structure of the requirements and the considerations that went into their design. In addition, we address some of the concerns that have been raised about the methods of calculating capital charges under the new rules. The paper's second section considers the probable impact of the market risk capital requirements. After performing a set of rough calculations to show that the effect of the internal models approach on required capital levels and capital ratios will probably be modest, we identify some significant benefits of the new approach. Most notably, the approach will lead to regulatory capital charges that conform more closely to banks' true risk exposures. (Darryll Hendricks and Beverly Hirtle are vice presidents at the Federal Reserve Bank of New York.)
Moreover, the information generated by the models will allow supervisors and financial market participants to compare risk exposures over time and across institutions.
Evaluating credit risk models
Journal of Banking and Finance, 2000
Cited by 20 (1 self)
Abstract
Over the past decade, commercial banks have devoted many resources to developing internal models to better quantify their financial risks and assign economic capital. These efforts have been recognized and encouraged by bank regulators. Recently, banks have extended these efforts into the field of credit risk modeling. However, an important question for both banks and their regulators is evaluating the accuracy of a model's forecasts of credit losses, especially given the small number of available forecasts due to their typically long planning horizons. Using a panel data approach, we propose evaluation methods for credit risk models based on cross-sectional simulation. Specifically, models are evaluated not only on their forecasts over time, but also on their forecasts at a given point in time for simulated credit portfolios. Once the forecasts corresponding to these portfolios are generated, they can be evaluated using various statistical methods.
Methods for evaluating value-at-risk estimates
Federal Reserve Bank of …, 1998
Cited by 12 (0 self)
Abstract
… adopted the market risk amendment (MRA) to the 1988 Basle Capital Accord. The MRA, which became effective in January 1998, requires that commercial banks with significant trading activities set aside capital to cover the market risk exposure in their trading accounts. (For further details on the market risk amendment, see Federal Register [1996].) The market risk capital requirements are to be based on the value-at-risk (VaR) estimates generated by the banks' own risk management models. In general, such risk management, or VaR, models forecast the distributions of future portfolio returns. To fix notation, let y_t denote the log of portfolio value at time t. The k-period-ahead portfolio return is ε_{t+k} = y_{t+k} − y_t.
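With returns defined as ε_{t+k} = y_{t+k} − y_t, a minimal historical-simulation VaR estimate is simply an empirical quantile of past returns. This is a textbook baseline, not the banks' internal models discussed in the paper; the crude quantile index and the function name are assumptions of the sketch.

```python
def historical_var(returns, alpha=0.01):
    """Historical-simulation VaR: the alpha-quantile of past k-period
    returns eps_{t+k} = y_{t+k} - y_t, reported as a positive loss.
    Uses a crude empirical quantile index for illustration."""
    s = sorted(returns)                                   # worst losses first
    idx = max(0, min(len(s) - 1, int(alpha * len(s))))    # alpha-quantile slot
    return -s[idx]                                        # loss as a positive number
```

Backtesting then compares each day's realized return against the VaR reported the day before, producing exactly the 0/1 violation series that the evaluation methods surveyed in these papers are designed to test.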
On the accuracy of VaR estimates based on the Variance-Covariance approach
1997
Cited by 11 (0 self)
Abstract
We present a thorough empirical study (based on over 8 years of daily data) of candidate models for forecasting losses in relation to positions held against individual risk factors as well as losses in relation to a portfolio of risk factors. As part of the study, we also define various measures and visualization techniques to evaluate the performance of the candidate models in the context of risk management, and introduce two innovations: (1) tail-emphasized model optimization and (2) implied covariance forecasting. Finally, we highlight the important issue of the estimation error of the covariance matrix in relation to its dimension and the number of data points from which it is estimated, and outline a framework for handling this problem.