Results 1–10 of 33
Forecast uncertainties in macroeconometric modelling: an application to the UK economy
 Journal of the American Statistical Association
, 2003
Abstract

Cited by 81 (29 self)
This paper argues that probability forecasts convey information on the uncertainties that surround macroeconomic forecasts in a straightforward manner which is preferable to other alternatives, including the use of confidence intervals. Probability forecasts obtained using a small benchmark macroeconometric model as well as a number of other alternatives are presented and evaluated using recursive forecasts generated over the period 1999q1–2001q1. Out-of-sample probability forecasts of inflation and output growth are also provided over the period 2001q2–2003q1, and their implications discussed in relation to the Bank of England’s inflation target and the need to avoid recessions, both as separate events and jointly. The robustness of the results to parameter and model uncertainties is also investigated by a pragmatic implementation of the Bayesian model averaging approach.
J.P.: Ranking learning algorithms: Using IBL and meta-learning on accuracy and time results
 Machine Learning
, 2003
Abstract

Cited by 69 (7 self)
Abstract. We present a meta-learning method to support selection of candidate learning algorithms. It uses a k-Nearest Neighbor algorithm to identify the datasets that are most similar to the one at hand. The distance between datasets is assessed using a relatively small set of data characteristics, which was selected to represent properties that affect algorithm performance. The performance of the candidate algorithms on those datasets is used to generate a recommendation to the user in the form of a ranking. The performance is assessed using a multicriteria evaluation measure that takes not only accuracy, but also time into account. As it is not common in Machine Learning to work with rankings, we had to identify and adapt existing statistical techniques to devise an appropriate evaluation methodology. Using that methodology, we show that the meta-learning method presented leads to significantly better rankings than the baseline ranking method. The evaluation methodology is general and can be adapted to other ranking problems. Although here we have concentrated on ranking classification algorithms, the meta-learning framework presented can provide assistance in the selection of combinations of methods or more complex problem solving strategies.
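The k-Nearest Neighbor selection step described in this abstract can be sketched roughly as follows; the particular meta-features, dataset names, and min-max distance normalization below are illustrative assumptions, not the paper's exact choices.

```python
import math

# Hypothetical meta-features per dataset: (n_examples, n_attributes, class_entropy).
# These characteristics and datasets are illustrative, not the paper's exact set.
META_FEATURES = {
    "iris":   (150, 4, 1.58),
    "digits": (1797, 64, 3.32),
    "spam":   (4601, 57, 0.97),
    "credit": (690, 15, 0.99),
}

def distance(a, b):
    """Euclidean distance between two meta-feature vectors, with each
    dimension rescaled by its min-max range over the stored datasets."""
    lows = [min(v[i] for v in META_FEATURES.values()) for i in range(len(a))]
    highs = [max(v[i] for v in META_FEATURES.values()) for i in range(len(a))]
    return math.sqrt(sum(
        ((x - y) / (hi - lo or 1.0)) ** 2
        for x, y, lo, hi in zip(a, b, lows, highs)
    ))

def k_nearest(query, k=2):
    """Return the k stored datasets most similar to the query meta-features."""
    ranked = sorted(META_FEATURES,
                    key=lambda name: distance(query, META_FEATURES[name]))
    return ranked[:k]

# A new dataset with ~700 examples, 14 attributes, class entropy ~1.0
print(k_nearest((700, 14, 1.0), k=2))
```

Performance results of the candidate algorithms on the returned neighbor datasets would then be aggregated into a recommended ranking.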
A Comparison of Ranking Methods for Classification Algorithm Selection
 In Proceedings of the European Conference on Machine Learning ECML-2000 (to be published)
, 2000
Abstract

Cited by 25 (7 self)
We investigate the problem of using past performance information to select an algorithm for a given classification problem. We present three ranking methods for that purpose: average ranks, success rate ratios and significant wins. We also analyze the problem of evaluating and comparing these methods. The evaluation technique used is based on a leave-one-out procedure. On each iteration, the method generates a ranking using the results obtained by the algorithms on the training datasets. This ranking is then evaluated by calculating its distance from the ideal ranking built using the performance information on the test dataset. The distance measure adopted here, average correlation, is based on Spearman's rank correlation coefficient. To compare ranking methods, a combination of Friedman's test and Dunn's multiple comparison procedure is adopted. When applied to the methods presented here, these tests indicate that the success rate ratios and average ranks methods perfo...
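For rankings without ties, the Spearman coefficient underlying that distance measure reduces to the classic closed form; a minimal sketch (the algorithm names are illustrative):

```python
def spearman_rho(rank_a, rank_b):
    """Spearman's rank correlation between two rankings of the same items,
    assuming no ties: rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1))."""
    assert set(rank_a) == set(rank_b), "rankings must cover the same items"
    n = len(rank_a)
    # 1-based rank of each item in each ranking
    pos_a = {item: i + 1 for i, item in enumerate(rank_a)}
    pos_b = {item: i + 1 for i, item in enumerate(rank_b)}
    d2 = sum((pos_a[item] - pos_b[item]) ** 2 for item in rank_a)
    return 1 - 6 * d2 / (n * (n * n - 1))

# A recommended ranking of classifiers vs. the ideal ranking on a test dataset
recommended = ["c4.5", "knn", "nb", "svm"]
ideal       = ["c4.5", "nb", "knn", "svm"]
print(spearman_rho(recommended, ideal))  # one adjacent swap among 4 items
```

Identical rankings score 1, a fully reversed ranking scores -1, so averaging the coefficient over held-out datasets gives the "average correlation" measure the abstract refers to.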
A meta-learning method to select the kernel width in support vector regression
 Mach. Learning
, 2004
Abstract

Cited by 16 (2 self)
Abstract. The Support Vector Machine algorithm is sensitive to the choice of parameter settings. If these are not set correctly, the algorithm may have a substandard performance. Suggesting a good setting is thus an important problem. We propose a meta-learning methodology for this purpose and exploit information about the past performance of different settings. The methodology is applied to set the width of the Gaussian kernel. We carry out an extensive empirical evaluation, including comparisons with other methods (fixed default ranking; selection based on cross-validation and a heuristic method commonly used to set the width of the SVM kernel). We show that our methodology can select settings with low error while providing significant savings in time. Further work should be carried out to see how the methodology could be adapted to different parameter setting tasks.
Keywords: meta-learning, parameter setting, support vector machines, Gaussian kernel, learning rankings
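For concreteness, the parameter in question is the width σ of the Gaussian (RBF) kernel. The median-distance rule below is one widely used heuristic of the kind the abstract compares against, not the meta-learning method itself; the data points are illustrative.

```python
import math

def rbf_kernel(x, y, sigma):
    """Gaussian (RBF) kernel: k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq / (2 * sigma ** 2))

def median_width(points):
    """A common heuristic: set the kernel width to the median of the
    pairwise Euclidean distances in the training sample."""
    dists = sorted(
        math.dist(p, q)
        for i, p in enumerate(points)
        for q in points[i + 1:]
    )
    mid = len(dists) // 2
    if len(dists) % 2:
        return dists[mid]
    return 0.5 * (dists[mid - 1] + dists[mid])

pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
sigma = median_width(pts)
print(sigma, rbf_kernel(pts[0], pts[3], sigma))
```

A meta-learning approach would instead predict a good σ from characteristics of the dataset, using the performance of past settings on similar datasets.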
Nonparametric Event Study Tests
 Review of Quantitative Finance and Accounting
, 1992
Abstract

Cited by 13 (0 self)
This paper provides the first documentation of the power and specification of the generalized sign test, which is based on the percentage of positive abnormal returns in an estimation period. In simulations using daily stock return data, the generalized sign test is well specified with both exchange-listed and Nasdaq stocks. A rank test is more powerful under ideal conditions. However, the rank test is more sensitive to increases in the length of the event window, to increases in return variance and to thin trading. The generalized sign test is a viable alternative to the rank test under these conditions.
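A minimal sketch of a generalized sign test statistic of the kind described, under the usual normal approximation to the binomial; the counts used below are illustrative, not results from the paper.

```python
import math

def generalized_sign_z(pos_event, n_event, pos_est, n_est):
    """Generalized sign test statistic (normal approximation).
    The baseline probability p_hat of a positive abnormal return is
    estimated from the estimation period, then the event-window count
    is compared against it:
        z = (w - n * p_hat) / sqrt(n * p_hat * (1 - p_hat))
    where w is the number of positive abnormal returns in the event window."""
    p_hat = pos_est / n_est
    mean = n_event * p_hat
    sd = math.sqrt(n_event * p_hat * (1 - p_hat))
    return (pos_event - mean) / sd

# Illustrative numbers: 70 of 100 event-window abnormal returns positive,
# vs. a baseline of 120 positives in 250 estimation-period days (48%).
print(generalized_sign_z(70, 100, 120, 250))
```

Anchoring the null proportion at the estimation-period rate, rather than at 0.5, is what distinguishes the generalized sign test from the ordinary sign test.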
Behrens-Fisher: the probable difference between two means when σ1² ≠ σ2²
 Journal of Modern Applied Statistical Methods
Abstract

Cited by 12 (0 self)
The history of the Behrens-Fisher problem and some approximate solutions are reviewed. In outlining relevant statistical hypotheses on the probable difference between two means, the importance of the Behrens-Fisher problem from a theoretical perspective is acknowledged, but it is concluded that this problem is irrelevant for applied research in psychology, education, and related disciplines. The focus is better placed on “shift in location” and, more importantly, “shift in location and change in scale” treatment alternatives.
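One of the classic approximate solutions reviewed in this literature is Welch's t, which handles unequal variances by adjusting the degrees of freedom via the Welch–Satterthwaite formula; a minimal sketch with illustrative data:

```python
import math

def welch_t(sample1, sample2):
    """Welch's approximate t statistic and Welch-Satterthwaite degrees of
    freedom for two independent samples with unequal variances."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)  # unbiased variance
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / math.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

t, df = welch_t([5.1, 4.9, 5.3, 5.0], [4.2, 4.8, 3.9, 4.5, 4.1])
print(t, df)
```

The fractional degrees of freedom are then referred to a t distribution; Student's pooled-variance t is the special case recovered when the two sample variances happen to be equal.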
Modelling Volatilities and Conditional Correlations in Futures Markets with a Multivariate t Distribution
, 2007
Abstract

Cited by 11 (1 self)
This paper considers a multivariate t version of the Gaussian dynamic conditional correlation (DCC) model proposed by Engle (2002), and suggests the use of devolatized returns computed as returns standardized by realized volatilities rather than by GARCH-type volatility estimates. The t-DCC estimation procedure is applied to a portfolio of daily returns on currency futures, government bonds and equity index futures. The results strongly reject the normal-DCC model in favour of a t-DCC specification. The t-DCC model also passes a number of VaR diagnostic tests over an evaluation sample. The estimation results suggest a general trend towards a lower level of return volatility, accompanied by a rising trend in conditional cross correlations in most markets, possibly reflecting the advent of the euro in 1999 and increased interdependence of financial markets.
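The devolatization step described here, standardizing each return by a realized volatility rather than a GARCH estimate, can be sketched as follows; the rolling root-mean-square estimator, window length, and data are illustrative assumptions, not the paper's exact construction.

```python
import math

def devolatize(returns, window=5):
    """Standardize each return by a realized volatility, here taken as the
    root mean square of the preceding `window` returns (an illustrative
    choice of estimator and window length)."""
    out = []
    for t in range(window, len(returns)):
        recent = returns[t - window:t]
        realized_vol = math.sqrt(sum(r * r for r in recent) / window)
        out.append(returns[t] / realized_vol)
    return out

daily = [0.01, -0.02, 0.015, -0.01, 0.005, 0.02, -0.03]
print(devolatize(daily, window=5))
```

The resulting series has roughly unit conditional variance, so the DCC recursion can focus on modelling the time-varying correlations rather than the volatilities.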
Diversification Cones, Trade Costs and Factor Market Linkages
, 2004
Abstract

Cited by 7 (0 self)
This paper finds that the distribution functions of factor usage intensities differ systematically among 10 rich OECD countries in a manner consistent with the multiple-cone version of the factor proportions theory. The estimation works even if the same industry codes represent different goods across countries in the data. In the framework of frictionless models, there are at least 3 diversification cones among these countries (including, e.g., UK, France and US in different cones). Trade costs need to be high (40%–70% on an ad valorem basis) to invalidate the multiple-cone finding and even higher to account for all the observed differences in factor usage intensities (60%–100% on an ad valorem basis). These high trade costs illustrate how badly factor price equalization is violated for the 10 OECD countries. Both multiple cones with zero or low trade costs and a single cone with high trade costs suggest that factor market linkages are weak between the countries identified as being in different cones (e.g. UK, France and US).
Keywords: diversification cones, trade cost, factor market linkages.
Ranking Classification Algorithms Based on Relevant Performance Information
 Meta-Learning: Building Automatic Advice Strategies for Model Selection and Method Combination
, 2000
Abstract

Cited by 5 (1 self)
Given the wide variety of available classification algorithms and the volume of data today's organizations need to analyze, the selection of the right algorithm to use on a new problem is an important issue. In this paper we present zooming, a technique that, for a given dataset, selects relevant past performance information. The selection process is based on the distance between the dataset at hand and other datasets processed in the past. The distance is calculated on the basis of statistical, information theoretic and other measures. The k-Nearest Neighbor algorithm is used for this purpose. Performance information for the algorithms on the selected datasets is then processed to generate advice in the form of a ranking indicating which algorithms should be applied in which order. Here we propose a ranking method that is based on accuracy and time information, referred to as adjusted ratio of ratios. The generalization power of this ranking method is analyzed using an ...
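A ranking that trades accuracy against run time, in the spirit of the adjusted ratio of ratios, can be sketched as follows; the exact weighting (a log-damped time ratio controlled by a user parameter) and the numbers are hedged assumptions for illustration, not necessarily the paper's formula.

```python
import math

def adjusted_ratio(acc_a, acc_b, time_a, time_b, accd=0.1):
    """Score algorithm a against algorithm b: an accuracy ratio divided by a
    log-damped time ratio. `accd` controls how much accuracy the user will
    trade for a 10-fold speedup; the form is a sketch of a multicriteria
    measure of this kind, not a definitive formula."""
    return (acc_a / acc_b) / (1 + accd * math.log10(time_a / time_b))

def rank(results, accd=0.1):
    """Rank algorithms by their mean pairwise adjusted ratio against the rest.
    `results` maps algorithm name -> (accuracy, run time in seconds)."""
    def score(a):
        others = [b for b in results if b != a]
        return sum(
            adjusted_ratio(results[a][0], results[b][0],
                           results[a][1], results[b][1], accd)
            for b in others
        ) / len(others)
    return sorted(results, key=score, reverse=True)

# Illustrative results: svm is most accurate but far slower than knn
results = {"c4.5": (0.86, 2.0), "knn": (0.84, 0.5), "svm": (0.88, 40.0)}
print(rank(results))
```

With this weighting the fast, slightly less accurate k-NN overtakes the slow SVM, which is exactly the kind of trade-off a multicriteria ranking is meant to expose.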
The Changing Values of the Cooperative and Its Business Focus
 Journal of Agricultural Economics
, 1997
Abstract

Cited by 4 (0 self)