Results 1–10 of 12
How to Use Expert Advice
 JOURNAL OF THE ASSOCIATION FOR COMPUTING MACHINERY
, 1997
Abstract

Cited by 376 (72 self)
We analyze algorithms that predict a binary value by combining the predictions of several prediction strategies, called experts. Our analysis is for worst-case situations, i.e., we make no assumptions about the way the sequence of bits to be predicted is generated. We measure the performance of the algorithm by the difference between the expected number of mistakes it makes on the bit sequence and the expected number of mistakes made by the best expert on this sequence, where the expectation is taken with respect to the randomization in the predictions. We show that the minimum achievable difference is on the order of the square root of the number of mistakes of the best expert, and we give efficient algorithms that achieve this. Our upper and lower bounds have matching leading constants in most cases. We then show how this leads to certain kinds of pattern recognition/learning algorithms with performance bounds that improve on the best results currently known in this context. We also compare our analysis to the case in which log loss is used instead of the expected number of mistakes.
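The aggregation scheme described in this abstract can be illustrated with a randomized weighted-majority sketch (a minimal toy version, not the paper's exact algorithm; the learning rate `eta` and the expert prediction lists are hypothetical):

```python
import random

def predict_with_experts(expert_preds, outcomes, eta=0.5):
    """Randomized weighted-majority sketch: keep one weight per expert,
    predict 1 with probability equal to the weighted vote for 1, and
    multiplicatively penalize experts that err on each bit."""
    weights = [1.0] * len(expert_preds)
    mistakes = 0
    for t, y in enumerate(outcomes):
        total = sum(weights)
        p_one = sum(w for w, preds in zip(weights, expert_preds)
                    if preds[t] == 1) / total
        guess = 1 if random.random() < p_one else 0
        mistakes += (guess != y)
        # shrink the weight of every expert that was wrong on this bit
        weights = [w * ((1 - eta) if preds[t] != y else 1.0)
                   for w, preds in zip(weights, expert_preds)]
    return mistakes, weights
```

Multiplying a wrong expert's weight by (1 - eta) keeps the predictor's expected mistake count close to the best expert's; tuning eta against the best expert's mistake count is what yields square-root-order differences of the kind analyzed in the abstract.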
Data snooping biases in tests of financial asset pricing models
 Review of Financial Studies
, 1990
Abstract

Cited by 228 (6 self)
authors not those of the National Bureau of Economic Research. NBER Working Paper #3001
Sequential prediction of individual sequences under general loss functions
 IEEE Trans. on Information Theory
, 1998
Abstract

Cited by 93 (8 self)
Abstract—We consider adaptive sequential prediction of arbitrary binary sequences when the performance is evaluated using a general loss function. The goal is to predict on each individual sequence nearly as well as the best prediction strategy in a given comparison class of (possibly adaptive) prediction strategies, called experts. By using a general loss function, we generalize previous work on universal prediction, forecasting, and data compression. However, here we restrict ourselves to the case when the comparison class is finite. For a given sequence, we define the regret as the total loss on the entire sequence suffered by the adaptive sequential predictor, minus the total loss suffered by the predictor in the comparison class that performs best on that particular sequence. We show that for a large class of loss functions, the minimax regret is either Θ(log N) ...
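The regret defined in this abstract is straightforward to compute in hindsight (an illustrative sketch; the function name and the loss sequences are made up):

```python
def regret(predictor_losses, expert_losses):
    """Total loss of the adaptive predictor over the sequence, minus the
    total loss of the best expert in the comparison class in hindsight."""
    best = min(sum(losses) for losses in expert_losses)
    return sum(predictor_losses) - best
```

For example, with per-round losses [1, 0, 1] for the predictor and two experts suffering [0, 1, 0] and [1, 1, 1], the best expert's total is 1 and the regret is 1.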
Tight Worst-Case Loss Bounds for Predicting With Expert Advice
, 1994
Abstract

Cited by 52 (10 self)
this paper is somewhat different from the one just described. Assume that there are N experts E_i, i = 1, ..., N, each trying to predict the outcomes y_t as best they can. Let x_{t,i} be the prediction of the i-th expert E_i about the ...
Macroeconomic Forecasting Using Many Predictors
 Advances in Econometrics, Theory and Applications, Eighth World Congress of the Econometric Society
, 2000
Abstract

Cited by 33 (0 self)
This paper is based on research carried out jointly with James H. Stock, whom I thank for comments on this paper. I thank Jean-Philippe Laforte for research assistance. This research was supported by the National Science Foundation (SBR-9730489). (Version WC_2b)
Rate Equation Approach for Growing Networks
 PROCEEDINGS OF THE XVIII SITGES CONFERENCE ON STATISTICAL MECHANICS, LECTURE NOTES IN PHYSICS
, 2003
Abstract

Cited by 2 (0 self)
The rate equations are applied to investigate the structure of growing networks. Within this framework, the degree distribution of a network in which nodes are introduced sequentially and attach to an earlier node of degree k with rate A_k ~ k^γ is computed. Very different behaviors arise for γ < 1, γ = 1, and γ > 1. The rate equation approach is extended to determine the joint order-degree distribution, the degree correlations of neighboring nodes, as well as basic global properties. The complete solution for the degree distribution of a finite-size network is outlined. Some unusual properties associated with the most popular node are discussed; these follow simply from the order-degree distribution. Finally, a toy protein interaction network model is investigated, where the network grows by the processes of node duplication and a particular form of random mutation. This system exhibits an infinite-order percolation transition, giant sample-specific fluctuations, and a nonuniversal degree distribution.
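The growth rule in this abstract can be simulated directly (a toy sketch of attachment with probability proportional to degree^gamma; the function name, network size, and seed are arbitrary illustrative choices):

```python
import random

def grow_network(n_nodes, gamma=1.0, seed=None):
    """Grow a tree in which each new node attaches to an existing node
    chosen with probability proportional to (degree ** gamma)."""
    rng = random.Random(seed)
    degree = [1, 1]            # start from a single edge: node 0 -- node 1
    parents = [None, 0]
    for _ in range(2, n_nodes):
        weights = [k ** gamma for k in degree]
        r = rng.uniform(0, sum(weights))
        target = len(weights) - 1      # fallback guards float round-off
        for i, w in enumerate(weights):
            r -= w
            if r <= 0:
                target = i
                break
        degree[target] += 1
        degree.append(1)               # the newcomer starts with degree 1
        parents.append(target)
    return degree, parents
```

For gamma = 1 this is linear preferential attachment; gamma < 1 suppresses hubs and gamma > 1 lets a single node capture almost all links, matching the three regimes distinguished in the abstract.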
Precise Identification of the Worst-Case Voltage Drop Conditions in Power Grid Verification
Abstract

Cited by 1 (0 self)
Abstract – Identifying worst-case voltage drop conditions in every module supplied by the power grid is a crucial problem in modern IC design. In this paper we develop a novel methodology for power grid verification which is based on accurately constructing the space of current variations of the supplied modules and locating its precise points that yield the worst-case voltage drop conditions. The construction of the current space is performed via plain simulation and statistical extrapolation using results from extreme value theory. The method overcomes limitations of past methods, which either relied on loosely bounding the worst-case voltage drop or abstracted the current space in a vague and incomplete set of bound-type constraints. Experimental results verify the potential of the proposed method to identify worst-case conditions and demonstrate the pessimism inherent in previous bound-type approaches.
Extremal Properties of Random Structures
Abstract

Cited by 1 (0 self)
Abstract. The extremal characteristics of random structures, including trees, graphs, and networks, are discussed. A statistical physics approach is employed in which extremal properties are obtained through suitably defined rate equations. A variety of unusual time dependences and system-size dependences for basic extremal properties are obtained.
Jumps in Equilibrium Prices and Asymmetric News in Foreign Exchange Markets
, 2015
Abstract
In this paper we examine the intraday effects of surprises from scheduled and unscheduled announcements on six major exchange rate returns (jumps) using an extension of the standard Tobit model with heteroskedastic and asymmetric errors. Since observed volatility at high frequency often contains microstructure noise, we use a recently proposed nonparametric test to filter out noise and extract jumps from noise-free FX returns (Lee and Mykland (2012)). We find that the most influential scheduled macroeconomic news items are related to job markets, output growth indicators, and public debt. These surprises affect FX jumps mostly in the form of good news, a result of pessimistic forecasts from traders during the crisis period analyzed. For most of the currencies, we reconfirm the hypothesis that negative volatility shocks have a greater impact on volatility than positive shocks of the same magnitude, reflecting markets' concern about the cost of stabilization policies. JEL Classification: G14, G12, E44, C22.
Modeling of risk losses using size-biased data
Abstract
In this paper we present a method for drawing inferences about the process of financial losses that are associated with the operations of a business. For example, for a bank such losses may be related to erroneous transactions, human error, fraud, lawsuits, or power outages. Information about the frequency and magnitude of losses is obtained through the search of a number of sources, such as printed, computerized, or Internet-based publications related to insurance and finance. The data consist of losses that were discovered in the search. We assume that the probability of a loss appearing in the body of sources and also being discovered increases with the magnitude of the loss. Our approach simultaneously models the process of losses and the process of populating the database. The approach is illustrated using data related to operational risk losses that are of special interest to the banking industry.