Results 1 – 10 of 40
Minimax Regret under Log Loss for General Classes of Experts
, 1999
"... We study sequential strategies for assigning probabilities to the elements that may appear next in a sequence of data. The goal is to minimize the regret under log loss over the worst possible sequence. That is, to minimize the worst-case drop in the log-likelihood of the final sequence when measure ..."
Abstract

Cited by 7 (0 self)
measured under the assigned probabilities, as opposed to being measured under the best assignment in a given class of strategies (or experts). Using tools from empirical process theory, we prove a general upper bound on the best possible (minimax) regret that depends on the metric properties of the class
Sequential prediction of individual sequences under general loss functions
 IEEE Trans. on Information Theory
, 1998
"... Abstract—We consider adaptive sequential prediction of arbitrary binary sequences when the performance is evaluated using a general loss function. The goal is to predict on each individual sequence nearly as well as the best prediction strategy in a given comparison class of (possibly adaptive) pre ..."
Abstract

Cited by 84 (9 self)
) prediction strategies, called experts. By using a general loss function, we generalize previous work on universal prediction, forecasting, and data compression. However, here we restrict ourselves to the case when the comparison class is finite. For a given sequence, we define the regret as the total loss
Some Results On Posterior Regret Γ-Minimax Estimation
, 1993
"... In this paper we study and compute posterior regret Γ-minimax actions in several estimation problems. We show that under general conditions, posterior regret Γ-minimax actions are Bayes for some prior in the class Γ. We also study some important special cases such as bounded normal ..."
Abstract
Achievability of Asymptotic Minimax Regret in Online and Batch Prediction
"... The normalized maximum likelihood model achieves the minimax coding (log-loss) regret for data of fixed sample size n. However, it is a batch strategy, i.e., it requires that n be known in advance. Furthermore, it is computationally infeasible for most statistical models, and several computationally ..."
Abstract
Empirical entropy, minimax regret and minimax risk. arXiv preprint arXiv:1308.1147
, 2013
"... We consider the random design regression model with square loss. We propose a method that aggregates empirical risk minimizers (ERM) over appropriately chosen random subsets and reduces to ERM in the extreme case, and we establish sharp oracle inequalities for its risk. We show that, under the −p growth ..."
Abstract

Cited by 4 (1 self)
as the problem of statistical estimation. On the contrary, for p > 2 we show that the rates of the minimax regret are, in general, slower than for the minimax risk. Our oracle inequalities also imply the v log(n/v)/n rates for Vapnik-Chervonenkis type classes of dimension v without the usual convexity
Minimax Capacity Loss under Sub-Nyquist Universal Sampling
"... This paper considers the capacity of subsampled analog channels when the sampler is designed to operate independent of instantaneous channel realizations. A compound multiband Gaussian channel with unknown subband occupancy is considered, with perfect channel state information available at both the ..."
Abstract
the receiver and the transmitter. We restrict our attention to a general class of periodic sub-Nyquist samplers, which subsumes as special cases sampling with periodic modulation and filter banks. We evaluate the loss due to channel-independent (universal) sub-Nyquist design through a sampled capacity loss
Worst Case Prediction over Sequences under Log Loss
 In The Mathematics of Information Coding, Extraction, and Distribution
, 1997
"... . We consider the game of sequentially assigning probabilities to future data based on past observations under logarithmic loss. We are not making probabilistic assumptions about the generation of the data, but consider a situation where a player tries to minimize his loss relative to the loss of th ..."
Abstract

Cited by 16 (1 self)
of the (with hindsight) best distribution from a target class for the worst sequence of data. We give bounds on the minimax regret in terms of the metric entropies of the target class with respect to suitable distances between distributions. 1. Introduction. The assignment of probabilities to the possible
Towards Minimax Policies for Online Linear Optimization with Bandit Feedback
 In COLT
, 2012
"... We address the online linear optimization problem with bandit feedback. Our contribution is twofold. First, we provide an algorithm (based on exponential weights) with a regret of order √(dn log N) for any finite action set with N actions, under the assumption that the instantaneous loss is bounded by ..."
Abstract

Cited by 24 (1 self)
Minimax Nonparametric Classification—Part I: Rates of Convergence
"... Abstract — This paper studies minimax aspects of nonparametric classification. We first study minimax estimation of the conditional probability of a class label, given the feature variable. This function, say η, is assumed to be in a general nonparametric class. We show the minimax rate of convergen ..."
Abstract
of convergence under squared L2 loss is determined by the massiveness of the class as measured by metric entropy. The second part of the paper studies minimax classification. The loss of interest is the difference between the probability of misclassification of a classifier and that of the Bayes decision
On Prediction of Individual Sequences
"... Sequential randomized prediction of an arbitrary binary sequence is investigated. No assumption is made on the mechanism of generating the bit sequence. The goal of the predictor is to minimize its relative loss (or regret), i.e., to make almost as few mistakes as the best “expert ” in a fixed, poss ..."
Abstract
. Then we show general upper and lower bounds on the minimax regret in terms of the geometry of the class of experts. As main examples, we determine the exact order of magnitude of the minimax regret for the class of autoregressive linear predictors and for the class of Markov experts.