## The Last-Step Minimax Algorithm (2000)

Venue: Proc. 11th International Conference on Algorithmic Learning Theory, pages 279–290

Citations: 8 (1 self)

### BibTeX

```bibtex
@INPROCEEDINGS{Takimoto00thelast-step,
  author    = {Eiji Takimoto and Manfred Warmuth},
  title     = {The Last-Step Minimax Algorithm},
  booktitle = {Proc. 11th International Conference on Algorithmic Learning Theory},
  year      = {2000},
  pages     = {279--290}
}
```

### Abstract

We consider on-line density estimation with a parameterized density from an exponential family. In each trial t the learner predicts a parameter θ_t. Then it receives an instance x_t chosen by the adversary and incurs loss −ln p(x_t | θ_t), which is the negative log-likelihood of x_t w.r.t. the predicted density of the learner. The performance of the learner is measured by the regret, defined as the total loss of the learner minus the total loss of the best parameter chosen off-line. We develop an algorithm, called the Last-step Minimax Algorithm, that predicts with the minimax optimal parameter assuming that the current trial is the last one. For one-dimensional exponential families, we give an explicit form of the prediction of the Last-step Minimax Algorithm and show that its regret is O(ln T), where T is the number of trials. In particular, for Bernoulli density estimation the Last-step Minimax Algorithm is slightly better than the standard Laplace estimator. This work was done while...
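For the Bernoulli case the last-step minimax prediction has a closed form: equalizing the regret of the two possible outcomes of the "last" trial gives the prediction as a ratio of maximum likelihoods. The sketch below is our own illustration of that idea (function names and the empirical check are ours, not the paper's), together with the abstract's O(ln T) regret bound in the form (1/2) ln(T + 1) + 1/2:

```python
import math

def ml_loglik(ones, total):
    """Log-likelihood of a Bernoulli sequence under its maximum-likelihood
    parameter, with the convention 0 * ln 0 = 0."""
    zeros = total - ones
    ll = 0.0
    if ones:
        ll += ones * math.log(ones / total)
    if zeros:
        ll += zeros * math.log(zeros / total)
    return ll

def last_step_minimax_prob(s, t):
    """Probability assigned to x_t = 1 after seeing s ones in the first t-1 trials.

    Minimizing the worst-case last-step regret equalizes the regrets of the
    two outcomes, giving p = A1 / (A0 + A1), where A_x is the maximum
    likelihood of the history extended by outcome x.
    """
    a1 = math.exp(ml_loglik(s + 1, t))
    a0 = math.exp(ml_loglik(s, t))
    return a1 / (a0 + a1)

def regret(xs):
    """Total log loss of the algorithm minus the loss of the best off-line parameter."""
    loss, s = 0.0, 0
    for t, x in enumerate(xs, start=1):
        p = last_step_minimax_prob(s, t)
        loss -= math.log(p if x == 1 else 1.0 - p)
        s += x
    return loss + ml_loglik(s, len(xs))

# Empirical check of the claimed bound (1/2) ln(T + 1) + 1/2 on a few sequences.
for xs in ([1] * 50, [0, 1] * 25, [1, 1, 1, 0] * 10):
    assert regret(xs) <= 0.5 * math.log(len(xs) + 1) + 0.5
```

With no data (t = 1, s = 0) the two extended maximum likelihoods coincide, so the first prediction is 1/2, as one would expect by symmetry.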

### Citations

275 | Fisher information and stochastic complexity - Rissanen - 1996

Citation Context: ...ssentially as good as the minimax algorithm. Regret Bounds from the MDL Community There is a large body of work on proving regret bounds that has its roots in the Minimum Description Length community [6, 11, 8, 9, 12, 13]. The definition of regret used in this community is different from ours in the following two parts. 1. The learner predicts with an arbitrary probability mass function q_t. In particular q_t does not ...

128 | Universal sequential coding of single messages - Shtarkov - 1987

Citation Context: ...show that the regret of the Last-step Minimax Algorithm is of the form (1/2) ln(T + 1) + c, (1) where c = 1/2. This is very close to the minimax regret that Shtarkov showed for the fixed horizon game [7]. The minimax regret has the same form (1) but now c = (1/2) ln(π/2) ≈ 0.23. Another simple and efficient algorithm for density estimation with an arbitrary exponential family is the Forward Algorithm of ...

116 | Relative loss bounds for on-line density estimation with the exponential family of distributions - Azoury, Warmuth

Citation Context: ...t has the same form (1) but now c = (1/2) ln(π/2) ≈ 0.23. Another simple and efficient algorithm for density estimation with an arbitrary exponential family is the Forward Algorithm of Azoury and Warmuth [2]. This algorithm predicts with μ_t = (a + Σ_{q=1}^{t−1} x_q)/t for any exponential family. Here a ≥ 0 is a constant that is to be tuned, and the mean parameter μ_t is an alternate parameterization of the ...
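The Forward Algorithm prediction quoted in this context is a one-liner for the Bernoulli case. The following sketch is our own illustration of the quoted formula μ_t = (a + Σ_{q=1}^{t−1} x_q)/t, with the tuning constant a as in the text:

```python
def forward_prediction(history, a=0.5):
    """Forward Algorithm mean-parameter prediction mu_t = (a + sum of past outcomes) / t.

    For a Bernoulli, a = 1/2 gives the estimator the text refers to as the
    Laplace estimator.
    """
    t = len(history) + 1  # index of the current trial; t - 1 outcomes seen so far
    return (a + sum(history)) / t

# After seeing 3 ones in 4 trials, the a = 1/2 prediction is (0.5 + 3) / 5 = 0.7.
print(forward_prediction([1, 1, 0, 1]))  # -> 0.7
```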

55 | Asymptotic minimax regret for data compression, gambling, and prediction - Xie, Barron - 1996

Citation Context: ...ssentially as good as the minimax algorithm. Regret Bounds from the MDL Community There is a large body of work on proving regret bounds that has its roots in the Minimum Description Length community [6, 11, 8, 9, 12, 13]. The definition of regret used in this community is different from ours in the following two parts. 1. The learner predicts with an arbitrary probability mass function q_t. In particular q_t does not ...

49 | A decision-theoretic extension of stochastic complexity and its applications to learning - Yamanishi - 1998

Citation Context: ...ssentially as good as the minimax algorithm. Regret Bounds from the MDL Community There is a large body of work on proving regret bounds that has its roots in the Minimum Description Length community [6, 11, 8, 9, 12, 13]. The definition of regret used in this community is different from ours in the following two parts. 1. The learner predicts with an arbitrary probability mass function q_t. In particular q_t does not ...

40 | Predicting a binary sequence almost as well as the optimal biased coin - Freund - 1996

Citation Context: ...the density. For a Bernoulli, the Forward Algorithm with a = 1/2 is the well-known Laplace estimator. The regret of this algorithm is again of the same form as (1) with c = (1/2) ln π ≈ 0.57 (see e.g. [5]). Surprisingly, the Last-step Minimax Algorithm is slightly better than the Laplace estimator (c = 0.5). For general one-dimensional exponential families, the Forward Algorithm can be seen as a first-ord...

12 | Information and Exponential Families in Statistical Theory - Barndorff-Nielsen - 1978

Citation Context: ...lization factor so that ∫_{x∈X} p(x|θ) dx = 1 holds, and it is called the cumulant function that characterizes the family. We first review some basic properties of the family. For further details, see [3, 1]. Let g(θ) denote the gradient vector ∇G(θ). It is well known that G is a strictly convex function and g(θ) equals the mean of x, i.e. g(θ) = ∫_{x∈X} x p(x|θ) dx. We let g(θ) = μ and call the expecta...
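The properties quoted in this context can be checked numerically for the Bernoulli family. With the standard textbook parameterization p(x|θ) = exp(θx − G(θ)) on x ∈ {0, 1} (our choice of example, not a formula quoted from the paper), the cumulant function is G(θ) = ln(1 + e^θ), and its derivative g(θ) should equal the mean of x:

```python
import math

def G(theta):
    """Cumulant function of the Bernoulli family: makes p(x|theta) = exp(theta*x - G(theta))
    sum to 1 over x in {0, 1}."""
    return math.log(1.0 + math.exp(theta))

def g(theta, h=1e-6):
    """Numerical derivative of G (central difference), standing in for the gradient."""
    return (G(theta + h) - G(theta - h)) / (2.0 * h)

theta = 0.8
# E[x] under p(.|theta), computed directly from the definition of the family.
mean = sum(x * math.exp(theta * x - G(theta)) for x in (0, 1))
assert abs(g(theta) - mean) < 1e-6  # g(theta) equals the mean of x
```

Strict convexity of G can be eyeballed the same way: its second difference is e^θ/(1 + e^θ)², which is positive for every θ.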

12 | Asymptotically minimax regret by Bayes mixtures - Takeuchi, Barron - 1998

10 | On relative loss bounds in generalized linear regression - Forster - 1999

Citation Context: ...p Minimax Algorithm predicts with θ_t = arginf_{θ_t ∈ Θ} sup_{x_t ∈ X} ( Σ_{q=1}^{t} L(x_q, θ_q) − inf_{B ∈ Θ} Σ_{q=1}^{t} L(x_q, B) ). This method for developing learning algorithms was first used by Forster [4] for linear regression. We apply the Last-step Minimax Algorithm to density estimation with one-dimensional exponential families. The exponential families include many fundamental classes of distribut...
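The arginf/sup in the quoted definition can be evaluated by brute force. A small sketch for the Bernoulli case under the log loss L(x, θ) = −ln p(x|θ) (our own illustration; the grid search and function names are not from the paper). Since the past losses Σ_{q<t} L(x_q, θ_q) are a constant in θ_t, they can be dropped from the arginf:

```python
import math

def ml_loss(ones, total):
    """Minimum total log loss inf_B sum L(x_q, B) for a Bernoulli sequence (0 ln 0 = 0)."""
    zeros = total - ones
    out = 0.0
    if ones:
        out -= ones * math.log(ones / total)
    if zeros:
        out -= zeros * math.log(zeros / total)
    return out

def last_step_minimax(history, grid=10_000):
    """Grid-search the parameter minimizing the worst-case last-step regret.

    For each candidate theta, the adversary picks the outcome x_t in {0, 1}
    maximizing L(x_t, theta) - inf_B sum_{q=1}^{t} L(x_q, B).
    """
    s, t = sum(history), len(history) + 1
    best_theta, best_val = None, float("inf")
    for i in range(1, grid):
        theta = i / grid
        worst = max(
            -math.log(theta) - ml_loss(s + 1, t),    # adversary plays x_t = 1
            -math.log(1.0 - theta) - ml_loss(s, t),  # adversary plays x_t = 0
        )
        if worst < best_val:
            best_theta, best_val = theta, worst
    return best_theta
```

Equalizing the two outcomes' regrets analytically gives the closed form θ_t = A1/(A0 + A1), where A_x is the maximum likelihood of the history extended by outcome x; the grid search recovers that value numerically.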

10 | Asymptotically minimax regret for exponential and curved exponential families - Takeuchi, Barron - 1998

8 | The minimax strategy for gaussian density estimation - Takimoto, Warmuth - 2000

Citation Context: ...ce, there exists a gap between the regret of the Last-step Minimax algorithm and the regret of the optimal minimax algorithm. Specifically, the former is O(ln T), while the latter is O(ln T − ln ln T) [10]. This contrasts with the case of Bernoulli, where the regret of the Last-step Minimax Algorithm is larger than the minimax regret by only a constant. Open Problems There are a large number of open problem...

7 | Differential-Geometrical Methods in Statistics - Amari - 1985

Citation Context: ...lization factor so that ∫_{x∈X} p(x|θ) dx = 1 holds, and it is called the cumulant function that characterizes the family. We first review some basic properties of the family. For further details, see [3, 1]. Let g(θ) denote the gradient vector ∇G(θ). It is well known that G is a strictly convex function and g(θ) equals the mean of x, i.e. g(θ) = ∫_{x∈X} x p(x|θ) dx. We let g(θ) = μ and call the expecta...

6 | Extended stochastic complexity and minimax relative loss analysis - Yamanishi - 1999