Results 1 - 10 of 301,220
Horizon-independent optimal prediction with log-loss in exponential families
 Proc. COLT 2013
, 2013
"... We study online learning under logarithmic loss with regular parametric models. Hedayati and Bartlett (2012b) showed that a Bayesian prediction strategy with Jeffreys prior and sequential normalized maximum likelihood (SNML) coincide and are optimal if and only if the latter is exchangeable, and if ..."
Cited by 5 (0 self)
, and if and only if the optimal strategy can be calculated without knowing the time horizon in advance. They posed the question of which families have exchangeable SNML strategies. This paper fully answers this open problem for one-dimensional exponential families. The exchangeability can happen only for three
Achievability of Asymptotic Minimax Regret by Horizon-Dependent and Horizon-Independent Strategies
"... The normalized maximum likelihood distribution achieves minimax coding (log-loss) regret given a fixed sample size, or horizon, n. It generally requires that n be known in advance. Furthermore, extracting the sequential predictions from the normalized maximum likelihood distribution is computation ..."
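The minimax regret this snippet refers to has a standard closed form (the Shtarkov formulation); as a sketch in conventional notation, with $\hat\theta(x^n)$ the maximum-likelihood estimate for the sequence $x^n$:

```latex
p_{\mathrm{NML}}(x^n) = \frac{p_{\hat\theta(x^n)}(x^n)}{\sum_{y^n} p_{\hat\theta(y^n)}(y^n)},
\qquad
\mathcal{R}_n = \log \sum_{y^n} p_{\hat\theta(y^n)}(y^n).
```

The normalizing (Shtarkov) sum runs over all sequences of length $n$, which is why the horizon must be known in advance and why extracting sequential predictions from NML is expensive.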
Graphical models, exponential families, and variational inference
, 2008
"... The formalism of probabilistic graphical models provides a unifying framework for capturing complex dependencies among random variables, and building large-scale multivariate statistical models. Graphical models have become a focus of research in many statistical, computational and mathematical fiel ..."
Cited by 800 (26 self)
of probability distributions — are best studied in the general setting. Working with exponential family representations, and exploiting the conjugate duality between the cumulant function and the entropy for exponential families, we develop general variational representations of the problems of computing
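The conjugate duality between the cumulant function and the entropy that this snippet mentions can be stated compactly; in the standard notation (cumulant function $A$, mean parameters $\mu$ ranging over the realizable set $\mathcal{M}$):

```latex
A(\theta) = \sup_{\mu \in \mathcal{M}} \bigl\{ \langle \theta, \mu \rangle - A^{*}(\mu) \bigr\},
```

where the convex conjugate $A^{*}$ coincides with the negative entropy on $\mathcal{M}$; the variational inference methods surveyed here arise from tractable approximations to $\mathcal{M}$ and $A^{*}$.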
Constrained model predictive control: Stability and optimality
 AUTOMATICA
, 2000
"... Model predictive control is a form of control in which the current control action is obtained by solving, at each sampling instant, a finite horizon open-loop optimal control problem, using the current state of the plant as the initial state; the optimization yields an optimal control sequence and t ..."
Cited by 696 (15 self)
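The receding-horizon idea in this abstract can be illustrated with a minimal unconstrained sketch (illustrative matrices, not from the paper, which treats the constrained case): at each sampling instant a finite-horizon LQ problem is solved from the current state, and only the first input of the optimal sequence is applied.

```python
import numpy as np

# Minimal receding-horizon (MPC) sketch for an unconstrained linear system.
# All matrices below are illustrative, not taken from the cited paper.
A = np.array([[1.0, 1.0], [0.0, 1.0]])  # double-integrator dynamics
B = np.array([[0.0], [1.0]])
Q = np.eye(2)                            # state cost
R = np.array([[0.1]])                    # input cost
N = 10                                   # prediction horizon

def first_lq_input(x, horizon=N):
    """Backward Riccati recursion over the horizon; return only the
    first input of the optimal open-loop sequence."""
    P = Q.copy()
    K = None
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return -K @ x

x = np.array([5.0, 0.0])                 # initial plant state
for _ in range(30):                      # closed loop: re-solve at each step
    u = first_lq_input(x)
    x = A @ x + B @ u                    # plant update with the first input
```

The closed loop re-solves the finite-horizon problem from each new measured state, which is what distinguishes MPC from applying a precomputed open-loop sequence.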
Interior Point Methods in Semidefinite Programming with Applications to Combinatorial Optimization
 SIAM Journal on Optimization
, 1993
"... We study the semidefinite programming problem (SDP), i.e., the problem of optimization of a linear function of a symmetric matrix subject to linear equality constraints and the additional condition that the matrix be positive semidefinite. First we review the classical cone duality as specialized to S ..."
Cited by 557 (12 self)
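In the notation the abstract sketches, the primal SDP and its cone dual take the standard form:

```latex
\begin{aligned}
\text{(P)}\quad & \min_{X \succeq 0} \ \langle C, X \rangle
\quad \text{s.t.}\ \langle A_i, X \rangle = b_i,\ i = 1,\dots,m,\\
\text{(D)}\quad & \max_{y \in \mathbb{R}^m} \ b^{\mathsf T} y
\quad \text{s.t.}\ C - \sum_{i=1}^{m} y_i A_i \succeq 0,
\end{aligned}
```

where $\langle \cdot, \cdot \rangle$ is the trace inner product on symmetric matrices; interior point methods follow the central path of barrier problems for this primal-dual pair.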
A training algorithm for optimal margin classifiers
 PROCEEDINGS OF THE 5TH ANNUAL ACM WORKSHOP ON COMPUTATIONAL LEARNING THEORY
, 1992
Cited by 1848 (44 self)
A training algorithm that maximizes the margin between the training patterns and the decision boundary is presented. The technique is applicable to a wide variety of classification functions, including Perceptrons, polynomials, and Radial Basis Functions. The effective number of parameters is adjusted automatically to match the complexity of the problem. The solution is expressed as a linear combination of supporting patterns. These are the subset of training patterns that are closest to the decision boundary. Bounds on the generalization performance based on the leave-one-out method and the VC-dimension are given. Experimental results on optical character recognition problems demonstrate the good generalization obtained when compared with other learning algorithms.
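As a rough illustration of the margin-maximization idea (not the paper's training algorithm), a soft-margin linear classifier can be fit by subgradient descent on the regularized hinge loss; the near-boundary "supporting patterns" then characterize the solution. Data and hyperparameters below are made up for the sketch.

```python
import numpy as np

# Toy linearly separable data: two Gaussian clusters, labels in {-1, +1}.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, (20, 2)),
               rng.normal(+2.0, 0.5, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)

lam = 0.01                       # regularization strength (illustrative)
w, b = np.zeros(2), 0.0
for t in range(1, 2001):
    eta = 1.0 / (lam * t)        # decaying step size
    viol = y * (X @ w + b) < 1.0 # patterns inside the margin
    grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(y)
    grad_b = -y[viol].sum() / len(y)
    w = w - eta * grad_w
    b = b - eta * grad_b

# Patterns at or inside the margin play the role of supporting patterns.
support = np.flatnonzero(y * (X @ w + b) <= 1.0 + 1e-3)
acc = float(np.mean(np.sign(X @ w + b) == y))
```

Only the margin-adjacent patterns contribute to the subgradient near the optimum, which mirrors the paper's observation that the solution depends on the supporting patterns alone.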
Predicting How People Play Games: Reinforcement Learning . . .
 AMERICAN ECONOMIC REVIEW
, 1998
Optimal Capital Structure, Endogenous Bankruptcy, and the Term Structure of Credit Spreads
 THE JOURNAL OF FINANCE, VOL. 51, NO. 3, PAPERS AND PROCEEDINGS OF THE FIFTY-SIXTH
, 1996
Bounds on Individual Risk for Log-loss Predictors
"... In sequential prediction with log-loss as well as density estimation with risk measured by KL divergence, one is often interested in the expected instantaneous loss, or, equivalently, the individual risk at a given fixed sample size n. For Bayesian prediction and estimation methods, it is often easy ..."
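One common way to formalize the individual risk mentioned in this snippet (a sketch in the usual KL-risk convention, with $p_{\theta^*}$ the data-generating distribution and $\hat p$ the predictor):

```latex
r_n = \mathbf{E}\left[ D\!\left( p_{\theta^*}(\cdot \mid X^{n-1}) \,\middle\|\, \hat p(\cdot \mid X^{n-1}) \right) \right],
```

the expected KL divergence between the true and predicted conditional distributions at round $n$, which equals the expected excess instantaneous log-loss of $\hat p$ over $p_{\theta^*}$ at that round.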