Results 1 – 10 of 63
The Dykstra algorithm with Bregman projections
 Communications in Applied Analysis
, 1998
"... ABSTRACT: Let {C_i | 1 ≤ i ≤ m} be a finite family of closed convex subsets of R^n, and assume that their intersection C = ∩{C_i | 1 ≤ i ≤ m} is not empty. In this paper we propose a general Dykstra-type sequential algorithm for finding the Bregman projection of a given point r ∈ R^n onto C and show that it converges ..."
Abstract

Cited by 16 (2 self)
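The Dykstra-type scheme described in this abstract can be illustrated in the simplest Bregman setting, where the Bregman function is ½‖x‖² and the Bregman projection reduces to the ordinary Euclidean projection. The sketch below, with made-up example sets rather than anything from the paper, shows the key feature of Dykstra's algorithm: cyclic projections with per-set correction terms.

```python
import numpy as np

def project_halfspace(x, a, b):
    # Euclidean projection onto the half-space {y : a.y <= b}
    v = a @ x - b
    return x if v <= 0 else x - v * a / (a @ a)

def project_ball(x, c, r):
    # Euclidean projection onto the ball {y : ||y - c|| <= r}
    d = x - c
    n = np.linalg.norm(d)
    return x if n <= r else c + r * d / n

def dykstra(x0, projections, iters=200):
    # Dykstra-type sequential algorithm: cycle through the projections,
    # carrying a correction term p[i] for each set so that the iterates
    # converge to the projection of x0 onto the intersection
    # (plain cyclic projections would only reach *some* intersection point).
    x = x0.copy()
    p = [np.zeros_like(x0) for _ in projections]
    for _ in range(iters):
        for i, proj in enumerate(projections):
            y = proj(x + p[i])
            p[i] = x + p[i] - y
            x = y
    return x

# project (3, 0) onto {y1 <= 1} ∩ {||y|| <= 2}; the nearest point is (1, 0)
sets = [lambda z: project_halfspace(z, np.array([1.0, 0.0]), 1.0),
        lambda z: project_ball(z, np.array([0.0, 0.0]), 2.0)]
x = dykstra(np.array([3.0, 0.0]), sets)
```

The paper's contribution is the extension of this correction-term scheme from the Euclidean case to general Bregman projections; the Euclidean version above is only the base case.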
Blockiterative algorithms with underrelaxed Bregman projections
 SIAM J. Optim
"... The notion of relaxation is well understood for orthogonal projections onto convex sets. For general Bregman projections it was considered only for hyperplanes, and the question of how to relax Bregman projections onto convex sets that are not linear, i.e. not hyperplanes or half-spaces, has remained ..."
Abstract

Cited by 11 (2 self)
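For the orthogonal case that this abstract calls well understood, an underrelaxed projection step replaces the full projection P(x) with x + λ(P(x) − x), λ ∈ (0, 1]. A minimal sketch, with a hypothetical half-space constraint not taken from the paper:

```python
import numpy as np

def project_halfspace(x, a, b):
    # orthogonal projection onto the half-space {y : a.y <= b}
    v = a @ x - b
    return x if v <= 0 else x - v * a / (a @ a)

def relaxed_projection_step(x, proj, lam):
    # underrelaxed orthogonal projection: lam = 1 is the full projection,
    # lam in (0, 1) moves only part of the way toward the set
    return x + lam * (proj(x) - x)

# half a step from (2, 0) toward the half-space {y : y1 <= 0}
step = relaxed_projection_step(np.array([2.0, 0.0]),
                               lambda z: project_halfspace(z, np.array([1.0, 0.0]), 0.0),
                               0.5)
```

The question the paper addresses is what the analogue of this λ-interpolation should be when the projection is a Bregman projection onto a general (non-linear) convex set, where the straight-line interpolation above is no longer the natural operation.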
The Uniform Hardcore Lemma via Approximate Bregman Projections
"... We give a simple, more efficient and uniform proof of the hardcore lemma, a fundamental result in complexity theory with applications in machine learning and cryptography. Our result follows from the connection between boosting algorithms and hardcore set constructions discovered by Klivans and Ser ..."
Abstract

Cited by 13 (2 self)
and the non-uniform hardcore lemma, while matching or improving the previously best-known parameters. The algorithm uses a generalized multiplicative update rule combined with a natural notion of approximate Bregman projection. Bregman projections are widely used in convex optimization and machine learning
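In hardcore-lemma and boosting arguments of this kind, the Bregman projection typically means the KL projection onto a capped simplex (distributions with bounded pointwise density). A sketch under that assumption, with illustrative inputs, not code from the paper:

```python
import numpy as np

def kl_project_capped(w, c, iters=100):
    # KL (Bregman) projection of a positive weight vector w onto the
    # capped simplex {x : sum(x) = 1, 0 <= x_i <= c}, assuming c >= 1/len(w).
    # KKT conditions give the solution the form x_i = min(c, theta * w_i);
    # since theta -> sum_i min(c, theta * w_i) is nondecreasing,
    # theta can be found by bisection.
    lo, hi = 0.0, c / w.min()          # at hi, every coordinate is capped
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if np.minimum(c, mid * w).sum() < 1.0:
            lo = mid
        else:
            hi = mid
    return np.minimum(c, hi * w)

# caps the largest weight and rescales the rest onto the simplex
x = kl_project_capped(np.array([0.7, 0.2, 0.1]), 0.4)
```

The "approximate" projection the abstract refers to relaxes the exactness of this step; the exact version above is only the idealized operation.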
Legendre Functions and the Method of Random Bregman Projections
, 1997
"... this paper, Bregman's method is studied within the powerful framework of Convex Analysis. New insights are obtained and the rich class of "Bregman/Legendre functions" is introduced. Bregman's method still works, if the underlying function is Bregman/Legendre or more generally if ..."
Abstract

Cited by 69 (15 self)
Structured prediction, dual extragradient and Bregman projections
 Journal of Machine Learning Research
, 2006
"... We present a simple and scalable algorithm for maximum-margin estimation of structured output models, including an important class of Markov networks and combinatorial models. We formulate the estimation problem as a convex-concave saddle-point problem that allows us to use simple projection methods ..."
Abstract

Cited by 59 (2 self)
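The extragradient method at the core of this approach takes a predictor step with the current gradient and a corrector step, from the original point, with the gradient evaluated at the predictor. A toy sketch on the bilinear saddle point f(x, y) = xy, not the structured-prediction objective of the paper:

```python
def extragradient_bilinear(x0, y0, eta=0.3, iters=500):
    # Extragradient on the saddle point min_x max_y f(x, y) = x * y.
    # Predictor: step with the gradient at the current point.
    # Corrector: re-step from the original point with the gradient
    # evaluated at the predictor. Plain simultaneous gradient
    # descent/ascent spirals outward on this problem; the extragradient
    # iterates contract to the saddle point (0, 0).
    x, y = x0, y0
    for _ in range(iters):
        xh = x - eta * y                      # predictor (descent in x)
        yh = y + eta * x                      # predictor (ascent in y)
        x, y = x - eta * yh, y + eta * xh     # corrector
    return x, y

x, y = extragradient_bilinear(1.0, 1.0)
```

In the paper's setting the Euclidean steps are replaced by Bregman projections onto the feasible sets of the saddle-point problem; the scalar example above only isolates the predictor/corrector structure.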
2.2 Constrained Minimization with Bregman Projections
"... These lecture notes contain material presented in the Statistical Learning Theory course at UC Berkeley, Spring’08. ..."
Abstract
Dykstra's algorithm with Bregman projections: a convergence proof
 Optimization
, 1998
"... Dykstra's algorithm and the method of cyclic Bregman projections are often employed to solve best approximation and convex feasibility problems, which are fundamental in mathematics and the physical sciences. Censor and Reich very recently suggested a synthesis of these methods, Dykstra's a ..."
Abstract

Cited by 19 (4 self)
Duality for Bregman projections onto translated cones and affine subspaces
 J. Approx. Theory
"... In 2001, Della Pietra, Della Pietra, and Lafferty suggested a dual characterization of the Bregman projection onto linear constraints, which has already been applied by Collins, Schapire, and Singer to boosting algorithms and maximum likelihood logistic regression. The proof provided by Della Pietra ..."
Abstract

Cited by 1 (1 self)
Matrix exponentiated gradient updates for online learning and Bregman projections
 Journal of Machine Learning Research
, 2005
Abstract

Cited by 71 (11 self)
We address the problem of learning a symmetric positive definite matrix. The central issue is to design parameter updates that preserve positive definiteness. Our updates are motivated with the von Neumann divergence. Rather than treating the most general case, we focus on two key applications that exemplify our methods: Online learning with a simple square loss and finding a symmetric positive definite matrix subject to symmetric linear constraints. The updates generalize the Exponentiated Gradient (EG) update and AdaBoost, respectively: the parameter is now a symmetric positive definite matrix of trace one instead of a probability vector (which in this context is a diagonal positive definite matrix with trace one). The generalized updates use matrix logarithms and exponentials to preserve positive definiteness. Most importantly, we show how the analysis of each algorithm generalizes to the nondiagonal case. We apply both new algorithms, called the Matrix Exponentiated Gradient (MEG) update and DefiniteBoost, to learn a kernel matrix from distance measurements.
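The MEG update described in this abstract can be sketched directly: take the matrix logarithm of the current parameter, step against the symmetrized gradient, exponentiate, and renormalize the trace to one. A minimal numpy version, with an illustrative learning rate and gradient that are not from the paper:

```python
import numpy as np

def sym_logm(W):
    # matrix logarithm of a symmetric positive definite matrix
    vals, vecs = np.linalg.eigh(W)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

def sym_expm(S):
    # matrix exponential of a symmetric matrix
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(np.exp(vals)) @ vecs.T

def meg_update(W, G, eta):
    # Matrix Exponentiated Gradient step: move in the matrix-log domain
    # against the symmetrized gradient, then exponentiate and renormalize
    # so the parameter stays symmetric positive definite with trace one.
    S = sym_logm(W) - eta * 0.5 * (G + G.T)
    Wn = sym_expm(S)
    return Wn / np.trace(Wn)

# one illustrative step from the maximally mixed matrix I/2
W1 = meg_update(np.eye(2) / 2, np.diag([1.0, -1.0]), eta=1.0)
```

When W and G are diagonal, this reduces exactly to the EG update on the diagonal entries (a softmax-weighted reweighting), which is the generalization the abstract describes.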
Matrix Exponential Updates for Online Learning and Bregman Projection
Abstract
We address the problem of learning a positive definite matrix from examples. The central issue is to design parameter updates that preserve positive definiteness. We introduce an update based on matrix exponentials which can be used as an online algorithm or for the purpose of finding a positive definite matrix that satisfies linear constraints. We derive this update using the von Neumann divergence and then use this divergence as a measure of progress for proving relative loss bounds. In experiments, we apply our algorithms to learn a kernel matrix from distance measurements.