Results 1 - 3 of 3
Logistic Regression, AdaBoost and Bregman Distances, 2000
"... We give a unified account of boosting and logistic regression in which each learning problem is cast in terms of optimization of Bregman distances. The striking similarity of the two problems in this framework allows us to design and analyze algorithms for both simultaneously, and to easily adapt al ..."
Abstract

Cited by 203 (43 self)
We give a unified account of boosting and logistic regression in which each learning problem is cast in terms of optimization of Bregman distances. The striking similarity of the two problems in this framework allows us to design and analyze algorithms for both simultaneously, and to easily adapt algorithms designed for one problem to the other. For both problems, we give new algorithms and explain their potential advantages over existing methods. These algorithms can be divided into two types based on whether the parameters are iteratively updated sequentially (one at a time) or in parallel (all at once). We also describe a parameterized family of algorithms which interpolates smoothly between these two extremes. For all of the algorithms, we give convergence proofs using a general formalization of the auxiliary-function proof technique. As one of our sequential-update algorithms is equivalent to AdaBoost, this provides the first general proof of convergence for AdaBoost. We show that all of our algorithms generalize easily to the multiclass case, and we contrast the new algorithms with iterative scaling. We conclude with a few experimental results with synthetic data that highlight the behavior of the old and newly proposed algorithms in different settings.
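The sequential-update scheme mentioned in this abstract, in its AdaBoost form, fits one weak hypothesis per round and then exponentially reweights the training examples. A minimal sketch with axis-aligned decision stumps as the weak learner (the stump search and all parameter choices here are illustrative assumptions, not the paper's algorithm):

```python
import numpy as np

def adaboost(X, y, n_rounds=10):
    """Minimal AdaBoost with decision stumps; y must be in {-1, +1}.

    Returns a list of (feature, threshold, polarity, alpha) stumps whose
    weighted vote is the final classifier.
    """
    n, d = X.shape
    w = np.full(n, 1.0 / n)                 # example weights D_t
    stumps = []
    for _ in range(n_rounds):
        best, best_err = None, np.inf
        # exhaustive stump search: feature, threshold, polarity
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for pol in (1, -1):
                    pred = pol * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if err < best_err:
                        best_err, best = err, (j, thr, pol)
        eps = max(best_err, 1e-12)          # guard against zero error
        alpha = 0.5 * np.log((1 - eps) / eps)
        j, thr, pol = best
        pred = pol * np.where(X[:, j] >= thr, 1, -1)
        w *= np.exp(-alpha * y * pred)      # sequential exponential update
        w /= w.sum()
        stumps.append((j, thr, pol, alpha))
    return stumps

def predict(stumps, X):
    score = np.zeros(len(X))
    for j, thr, pol, alpha in stumps:
        score += alpha * pol * np.where(X[:, j] >= thr, 1, -1)
    return np.sign(score)
```

Each round changes one coordinate of the combined hypothesis at a time; the parallel variants in the paper instead adjust all coordinates per iteration.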
Polynomially Bounded Minimization Problems That Are Hard To Approximate, 1994
"... Min PB is the class of minimization problems whose objective functions are bounded by a polynomial in the size of the input. We show that there exist several problems that are Min PBcomplete with respect to an approximation preserving reduction. These problems are very hard to approximate; in polyn ..."
Abstract

Cited by 24 (5 self)
Min PB is the class of minimization problems whose objective functions are bounded by a polynomial in the size of the input. We show that there exist several problems that are Min PB-complete with respect to an approximation preserving reduction. These problems are very hard to approximate; in polynomial time they cannot be approximated within n^ε for some ε > 0, where n is the size of the input, provided that P ≠ NP. In particular, the problem of finding the minimum independent dominating set in a graph, the problem of satisfying a 3-SAT formula setting the least number of variables to one, and the minimum bounded 0-1 programming problem are shown to be Min PB-complete. We also present a new type of approximation preserving reduction that is designed for problems whose approximability is expressed as a function of some size parameter. Using this reduction we obtain good lower bounds on the approximability of the treated problems.
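To make one of the problems named in this abstract concrete: a minimum independent dominating set is a vertex set that is pairwise non-adjacent yet dominates every vertex. A brute-force sketch for small graphs (this exponential search only illustrates the problem definition; it says nothing about the hardness result):

```python
from itertools import combinations

def min_independent_dominating_set(adj):
    """Smallest set S that is independent (no edge inside S) and
    dominating (every vertex is in S or adjacent to S).

    adj: list of sets, adj[u] = neighbors of vertex u.
    Brute force over subsets by increasing size, so exponential time.
    """
    n = len(adj)
    for k in range(1, n + 1):
        for S in combinations(range(n), k):
            s = set(S)
            independent = all(v not in adj[u]
                              for u in s for v in s if u != v)
            dominating = all(u in s or s & adj[u] for u in range(n))
            if independent and dominating:
                return s
    return set()
```

The inapproximability result says that, unless P = NP, no polynomial-time algorithm can get within a factor n^ε of this optimum.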
On the Complexity of Consistency Problems for Neurons with Binary Weights, 1994
"... We inquire into the complexity of training a neuron with binary weights when the training examples are Boolean and required to have bounded coincidence and heaviness. Coincidence of an example set is defined as the maximum inner product of two elements, heaviness of an example set is the maximum Ham ..."
Abstract

Cited by 3 (1 self)
We inquire into the complexity of training a neuron with binary weights when the training examples are Boolean and required to have bounded coincidence and heaviness. Coincidence of an example set is defined as the maximum inner product of two elements; heaviness of an example set is the maximum Hamming weight of an element. We use both as parameters to define classes of restricted consistency problems and ask for which values they are NP-complete or solvable in polynomial time. The consistency problem is shown to be NP-complete when the example sets are allowed to have coincidence at least 1 and heaviness at least 4. On the other hand, we give linear-time algorithms for solving consistency problems with coincidence 0 or heaviness at most 3. Moreover, these results remain valid when the threshold of the neuron is bounded by a constant of value at least 2, whereas consistency can be decided in linear time for neurons with threshold at most 1. We also study maximum consistency problems an...
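The consistency problem studied here asks whether any binary weight vector makes the neuron agree with every labeled example. A brute-force sketch of the decision problem for a threshold neuron (exponential in the dimension, so purely illustrative; the input encoding is an assumption, not the paper's):

```python
from itertools import product

def consistent_neuron(examples, d, theta):
    """Search for w in {0,1}^d such that the neuron fires
    (w . x >= theta) exactly on the positively labeled examples.

    examples: list of (x, label), x a 0/1 tuple of length d,
    label in {0, 1}. Returns a consistent w, or None if none exists.
    """
    for w in product((0, 1), repeat=d):
        ok = True
        for x, label in examples:
            fires = sum(wi * xi for wi, xi in zip(w, x)) >= theta
            if fires != bool(label):
                ok = False
                break
        if ok:
            return w
    return None
```

The paper's parameters map onto this input directly: coincidence bounds the inner product between any two example vectors x, and heaviness bounds each x's Hamming weight; the hardness/tractability boundary depends on those bounds and on theta.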