Results 1–10 of 195
Boosting in the limit: Maximizing the margin of learned ensembles
In Proceedings of the Fifteenth National Conference on Artificial Intelligence, 1998
Abstract

Cited by 122 (0 self)
The "minimum margin" of an ensemble classifier on a given training set is, roughly speaking, the smallest vote it gives to any correct training label. Recent work has shown that the Adaboost algorithm is particularly effective at producing ensembles with large minimum margins, and theory suggests that this may account for its success at reducing generalization error. We note, however, that the problem of finding good margins is closely related to linear programming, and we use this connection to derive and test new "LP-boosting" algorithms that achieve better minimum margins than Adaboost. However, these algorithms do not always yield better generalization performance. In fact, more often the opposite is true. We report on a series of controlled experiments which show that no simple version of the minimum-margin story can be complete. We conclude that the crucial question as to why boosting works so well in practice, and how to further improve upon it, remains mostly open. Some of our ...
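The quantity in question is easy to compute directly. A minimal numpy sketch, using a made-up voting ensemble (the hypotheses, weights, and labels are hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical voting ensemble: each row of H holds one weak hypothesis'
# +1/-1 predictions on four training points; w are nonnegative weights.
H = np.array([[ 1,  1, -1,  1],
              [ 1, -1,  1,  1],
              [-1,  1,  1,  1]])
w = np.array([0.5, 0.3, 0.2])
y = np.array([1, 1, 1, 1])          # true labels

# Normalized margin of each example: the signed weighted vote for its
# correct label; the minimum over examples is the "minimum margin".
margins = y * (w @ H) / w.sum()
min_margin = margins.min()          # example 2 is an exact tie: margin ~ 0
```

A misclassified example would produce a negative margin, which is why maximizing the minimum margin pushes every training point to the correct side of the vote.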
Optimal Design of a CMOS Op-Amp via Geometric Programming
Abstract

Cited by 85 (9 self)
We describe a new method for determining component values and transistor dimensions for CMOS operational amplifiers (op-amps). We observe that a wide variety of design objectives and constraints have a special form, i.e., they are posynomial functions of the design variables. As a result the amplifier design problem can be expressed as a special form of optimization problem called geometric programming, for which very efficient global optimization methods have been developed. As a consequence we can efficiently determine globally optimal amplifier designs, or globally optimal tradeoffs among competing performance measures such as power, open-loop gain, and bandwidth. Our method therefore yields completely automated synthesis of (globally) optimal CMOS amplifiers, directly from specifications. In this paper we apply this method to a specific, widely used operational amplifier architecture, showing in detail how to formulate the design problem as a geometric program. We compute globally optimal tradeoff curves relating performance measures such as power dissipation, unity-gain bandwidth, and open-loop gain. We show how the method can be used to synthesize robust designs, i.e., designs guaranteed to meet the specifications for a variety of process conditions and parameters.
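The convexity that geometric programming exploits can be seen on a one-variable toy posynomial (not the op-amp problem): under the log change of variables x = exp(u), a posynomial becomes convex, so any local minimizer is global. A sketch assuming scipy is available:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy posynomial: f(x) = x + 1/x over x > 0. Under x = exp(u) it becomes
# g(u) = exp(u) + exp(-u), which is convex, so the local minimum found
# numerically is the global one -- the property GP-based sizing relies on.
g = lambda u: np.exp(u) + np.exp(-u)
res = minimize_scalar(g)
x_opt = float(np.exp(res.x))        # optimum of f: x = 1, f(x) = 2
```

Real GP solvers handle many variables and posynomial constraints the same way: every term stays log-convex after the substitution.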
Arbitrary-Norm Separating Plane
Operations Research Letters, 1997
Abstract

Cited by 53 (13 self)
A plane separating two point sets in n-dimensional real space is constructed such that it minimizes the sum of arbitrary-norm distances of misclassified points to the plane. In contrast to previous approaches that used surrogates for distance minimization, the present work is based on a precise norm-dependent explicit closed form for the projection of a point on a plane. This projection is used to formulate the separating-plane problem as a minimization of a convex function on a unit sphere in a norm dual to that of the arbitrary norm used. For the 1-norm, the problem can be solved in polynomial time by solving 2n linear programs or by solving a bilinear program. For a general p-norm, the minimization problem can be transformed via an exact penalty formulation to minimizing the sum of a convex function and a bilinear function on a convex set. For the one and infinity norms, a finite successive linearization algorithm can be used for solving the exact penalty formulation.
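The norm-dependent projection behind this formulation gives, for the p-norm distance of a point x to the plane {z : w·z = γ}, the closed form |w·x − γ| / ‖w‖_q with q dual to p. A small sketch of the resulting objective (the data, plane, and function name are invented for illustration):

```python
import numpy as np

# p-norm distance of x to the plane {z : w @ z = gamma} is
# |w @ x - gamma| / ||w||_q with 1/p + 1/q = 1 (p = 1 gives q = inf).
def misclassification_error(A, B, w, gamma, p=2.0):
    q = p / (p - 1.0) if p > 1.0 else np.inf
    wq = np.linalg.norm(w, q)
    # A-points should satisfy w @ x > gamma, B-points w @ x < gamma;
    # sum the plane distances of the violators only.
    viol_A = np.maximum(gamma - A @ w, 0.0)
    viol_B = np.maximum(B @ w - gamma, 0.0)
    return (viol_A.sum() + viol_B.sum()) / wq

A = np.array([[2.0, 0.0]])                  # correctly classified
B = np.array([[1.5, 0.0], [0.0, 0.0]])      # first point misclassified
err = misclassification_error(A, B, np.array([1.0, 0.0]), 1.0)
```

The paper's algorithms minimize this kind of objective over (w, γ) with w on the dual-norm unit sphere; the sketch only evaluates it for a fixed plane.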
Optimization with stochastic dominance constraints
 SIAM Journal on Optimization
Abstract

Cited by 52 (6 self)
We consider the problem of constructing a portfolio of finitely many assets whose returns are described by a discrete joint distribution. We propose a new portfolio optimization model involving stochastic dominance constraints on the portfolio return. We develop optimality and duality theory for these models. We construct equivalent optimization models with utility functions. Numerical illustration is provided.
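For discrete distributions, a second-order stochastic dominance constraint is finitely checkable: X dominates Y iff E[(t − X)+] ≤ E[(t − Y)+] for all t, and both expectations are piecewise-linear convex in t with kinks only at support points. A sketch with hypothetical return distributions (the function name is ours, not the paper's):

```python
import numpy as np

# X dominates Y in second order iff E[(t - X)+] <= E[(t - Y)+] for all t;
# checking t over the union of the two supports suffices.
def ssd_dominates(x, px, y, py):
    ts = np.concatenate([x, y])
    ex = np.array([np.sum(px * np.maximum(t - x, 0.0)) for t in ts])
    ey = np.array([np.sum(py * np.maximum(t - y, 0.0)) for t in ts])
    return bool(np.all(ex <= ey + 1e-12))

# Hypothetical returns: same mean, but X is less dispersed than Y.
x, px = np.array([1.0, 3.0]), np.array([0.5, 0.5])
y, py = np.array([0.0, 4.0]), np.array([0.5, 0.5])
```

In the portfolio model, each such inequality becomes a linear constraint on the portfolio weights, which is what makes the optimization tractable.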
FIR Filter Design via Spectral Factorization and Convex Optimization
1997
Abstract

Cited by 46 (6 self)
We consider the design of finite impulse response (FIR) filters subject to upper and lower bounds on the frequency response magnitude. The associated optimization problems, with the filter coefficients as the variables and the frequency response bounds as constraints, are in general nonconvex. Using a change of variables and spectral factorization, we can pose such problems as linear or nonlinear convex optimization problems. As a result we can solve them efficiently (and globally) by recently developed interior-point methods. We describe applications to filter and equalizer design, and the related problem of antenna array weight design.
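The change of variables referred to here can be verified numerically: with r the autocorrelation of the filter coefficients h, the magnitude-squared response |H(ω)|² = r₀ + 2 Σ_k r_k cos(kω) is linear in r, so magnitude bounds become linear constraints. A sketch assuming numpy (the filter is random, chosen only to check the identity):

```python
import numpy as np

# With r_k = sum_n h[n] h[n+k] (autocorrelation of h), the magnitude-
# squared response |H(w)|^2 = r_0 + 2 * sum_{k>=1} r_k cos(k w) is
# *linear* in r; spectral factorization recovers h from a feasible r.
rng = np.random.default_rng(0)
h = rng.standard_normal(8)                       # arbitrary FIR filter
r = np.correlate(h, h, mode="full")[len(h)-1:]   # r_0 .. r_{n-1}
w = np.linspace(0.0, np.pi, 64)
mag2_direct = np.abs(np.exp(-1j * np.outer(w, np.arange(len(h)))) @ h)**2
mag2_linear = r[0] + 2.0 * np.cos(np.outer(w, np.arange(1, len(h)))) @ r[1:]
assert np.allclose(mag2_direct, mag2_linear)
```

The convex design problem optimizes over r subject to such linear bounds; the spectral factorization step, which the sketch omits, then extracts the actual coefficients h.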
Linear Programs for Automatic Accuracy Control in Regression
In Ninth International Conference on Artificial Neural Networks, Conference Publications No. 470, 1999
Abstract

Cited by 42 (5 self)
We have recently proposed a new approach to control the number of basis functions and the accuracy in Support Vector Machines. The latter is transferred to a linear programming setting, which inherently enforces sparseness of the solution. The algorithm computes a nonlinear estimate in terms of kernel functions and an ε > 0 with the property that at most a fraction of the training set has an error exceeding ε. The algorithm is robust to local perturbations of these points' target values. We give an explicit formulation of the optimization equations needed to solve the linear program and point out which modifications of the standard optimization setting are necessary to take advantage of the particular structure of the equations in the regression case.
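A much-simplified, non-kernel sketch of the kind of linear program involved, assuming scipy (the data are invented, ε is held fixed, and only the slope is regularized; the paper's algorithm additionally selects ε automatically and works in a kernel expansion):

```python
import numpy as np
from scipy.optimize import linprog

# Linear epsilon-insensitive regression as an LP:
#   minimize  |a| + C * sum(xi)
#   s.t.      |y_i - (a*x_i + b)| <= eps + xi_i,   xi_i >= 0,
# with a = ap - am and b = bp - bm (all parts nonnegative).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x
m, eps, C = len(x), 0.1, 10.0

c = np.concatenate([[1.0, 1.0, 0.0, 0.0], C * np.ones(m)])
I, ones = np.eye(m), np.ones(m)
# Row blocks:  y - a*x - b <= eps + xi   and   a*x + b - y <= eps + xi
A_ub = np.vstack([np.column_stack([-x,  x, -ones,  ones, -I]),
                  np.column_stack([ x, -x,  ones, -ones, -I])])
b_ub = np.concatenate([eps - y, eps + y])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (4 + m))
a_val = res.x[0] - res.x[1]   # flattest slope whose eps-tube covers the data
```

The 1-norm objective on the coefficients is what "inherently enforces sparseness": in the kernel version many expansion coefficients come out exactly zero.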