Improved Boosting Algorithms Using Confidence-rated Predictions
Machine Learning, 1999
Cited by 698 (26 self)
Abstract:
We describe several improvements to Freund and Schapire's AdaBoost boosting algorithm, particularly in a setting in which hypotheses may assign confidences to each of their predictions. We give a simplified analysis of AdaBoost in this setting, and we show how this analysis can be used to find improved parameter settings as well as a refined criterion for training weak hypotheses. We give a specific method for assigning confidences to the predictions of decision trees, a method closely related to one used by Quinlan. This method also suggests a technique for growing decision trees which turns out to be identical to one proposed by Kearns and Mansour. We focus next on how to apply the new boosting algorithms to multiclass classification problems, particularly to the multi-label case in which each example may belong to more than one class. We give two boosting methods for this problem, plus a third method based on output coding. One of these leads to a new method for handling the single-label case which is simpler but as effective as techniques suggested by Freund and Schapire. Finally, we give some experimental results comparing a few of the algorithms discussed in this paper.
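The confidence-rated weak hypotheses described here can be illustrated with a small sketch: a domain-partitioning stump whose two blocks output the smoothed confidence 0.5 * ln((W+ + eps)/(W- + eps)), boosted with the multiplicative weight update D(i) <- D(i) * exp(-y_i h(x_i)) / Z. This is a minimal illustration under my own choices (1-D data, function names, smoothing constant), not the paper's implementation:

```python
import math

def stump_confidences(X, y, D, theta, eps=1e-6):
    # Weight of each (block, label) cell; blocks are x < theta ('L') and x >= theta ('R')
    W = {('L', 1): 0.0, ('L', -1): 0.0, ('R', 1): 0.0, ('R', -1): 0.0}
    for x, yi, d in zip(X, y, D):
        W[('L' if x < theta else 'R', yi)] += d
    # smoothed confidence per block: 0.5 * ln((W+ + eps) / (W- + eps))
    cL = 0.5 * math.log((W[('L', 1)] + eps) / (W[('L', -1)] + eps))
    cR = 0.5 * math.log((W[('R', 1)] + eps) / (W[('R', -1)] + eps))
    return cL, cR

def boost(X, y, rounds=3):
    n = len(X)
    D = [1.0 / n] * n                      # example weights
    H = []                                 # stumps as (theta, cL, cR)
    xs = sorted(set(X))
    cands = [(a + b) / 2 for a, b in zip(xs, xs[1:])]  # candidate thresholds
    for _ in range(rounds):
        best = None
        for theta in cands:
            cL, cR = stump_confidences(X, y, D, theta)
            h = [cL if x < theta else cR for x in X]
            Z = sum(d * math.exp(-yi * hi) for d, yi, hi in zip(D, y, h))
            if best is None or Z < best[0]:   # greedily minimize the normalizer Z
                best = (Z, theta, cL, cR, h)
        Z, theta, cL, cR, h = best
        H.append((theta, cL, cR))
        D = [d * math.exp(-yi * hi) / Z for d, yi, hi in zip(D, y, h)]
    return H

def predict(H, x):
    # combined hypothesis: sign of the summed confidences
    return 1 if sum(cL if x < t else cR for t, cL, cR in H) >= 0 else -1
```

Minimizing Z per round is exactly the refined training criterion the analysis suggests: the training error is bounded by the product of the Z values.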
Robust Linear Programming Discrimination of Two Linearly Inseparable Sets
, 1992
Cited by 211 (33 self)
Abstract:
INTRODUCTION. We consider the two point sets A and B in the n-dimensional real space R^n, represented by the m × n matrix A and the k × n matrix B respectively. Our principal objective here is to formulate a single linear program with the following properties: (i) if the convex hulls of A and B are disjoint, a strictly separating plane is obtained; (ii) if the convex hulls of A and B intersect, a plane is obtained that minimizes some measure of the misclassified points, for all possible cases; (iii) no extraneous constraints are imposed on the linear program that rule out any specific case from consideration. Most linear programming formulations [6, 5, 12, 4] have property (i)
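The single linear program with these properties (the RLP of Bennett and Mangasarian) minimizes the averaged constraint violations (1/m)*sum(y) + (1/k)*sum(z) of the margin conditions Aw >= e*gamma + e - y and Bw <= e*gamma - e + z. A sketch of that formulation, assuming scipy's linprog is available; the variable layout and helper name are mine:

```python
import numpy as np
from scipy.optimize import linprog

def rlp_separate(A, B):
    # Variables: [w (n), gamma, y (m slacks), z (k slacks)]
    # minimize (1/m) sum(y) + (1/k) sum(z)
    # s.t.  A w - gamma*e + y >= e    (class-A side of the plane)
    #       -B w + gamma*e + z >= e   (class-B side)
    #       y, z >= 0
    m, n = A.shape
    k = B.shape[0]
    c = np.concatenate([np.zeros(n + 1), np.full(m, 1.0 / m), np.full(k, 1.0 / k)])
    top = np.hstack([-A, np.ones((m, 1)), -np.eye(m), np.zeros((m, k))])
    bot = np.hstack([B, -np.ones((k, 1)), np.zeros((k, m)), -np.eye(k)])
    res = linprog(c, A_ub=np.vstack([top, bot]), b_ub=-np.ones(m + k),
                  bounds=[(None, None)] * (n + 1) + [(0, None)] * (m + k))
    return res.x[:n], res.x[n]   # classify a point x by sign(x.w - gamma)
```

When the convex hulls are disjoint, the optimum has zero slack and the plane x.w = gamma strictly separates the two sets.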
On the Learnability and Design of Output Codes for Multiclass Problems
In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, 2000
Cited by 161 (5 self)
Abstract:
Output coding is a general framework for solving multiclass categorization problems. Previous research on output codes has focused on building multiclass machines given predefined output codes. In this paper we discuss for the first time the problem of designing output codes for multiclass problems. For the design problem of discrete codes, which have been used extensively in previous works, we present mostly negative results. We then introduce the notion of continuous codes and cast the design problem of continuous codes as a constrained optimization problem. We describe three optimization problems corresponding to three different norms of the code matrix. Interestingly, for the l_2 norm our formalism results in a quadratic program whose dual does not depend on the length of the code. A special case of our formalism provides a multiclass scheme for building support vector machines which can be solved efficiently. We give a time- and space-efficient algorithm for solving the quadratic program. We describe preliminary experiments with synthetic data showing that our algorithm is often two orders of magnitude faster than standard quadratic programming packages. We conclude with the generalization properties of the algorithm. Keywords: multiclass categorization, output coding, SVM
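Decoding a discrete output code, the scheme this paper takes as its starting point, labels a point by the code row nearest in Hamming distance to the vector of binary predictions. A minimal sketch (the code matrix and names below are mine):

```python
def ecoc_decode(code_matrix, bits):
    # code_matrix: k rows over {-1, +1}, one per class; bits: the l binary
    # learners' outputs. Label = index of the row nearest in Hamming distance.
    def hamming(row):
        return sum(b != r for b, r in zip(bits, row))
    return min(range(len(code_matrix)), key=lambda c: hamming(code_matrix[c]))
```

The continuous codes introduced in the paper replace this discrete nearest-row rule with an inner-product decoding over a real-valued code matrix.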
Representations of Quasi-Newton Matrices and Their Use in Limited Memory Methods
, 1994
Cited by 103 (8 self)
Abstract:
We derive compact representations of BFGS and symmetric rank-one matrices for optimization. These representations allow us to efficiently implement limited memory methods for large constrained optimization problems. In particular, we discuss how to compute projections of limited memory matrices onto subspaces. We also present a compact representation of the matrices generated by Broyden's update for solving systems of nonlinear equations. Key words: quasi-Newton method, constrained optimization, limited memory method, large-scale optimization. Abbreviated title: Representation of quasi-Newton matrices. 1. Introduction. Limited memory quasi-Newton methods are known to be effective techniques for solving certain classes of large-scale unconstrained optimization problems (Buckley and Le Nir (1983), Liu and Nocedal (1989), Gilbert and Lemaréchal (1989)). They make simple approximations of Hessian matrices, which are often good enough to provide a fast rate of linear convergence, and re...
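The compact BFGS representation in question expresses k updates in closed form: B_k = B0 - [B0*S, Y] M^{-1} [B0*S, Y]^T, where M is built from S^T B0 S, the strictly lower-triangular part L of S^T Y, and D = diag(s_i^T y_i). A sketch that checks this identity numerically against the usual one-pair-at-a-time BFGS recurrence (test data and names are mine):

```python
import numpy as np

def bfgs_loop(B0, S, Y):
    # Standard BFGS: one rank-two update per correction pair (s_i, y_i)
    B = B0.copy()
    for s, y in zip(S.T, Y.T):
        Bs = B @ s
        B = B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (y @ s)
    return B

def bfgs_compact(B0, S, Y):
    # Compact form: B_k = B0 - [B0 S, Y] M^{-1} [B0 S, Y]^T with
    # M = [[S^T B0 S, L], [L^T, -D]], L = strict lower triangle of S^T Y,
    # D = diag(s_i^T y_i)
    StY = S.T @ Y
    L = np.tril(StY, -1)
    D = np.diag(np.diag(StY))
    U = np.hstack([B0 @ S, Y])
    M = np.block([[S.T @ B0 @ S, L], [L.T, -D]])
    return B0 - U @ np.linalg.solve(M, U.T)
```

The compact form is what makes limited memory practical: products and projections with B_k cost O(nk) storage instead of the dense n × n matrix.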
A trust region method based on interior point techniques for nonlinear programming
Mathematical Programming, 1996
Cited by 103 (17 self)
Abstract:
An algorithm for minimizing a nonlinear function subject to nonlinear inequality constraints is described. It applies sequential quadratic programming techniques to a sequence of barrier problems, and uses trust regions to ensure the robustness of the iteration and to allow the direct use of second order derivatives. This framework permits primal and primal-dual steps, but the paper focuses on the primal version of the new algorithm. An analysis of the convergence properties of this method is presented. Key words: constrained optimization, interior point method, large-scale optimization, nonlinear programming, primal method, primal-dual method, SQP iteration, barrier method, trust region method.
Everything Old Is New Again: A Fresh Look at Historical Approaches in Machine Learning
PhD thesis, MIT, 2002
Cited by 88 (6 self)
An interior point algorithm for large scale nonlinear programming
SIAM Journal on Optimization, 1999
Cited by 74 (17 self)
Abstract:
The design and implementation of a new algorithm for solving large nonlinear programming problems is described. It follows a barrier approach that employs sequential quadratic programming and trust regions to solve the subproblems occurring in the iteration. Both primal and primal-dual versions of the algorithm are developed, and their performance is illustrated in a set of numerical tests. Key words: constrained optimization, interior point method, large-scale optimization, nonlinear programming, primal method, primal-dual method, successive quadratic programming, trust region method.
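The barrier approach this and the preceding paper build on replaces min f(x) s.t. c(x) >= 0 with a sequence of unconstrained problems min f(x) - mu * sum(log c_i(x)) and drives mu toward 0. A deliberately minimal 1-D Newton sketch, without the SQP and trust-region machinery of the paper (the toy problem and all parameters are mine):

```python
def barrier_solve(fp, fpp, x=1.0, mu=1.0, shrink=0.1, tol=1e-10):
    # Minimize a convex f(x) subject to x > 0 via phi_mu(x) = f(x) - mu*log(x):
    # Newton steps on phi_mu for a decreasing sequence of barrier parameters.
    while mu > 1e-12:
        for _ in range(50):
            g = fp(x) - mu / x           # phi_mu'
            h = fpp(x) + mu / x ** 2     # phi_mu'' (> 0 for convex f)
            step = g / h
            while x - step <= 0:         # crude damping: stay strictly feasible
                step *= 0.5
            x -= step
            if abs(g) < tol:
                break
        mu *= shrink
    return x

# minimize (x + 1)^2 subject to x >= 0; the constrained minimizer is x* = 0
xstar = barrier_solve(lambda x: 2 * (x + 1), lambda x: 2.0)
```

As mu shrinks, the barrier minimizers trace the central path toward the constrained solution; the papers' contribution is solving each barrier subproblem robustly with SQP steps inside a trust region.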
Constrained GA optimization
In Proc. of 5th Int'l Conf. on Genetic Algorithms, 1993
Cited by 50 (7 self)
Abstract:
We present a general method of handling constraints in genetic optimization, based on the Behavioural Memory paradigm. Instead of requiring the problem-dependent design of either repair operators (projection onto the feasible region) or a penalty function (a weighted sum of the constraint violations and the objective function), we sample the feasible region by evolving from an initially random population, successively applying a series of different fitness functions which embody constraint satisfaction. The final step is the optimization of the objective function restricted to the feasible region. The success of the whole process is highly dependent on the genetic diversity maintained during the first steps, ensuring a uniform sampling of the feasible region. This method succeeded on some truss structure optimization problems where the other genetic techniques for handling the constraints failed to give good results. Moreover, in some domains, as in automatic generation of software test dat...
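The two-phase scheme (first evolve for constraint satisfaction, then optimize the objective restricted to the feasible region) can be sketched as follows. The toy problem (maximize x0 + x1 inside the unit disc), the GA parameters, and the simple mutate-and-truncate loop are my own choices, not the paper's operators:

```python
import random

def evolve(pop, fitness, rng, gens=60, sigma=0.1):
    # truncation selection: keep the best half, refill with mutated copies
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        half = pop[: len(pop) // 2]
        pop = half + [
            tuple(min(1.0, max(0.0, g + rng.gauss(0, sigma))) for g in p)
            for p in half
        ]
    return pop

rng = random.Random(0)
pop = [(rng.random(), rng.random()) for _ in range(40)]

# phase 1 (behavioural memory): fitness rewards constraint satisfaction only
constraint = lambda p: p[0] ** 2 + p[1] ** 2 - 1.0       # feasible iff <= 0
pop = evolve(pop, lambda p: -max(0.0, constraint(p)), rng)

# phase 2: objective on the feasible region; infeasible individuals rank last
objective = lambda p: p[0] + p[1]
fit2 = lambda p: objective(p) if constraint(p) <= 0 else float("-inf")
pop = evolve(pop, fit2, rng)

best = max(pop, key=fit2)
```

The unconstrained optimum of x0 + x1 on the square is the corner (1, 1), which is infeasible; the staged scheme instead converges toward the arc of the disc, where the constrained optimum is sqrt(2).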
Fitting Scattered Data on Sphere-Like Surfaces Using Spherical Splines
 J. Comput. Appl. Math
Cited by 48 (11 self)
Abstract:
Spaces of polynomial splines defined on planar triangulations are very useful tools for fitting scattered data in the plane. Recently, [4, 5], using homogeneous polynomials, we have developed analogous spline spaces defined on triangulations on the sphere and on sphere-like surfaces. Using these spaces, it is possible to construct analogs of many of the classical interpolation and fitting methods. Here we examine some of the more interesting ones in detail. For interpolation, we discuss macro-element methods and minimal energy splines, and for fitting, we consider discrete least squares and penalized least squares. 1. Introduction. Let S be the unit sphere or a sphere-like surface (see Sect. 2 below) in R^3. In addition, suppose that we are given a set of scattered points located on S, along with real numbers associated with each of these points. The problem of interest in this paper is to find a function defined on S which either interpolates or approximates these data. This pr...
Local Optimization-Based Simplicial Mesh Untangling and Improvement
 International Journal of Numerical Methods in Engineering
Cited by 47 (8 self)
Abstract:
We present an optimization-based approach for mesh untangling that maximizes the minimum area or volume of simplicial elements in a local submesh. These functions are linear with respect to the free vertex position; thus the problem can be formulated as a linear program that is solved by using the computationally inexpensive simplex method. We prove that the function level sets are convex regardless of the position of the free vertex, and hence the local subproblem is guaranteed to converge. Maximizing the minimum area or volume of mesh elements, although well-suited for mesh untangling, is not ideal for mesh improvement, and its use often results in poor quality meshes. We therefore combine the mesh untangling technique with optimization-based mesh improvement techniques and expand previous results to show that a commonly used two-dimensional mesh quality criterion can be guaranteed to converge when starting with a valid mesh. Typical results showing the effectiveness of the combine...
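For a single free 2-D vertex v, the signed area of each incident triangle (a, b, v) is 0.5 * cross(b - a, v - a), which is linear in v, so maximizing the minimum area is the linear program max t s.t. t <= area_i(v). A sketch assuming scipy's linprog is available; the four-triangle fan and the names are mine:

```python
import numpy as np
from scipy.optimize import linprog

def untangle_vertex(edges):
    # maximize t  s.t.  t <= signed_area(a, b, v) for every fixed edge (a, b);
    # signed_area = 0.5 * ((bx-ax)*(vy-ay) - (by-ay)*(vx-ax)) is linear in v.
    rows, rhs = [], []
    for (ax, ay), (bx, by) in edges:
        rows.append([0.5 * (by - ay), -0.5 * (bx - ax), 1.0])  # coeffs of vx, vy, t
        rhs.append(0.5 * ((by - ay) * ax - (bx - ax) * ay))
    res = linprog([0.0, 0.0, -1.0], A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(None, None)] * 3)
    vx, vy, t = res.x
    return (vx, vy), t

# fan of 4 triangles around one free vertex; the fixed ring is the unit square
corners = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
edges = [(corners[i], corners[(i + 1) % 4]) for i in range(4)]
v, min_area = untangle_vertex(edges)
```

A positive optimal t certifies an untangled (valid) local submesh; for this symmetric fan the optimum places the vertex at the square's center with minimum area 1/4.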