Results 1–10 of 43
Extreme Learning Machine for Regression and Multiclass Classification
Abstract

Cited by 62 (5 self)
Abstract—Due to the simplicity of their implementations, least square support vector machine (LSSVM) and proximal support vector machine (PSVM) have been widely used in binary classification applications. The conventional LSSVM and PSVM cannot be used in regression and multiclass classification applications directly, although variants of LSSVM and PSVM have been proposed to handle such cases. This paper shows that both LSSVM and PSVM can be simplified further and a unified learning framework of LSSVM, PSVM, and other regularization algorithms referred to as extreme learning machine (ELM) can be built. ELM works for the “generalized” single-hidden-layer feedforward networks (SLFNs), but the hidden layer (also called the feature mapping) in ELM need not be tuned. Such SLFNs include but are not limited to SVM, polynomial networks, and the conventional feedforward neural networks. This paper shows the following: 1) ELM provides a unified learning platform with a widespread type of feature mappings and can be applied in regression and multiclass classification applications directly; 2) from the optimization method point of view, ELM has milder optimization constraints compared to LSSVM and PSVM; 3) in theory, compared to ELM, LSSVM and PSVM achieve suboptimal solutions and require higher computational complexity; and 4) in theory, ELM can approximate any target continuous function and classify any disjoint regions. As verified by the simulation results, ELM tends to have better scalability and achieve similar (for regression and binary class cases) or much better (for multiclass cases) generalization performance at much faster learning speed (up to thousands of times) than traditional SVM and LSSVM. Index Terms—Extreme learning machine (ELM), feature mapping, kernel, least square support vector machine (LSSVM), proximal support vector machine (PSVM), regularization network.
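The recipe summarized in this abstract — a random, untuned hidden layer followed by a closed-form regularized least-squares solve for the output weights — is short enough to sketch. The function names, the tanh feature map, and all parameter values below are illustrative choices, not from the paper:

```python
import numpy as np

def elm_fit(X, T, n_hidden=50, C=1e3, rng=None):
    """Train an ELM regressor: hidden-layer weights are random and never
    tuned; only the output weights beta are solved in closed form."""
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer feature map
    # Regularized least squares: beta = (I/C + H^T H)^{-1} H^T T
    beta = np.linalg.solve(np.eye(n_hidden) / C + H.T @ H, H.T @ T)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Usage: fit a noisy sine curve.
rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
T = np.sin(X).ravel() + 0.05 * rng.standard_normal(200)
W, b, beta = elm_fit(X, T, n_hidden=40, C=1e4, rng=0)
mse = np.mean((elm_predict(X, W, b, beta) - T) ** 2)
```

The design point the abstract stresses is visible here: training reduces to one linear solve, which is what makes the reported speedups over iteratively trained SVMs plausible.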
An Algorithm for Nonlinear Optimization Using Linear Programming and Equality Constrained Subproblems
, 2003
Abstract

Cited by 48 (14 self)
This paper describes an active-set algorithm for large-scale nonlinear programming based on the successive linear programming method proposed by Fletcher and Sainz de la Maza [10]. The step computation is performed in two stages. In the first stage a linear program is solved to estimate the active set at the solution. The linear program is obtained by making a linear approximation to the ℓ1 penalty function inside a trust region. In the second stage, an equality constrained quadratic program (EQP) is solved involving only those constraints that are active at the solution of the linear program.
Aircraft turbofan engine health estimation using constrained Kalman filtering
 In ASME Turbo Expo
, 2003
Abstract

Cited by 29 (7 self)
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops an analytic method of incorporating state variable inequality constraints in the Kalman filter. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation results obtained from application to a turbofan engine model. This model contains 16 state variables, 12 measurements, and 8 component health parameters. It is shown that the new algorithms provide improved performance in this example over unconstrained Kalman filtering.
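The building block behind this kind of constrained filter is the analytic projection of the unconstrained estimate onto the constraint set; for an equality constraint D x = d, the covariance-weighted projection has a closed form, and the paper's QP handles the inequality case on top of it. The numbers below are toy values, not from the turbofan model:

```python
import numpy as np

def project_estimate(x_hat, P, D, d):
    """Project a Kalman estimate x_hat onto the constraint D x = d,
    minimizing (x - x_hat)^T P^{-1} (x - x_hat), i.e. weighting by the
    inverse of the estimation-error covariance P."""
    S = D @ P @ D.T
    return x_hat - P @ D.T @ np.linalg.solve(S, D @ x_hat - d)

# Usage (illustrative numbers): enforce x1 + x2 = 1 on a raw estimate.
x_hat = np.array([0.7, 0.6])
P = np.diag([0.04, 0.01])
D = np.array([[1.0, 1.0]])
d = np.array([1.0])
x_proj = project_estimate(x_hat, P, D, d)  # correction is larger on the
                                           # more uncertain component x1
```

Note how the covariance weighting pushes most of the correction onto the component with the larger variance, which is exactly why the constrained estimate is more accurate than simple renormalization.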
Kalman filtering with inequality constraints for turbofan engine health estimation
 IEE Proc. on Control Theory and Applications
, 2006
Abstract

Cited by 26 (5 self)
Kalman filters are often used to estimate the state variables of a dynamic system. However, in the application of Kalman filters some known signal information is often either ignored or dealt with heuristically. For instance, state variable constraints (which may be based on physical considerations) are often neglected because they do not fit easily into the structure of the Kalman filter. This paper develops two analytic methods of incorporating state variable inequality constraints in the Kalman filter. The first method is a general technique of using hard constraints to enforce inequalities on the state variable estimates. The resultant filter is a combination of a standard Kalman filter and a quadratic programming problem. The second method uses soft constraints to estimate state variables that are known to vary slowly with time. (Soft constraints are constraints that are required to be approximately satisfied rather than exactly satisfied.) The incorporation of state variable constraints increases the computational effort of the filter but significantly improves its estimation accuracy. The improvement is proven theoretically and shown via simulation
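A minimal sketch of the hard-constraint idea, assuming the common active-set treatment: inequalities found violated are enforced as equalities and the weighted projection is re-solved. This simplified loop only ever adds constraints, which suffices for the example but is not a full QP solver:

```python
import numpy as np

def constrain_estimate(x_hat, W, A, b, max_iter=20):
    """Enforce A x <= b on the estimate x_hat: rows found violated are
    treated as active equalities and the W-weighted projection
    (min (x - x_hat)^T W (x - x_hat)) is re-solved until all hold."""
    x = x_hat.copy()
    active = np.zeros(len(b), dtype=bool)
    Winv = np.linalg.inv(W)
    for _ in range(max_iter):
        violated = A @ x - b > 1e-12
        if not violated.any():
            return x
        active |= violated
        Aa, ba = A[active], b[active]
        S = Aa @ Winv @ Aa.T
        x = x_hat - Winv @ Aa.T @ np.linalg.solve(S, Aa @ x_hat - ba)
    return x

# Usage (toy values): keep both state components nonnegative.
x_hat = np.array([1.5, -0.2])     # raw estimate violates x2 >= 0
A = np.array([[-1.0, 0.0], [0.0, -1.0]])  # -x <= 0, i.e. x >= 0
b = np.zeros(2)
x_c = constrain_estimate(x_hat, np.eye(2), A, b)
```

With the identity weighting this clips the violating component while leaving the feasible one untouched; in the filter, W would be the inverse error covariance.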
A Second-Derivative SQP Method: Local Convergence and Practical Issues
 SIAM Journal on Optimization
Abstract

Cited by 18 (6 self)
results for a second-derivative SQP method for minimizing the exact ℓ1 merit function for a fixed value of the penalty parameter. To establish this result, we used the properties of the so-called Cauchy step, which was itself computed from the so-called predictor step. In addition, we allowed for the computation of a variety of (optional) SQP steps that were intended to improve the efficiency of the algorithm. Although we established global convergence of the algorithm, we did not discuss certain aspects that are critical when developing software capable of solving general optimization problems. In particular, we must have strategies for updating the penalty parameter and better techniques for defining the positive-definite matrix Bk used in computing the predictor step. In this paper we address both of these issues. We consider two techniques for defining the positive-definite matrix Bk—a simple diagonal approximation and a more sophisticated limited-memory BFGS update. We also analyze a strategy for updating the penalty parameter based on approximately minimizing the ℓ1 penalty function over a sequence of increasing values of the penalty parameter. Algorithms based on exact penalty functions have certain desirable properties. To be practical, however, these algorithms must be guaranteed to avoid the so-called Maratos effect. We show that a nonmonotone variant of our algorithm avoids this phenomenon and, therefore, results in asymptotically superlinear local convergence; this is verified by preliminary numerical results on the Hock and Schittkowski test set. Key words. Nonlinear programming, nonlinear inequality constraints, sequential quadratic programming, ℓ1 penalty function, nonsmooth optimization AMS subject classifications. 49J52, 49M37, 65F22, 65K05, 90C26, 90C30, 90C55
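The positive-definite matrix Bk mentioned in the abstract can be maintained with the standard Powell-damped BFGS formula; the sketch below is that generic recipe, not the paper's limited-memory variant:

```python
import numpy as np

def damped_bfgs_update(B, s, y):
    """Powell-damped BFGS update: when the curvature condition s^T y > 0
    fails (common near nonconvex constraints), y is blended toward B s
    so that the updated matrix stays positive definite."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    theta = 1.0 if sy >= 0.2 * sBs else 0.8 * sBs / (sBs - sy)
    r = theta * y + (1.0 - theta) * Bs   # damped secant residual
    return B - np.outer(Bs, Bs) / sBs + np.outer(r, r) / (s @ r)

# Usage: one update on the quadratic f(x) = 0.5 x^T diag(1, 10) x.
B = np.eye(2)
s = np.array([1.0, 0.1])     # step
y = np.array([1.0, 1.0])     # gradient difference = diag(1, 10) @ s
B = damped_bfgs_update(B, s, y)
eigs = np.linalg.eigvalsh(B)
```

When the curvature test passes (as here), theta = 1 and this reduces to plain BFGS, so the updated B satisfies the secant equation B s = y exactly.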
An Affine Scaling Algorithm For Minimizing Total Variation In Image Enhancement
, 1994
Abstract

Cited by 16 (1 self)
A computational algorithm is proposed for image enhancement based on total variation minimization with constraints. This constrained minimization problem is introduced by Rudin et al. [13, 14, 15] to enhance blurred and noisy images. Our computational algorithm solves the constrained minimization problem directly by adapting the affine scaling method for the unconstrained ℓ1 problem [3]. The resulting computational scheme, when viewed as an image enhancement process, has the feature that it can be used in an interactive manner in situations where knowledge of the noise level is either unavailable or unreliable. This computational algorithm can be implemented with a conjugate gradient method. It is further demonstrated that the iterative enhancement process is efficient. Key Words. image enhancement, image reconstruction, deconvolution, minimal total variation, affine scaling algorithm, projected gradient method
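The total variation objective itself is easy to write down in one dimension. The sketch below minimizes a smoothed version with plain gradient descent rather than the paper's affine-scaling iteration, so the step size, smoothing parameter, and fidelity weight are all illustrative:

```python
import numpy as np

def tv_denoise_1d(f, lam=1.0, eps=0.05, tau=0.02, steps=500):
    """Gradient descent on the smoothed TV objective
        sum_i sqrt((u_{i+1} - u_i)^2 + eps^2) + (lam/2) * ||u - f||^2,
    a plain-gradient stand-in for the paper's affine-scaling scheme."""
    u = f.copy()
    for _ in range(steps):
        d = np.diff(u)
        w = d / np.sqrt(d * d + eps * eps)   # derivative of smoothed |.|
        # Gradient of TV: adjoint of the difference operator applied to w.
        grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
        u -= tau * (grad_tv + lam * (u - f))
    return u

# Usage: denoise a noisy step signal; TV smooths the flats but,
# unlike quadratic smoothing, keeps the edge sharp.
rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(50), np.ones(50)])
f = clean + 0.1 * rng.standard_normal(100)
u = tv_denoise_1d(f)
```

The step size is kept below 2 divided by the gradient's Lipschitz constant (roughly 4/eps + lam here), which is the stability constraint that the affine-scaling and projected-gradient machinery in the paper is designed to manage more gracefully.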
A game theory approach to constrained minimax state estimation
 IEEE Transactions on Signal Processing
, 2006
Abstract

Cited by 12 (3 self)
This paper presents a game theory approach to the constrained state estimation of linear discrete time dynamic systems. In the application of state estimators there is often known model or signal information that is either ignored or dealt with heuristically. For example, constraints on the state values (which may be based on physical considerations) are often neglected because they do not easily fit into the structure of the state estimator. This paper develops a method for incorporating state equality constraints into a minimax state estimator. The algorithm is demonstrated on a simple vehicle tracking simulation.
Preconditioning Reduced Matrices
, 1996
Abstract

Cited by 10 (1 self)
We study preconditioning strategies for linear systems with positive-definite matrices of the form Z^T G Z, where Z is rectangular and G is symmetric but not necessarily positive definite. The preconditioning strategies are designed to be used in the context of a conjugate-gradient iteration, and are suitable within algorithms for constrained optimization problems. The techniques have other uses, however, and are applied here to a class of problems in the calculus of variations. Numerical tests are also included.
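A self-contained sketch of the setting: a hand-picked indefinite G and a Z whose reduced matrix Z^T G Z is nonetheless positive definite, solved with preconditioned CG. The diagonal preconditioner is a generic stand-in for the strategies studied in the paper:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for A x = b; M_inv applies
    the preconditioner (an approximation of A^{-1}) to a vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Usage (toy data, not from the paper): G is indefinite, yet the
# reduced matrix Z^T G Z is positive definite on the range of Z.
G = np.diag([3.0, 3.0, -1.0])
Z = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
A = Z.T @ G @ Z          # [[2.75, -0.25], [-0.25, 2.75]]
b = np.array([1.0, 2.0])
x = pcg(A, b, M_inv=lambda r: r / np.diag(A))  # diagonal preconditioner
```

In the optimization context Z would be a null-space basis of the constraint Jacobian, which is what makes the reduced system positive definite even when G is not.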
An activeset algorithm for nonlinear programming using linear programming and equality constrained subproblems
, 2002
Abstract

Cited by 9 (1 self)
This paper describes an active-set algorithm for large-scale nonlinear programming based on the successive linear programming method proposed by Fletcher and Sainz de la Maza [9]. The step computation is performed in two stages. In the first stage a linear program is solved to estimate the active set at the solution. The linear program is obtained by making a linear approximation to the ℓ1 penalty function inside a trust region. In the second stage, an equality constrained quadratic program (EQP) is solved involving only those constraints that are active at the solution of the linear program. The EQP incorporates a trust-region constraint and is solved (inexactly) by means of a projected conjugate gradient method. Numerical experiments are presented illustrating the performance of the algorithm on the CUTEr [1] test set.
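The EQP subproblem in the second stage has a closed-form characterization via its KKT system. The direct solve below is a small stand-in for the inexact projected-CG solve the abstract describes, with illustrative example data:

```python
import numpy as np

def solve_eqp(G, c, A, b):
    """Solve the equality-constrained QP
        min 0.5 x^T G x + c^T x   s.t.  A x = b
    by solving its KKT system [[G, A^T], [A, 0]] [x; lam] = [-c; b]
    directly (the paper instead solves this step inexactly with a
    projected conjugate gradient method)."""
    n, m = G.shape[0], A.shape[0]
    K = np.block([[G, A.T], [A, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-c, b]))
    return sol[:n], sol[n:]   # primal step x, Lagrange multipliers lam

# Usage: min 0.5*(x1^2 + x2^2) - x1  subject to  x1 + x2 = 1.
x, lam = solve_eqp(np.eye(2), np.array([-1.0, 0.0]),
                   np.array([[1.0, 1.0]]), np.array([1.0]))
```

In the full algorithm, A would contain only the constraints the first-stage LP estimated as active, which keeps this system small even for large problems.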
Time-Critical Multiresolution Rendering of Large Complex Models
 Journal of Computer-Aided Design
Abstract

Cited by 7 (1 self)
Very large and geometrically complex scenes, exceeding millions of polygons and hundreds of objects, arise naturally in many areas of interactive computer graphics. Time-critical rendering of such scenes requires the ability to trade visual quality with speed. Previous work has shown that this can be done by representing individual scene components as multiresolution triangle meshes, and performing at each frame a convex constrained optimization to choose the mesh resolutions that maximize image quality while meeting timing constraints. In this paper we demonstrate that the nonlinear optimization problem with linear constraints associated with a large class of quality estimation heuristics is efficiently solved using an active-set strategy. By exploiting the problem structure, Lagrange multiplier estimates and equality constrained problem solutions are computed in linear time. Results show that our algorithms and data structures provide low memory overhead, smooth level-of-detail contro...