Results 1 - 10 of 147
Global Continuation For Distance Geometry Problems
SIAM J. OPTIMIZATION, 1995
"... Distance geometry problems arise in the interpretation of NMR data and in the determination of protein structure. We formulate the distance geometry problem as a global minimization problem with special structure, and show that global smoothing techniques and a continuation approach for global optim ..."
Abstract

Cited by 69 (7 self)
Distance geometry problems arise in the interpretation of NMR data and in the determination of protein structure. We formulate the distance geometry problem as a global minimization problem with special structure, and show that global smoothing techniques and a continuation approach for global optimization can be used to determine solutions of distance geometry problems with a nearly 100% probability of success.
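The formulation described in this abstract reduces to minimizing a sum of squared distance violations over point positions. A minimal sketch of that objective follows; the function and variable names are illustrative, and this is not the paper's smoothing/continuation code.

```python
# Sketch of a distance geometry objective: given target distances d_ij,
# score a configuration of points by the sum of squared violations
# (||x_i - x_j||^2 - d_ij^2)^2.  Zero objective means all targets are met.

def dg_objective(points, targets):
    """Sum of (||x_i - x_j||^2 - d_ij^2)^2 over the given distance targets."""
    total = 0.0
    for (i, j), d in targets.items():
        sq = sum((a - b) ** 2 for a, b in zip(points[i], points[j]))
        total += (sq - d * d) ** 2
    return total

# A 3-4-5 right triangle realizes its own pairwise distances exactly,
# so the objective vanishes at that configuration.
pts = [(0.0, 0.0), (3.0, 0.0), (3.0, 4.0)]
tgt = {(0, 1): 3.0, (1, 2): 4.0, (0, 2): 5.0}
print(dg_objective(pts, tgt))  # 0.0
```

The paper's contribution is in how this nonconvex landscape is explored (Gaussian smoothing plus continuation), not in the objective itself.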
Finding All Solutions of Nonlinearly Constrained Systems of Equations
Journal of Global Optimization, 1995
"... . A new approach is proposed for finding all fflfeasible solutions for certain classes of nonlinearly constrained systems of equations. By introducing slack variables, the initial problem is transformed into a global optimization problem(P) whose multiple global minimum solutionswith a zero object ..."
Abstract

Cited by 30 (14 self)
A new approach is proposed for finding all ε-feasible solutions for certain classes of nonlinearly constrained systems of equations. By introducing slack variables, the initial problem is transformed into a global optimization problem (P) whose multiple global minimum solutions with a zero objective value (if any) correspond to all solutions of the initial constrained system of equalities. All ε-globally optimal points of (P) are then localized within a set of arbitrarily small disjoint rectangles. This is based on a branch and bound type global optimization algorithm which attains finite ε-convergence to each of the multiple global minima of (P) through the successive refinement of a convex relaxation of the feasible region and the subsequent solution of a series of nonlinear convex optimization problems. Based on the form of the participating functions, a number of techniques for constructing this convex relaxation are proposed. By taking advantage of the properties of products o...
Particle Swarm Optimizer In Noisy And Continuously Changing Environments
M.H. Hamza (Ed.), Artificial Intelligence and Soft Computing, IASTED/ACTA, 2001
"... In this paper we study the performance of the recently proposed Particle Swarm optimization method in the presence of noisy and continuously changing environments. Experimental results for well known and widely used optimization test functions are given and discussed. Conclusions for its ability to ..."
Abstract

Cited by 30 (6 self)
In this paper we study the performance of the recently proposed Particle Swarm Optimization method in the presence of noisy and continuously changing environments. Experimental results for well-known and widely used optimization test functions are given and discussed. Conclusions regarding its ability to cope with such environments, as well as real-life applications, are also derived.
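The method evaluated in this abstract maintains a population of particles, each attracted toward its own best-seen position and the swarm's best. A minimal global-best PSO sketch on the sphere test function follows; the parameter values (w, c1, c2) are common textbook defaults, not those from the paper's noisy-environment experiments.

```python
import random

def sphere(x):
    """Classic test function: sum of squares, global minimum 0 at the origin."""
    return sum(v * v for v in x)

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]           # each particle's best position
    pbest_f = [f(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = list(pbest[g]), pbest_f[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            fx = f(xs[i])
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = list(xs[i]), fx
                if fx < gbest_f:
                    gbest, gbest_f = list(xs[i]), fx
            # (a noisy environment would re-evaluate pbest_f here as well)
    return gbest, gbest_f

best, best_f = pso(sphere)
print(best_f)  # very close to 0
```

In changing or noisy environments, the key complication (as the paper studies) is that stored best values go stale and must be re-evaluated or forgotten.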
Machine Learning via Polyhedral Concave Minimization
1996
"... Two fundamental problems of machine learning, misclassification minimization [10, 24, 18] and feature selection, [25, 29, 14] are formulated as the minimization of a concave function on a polyhedral set. Other formulations of these problems utilize linear programs with equilibrium constraints [18, 1 ..."
Abstract

Cited by 27 (12 self)
Two fundamental problems of machine learning, misclassification minimization [10, 24, 18] and feature selection [25, 29, 14], are formulated as the minimization of a concave function on a polyhedral set. Other formulations of these problems utilize linear programs with equilibrium constraints [18, 1, 4, 3], which are generally intractable. In contrast, for the proposed concave minimization formulation, a successive linearization algorithm without stepsize terminates after a maximum average of 7 linear programs on problems with as many as 4192 points in 14-dimensional space. The algorithm terminates at a stationary point or a global solution to the problem. Preliminary numerical results indicate that the proposed approach is quite effective and more efficient than other approaches.
1 Introduction
We shall consider the following two fundamental problems of machine learning:
Problem 1.1 Misclassification Minimization [24, 18] Given two finite point sets A and B in the n-dimensional real s...
On Copositive Programming and Standard Quadratic Optimization Problems
Journal of Global Optimization, 2000
"... A standard quadratic problem consists of finding global maximizers of a quadratic form over the standard simplex. In this paper, the usual semidefinite programming relaxation is strengthened by replacing the cone of positive semidefinite matrices by the cone of completely positive matrices (the posi ..."
Abstract

Cited by 23 (5 self)
A standard quadratic problem consists of finding global maximizers of a quadratic form over the standard simplex. In this paper, the usual semidefinite programming relaxation is strengthened by replacing the cone of positive semidefinite matrices by the cone of completely positive matrices (the positive semidefinite matrices which allow a factorization FF^T where F is some nonnegative matrix). The dual of this cone is the cone of copositive matrices (i.e., those matrices which yield a nonnegative quadratic form on the positive orthant). This conic formulation allows us to employ primal-dual affine-scaling directions. Furthermore, these approaches are combined with an evolutionary dynamics algorithm which generates primal-feasible paths along which the objective is monotonically improved until a local solution is reached. In particular, the primal-dual affine-scaling directions are used to escape from local maxima encountered during the evolutionary dynamics phase.
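Evolutionary dynamics of the kind this abstract mentions are commonly realized as replicator dynamics: the discrete update x_i <- x_i (Qx)_i / (x'Qx) keeps iterates on the simplex and, for symmetric nonnegative Q, improves the objective monotonically. A hedged sketch on a made-up maximum-clique style instance (not one from the paper):

```python
# Replicator-dynamics sketch for a standard quadratic problem:
# maximize x'Qx over the simplex {x >= 0, sum x = 1}.

def replicator(Q, x, iters=500):
    n = len(x)
    for _ in range(iters):
        Qx = [sum(Q[i][j] * x[j] for j in range(n)) for i in range(n)]
        val = sum(x[i] * Qx[i] for i in range(n))     # current objective x'Qx
        x = [x[i] * Qx[i] / val for i in range(n)]    # stays on the simplex
    return x

# Illustrative instance: adjacency matrix of a triangle {0,1,2} plus a
# pendant vertex 3, with 1/2 added on the diagonal (a standard
# regularization of the Motzkin-Straus clique formulation).
Q = [[0.5, 1.0, 1.0, 0.0],
     [1.0, 0.5, 1.0, 0.0],
     [1.0, 1.0, 0.5, 1.0],
     [0.0, 0.0, 1.0, 0.5]]
x = replicator(Q, [0.25] * 4)
print([round(v, 3) for v in x])  # -> [0.333, 0.333, 0.333, 0.0]
```

The iterate converges to the barycenter of the maximum clique's face, with the pendant vertex's weight decaying geometrically to zero.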
Global optimization by continuous GRASP
 Optimization Letters
"... ABSTRACT. We introduce a novel global optimization method called Continuous GRASP (CGRASP) which extends Feo and Resende’s greedy randomized adaptive search procedure (GRASP) from the domain of discrete optimization to that of continuous global optimization. This stochastic local search method is s ..."
Abstract

Cited by 22 (9 self)
We introduce a novel global optimization method called Continuous GRASP (CGRASP) which extends Feo and Resende's greedy randomized adaptive search procedure (GRASP) from the domain of discrete optimization to that of continuous global optimization. This stochastic local search method is simple to implement, is widely applicable, and does not make use of derivative information, thus making it a well-suited approach for solving global optimization problems. We illustrate the effectiveness of the procedure on a set of standard test problems as well as two hard global optimization problems.
Learning sparse representations by nonnegative matrix factorization and sequential cone programming
Journal of Machine Learning Research, 2006
"... We exploit the biconvex nature of the Euclidean nonnegative matrix factorization (NMF) optimization problem to derive optimization schemes based on sequential quadratic and second order cone programming. We show that for ordinary NMF, our approach performs as well as existing stateoftheart algori ..."
Abstract

Cited by 21 (0 self)
We exploit the biconvex nature of the Euclidean nonnegative matrix factorization (NMF) optimization problem to derive optimization schemes based on sequential quadratic and second-order cone programming. We show that for ordinary NMF, our approach performs as well as existing state-of-the-art algorithms, while for sparsity-constrained NMF, as recently proposed by P. O. Hoyer in JMLR 5 (2004), it outperforms previous methods. In addition, we show how to extend NMF learning within the same optimization framework in order to make use of class membership information in supervised learning problems.
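Biconvexity means fixing either factor leaves a convex subproblem in the other, which motivates alternating schemes. The sketch below uses the classical Lee-Seung multiplicative updates as the inner solver, not the sequential cone-programming scheme this paper proposes; matrices and sizes are illustrative.

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, rank, iters=500, seed=1):
    """Approximate V (m x n, nonnegative) as W @ H with W, H >= 0."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(rank)]
    eps = 1e-12  # guard against division by zero
    for _ in range(iters):
        # H <- H * (W'V) / (W'WH)   (element-wise)
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(rank)]
        # W <- W * (VH') / (WHH')   (element-wise)
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(m)]
    return W, H

# Rank-2 factorization of an exactly rank-2 nonnegative matrix should drive
# the reconstruction error close to zero.
V = [[1, 2, 3], [2, 4, 6], [1, 1, 1]]
W, H = nmf(V, 2)
R = matmul(W, H)
err = sum((V[i][j] - R[i][j]) ** 2 for i in range(3) for j in range(3))
print(err)
```

The multiplicative updates keep the factors nonnegative by construction; the cone-programming approach in the paper instead solves each convex subproblem with explicit (e.g. sparsity) constraints.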
Global Optimization for the Phase Stability Problem
AIChE J., 1994
"... The Gibbs tangent plane criterion has become an important tool in determining the quality of obtained solutions to the phase and chemical equilibrium problem. The ability to determine if a postulated solution is thermodynamically stable with respect to perturbations in any or all of the phases is ve ..."
Abstract

Cited by 20 (4 self)
The Gibbs tangent plane criterion has become an important tool in determining the quality of obtained solutions to the phase and chemical equilibrium problem. The ability to determine if a postulated solution is thermodynamically stable with respect to perturbations in any or all of the phases is very useful in the search for the true equilibrium solution. Previous approaches have concentrated on finding the stationary points of the tangent plane distance function. However, no guarantee of obtaining all stationary points can be provided. These difficulties arise due to the complex and nonlinear nature of the models used to predict equilibrium. In this work, simpler formulations for the stability problem are presented for the special class of problems where nonideal liquid phases can be adequately modeled using the NRTL and UNIQUAC activity coefficient equations. It is shown how the global minimum of the tangent plane distance function can be obtained for this class of problems. The adv...