Mesh Shape-Quality Optimization Using the Inverse Mean-Ratio Metric
Preprint ANL/MCS-P1136-0304, Argonne National Laboratory, Argonne, 2004
Cited by 14 (4 self)
Meshes containing elements with bad quality can result in poorly conditioned systems of equations that must be solved when using a discretization method, such as the finite-element method, for solving a partial differential equation. Moreover, such meshes can lead to poor accuracy in the approximate solution computed. In this paper, we present a nonlinear fractional program that relocates the vertices of a given mesh to optimize the average element shape quality as measured by the inverse mean-ratio metric. To solve the resulting large-scale optimization problems, we apply an efficient implementation of an inexact Newton algorithm using the conjugate gradient method with a block Jacobi preconditioner to compute the direction. We show that the block Jacobi preconditioner is positive definite by proving a general theorem concerning the convexity of fractional functions, applying this result to components of the inverse mean-ratio metric, and showing that each block in the preconditioner is invertible. Numerical results obtained with this special-purpose code on several test meshes are presented and used to quantify the impact on solution time and memory requirements of using a modeling language and general-purpose algorithm to solve these problems.
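The abstract's core numerical idea, conjugate gradients preconditioned by a block Jacobi matrix, can be illustrated with a minimal sketch. The dense-matrix representation, the fixed block size, and the function names (`block_jacobi_precond`, `pcg`) are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def block_jacobi_precond(A, block_size):
    """Invert each diagonal block of A; return a function applying M^{-1}.
    (Illustrative: real codes factor the blocks rather than inverting them.)"""
    n = A.shape[0]
    starts = list(range(0, n, block_size))
    inv_blocks = [np.linalg.inv(A[s:min(s + block_size, n), s:min(s + block_size, n)])
                  for s in starts]
    def apply(r):
        z = np.empty_like(r)
        for s, inv in zip(starts, inv_blocks):
            e = min(s + block_size, n)
            z[s:e] = inv @ r[s:e]
        return z
    return apply

def pcg(A, b, M_inv, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradients for SPD A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

The preconditioner application costs one small solve per block, which is why the paper's proof that each block is invertible (and the whole preconditioner positive definite) matters.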
Inexact SQP methods for equality constrained optimization
 SIAM J. Optim.
Cited by 9 (5 self)
Abstract. We present an algorithm for large-scale equality constrained optimization. The method is based on a characterization of inexact sequential quadratic programming (SQP) steps that can ensure global convergence. Inexact SQP methods are needed for large-scale applications for which the iteration matrix cannot be explicitly formed or factored and the arising linear systems must be solved using iterative linear algebra techniques. We address how to determine when a given inexact step makes sufficient progress toward a solution of the nonlinear program, as measured by an exact penalty function. The method is globalized by a line search. An analysis of the global convergence properties of the algorithm and numerical results are presented.
Key words. large-scale optimization, constrained optimization, sequential quadratic programming, inexact linear system solvers, Krylov subspace methods
AMS subject classifications. 49M37, 65K05, 90C06, 90C30, 90C55
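The step-acceptance idea the abstract mentions, progress measured by an exact penalty function, can be sketched with an l1 merit function and Armijo backtracking. The function names and this particular Armijo form are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def exact_penalty(f, c, x, pi):
    """l1 exact merit function phi(x; pi) = f(x) + pi * ||c(x)||_1."""
    return f(x) + pi * np.sum(np.abs(c(x)))

def backtracking(f, c, x, d, pi, D_phi, eta=1e-4, tau=0.5, max_back=30):
    """Armijo backtracking on the merit function along direction d.
    D_phi is (an upper bound on) the directional derivative of phi
    at x along d; a step counts as sufficient progress when it achieves
    a fraction eta of that predicted decrease."""
    alpha = 1.0
    phi0 = exact_penalty(f, c, x, pi)
    for _ in range(max_back):
        if exact_penalty(f, c, x + alpha * d, pi) <= phi0 + eta * alpha * D_phi:
            return alpha
        alpha *= tau
    return alpha
```

In the inexact setting the point of the paper is that d comes from an approximately solved SQP system, so the test above must be paired with conditions guaranteeing D_phi is genuinely negative.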
Flexible Penalty Functions for Nonlinear Constrained Optimization
2007
Cited by 8 (1 self)
We propose a globalization strategy for nonlinear constrained optimization. The method employs a “flexible” penalty function to promote convergence, where during each iteration the penalty parameter can be chosen as any number within a prescribed interval, rather than a fixed value. This increased flexibility in the step acceptance procedure is designed to promote long productive steps for fast convergence. An analysis of the global convergence properties of the approach in the context of a line search Sequential Quadratic Programming method and numerical results for the KNITRO software package are presented.
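The "flexible" acceptance test can be sketched as follows: the merit function f(x) + pi*h(x) is affine in the penalty parameter pi, so a trial point decreases it for some pi in the prescribed interval exactly when it does so at one of the two endpoints. The interface below is an illustrative assumption, not KNITRO's implementation:

```python
def accept_step(f0, h0, f1, h1, pi_low, pi_high, eps=1e-8):
    """Flexible penalty acceptance: accept the trial point (f1, h1) if
    f1 + pi*h1 <= f0 + pi*h0 - eps for ANY pi in [pi_low, pi_high].
    Since the merit function is affine in pi, checking the interval
    endpoints suffices."""
    for pi in (pi_low, pi_high):
        if f1 + pi * h1 <= f0 + pi * h0 - eps:
            return True
    return False
```

With a fixed pi, a step that trades a small objective increase for a large feasibility gain (or vice versa) may be rejected; the interval lets either kind of progress count.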
Dynamic updates of the barrier parameter in primal-dual methods for nonlinear programming
2006
An Algorithm for the Fast Solution of Symmetric Linear Complementarity Problems
2008
Cited by 4 (1 self)
This paper studies algorithms for the solution of mixed symmetric linear complementarity problems. The goal is to compute fast, approximate solutions of medium to large sized problems, such as those arising in computer game simulations and American options pricing. The paper proposes an improvement of a method described by Kocvara and Zowe [19] that combines projected Gauss-Seidel iterations with subspace minimization steps. The proposed algorithm employs a recursive subspace minimization designed to handle severely ill-conditioned problems. Numerical tests indicate that the approach is more efficient than interior-point and gradient projection methods on some physical simulation problems that arise in computer game scenarios.
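A minimal sketch of the projected Gauss-Seidel iteration that the abstract builds on (without the subspace-minimization acceleration it proposes); the box bounds and the fixed number of sweeps are simplifying assumptions:

```python
import numpy as np

def projected_gauss_seidel(A, b, lo, hi, sweeps=200):
    """Projected Gauss-Seidel for the box-constrained LCP: find x in
    [lo, hi] complementary to the residual r = A x + b. Assumes A has a
    positive diagonal (e.g. A symmetric positive definite)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    x = np.zeros_like(b)
    for _ in range(sweeps):
        for i in range(len(b)):
            # row-i residual, excluding the diagonal contribution of x[i]
            r = b[i] + A[i] @ x - A[i, i] * x[i]
            # unconstrained row solve, then clamp to the bounds
            x[i] = min(hi[i], max(lo[i], -r / A[i, i]))
    return x
```

Each sweep is cheap but convergence stalls on ill-conditioned systems, which is the gap the paper's subspace minimization steps are designed to close.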
The Optimization Test Environment
Cited by 3 (3 self)
Testing is a crucial part of software development in general, and hence also in mathematical programming. Unfortunately, it is often a time-consuming and not very exciting activity. This naturally motivated us to increase the efficiency of testing solvers for optimization problems and to automate as much of the procedure as possible. Keywords: test environment, optimization, solver benchmarking, solver comparison. The testing procedure typically consists of three basic tasks: a) organize test problem sets, also called test libraries; b) solve selected test problems with selected solvers; c) analyze, check, and compare the results. The Test Environment is a graphical user interface (GUI) that enables the user to manage tasks a) and b) interactively, and task c) automatically. The Test Environment is particularly designed for users who seek to 1. adjust solver parameters, 2. compare solvers on single problems, or 3. evaluate solvers on suitable test sets.
Convexity and Concavity Detection in Computational Graphs: Tree Walks for Convexity Assessment
2008
Cited by 2 (1 self)
Abstract. In this paper, we examine sets of symbolic tools associated with modeling systems for mathematical programming that can be used to automatically detect the presence or lack of convexity and concavity in the objective and constraint functions. As a consequence, convexity of the feasible set may be assessed to some extent. The coconut solver system [Sch04b] focuses on nonlinear global continuous optimization and possesses its own modeling language and data structures. The Dr.ampl [FO07] metasolver aims to analyze nonlinear differentiable optimization models and hooks into the ampl Solver Library [Gay02]. The symbolic analysis may be supplemented with a numerical disproving phase when the former returns inconclusive results. We report numerical results using these tools on sets of test problems for both global and local optimization.
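The kind of tree walk the abstract describes can be sketched with a few standard composition rules: sums of convex functions are convex, nonnegative scaling preserves curvature, the square of an affine expression is convex, and so on. The tuple-based node encoding and this tiny rule set are illustrative assumptions, far smaller than what coconut or Dr.ampl implement:

```python
# Expression nodes: ('var',), ('const', c), ('add', a, b),
# ('mul', c, a) for scalar c, ('square', a), ('exp', a).
# classify returns 'affine', 'convex', 'concave', or 'unknown'.

def classify(node):
    op = node[0]
    if op in ('var', 'const'):
        return 'affine'
    if op == 'add':
        kinds = {classify(node[1]), classify(node[2])}
        if kinds == {'affine'}:
            return 'affine'
        if kinds <= {'affine', 'convex'}:
            return 'convex'          # sum of convex (and affine) is convex
        if kinds <= {'affine', 'concave'}:
            return 'concave'
        return 'unknown'             # convex + concave: undecided
    if op == 'mul':
        c, kind = node[1], classify(node[2])
        if kind == 'affine':
            return 'affine'
        if c >= 0:
            return kind              # nonnegative scaling preserves curvature
        return {'convex': 'concave', 'concave': 'convex'}.get(kind, 'unknown')
    if op == 'square':
        return 'convex' if classify(node[1]) == 'affine' else 'unknown'
    if op == 'exp':
        return 'convex' if classify(node[1]) in ('affine', 'convex') else 'unknown'
    return 'unknown'
```

Note the walk is sound but incomplete: an 'unknown' verdict is exactly the inconclusive case the abstract says can be handed to a numerical disproving phase.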
A LINE SEARCH MULTIGRID METHOD FOR LARGE-SCALE NONLINEAR OPTIMIZATION
2008
Cited by 1 (0 self)
Abstract. We present a line search multigrid method for solving discretized versions of general unconstrained infinite-dimensional optimization problems. At each iteration on each level, the algorithm computes either a “direct search” direction on the current level or a “recursive search” direction from coarser level models. Introducing a new condition that must be satisfied by a backtracking line search procedure, the “recursive search” direction is guaranteed to be a descent direction. Global convergence is proved under fairly minimal requirements on the minimization method used at all grid levels. Using a limited memory BFGS quasi-Newton method to produce the “direct search” direction, preliminary numerical experiments show that our line search multigrid approach is promising.
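The safeguard the abstract alludes to, using the coarse-level direction only when it is a guaranteed descent direction, can be sketched as a sufficient-descent test with a fallback to a fine-level direction. The specific test g·d <= -eps*||d||^p and the steepest-descent fallback are illustrative assumptions, not the paper's exact condition:

```python
import numpy as np

def choose_direction(g, d_recursive, eps=1e-4, p=2.0):
    """Use the coarse-level ('recursive search') direction only if it
    passes a sufficient-descent test against the current gradient g;
    otherwise fall back to a fine-level ('direct search') direction,
    here simply steepest descent."""
    if d_recursive is not None and g @ d_recursive <= -eps * np.linalg.norm(d_recursive) ** p:
        return d_recursive
    return -g
```

Any direction returned this way has a negative inner product with g, so a standard backtracking line search on it is well defined.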
Stopping rules and backward error analysis for bound-constrained optimization
 Numerische Mathematik
2011
Benchmark Functions for CEC’2013 Special Session and Competition on Niching Methods for Multimodal Function Optimization
Cited by 1 (1 self)
Evolutionary Algorithms (EAs) in their original forms are usually designed for locating a single global solution. These algorithms typically converge to a single solution because of the global selection scheme used. Nevertheless, many real-world problems are “multimodal” by nature, i.e., multiple satisfactory solutions exist. It may be desirable to locate many such satisfactory solutions so that a decision maker can choose the one that is most appropriate in his or her problem domain. Numerous techniques have been developed in the past for locating multiple optima (global or local). These techniques are commonly referred to as “niching” methods. A niching method can be incorporated into a standard EA to promote and maintain the formation of multiple stable subpopulations within a single population, with the aim of locating multiple globally optimal or suboptimal solutions.
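A classic niching mechanism of the kind the abstract surveys is fitness sharing, where each individual's fitness is deflated by a niche count so that crowded peaks stop dominating selection. This one-dimensional sketch is illustrative; the benchmark suite itself does not prescribe any particular niching method:

```python
import numpy as np

def shared_fitness(pop, fitness, sigma_share=0.5, alpha=1.0):
    """Fitness sharing for 1-D genomes: divide each individual's fitness
    by its niche count (sum of a triangular sharing kernel over all
    individuals within distance sigma_share), discouraging crowding so
    multiple optima can coexist in the population."""
    pop = np.asarray(pop, dtype=float)
    fit = np.asarray(fitness, dtype=float)
    d = np.abs(pop[:, None] - pop[None, :])   # pairwise distances
    sh = np.where(d < sigma_share, 1.0 - (d / sigma_share) ** alpha, 0.0)
    niche_count = sh.sum(axis=1)              # >= 1: each shares with itself
    return fit / niche_count
```

Individuals sitting alone on a peak keep their raw fitness, while those bunched on one optimum split theirs, which is what maintains the stable subpopulations described above.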