Results 11–20 of 96
Finite Element Analysis of Nonsmooth Contact
 Comput. Methods Appl. Mech. Eng.
, 1999
Abstract

Cited by 14 (8 self)
This work develops robust contact algorithms capable of dealing with complex contact situations involving several bodies with corners. Amongst the mathematical tools we bring to bear on the problem is nonsmooth analysis, following [14]. We specifically address contact geometries for which both the use of normals and the use of gap functions run into difficulties, which precludes the application of most contact algorithms proposed to date. Such situations arise in applications such as fragmentation, where angular fragments undergo complex collision sequences before they scatter. We demonstrate the robustness and versatility of the nonsmooth contact algorithms developed in this paper with the aid of selected two- and three-dimensional applications.
Global Optimization For Constrained Nonlinear Programming
, 2001
Abstract

Cited by 12 (2 self)
In this thesis, we develop constrained simulated annealing (CSA), a global optimization algorithm that asymptotically converges to constrained global minima (CGM_dn) with probability one, for solving discrete constrained nonlinear programming problems (NLPs). The algorithm is based on the necessary and sufficient condition for constrained local minima (CLM_dn) in the theory of discrete constrained optimization using Lagrange multipliers developed in our group. The theory proves the equivalence between the set of discrete saddle points and the set of CLM_dn, leading to the first-order necessary and sufficient condition for CLM_dn. To find ...
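As a rough illustration of the CSA scheme this abstract describes (annealed descent in the variables and ascent in the Lagrange multipliers of a discrete Lagrangian), here is a toy sketch; the function names, neighborhood, multiplier update, and cooling schedule are illustrative assumptions, not the thesis's implementation:

```python
import math
import random

def csa(f, g, x0, neighbors, T0=1.0, alpha=0.9, iters=5000, seed=0):
    """Toy sketch of constrained simulated annealing (CSA).

    Anneals descent in x and ascent in the multipliers `lam` on the
    discrete Lagrangian L(x, lam) = f(x) + sum_i lam_i * |g_i(x)|.
    All names and parameter values here are illustrative.
    """
    rng = random.Random(seed)
    x, lam, T = x0, [0.0] * len(g(x0)), T0
    best = None

    def L(x, lam):
        return f(x) + sum(l * abs(v) for l, v in zip(lam, g(x)))

    for k in range(iters):
        if rng.random() < 0.5:                 # probe a neighbor in x-space
            xn = rng.choice(neighbors(x))
            d = L(xn, lam) - L(x, lam)
            if d <= 0 or rng.random() < math.exp(-d / T):  # Metropolis accept
                x = xn
        else:                                  # probe a multiplier (ascent)
            i = rng.randrange(len(lam))
            ln = lam[:]
            ln[i] += rng.uniform(-0.5, 1.0)
            d = L(x, ln) - L(x, lam)
            if d >= 0 or rng.random() < math.exp(d / T):   # accept increases
                lam = ln
        if all(abs(v) < 1e-9 for v in g(x)) and (best is None or f(x) < f(best)):
            best = x                           # remember best feasible point
        if (k + 1) % 100 == 0:
            T *= alpha                         # geometric cooling schedule
    return best

# Toy problem: minimize x^2 over integers in [-5, 5] subject to x - 3 = 0.
sol = csa(lambda x: x * x,
          lambda x: [x - 3],
          x0=-5,
          neighbors=lambda x: [max(x - 1, -5), min(x + 1, 5)])
```

As the multipliers grow, infeasible points become increasingly expensive, so the annealed walk is driven toward the feasible point x = 3.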
Optimal Portfolios When Stock Prices Follow an Exponential Lévy Process
 Finance and Stochastics
, 2001
Abstract

Cited by 11 (2 self)
We investigate some portfolio problems that consist of maximizing expected terminal wealth under the constraint of an upper bound for the risk, where we measure risk by the variance, but also by the Capital-at-Risk (CaR). The solution of the mean-variance problem has the same structure for any price process which follows an exponential Lévy process. The CaR involves a quantile of the corresponding wealth process of the portfolio. We derive a weak limit law for its approximation by a simpler Lévy process, often the sum of a drift term, a Brownian motion and a compound Poisson process. Certain relations between a Lévy process and its stochastic exponential are investigated.
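The approximating process the abstract mentions (drift + Brownian motion + compound Poisson jumps) and the quantile entering a CaR-type measure can be sketched in a few lines of Monte Carlo; all parameter values and function names below are illustrative assumptions, not the paper's model calibration:

```python
import math
import random

def levy_path(T=1.0, n=50, mu=0.05, sigma=0.2,
              jump_rate=3.0, jump_mu=-0.02, jump_sigma=0.05, seed=0):
    """Simulate one path of a jump-diffusion approximation:
    drift + Brownian motion + compound Poisson jumps.
    Parameter values are illustrative placeholders."""
    rng = random.Random(seed)
    dt = T / n
    L, path = 0.0, [0.0]
    for _ in range(n):
        L += mu * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if rng.random() < jump_rate * dt:      # at most one jump per step
            L += rng.gauss(jump_mu, jump_sigma)
        path.append(L)
    return path

def car_quantile(S0=100.0, alpha=0.05, m=1000, **kw):
    """Monte Carlo estimate of the alpha-quantile of terminal wealth
    S_T = S0 * exp(L_T) under the exponential Levy model; a CaR-style
    risk measure compares such a quantile with a riskless benchmark."""
    finals = sorted(S0 * math.exp(levy_path(seed=s, **kw)[-1])
                    for s in range(m))
    return finals[int(alpha * m)]

q = car_quantile()   # empirical 5% quantile of terminal wealth
```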
Methods for nonlinear constraints in optimization calculations
 The State of the Art in Numerical Analysis
, 1996
On the sequential quadratically constrained quadratic programming methods
 Math. Oper. Res.
, 2004
doi:10.1287/moor.1030.0069
Tuning Strategies In Constrained Simulated Annealing For Nonlinear Global Optimization
 Int’l J. of Artificial Intelligence Tools
, 2000
Abstract

Cited by 9 (1 self)
This paper studies various strategies in constrained simulated annealing (CSA), a global optimization algorithm that achieves asymptotic convergence to constrained global minima (CGM) with probability one for solving discrete constrained nonlinear programming problems (NLPs). The algorithm is based on the necessary and sufficient condition for discrete constrained local minima (CLM) in the theory of discrete Lagrange multipliers and its extensions to continuous and mixed-integer constrained NLPs. The strategies studied include adaptive neighborhoods, distributions to control sampling, acceptance probabilities, and cooling schedules. We report much better solutions than the best-known solutions in the literature on two sets of continuous benchmarks and their discretized versions.
Optimal Signal Sets For Non-Gaussian Detectors
 SIAM Journal on Optimization
, 1997
Abstract

Cited by 9 (2 self)
Identifying a maximally separated set of signals is important in the design of modems. The notion of optimality depends on the model chosen to describe noise in the measurements; while some analytic results can be derived under the assumption of Gaussian noise, no such techniques are known for choosing signal sets in the non-Gaussian case. To obtain numerical solutions for non-Gaussian detectors, minimax problems are transformed into nonlinear programs, resulting in a novel formulation yielding problems with relatively few variables and many inequality constraints. Using sequential quadratic programming, optimal signal sets are obtained for a variety of noise distributions. Key words: Optimal Design, Inequality Constraints, Sequential Quadratic Programming.
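The minimax-to-NLP transformation can be sketched as an epigraph reformulation: maximize a separation level t subject to many pairwise inequality constraints. The helper below merely evaluates such a constraint system at a candidate point; the exact constraint form used here (squared separation against t, plus a per-signal power bound) is an illustrative assumption, not the paper's formulation:

```python
def minimax_as_nlp(signals, t, power=1.0):
    """Residuals of the inequality constraints in an epigraph
    reformulation of  max_s min_{i<j} ||s_i - s_j||:
        maximize t
        s.t.  ||s_i - s_j||^2 - t      >= 0   for all i < j
              power - ||s_i||^2        >= 0   for each i.
    This yields m*(m-1)/2 + m constraints for m signals: few variables,
    many inequalities, as the abstract notes. Illustrative only."""
    residuals = []
    for i in range(len(signals)):
        for j in range(i + 1, len(signals)):
            d2 = sum((a - b) ** 2 for a, b in zip(signals[i], signals[j]))
            residuals.append(d2 - t)                     # separation constraint
    for s in signals:
        residuals.append(power - sum(a * a for a in s))  # power bound
    return residuals

# Three 2-D signals, candidate separation level t = 0.5: all 6 residuals
# are nonnegative, so the point is feasible for this t.
r = minimax_as_nlp([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)], t=0.5)
```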
SQP methods for large-scale nonlinear programming
, 1999
Abstract

Cited by 9 (0 self)
We compare and contrast a number of recent sequential quadratic programming (SQP) methods that have been proposed for the solution of large-scale nonlinear programming problems. Both line-search and trust-region approaches are considered, as are the implications of interior-point and quadratic programming methods.
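A minimal instance of the SQP iteration the survey compares: at each iterate, solve the KKT system of a QP subproblem built from the current gradient, constraint Jacobian, and Lagrangian Hessian. The toy problem, the names, and the use of the exact Lagrangian Hessian below are illustrative choices, not the survey's methods:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            fac = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= fac * M[k][j]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

def sqp_circle(x1=1.0, x2=1.0, lam=0.0, iters=20):
    """Equality-constrained SQP (Newton-KKT) on the toy problem
        min (x1-1)^2 + (x2-2)^2   s.t.   x1^2 + x2^2 = 1.
    Each iteration solves the KKT system of the QP subproblem
        [H  A^T] [d ]   [-grad f]
        [A   0 ] [nu] = [  -c   ]
    with H the exact Lagrangian Hessian (here a multiple of I)."""
    for _ in range(iters):
        g = [2.0 * (x1 - 1.0), 2.0 * (x2 - 2.0)]   # gradient of f
        c = x1 * x1 + x2 * x2 - 1.0                # constraint residual
        a = [2.0 * x1, 2.0 * x2]                   # constraint Jacobian
        h = 2.0 + 2.0 * lam                        # Lagrangian Hessian = h * I
        d1, d2, lam = solve([[h, 0.0, a[0]],
                             [0.0, h, a[1]],
                             [a[0], a[1], 0.0]],
                            [-g[0], -g[1], -c])
        x1, x2 = x1 + d1, x2 + d2
    return x1, x2, lam

xs, ys, lam = sqp_circle()   # converges to (1, 2)/sqrt(5), the point on the
                             # unit circle closest to (1, 2)
```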
INEXACT JOSEPHY–NEWTON FRAMEWORK FOR GENERALIZED EQUATIONS AND ITS APPLICATIONS TO LOCAL ANALYSIS OF NEWTONIAN METHODS FOR CONSTRAINED OPTIMIZATION
, 2008
Abstract

Cited by 8 (6 self)
We propose and analyze a perturbed version of the classical Josephy–Newton method for solving generalized equations. This perturbed framework is convenient for treating in a unified way standard sequential quadratic programming, its stabilized version, sequential quadratically constrained quadratic programming, and linearly constrained Lagrangian methods. For the linearly constrained Lagrangian methods, in particular, we obtain superlinear convergence under the second-order sufficient optimality condition and the strict Mangasarian–Fromovitz constraint qualification, while previous results in the literature assume (in addition to second-order sufficiency) the stronger linear independence constraint qualification as well as the strict complementarity condition. For the sequential quadratically constrained quadratic programming methods, we prove primal-dual superlinear/quadratic convergence under the same assumptions as above, which also gives a new result.
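In its simplest scalar form, the Josephy–Newton method solves, at each step, a partial linearization of the generalized equation 0 ∈ F(x) + N_C(x). The sketch below illustrates that idea for C = [0, ∞), where the linearized subproblem collapses to a projected Newton step; it is a deliberately tiny illustration, not the paper's perturbed framework:

```python
import math

def josephy_newton(F, dF, x0, iters=30):
    """Josephy-Newton iteration for the scalar generalized equation
        0 in F(x) + N_C(x),   C = [0, +inf).
    Each step solves the linearized generalized equation
        0 in F(x_k) + F'(x_k) (x - x_k) + N_C(x);
    for strictly increasing F (F' > 0) that subproblem reduces to
    projecting the Newton point onto C. Illustrative sketch only."""
    x = x0
    for _ in range(iters):
        x = max(0.0, x - F(x) / dF(x))
    return x

# Interior solution: the unconstrained root of e^x - 2 lies in (0, inf).
root = josephy_newton(lambda x: math.exp(x) - 2.0, math.exp, 1.0)
# Boundary solution: e^x - 0.5 > 0 on C, so the solution sits at x = 0,
# where -F(0) lies in the normal cone N_C(0).
corner = josephy_newton(lambda x: math.exp(x) - 0.5, math.exp, 1.0)
```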
LOCAL CONVERGENCE OF EXACT AND INEXACT AUGMENTED LAGRANGIAN METHODS UNDER THE SECOND-ORDER SUFFICIENT OPTIMALITY CONDITION
, 2012
Abstract

Cited by 8 (4 self)
We establish local convergence and rate of convergence of the classical augmented Lagrangian algorithm under the sole assumption that the dual starting point is close to a multiplier satisfying the second-order sufficient optimality condition. In particular, no constraint qualifications of any kind are needed. Previous literature on the subject required, in addition, the linear independence constraint qualification and either the strict complementarity assumption or a stronger version of the second-order sufficient condition. That said, the classical results allow the initial multiplier estimate to be far from the optimal one, at the expense of proportionally increasing the threshold value for the penalty parameters. Although our primary goal is to avoid constraint qualifications, if the stronger assumptions are introduced, then starting points far from the optimal multiplier are allowed within our analysis as well. Using only the second-order sufficient optimality condition, for penalty parameters large enough we prove primal-dual Q-linear convergence rate, which becomes superlinear if the parameters are allowed to go to infinity. Both exact and inexact solutions of subproblems are considered. In the exact case, we further show that the primal convergence rate is of the same Q-order as the primal-dual rate. Previous assertions for the primal sequence all had to do with the weaker R-rate of convergence and required the stronger assumptions cited above. Finally, we show that under our assumptions one of the popular rules of controlling the penalty parameters ensures their boundedness.
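The classical augmented Lagrangian iteration analyzed here alternates a subproblem minimization with a first-order multiplier update; on a one-dimensional toy problem the Q-linear rate becomes directly visible. The problem, the penalty parameter rho, and the iteration count below are illustrative assumptions, not the paper's setting:

```python
def augmented_lagrangian(rho=10.0, mu=0.0, iters=25):
    """Classical augmented Lagrangian iteration on the toy problem
        min x^2   s.t.   x - 1 = 0,
    whose solution is x* = 1 with multiplier mu* = -2.  The inner
    subproblem min_x L_rho(x, mu) is quadratic here, so it is solved
    in closed form.  Illustrative sketch only."""
    err = []
    for _ in range(iters):
        # argmin_x  x^2 + mu*(x - 1) + (rho/2)*(x - 1)^2
        x = (rho - mu) / (2.0 + rho)
        mu = mu + rho * (x - 1.0)        # first-order multiplier update
        err.append(abs(mu + 2.0))        # distance to the true multiplier -2
    return x, mu, err

x, mu, err = augmented_lagrangian()
# For fixed rho the dual error contracts with the constant Q-linear
# factor 2/(2 + rho); letting rho grow would drive this factor to 0,
# matching the superlinear regime described in the abstract.
rate = err[1] / err[0]
```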