Results 1–10 of 26
EFFICIENT CONVEX OPTIMIZATION FOR ENGINEERING DESIGN
"... Many problems in engineering analysis and design can be cast as convex optimization problems, often nonlinear and nondifferentiable. We give a highlevel description of recently developed interiorpoint methods for convex optimization, explain how problem structure can be exploited in these algorit ..."
Abstract

Cited by 21 (13 self)
 Add to MetaCart
Many problems in engineering analysis and design can be cast as convex optimization problems, often nonlinear and nondifferentiable. We give a high-level description of recently developed interior-point methods for convex optimization, explain how problem structure can be exploited in these algorithms, and illustrate the general scheme with numerical experiments. To give a rough idea of the efficiencies obtained, we are able to solve convex optimization problems with over 1000 variables and 10000 constraints in around 10 minutes on a workstation.
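To make the interior-point scheme this abstract outlines concrete, here is a minimal log-barrier method in Python for a one-variable toy problem (minimize x subject to x >= 1). The problem, tolerances, and parameter values are assumptions for illustration, not the authors' solver:

```python
def barrier_minimize(t, x0, iters=50):
    """Damped Newton on phi(x) = t*x - log(x - 1), the barrier subproblem
    for: minimize x subject to x >= 1.  Its minimizer is 1 + 1/t."""
    x = x0
    for _ in range(iters):
        grad = t - 1.0 / (x - 1.0)
        hess = 1.0 / (x - 1.0) ** 2
        step = grad / hess
        while x - step <= 1.0:      # damp the step to stay strictly feasible
            step *= 0.5
        x -= step
    return x

def interior_point(t0=1.0, mu=10.0, outer=6):
    """Classic barrier path-following: solve the centering problem, then
    sharpen the barrier by the factor mu and re-solve from the last iterate."""
    x, t = 2.0, t0
    for _ in range(outer):
        x = barrier_minimize(t, x)
        t *= mu
    return x
```

As t grows, the barrier minimizer 1 + 1/t approaches the true solution x = 1; warm-starting each centering step from the previous one is what makes path-following cheap.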
User’s Guide For QPOPT 1.0: A Fortran Package For Quadratic Programming
, 1995
"... QPOPT is a set of Fortran subroutines for minimizing a general quadratic function subject to linear constraints and simple upper and lower bounds. QPOPT may also be used for linear programming and for finding a feasible point for a set of linear equalities and inequalities. If the quadratic function ..."
Abstract

Cited by 19 (3 self)
 Add to MetaCart
QPOPT is a set of Fortran subroutines for minimizing a general quadratic function subject to linear constraints and simple upper and lower bounds. QPOPT may also be used for linear programming and for finding a feasible point for a set of linear equalities and inequalities. If the quadratic function is convex (i.e., the Hessian is positive definite or positive semidefinite), the solution obtained will be a global minimizer. If the quadratic is nonconvex (i.e., the Hessian is indefinite), the solution obtained will be a local minimizer or a dead-point. A two-phase active-set method is used. The first phase minimizes the sum of infeasibilities. The second phase minimizes the quadratic function within the feasible region, using a reduced Hessian to obtain search directions. The method is most efficient when many constraints or bounds are active at the solution. QPOPT is not intended for large sparse problems, but there is no fixed limit on problem size. The source code is suitable for all scientific machines with a Fortran 77 ...
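For a feel of the problem class QPOPT handles, here is a bound-constrained convex QP solved by projected gradient, a much simpler stand-in for QPOPT's active-set method (the toy data are assumptions for illustration):

```python
import numpy as np

def projected_gradient_qp(H, c, lo, hi, iters=500):
    """Minimize 0.5*x'Hx + c'x subject to box bounds lo <= x <= hi by
    projected gradient with fixed step 1/L (L = largest eigenvalue of H).
    Valid for convex H; a stand-in for QPOPT's two-phase active-set method."""
    x = np.clip(np.zeros_like(c), lo, hi)
    step = 1.0 / np.linalg.eigvalsh(H)[-1]
    for _ in range(iters):
        x = np.clip(x - step * (H @ x + c), lo, hi)
    return x

# toy convex QP: the unconstrained minimizer is (1, 2); the bound clips x2 to 1.5
H = np.array([[2.0, 0.0], [0.0, 2.0]])
c = np.array([-2.0, -4.0])
x_star = projected_gradient_qp(H, c, np.zeros(2), np.full(2, 1.5))
```

At the solution the second bound is active, illustrating the abstract's remark that active-set methods pay off when many constraints are active at the optimum.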
Primal-dual projected gradient algorithms for extended linear-quadratic programming
 SIAM J. Optimization
"... Abstract. Many largescale problems in dynamic and stochastic optimization can be modeled with extended linearquadratic programming, which admits penalty terms and treats them through duality. In general the objective functions in such problems are only piecewise smooth and must be minimized or max ..."
Abstract

Cited by 17 (2 self)
 Add to MetaCart
Many large-scale problems in dynamic and stochastic optimization can be modeled with extended linear-quadratic programming, which admits penalty terms and treats them through duality. In general the objective functions in such problems are only piecewise smooth and must be minimized or maximized relative to polyhedral sets of high dimensionality. This paper proposes a new class of numerical methods for “fully quadratic” problems within this framework, which exhibit second-order nonsmoothness. These methods, combining the idea of finite-envelope representation with that of modified gradient projection, work with local structure in the primal and dual problems simultaneously, feeding information back and forth to trigger advantageous restarts. Versions resembling steepest descent methods and conjugate gradient methods are presented. When a positive threshold of ε-optimality is specified, both methods converge in a finite number of iterations. With threshold 0, it is shown under mild assumptions that the steepest descent version converges linearly, while the conjugate gradient version still has a finite termination property. The algorithms are designed to exploit features of primal and dual decomposability of the Lagrangian, which are typically available in a large-scale setting, and they are open to considerable parallelization. Key words. Extended linear-quadratic programming, large-scale numerical optimization, finite-envelope representation, gradient projection, primal-dual methods, steepest descent methods, conjugate gradient methods. AMS(MOS) subject classifications. 65K05, 65K10, 90C20
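The "finite-envelope representation" mentioned here can be illustrated in a simple special case. For a diagonal quadratic and box constraints the penalty envelope decomposes coordinate-wise and is computable in closed form (this is an assumed special case for illustration; the paper treats general polyhedral sets):

```python
def envelope_term(r, q, lo, hi):
    """One coordinate of a finite-envelope representation:
    rho(r) = max over lo <= v <= hi of (v*r - 0.5*q*v**2), in closed form.
    The maximizer is the stationary point r/q clipped to [lo, hi], so rho
    is piecewise quadratic in r -- the source of the second-order
    nonsmoothness the abstract mentions."""
    v = min(max(r / q, lo), hi)
    return v * r - 0.5 * q * v ** 2, v

# with q=1 on [0,1]: flat for r <= 0, quadratic in between, linear for r >= 1
rho_vals = [envelope_term(r, 1.0, 0.0, 1.0)[0] for r in (-1.0, 0.5, 2.0)]
```

Because the maximizing v is available along with the value, a method can track which "piece" of the envelope is active, which is what lets the primal and dual iterations feed information to each other.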
Exponential Families
, 1990
"... General methods for obtaining maximum likelihood estimates in exponential families are demonstrated using a constrained autologistic model for estimating relatedness from DNA fingerprint data. The novel features are the use of constrained optimization and two new algorithms for maximum likelihood es ..."
Abstract

Cited by 7 (3 self)
 Add to MetaCart
General methods for obtaining maximum likelihood estimates in exponential families are demonstrated using a constrained autologistic model for estimating relatedness from DNA fingerprint data. The novel features are the use of constrained optimization and two new algorithms for maximum likelihood estimation. The first, the "phase I" algorithm, determines the support of the MLE in the closure of the exponential family (a distribution in the family conditioned on a face of the convex support of the natural statistic) when the MLE does not exist in the traditional sense (a point in the natural parameter space). The second, the maximum Monte Carlo likelihood algorithm, uses the Metropolis algorithm or the Gibbs sampler to obtain estimates when exact calculation of the likelihood is not possible. Separate papers on each algorithm accompany ...
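The Metropolis algorithm that the Monte Carlo likelihood method relies on is easy to sketch. This is a generic random-walk sampler over an unnormalized log-density, not the paper's autologistic model; the target and tuning values are assumptions:

```python
import math
import random

def metropolis(logdens, x0, steps, prop_sd=1.0, seed=0):
    """Random-walk Metropolis: propose y = x + Gaussian noise and accept
    with probability min(1, dens(y)/dens(x)), computed on the log scale so
    the normalizing constant of the density is never needed -- exactly the
    situation in Monte Carlo likelihood estimation."""
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(steps):
        y = x + rng.gauss(0.0, prop_sd)
        if math.log(rng.random()) < logdens(y) - logdens(x):
            x = y
        samples.append(x)
    return samples

# sample a standard normal from its unnormalized log-density -x^2/2
draws = metropolis(lambda x: -0.5 * x * x, 0.0, 20000)
```

The sample averages of the natural statistic from such draws stand in for the intractable expectations in the likelihood equations.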
Minimum-Time Control Characteristics of Flexible Structures
 J. Guid., Ctrl
, 1996
"... The timeoptimal control of flexible structures is considered. We formulate the general timeoptimal control problem for singleaxis flexible structures, and analytical results are given for the number of control switches for the onebendingmode case, with and without damping. When there is no damp ..."
Abstract

Cited by 5 (3 self)
 Add to MetaCart
The time-optimal control of flexible structures is considered. We formulate the general time-optimal control problem for single-axis flexible structures, and analytical results are given for the number of control switches for the one-bending-mode case, with and without damping. When there is no damping, it is shown that the time-optimal control generally has 3 switches and is an odd function of time about the second switch, except in certain isolated cases where there is only 1 switch. With damping, it is shown that there is always more than 1 switch. A numerical method is presented for solving the time-optimal control for general linear systems, and solutions are presented for flexible structures with several flexible modes, revealing interesting trends of the time-optimal control switch times as the maneuver sizes and the frequencies and damping ratios of the flexible modes are varied. In many applications, such as manipulators, disk-drive heads, or pointing systems, s...
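The rigid-body mode underlying these problems already shows the bang-bang character. Here is a toy simulation of a rest-to-rest maneuver for a double integrator, where the time-optimal control has a single switch (the abstract's point is that each flexible mode adds further switches; this sketch covers the rigid mode only):

```python
def bang_bang_maneuver(d, dt=1e-4):
    """Rest-to-rest bang-bang control of the rigid-body (double-integrator)
    mode, x'' = u with |u| <= 1: full torque to the midpoint, then full
    braking.  For this mode the single switch sits at t = sqrt(d) and the
    final time is 2*sqrt(d); the control is odd about the switch."""
    t_switch = d ** 0.5
    x = v = t = 0.0
    while t < 2.0 * t_switch:
        u = 1.0 if t < t_switch else -1.0
        v += u * dt           # semi-implicit Euler step
        x += v * dt
        t += dt
    return x, v

x_end, v_end = bang_bang_maneuver(1.0)   # should arrive near x=1 at rest
```

The symmetry of this solution about its switch is the rigid-mode analogue of the odd-function property the abstract proves for the undamped one-bending-mode case.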
Spectral mixture analysis: Linear and semiparametric full and iterated partial unmixing in multi- and hyperspectral image data
 Int. J. Comput. Vis
"... Abstract. As a supplement or an alternative to classification of hyperspectral image data linear and semiparametric mixture models are considered in order to obtain estimates of abundance of each class or endmember in pixels with mixed membership. Full unmixing based on both ordinary least squares ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
As a supplement or an alternative to classification of hyperspectral image data, linear and semiparametric mixture models are considered in order to obtain estimates of the abundance of each class or endmember in pixels with mixed membership. Full unmixing based on both ordinary least squares (OLS) and non-negative least squares (NNLS), and the partial unmixing methods orthogonal subspace projection (OSP), constrained energy minimization (CEM), and an eigenvalue formulation alternative are dealt with. The solution to the eigenvalue formulation alternative proves to be identical to the CEM solution. The matrix inversion involved in CEM can be avoided by working on (a subset of) orthogonally transformed data such as signal maximum autocorrelation factors (MAFs) or signal minimum noise fractions (MNFs). This will also cause the partial unmixing result to be independent of the noise isolated in the MAF/MNFs not included in the analysis. CEM and the eigenvalue formulation alternative enable us to perform partial unmixing when we know only one desired endmember spectrum and not the full set of endmember spectra. This is an advantage over full unmixing and OSP. The eigenvalue formulation of CEM inspires us to suggest an iterated CEM scheme. Also the target constrained interference minimized filter (TCIMF) is described. Spectral angle mapping (SAM) is briefly described. Finally, semiparametric unmixing (SPU) based on a combined linear and additive model with a nonlinear, smooth function to represent endmember spectra unaccounted for is introduced. An example with two generated bands shows that both full unmixing, the CEM, the ...
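The CEM filter at the center of this abstract has a well-known closed form: minimize the average filter output energy w'Rw subject to w'd = 1 for the one known endmember spectrum d, giving w = R⁻¹d / (d'R⁻¹d). A minimal sketch with synthetic data (the image, spectrum, and variable names are assumptions for illustration):

```python
import numpy as np

def cem_filter(pixels, target):
    """Constrained Energy Minimization: w = R^{-1} d / (d' R^{-1} d), where
    R is the sample correlation matrix of the pixel spectra and d is the one
    desired endmember spectrum.  By construction w'd = 1, so the filter
    output estimates the target abundance while suppressing everything else."""
    R = pixels.T @ pixels / pixels.shape[0]   # bands-by-bands correlation
    Rinv_d = np.linalg.solve(R, target)       # solve instead of inverting R
    return Rinv_d / (target @ Rinv_d)

rng = np.random.default_rng(0)
pixels = rng.normal(size=(200, 3))            # 200 synthetic pixels, 3 bands
d = np.array([1.0, 0.5, 0.0])                 # assumed target spectrum
w = cem_filter(pixels, d)                     # abundance image: pixels @ w
```

Note that only d is required, which is the advantage over full unmixing and OSP that the abstract highlights; working in MAF/MNF space replaces the solve with a diagonal scaling.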
A Variant of the Topkis-Veinott Method for Solving Inequality Constrained Optimization Problems
 J. Appl. Math. Optim
, 1997
"... . In this paper, we give a variant of the TopkisVeinott method for solving inequality constrained optimization problems. This method uses a linearly constrained positive semidefinite quadratic problem to generate a feasible descent direction at each iteration. Under mild assumptions, the algorithm ..."
Abstract

Cited by 4 (0 self)
 Add to MetaCart
In this paper, we give a variant of the Topkis-Veinott method for solving inequality constrained optimization problems. This method uses a linearly constrained positive semidefinite quadratic problem to generate a feasible descent direction at each iteration. Under mild assumptions, the algorithm is shown to be globally convergent in the sense that every accumulation point of the sequence generated by the algorithm is a Fritz John point of the problem. We introduce a Fritz John (FJ) function, an FJ1 strong second-order sufficiency condition (FJ1-SSOSC) and an FJ2 strong second-order sufficiency condition (FJ2-SSOSC), and then show, without any constraint qualification (CQ), that (i) if an FJ point z satisfies the FJ1-SSOSC, then there exists a neighborhood N(z) of z such that for any FJ point y ∈ N(z) \ {z}, f₀(y) ≠ f₀(z), where f₀ is the objective function of the problem; (ii) if an FJ point z satisfies the FJ2-SSOSC, then z is a strict local minimum of the problem. The resu...
Optimization Framework for the Synthesis of Chemical Reactor Networks
, 1998
"... The reactor network synthesis problem involves determining the type, size, and interconnections of the reactor units, optimal concentration and temperature profiles, and the heat load requirements of the process. A general framework is presented for the synthesis of optimal chemical reactor networks ..."
Abstract

Cited by 3 (1 self)
 Add to MetaCart
The reactor network synthesis problem involves determining the type, size, and interconnections of the reactor units, optimal concentration and temperature profiles, and the heat load requirements of the process. A general framework is presented for the synthesis of optimal chemical reactor networks via an optimization approach. The possible design alternatives are represented via a process superstructure which includes continuous stirred tank reactors and cross flow reactors along with mixers and splitters that connect the units. The superstructure is mathematically modeled using differential and algebraic constraints and the resulting problem is formulated as an optimal control problem. The solution methodology for addressing the optimal control formulation involves the application of a control parameterization approach where the selected control variables are discretized in terms of time invariant parameters. The dynamic system is decoupled from the optimization and solved as a func...
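The control parameterization step described here can be sketched independently of the reactor model: the control is replaced by one time-invariant parameter per segment, and the dynamic system is simulated for fixed parameters, decoupling simulation from the outer optimization. The ODE below is an assumed toy stand-in, not the paper's reactor network:

```python
def simulate(params, x0=1.0, T=1.0, dt=1e-3):
    """Control-parameterization sketch: u(t) is piecewise constant, with one
    parameter per equal segment of [0, T].  The state equation x' = -x + u
    (an assumed stand-in for the reactor dynamics) is integrated by explicit
    Euler for the given fixed parameters, so an outer optimizer can treat
    this whole function as a black-box objective evaluation over `params`."""
    n = len(params)
    x, t = x0, 0.0
    while t < T:
        u = params[min(int(t / T * n), n - 1)]   # active segment's parameter
        x += (-x + u) * dt
        t += dt
    return x

# sanity check: with u identically 0 the exact solution is x(T) = x0 * exp(-T)
x_T = simulate([0.0, 0.0])
```

An outer optimizer then adjusts `params` (and any superstructure variables) against objectives computed from such simulations, which is the decoupling the abstract describes.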