Results 1-10 of 13
An Interior-Point Algorithm for Nonconvex Nonlinear Programming
Computational Optimization and Applications, 1997
Cited by 199 (14 self)
Abstract: The paper describes an interior-point algorithm for nonconvex nonlinear programming that is a direct extension of interior-point methods for linear and quadratic programming. The major modifications are a merit function and an altered search direction, which together ensure that a descent direction for the merit function is obtained. Preliminary numerical testing indicates that the method is robust. Further, numerical comparisons with MINOS and LANCELOT show that the method is efficient and has the promise of greatly reducing solution times on at least some classes of models.
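The barrier idea underlying such interior-point methods can be illustrated with a minimal sketch (this is a generic log-barrier scheme on a one-dimensional toy problem, not the paper's algorithm; the step size, shrink factor, and tolerances are arbitrary choices for illustration):

```python
import numpy as np

# Minimal log-barrier sketch: minimize f(x) = (x - 2)^2 subject to x >= 0
# by repeatedly minimizing the barrier subproblem f(x) - mu * log(x)
# while driving the barrier parameter mu toward zero.
def barrier_solve(f_grad, x0, mu=1.0, shrink=0.1, tol=1e-8, inner_steps=200):
    x = x0
    while mu > tol:
        for _ in range(inner_steps):
            g = f_grad(x) - mu / x        # gradient of the barrier subproblem
            x_new = x - 0.01 * g          # crude gradient-descent inner solver
            if x_new <= 0:                # stay strictly in the interior
                x_new = x / 2
            x = x_new
        mu *= shrink                      # tighten the barrier
    return x

# f(x) = (x - 2)^2, so f'(x) = 2 (x - 2); the constraint x >= 0 is inactive
# at the solution, and the iterates approach x = 2 from the interior.
x_star = barrier_solve(lambda x: 2.0 * (x - 2.0), x0=0.5)
```

A practical interior-point method replaces the inner gradient descent with Newton steps and, as the abstract notes, needs a merit function to globalize the search when the problem is nonconvex.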
Interior methods for nonlinear optimization
SIAM Review, 2002
Cited by 127 (6 self)
Abstract: Interior methods are an omnipresent, conspicuous feature of the constrained optimization landscape today, but it was not always so. Primarily in the form of barrier methods, interior-point techniques were popular during the 1960s for solving nonlinearly constrained problems. However, their use for linear programming was not even contemplated because of the total dominance of the simplex method. Vague but continuing anxiety about barrier methods eventually led to their abandonment in favor of newly emerging, apparently more efficient alternatives such as augmented Lagrangian and sequential quadratic programming methods. By the early 1980s, barrier methods were almost without exception regarded as a closed chapter in the history of optimization. This picture changed dramatically with Karmarkar's widely publicized announcement in 1984 of a fast polynomial-time interior method for linear programming; in 1985, a formal connection was established between his method and classical barrier methods. Since then, interior methods have advanced so far, so fast, that their influence has transformed both the theory and practice of constrained optimization. This article provides a condensed, selective look at classical material and recent research on interior methods for nonlinearly constrained optimization.
Low-rank mechanism: optimizing batch queries under differential privacy
PVLDB, 2012
Cited by 15 (3 self)
Abstract: Differential privacy is a promising privacy-preserving paradigm for statistical query processing over sensitive data. It works by injecting random noise into each query result, such that it is provably hard for an adversary to infer the presence or absence of any individual record from the published noisy results. The main objective in differentially private query processing is to maximize the accuracy of the query results while satisfying the privacy guarantees. Previous work, notably ...
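The noise-injection step the abstract describes is, in its standard form, the Laplace mechanism (shown below as a generic sketch; the paper's low-rank contribution is about optimizing batches of queries, which this toy example does not cover):

```python
import numpy as np

# Standard Laplace mechanism: answer a numeric query with noise whose scale
# is the query's L1-sensitivity divided by the privacy budget epsilon.
def laplace_mechanism(true_answer, sensitivity, epsilon, rng):
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

rng = np.random.default_rng(0)
data = [1, 0, 1, 1, 0, 1]        # hypothetical binary records
# A count query has sensitivity 1: adding or removing one record
# changes the true count by at most 1.
noisy_count = laplace_mechanism(sum(data), sensitivity=1.0, epsilon=0.5, rng=rng)
```

Smaller epsilon means stronger privacy but larger noise, which is exactly the accuracy-versus-privacy tension the abstract refers to.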
A modified barrier-augmented Lagrangian method for constrained minimization
Comput. Optim. Appl., 1999
Cited by 9 (3 self)
Abstract: We present and analyze an interior-exterior augmented Lagrangian method for solving constrained optimization problems with both inequality and equality constraints. This method, the modified barrier-augmented Lagrangian (MBAL) method, is a combination of the modified barrier and the augmented Lagrangian methods. It is based on the MBAL function, which treats inequality constraints with a modified barrier term and equalities with an augmented Lagrangian term. The MBAL method alternately minimizes the MBAL function in the primal space and updates the Lagrange multipliers. For a large enough fixed barrier-penalty parameter, the MBAL method is shown to converge Q-linearly under the standard second-order optimality conditions. Q-superlinear convergence can be achieved by increasing the barrier-penalty parameter after each Lagrange multiplier update. We consider a dual problem that is based on the MBAL function. We prove a basic duality theorem for it and show that it has several important properties that fail to hold for the dual based on the classical Lagrangian.
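An MBAL-type merit function can be sketched as follows. This is one common form of the combination the abstract describes (a Polyak-style modified barrier for inequalities plus an augmented Lagrangian term for equalities); the paper's exact scaling of the penalty term may differ, and the toy problem is invented for illustration:

```python
import numpy as np

# Sketch of an MBAL-type function for the problem
#   min f(x)  s.t.  g_i(x) >= 0,  h_j(x) = 0.
# Inequalities get a modified-barrier term (requires g_i(x) > -mu),
# equalities an augmented-Lagrangian term with penalty 1 / (2 mu).
def mbal(f, g, h, x, lam, y, mu):
    barrier = -mu * np.sum(lam * np.log(1.0 + g(x) / mu))
    aug_lag = np.dot(y, h(x)) + np.sum(h(x) ** 2) / (2.0 * mu)
    return f(x) + barrier + aug_lag

# Toy problem: min x0^2 + x1^2  s.t.  x0 - 1 >= 0  and  x0 + x1 - 2 = 0.
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: np.array([x[0] - 1.0])
h = lambda x: np.array([x[0] + x[1] - 2.0])

# At the feasible point x = (1, 1) both extra terms vanish, so the
# MBAL value reduces to f(x) = 2.
val = mbal(f, g, h, np.array([1.0, 1.0]),
           lam=np.array([1.0]), y=np.array([0.0]), mu=0.5)
```

Unlike the classical log barrier, the modified barrier term is finite on the boundary g_i(x) = 0, which is what lets the multiplier updates drive convergence without sending the barrier parameter to zero.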
MEUSE: an origin-destination matrix estimator that exploits structure
1994
Cited by 3 (0 self)
Abstract: This paper proposes an improvement of existing methods of origin-destination matrix estimation through explicit use of data describing the structure of the matrix. Such data can be obtained, for example, from parking surveys. The new model is applied to both illustrative and real examples, and the results are discussed. Comparisons with the results obtained with SATURN/ME2 and the generalized least-squares method are also presented.
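The generalized least-squares baseline mentioned in the abstract can be sketched in a few lines (this is the generic GLS estimator, not MEUSE itself; the data below are made up for illustration):

```python
import numpy as np

# Generalized least squares: beta = (X' W X)^{-1} X' W y with W = Omega^{-1},
# where Omega is the covariance of the observation noise.
def gls(X, y, Omega):
    W = np.linalg.inv(Omega)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Toy regression: three observations lying exactly on y = 1 + x,
# with heteroscedastic (unequal-variance) noise assumed on them.
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
y = np.array([1.0, 2.0, 3.0])
Omega = np.diag([1.0, 4.0, 1.0])   # hypothetical noise covariance
beta = gls(X, y, Omega)            # recovers intercept 1 and slope 1
```

In OD-matrix estimation, the observations are typically link counts and a prior matrix, and the weighting reflects how much each data source is trusted.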
Nonlinear output constraints handling for production optimization of oil reservoirs
Computational Geosciences, 2012
Cited by 1 (0 self)
Abstract: Adjoint-based gradient computations for oil reservoirs have been increasingly used in closed-loop reservoir management optimization. Most constraints in these optimizations are on the control input and are either bound constraints or equality constraints. This paper addresses output constraints on both state and control variables. We propose an (interior) barrier-function approach, in which the output constraints are added as a barrier term to the objective function. Since we assume that feasible initial control inputs always exist, the method maintains feasibility of the constraints. Three case examples are presented. The results show that the proposed method preserves the computational efficiency of the adjoint methods.
A Lagrangian-barrier function for adjoint state-constraints optimization of oil reservoirs water flooding
In: Proceedings of the 49th IEEE Conference, 2010
Cited by 1 (1 self)
Abstract: In the secondary phase of oil recovery, water flooding is the most common way to sweep remaining oil in a reservoir. The process can be regarded as a nonlinear optimization problem. This paper focuses on how to handle state constraints in an adjoint optimization framework for such systems. The state constraints are cast as nonlinear inequality constraints. In the presence of state constraints, adjoint-based gradient optimization methods can lose their efficiency. Moreover, existing optimization packages require the user to supply Jacobians of the inequality constraints. We propose a Lagrangian-barrier-function-based method that adds the state constraints as a term to the objective function. Furthermore, we present a numerical case demonstrating the feasibility and efficiency of the proposed method.
Design Optimization utilizing Dynamic Substructuring and Artificial Intelligence Techniques
Abstract: In mechanical and structural systems, resonance may cause large strains and stresses which can lead to failure of the system. Since it is often not possible to change the frequency content of the external load excitation, the phenomenon can only be avoided by updating the design of the structure. In this paper, a design optimization strategy based on the integration of the Component Mode Synthesis (CMS) method with numerical optimization techniques is presented. For reasons of numerical efficiency, a Finite Element (FE) model is represented by a surrogate model which is a function of the design parameters. The surrogate model is obtained in four steps: first, the reduced FE models of the components are derived using the CMS method; then the components are assembled to obtain the entire structural response; afterwards, the dynamic behavior is determined for a number of design parameter settings; finally, the surrogate model representing the dynamic behavior is obtained. In this research, the surrogate model is determined using backpropagation neural networks and is then optimized using genetic algorithms and the sequential quadratic programming method.
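The genetic-algorithm stage of such a surrogate-based optimization can be sketched as follows (this is a deliberately minimal GA on an invented stand-in surrogate; the paper's neural-network surrogate, GA settings, and SQP refinement stage are not reproduced here):

```python
import numpy as np

# Minimal elitist genetic algorithm minimizing a cheap surrogate model
# over normalized design parameters in [0, 1]^2.
rng = np.random.default_rng(1)
surrogate = lambda p: (p[0] - 0.3) ** 2 + (p[1] - 0.7) ** 2  # stand-in model

pop = rng.uniform(0.0, 1.0, size=(40, 2))         # initial design population
for _ in range(60):
    fitness = np.array([surrogate(p) for p in pop])
    parents = pop[np.argsort(fitness)[:20]]        # selection: keep best half
    children = parents + rng.normal(0.0, 0.05, parents.shape)  # mutation
    pop = np.vstack([parents, np.clip(children, 0.0, 1.0)])
best = pop[np.argmin([surrogate(p) for p in pop])]
```

Because the surrogate is cheap to evaluate, the GA can afford thousands of evaluations to locate a promising basin; a gradient-based SQP run started from the GA's best point then refines it, which is the division of labor the abstract describes.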
Numerical experiments with the LANCELOT package (Release A) for large-scale nonlinear optimization
1992
Lagrangian
1994
Abstract: We show how to exploit the structure inherent in the linear algebra for constrained nonlinear optimization problems when inequality constraints have been converted to equations by adding slack variables and the problem is solved using an augmented Lagrangian method.
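The slack-variable conversion described above can be sketched generically (this illustrates the standard reformulation, not the paper's structured linear algebra; the toy objective and constraint are invented):

```python
import numpy as np

# An inequality c(x) >= 0 becomes the equality c(x) - s = 0 with slack s >= 0.
# The augmented Lagrangian then penalizes the equality residual r = c(x) - s
# with a multiplier term and a quadratic penalty.
def aug_lagrangian(f, c, x, s, y, rho):
    r = c(x) - s
    return f(x) + np.dot(y, r) + 0.5 * rho * np.dot(r, r)

f = lambda x: x[0] ** 2               # toy objective
c = lambda x: np.array([x[0] - 1.0])  # original inequality: x0 - 1 >= 0

# At x0 = 2 with slack s = 1 the residual is zero, so the augmented
# Lagrangian equals the plain objective value f(2) = 4.
val = aug_lagrangian(f, c, np.array([2.0]), s=np.array([1.0]),
                     y=np.array([0.0]), rho=10.0)
```

The structure the abstract alludes to comes from the slack columns: each slack enters exactly one constraint with coefficient -1, so the KKT systems contain large, trivially invertible blocks that a solver can eliminate cheaply.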