Optimal Control of Flow With Discontinuities
 Journal of Computational Physics
, 2003
Cited by 15 (1 self)
Optimal control of the 1D Riemann problem of the Euler equations, whose solution is characterized by discontinuities, is carried out by both nonsmooth and smooth optimization methods. By matching a desired flow to the numerical model over a given time window we effectively change the location of the discontinuities. The control parameters are chosen to be the initial values of the pressure and density fields. Existence of solutions for the optimal control problem is proven. A high-resolution model and a model with artificial viscosity, implementing two different numerical methods, are used to solve the forward model. The cost functional is the weighted difference between the numerical values and the observations for pressure, density and velocity. The observations are constructed from the analytical solution. We consider either observations distributed in time or observations calculated at the end of the assimilation window, and two different time horizons with two sets of observations. The gradient (respectively, a subgradient) of the cost functional, obtained from the adjoint of the discrete forward model, is employed by the smooth (respectively, nonsmooth) minimization algorithm. Discontinuity detection improves the performance of the minimizer for the model with artificial viscosity by selecting the points where the shock occurs; these points are then removed from the cost functional and its gradient. The numerical flow obtained with the optimal initial conditions from the nonsmooth minimization matches the observations very well. The algorithm for smooth minimization converges for the shorter time horizon but fails to perform satisfactorily for the longer one. (Preprint submitted to Elsevier Science, 26 March 2002.)
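The matching procedure this abstract describes (minimize a weighted misfit between model output and observations over the initial condition) can be sketched on a toy problem. Everything below is an illustrative stand-in: a scalar upwind advection model instead of the paper's Euler solver, and a finite-difference gradient instead of the adjoint of the discrete forward model.

```python
import numpy as np

def forward(u0, steps=40, c=1.0, dx=0.025, dt=0.0125):
    """Toy forward model: first-order upwind advection with periodic
    boundaries (a stand-in for the paper's Euler solver)."""
    u = u0.copy()
    for _ in range(steps):
        u = u - c * dt / dx * (u - np.roll(u, 1))
    return u

def cost(u0, obs, w=1.0):
    """Weighted squared misfit between the model output and observations."""
    r = forward(u0) - obs
    return 0.5 * w * float(r @ r)

def fd_gradient(u0, obs, eps=1e-6):
    """Finite-difference gradient of the cost; the paper instead uses the
    adjoint of the discrete forward model, which is far cheaper."""
    g = np.zeros_like(u0)
    base = cost(u0, obs)
    for i in range(u0.size):
        up = u0.copy()
        up[i] += eps
        g[i] = (cost(up, obs) - base) / eps
    return g

# "truth": a discontinuous (step) initial condition; observe the final state
x = np.linspace(0.0, 1.0, 40, endpoint=False)
u_true = np.where(x < 0.5, 1.0, 0.0)
obs = forward(u_true)

# steepest descent on the initial condition (the control variable)
u0 = np.full_like(u_true, 0.5)
for _ in range(100):
    u0 -= 0.5 * fd_gradient(u0, obs)
```

Because the toy model is linear, the misfit is a convex quadratic in the initial condition and plain steepest descent drives it down reliably; the nonsmooth behavior studied in the paper only appears with genuinely nonlinear shock dynamics.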
Kullback Proximal Algorithms for Maximum Likelihood Estimation
 IEEE Transactions on Information Theory
, 1998
Cited by 13 (4 self)
Accelerated algorithms for maximum likelihood image reconstruction are essential for emerging applications such as 3D tomography, dynamic tomographic imaging, and other high-dimensional inverse problems. In this paper, we introduce and analyze a class of fast and stable sequential optimization methods for computing maximum likelihood estimates and study their convergence properties. These methods are based on a proximal point algorithm implemented with the Kullback-Leibler (KL) divergence between posterior densities of the complete data as a proximal penalty function. When the proximal relaxation parameter is set to unity, one obtains the classical expectation maximization (EM) algorithm. For a decreasing sequence of relaxation parameters, relaxed versions of EM are obtained which can have much faster asymptotic convergence without sacrificing monotonicity. We present an implementation of the algorithm using Moré's trust-region update strategy. For illustration the method is applied to a...
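A minimal sketch of the unit-relaxation case, which the abstract identifies with classical EM: for Poisson data y ~ Poisson(Ax), the Kullback proximal iteration with relaxation parameter 1 reduces to the familiar multiplicative ML-EM update. The system matrix and data below are made up for illustration; relaxation parameters below 1, which give the accelerated variants, are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(30, 8))        # hypothetical system matrix
x_true = rng.uniform(0.5, 2.0, size=8)
y = rng.poisson(A @ x_true).astype(float)      # Poisson measurements

def ml_em(A, y, n_iter=500):
    """Classical ML-EM (multiplicative) update for Poisson likelihoods;
    each iteration is monotone in the likelihood."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                       # A^T 1 (sensitivity image)
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)   # data / current projection
        x = x * (A.T @ ratio) / sens           # multiplicative EM step
    return x

x_hat = ml_em(A, y)
```

The multiplicative form keeps the iterates nonnegative automatically, which is one reason EM-type methods are popular in tomography despite their slow asymptotic rate — the slowness the relaxed Kullback proximal variants are designed to fix.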
Fixed points in the family of convex representations of a maximal monotone operator
, 2003
On The Relation Between Bundle Methods For Maximal Monotone Inclusions And Hybrid Proximal Point Algorithms
 Inherently Parallel Algorithms in Feasibility and Optimization and their Applications, volume 8 of Studies in Computational Mathematics
, 2001
Cited by 4 (3 self)
In this paper we consider bundle methods in the light of inexact proximal point algorithms, namely the hybrid variant of [36]; see also [35,37,38]. The insight given by this new interpretation is twofold. First, it provides an alternative, technically simple convergence proof for serious steps of bundle methods, by invoking the corresponding results for hybrid proximal point methods. Second, relating the two methodologies supplies a computationally realistic implementation of hybrid proximal point methods for the most general case, i.e., when the operator may not have any special structure. Our paper is organized as follows. In Section 2 we outline the hybrid proximal point algorithm, together with its relevant convergence properties. Some useful theory from [9] and [8] on certain enlargements of maximal monotone operators is reviewed in Section 3. Finally, in Section 4 we establish the connection between bundle and hybrid proximal methods and give some new convergence results, including the linear rate of convergence for bundle methods.
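As a concrete, much simplified illustration of the bundle machinery discussed here — cutting-plane model, proximal term, serious and null steps — the following one-dimensional sketch minimizes a nonsmooth convex function. The test function, the grid-based subproblem solver, and all parameter values are assumptions for illustration; production bundle codes solve the prox subproblem as a quadratic program.

```python
import numpy as np

def f(x):                 # convex test function, nonsmooth at x = 1
    return abs(x - 1.0) + 0.5 * x * x

def subgrad(x):           # one valid subgradient of f at x
    return (1.0 if x > 1.0 else -1.0) + x

def bundle_min(x0=-3.0, t=1.0, m=0.1, tol=1e-6, max_iter=50):
    """Tiny 1D proximal bundle method; a fine grid stands in for the QP."""
    center = x0
    planes = [(x0, f(x0), subgrad(x0))]      # (point, value, subgradient)
    for _ in range(max_iter):
        grid = center + t * np.linspace(-10.0, 10.0, 2001)
        model = np.max([fy + g * (grid - y) for y, fy, g in planes], axis=0)
        j = int(np.argmin(model + (grid - center) ** 2 / (2.0 * t)))
        cand = grid[j]
        # predicted decrease of the model + prox subproblem
        pred = f(center) - (model[j] + (cand - center) ** 2 / (2.0 * t))
        if pred < tol:                       # model is tight: (near-)optimal
            break
        if f(center) - f(cand) >= m * pred:  # sufficient decrease: serious step
            center = cand
        planes.append((cand, f(cand), subgrad(cand)))  # always refine model
    return center
```

The serious-step test `f(center) - f(cand) >= m * pred` is exactly the descent criterion whose convergence the paper reinterprets through hybrid proximal point theory: serious steps are inexact proximal steps, null steps merely improve the cutting-plane model.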
On approximations with finite precision in bundle methods for nonsmooth optimization
, 2003
Cited by 4 (3 self)
We consider the proximal form of a bundle algorithm for minimizing a nonsmooth convex function, assuming that the function and subgradient values are evaluated approximately. We show how these approximations should be controlled in order to satisfy the desired optimality tolerance. For example, this is relevant in the context of Lagrangian relaxation, where obtaining exact information about the function and subgradient values requires solving a certain optimization problem exactly, which can be relatively costly (and, as we show, in any case unnecessary). We show that approximation with some finite precision is sufficient in this setting and give an explicit characterization of this precision. Alternatively, our result can be viewed as a stability analysis of standard proximal bundle methods, as it answers the following question: for a given approximation error, what kind of approximate solution can be obtained, and how does it depend on the magnitude of the perturbation? Key words: nonsmooth optimization, convex optimization, bundle methods, stability analysis, perturbations.
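The phenomenon analyzed here — an inexact oracle limits, but does not destroy, the achievable accuracy — can be shown with a deliberately crude sketch: a plain subgradient method rather than the paper's bundle method, with a fixed oracle bias `eps` standing in for the finite-precision perturbation.

```python
import numpy as np

def noisy_subgrad(x, eps):
    """Exact subgradient of f(x) = |x| plus a bounded oracle error."""
    return np.sign(x) + eps

def run(eps, iters=500):
    """Subgradient descent with diminishing steps; return the best
    objective value seen, which degrades gracefully with eps."""
    x = 1.0
    best = abs(x)
    for k in range(1, iters + 1):
        x -= (1.0 / k) * noisy_subgrad(x, eps)
        best = min(best, abs(x))
    return best
```

With an exact oracle the method reaches the minimum; with `eps > 0` it stalls in a neighborhood whose size scales with the error — the qualitative behavior the paper quantifies precisely for proximal bundle methods.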
Generalized Proximal Point Algorithms and Bundle Implementations
 CSPL, Dept. of EECS, University of Michigan, Ann Arbor
, 1998
Cited by 3 (2 self)
In this paper, we present a study of the proximal point algorithm using very general regularizations for minimizing possibly nondifferentiable and nonconvex locally Lipschitz functions. We deduce from the proximal point scheme simple and implementable bundle methods for the convex and nonconvex cases. The originality of our bundle method is that the bundle information incorporates the subgradients of both the objective and the regularization function. The resulting method opens up a broad class of regularizations which are not restricted to quadratic, convex or even differentiable functions. Keywords: mathematical programming, proximal point, bundle methods, nonsmooth regularization. This work was partially supported by the Department of Defense Research & Engineering (DDR&E) Multidisciplinary University Research Initiative (MURI) on "Reduced Signature Target Recognition", managed by the Air Force Office of Scientific Research (AFOSR) under grant F49620960028. Chretien and He...
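One classical instance of a nonquadratic regularization in a proximal scheme is the entropy (Kullback-Leibler) Bregman distance: for a linearized objective on the probability simplex the prox step has a closed multiplicative form. This is a generic mirror-descent-style illustration of the idea, not the paper's bundle method, and the problem data are invented.

```python
import numpy as np

def entropy_prox_step(x, grad, t):
    """Closed-form entropy-prox step for a linearized objective:
    argmin_z <grad, z> + (1/t) * sum_i z_i log(z_i / x_i) over the simplex."""
    z = x * np.exp(-t * grad)
    return z / z.sum()

c = np.array([0.3, 0.1, 0.7, 0.4])   # linear objective f(x) = <c, x>
x = np.full(4, 0.25)                 # start at the simplex center
for _ in range(100):
    x = entropy_prox_step(x, c, t=1.0)
```

The entropy term replaces the usual quadratic `||z - x||^2 / (2t)` and automatically keeps the iterate on the simplex — precisely the kind of nonquadratic, nondifferentiable-at-the-boundary regularization the abstract has in mind.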
Parallel Variable Distribution for Constrained Optimization*
, 2000
Cited by 3 (0 self)
In the parallel variable distribution (PVD) framework for solving optimization problems, the variables are distributed among parallel processors, with each processor having primary responsibility for updating its block of variables while allowing the remaining "secondary" variables to change in a restricted fashion along some easily computable directions. For constrained nonlinear programs, convergence theory for PVD algorithms was previously available only for the case of a convex feasible set. Additionally, one either had to assume that the constraints are block-separable, or use exact projected gradient directions for the change of secondary variables. In this paper, we propose two new variants of PVD for the constrained case. Without assuming convexity of the constraints, but assuming block-separable structure, we show that PVD subproblems can be solved inexactly by solving their quadratic programming approximations. This extends PVD to nonconvex (separable) feasible sets and provides a constructive, practical way of solving the parallel subproblems. For inseparable constraints, but assuming convexity, we develop a PVD method based on suitable approximate projected gradient directions. The approximation criterion is based on a certain error bound result and is readily implementable. Using such approximate directions may be especially useful when the projection operation is computationally expensive.
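A stripped-down sketch of the PVD idea on an unconstrained least-squares toy problem: two blocks, exact block solves, no secondary-variable directions, and synchronization by keeping the best candidate. The data and block partition are invented for illustration; the paper's contribution concerns the much harder constrained case.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((20, 6))
b = rng.standard_normal(20)
f = lambda x: float(np.sum((A @ x - b) ** 2))

blocks = [np.arange(0, 3), np.arange(3, 6)]
x = np.zeros(6)
for _ in range(50):
    candidates = []
    for blk in blocks:                       # these solves could run in parallel
        xc = x.copy()
        others = np.setdiff1d(np.arange(6), blk)
        rhs = b - A[:, others] @ xc[others]  # freeze the other block
        xc[blk] = np.linalg.lstsq(A[:, blk], rhs, rcond=None)[0]
        candidates.append(xc)
    x = min(candidates, key=f)               # synchronization: keep the best
```

Each candidate minimizes `f` exactly over one block, so the synchronized objective never increases; the full PVD scheme additionally lets each processor move the secondary variables along cheap directions, which is what the paper's convergence theory has to control.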
Stabilized model and an efficient solution method for the yearly optimal power management
 Optimization Methods and Software
Cited by 1 (1 self)
We propose a stabilized model for the electricity generation management problem on a yearly scale. We also introduce an original and efficient solution method in a particular case. Our model is compared to other management methods and offers the best average cost while preserving a reasonable standard deviation of the cost over a set of testing scenarios.
de systèmes complexes
Two approximations of the Hessian matrix as limited-memory operators are built from the limited-memory BFGS inverse Hessian approximation provided by the minimization code, in view of the specification of the inverse analysis/forecast error covariance matrix in variational data assimilation. Some numerical experiments and theoretical considerations lead us to reject the limited-memory DFP Hessian approximation and to retain the BFGS one for the applications foreseen. Conditioning issues are explored, and a preconditioning strategy via a change of control variable is proposed, based on a suitable Cholesky factorization of the limited-memory inverse Hessian matrix. This factorization is implemented as the composition of linear operators. The memory requirements and the number of floating-point operations required by the method are given and confirmed by numerical experiments. The method is found to have a strong potential for variational data assimilation systems using high-resolution ocean or atmosphere general circulation models.
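The limited-memory BFGS inverse Hessian this abstract builds on can be applied as an operator with the standard two-loop recursion over stored (s, y) pairs. The quadratic test problem below is illustrative only; in the paper the pairs come from the variational data assimilation minimization itself.

```python
import numpy as np

def lbfgs_apply(q, pairs):
    """Return H q, where H is the L-BFGS inverse Hessian approximation
    defined by pairs = [(s_0, y_0), ...] (oldest first); two-loop recursion."""
    q = q.copy()
    alphas = []
    for s, y in reversed(pairs):              # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append((a, rho, s, y))
        q -= a * y
    s_last, y_last = pairs[-1]
    q *= (s_last @ y_last) / (y_last @ y_last)  # diagonal initial scaling H0
    for a, rho, s, y in reversed(alphas):     # oldest pair first
        b = rho * (y @ q)
        q += (a - b) * s
    return q

# collect (s, y) pairs from gradient steps on f(x) = 0.5 x^T B x
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
B = M @ M.T + 5 * np.eye(5)                   # SPD Hessian
x, pairs = rng.standard_normal(5), []
for _ in range(8):
    g = B @ x
    x_new = x - 0.05 * g
    pairs.append((x_new - x, B @ x_new - g))  # s and y = B s (exact here)
    x = x_new
pairs = pairs[-5:]                            # keep a limited memory

v = rng.standard_normal(5)
Hv = lbfgs_apply(v, pairs)                    # acts approximately like B^{-1} v
```

By construction the BFGS update enforces the secant condition on the most recent pair, i.e. `H y_last = s_last` exactly — the property that makes the operator usable as a covariance (or preconditioner) surrogate as described above.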
unknown title
, 2001
Refinement and coarsening indicators for adaptive parameterization: application to the estimation of hydraulic transmissivities