Results 1–10 of 14
A New Self-Dual Embedding Method for Convex Programming
 Journal of Global Optimization
, 2001
Abstract

Cited by 9 (2 self)
In this paper we introduce a conic optimization formulation for inequality-constrained convex programming, and propose a self-dual embedding model for solving the resulting conic optimization problem. The primal and dual cones in this formulation are characterized by the original constraint functions and their corresponding conjugate functions, respectively; hence they are completely symmetric. This allows for a standard primal-dual path-following approach for solving the embedded problem. Moreover, there are two immediate logarithmic barrier functions for the primal and dual cones. We show that these two logarithmic barrier functions are conjugate to each other. The explicit form of the conjugate functions is in fact not required by the algorithm. An advantage of the new approach is that no initial feasible solution is needed to start with. To guarantee the polynomiality of the path-following procedure, we may apply the self-concordant barrier theory of Nesterov and Nemirovski. For this purpose, as one application, we prove that the barrier functions constructed this way are indeed self-concordant when the original constraint functions are convex and quadratic. Keywords: Convex Programming, Convex Cones, Self-Dual Embedding, Self-Concordant Barrier Functions. Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Shatin, Hong Kong. Research supported by Hong Kong RGC Earmarked Grants CUHK4181/00E and CUHK4233/01E.
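The conjugate functions referred to in this abstract are Legendre–Fenchel conjugates; for reference, the standard textbook definitions (general notation, not taken from the paper itself) are:

```latex
% Legendre--Fenchel conjugate of a convex function f
f^{*}(y) \;=\; \sup_{x \in \operatorname{dom} f} \bigl\{\, y^{\top}x - f(x) \,\bigr\}
% and the dual of a cone K
K^{*} \;=\; \bigl\{\, y : y^{\top}x \ge 0 \ \text{for all } x \in K \,\bigr\}
```

The claim that the two logarithmic barriers are conjugate to each other is a statement about this transform applied to the primal barrier.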
A full-Newton step O(n) infeasible interior-point algorithm for linear optimization
, 2005
Abstract

Cited by 9 (6 self)
We present a primal-dual infeasible interior-point algorithm. As usual, the algorithm decreases the duality gap and the feasibility residuals at the same rate. Assuming that an optimal solution exists, it is shown that at most O(n) iterations suffice to reduce the duality gap and the residuals by the factor 1/e. This implies an O(n log(n/ε)) iteration bound for getting an ε-solution of the problem at hand, which coincides with the best known bound for infeasible interior-point algorithms. The algorithm constructs strictly feasible iterates for a sequence of perturbations of the given problem and its dual problem. A special feature of the algorithm is that it uses only full-Newton steps. Two types of full-Newton steps are used: so-called feasibility steps and usual (centering) steps. Starting at strictly feasible iterates of a perturbed pair, (very) close to its central path, feasibility steps serve to generate strictly feasible iterates for the next perturbed pair. By accomplishing a few centering steps for the new perturbed pair, we obtain strictly feasible iterates close enough to the central path of the new perturbed pair. The algorithm finds an optimal solution or detects infeasibility or unboundedness of the given problem.
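The O(n log(n/ε)) bound quoted above is simply geometric decrease at rate 1 − θ with θ on the order of 1/n per iteration; a small numeric illustration (the constant c and the counting loop are assumptions for the sketch, not the paper's algorithm):

```python
import math

def iterations_needed(n, eps, c=1.0):
    """Iterations to drive a quantity from 1 down to eps when each
    iteration multiplies it by (1 - theta) with theta = c / n."""
    theta = c / n
    gap, k = 1.0, 0
    while gap > eps:
        gap *= 1.0 - theta
        k += 1
    return k

# Roughly (n / c) * log(1 / eps) iterations, i.e. O(n log(1 / eps)):
print(iterations_needed(10, 1e-3))  # 66, close to 10 * ln(1000) ≈ 69
```

Replacing ε by ε/n in the per-iteration target is what turns the O(n) count for a fixed-factor reduction into the stated O(n log(n/ε)) bound.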
Density estimation by total variation regularization
 Advances in Statistical Modeling and Inference: Essays in Honor of Kjell A. Doksum
, 2007
Abstract

Cited by 8 (2 self)
Abstract. L1 penalties have proven to be an attractive regularization device for nonparametric regression, image reconstruction, and model selection. For function estimation, L1 penalties, interpreted as roughness of the candidate function measured by its total variation, are known to be capable of capturing sharp changes in the target function while still maintaining a general smoothing objective. We explore the use of penalties based on total variation of the estimated density, its square root, and its logarithm – and their derivatives – in the context of univariate and bivariate density estimation, and compare the results to some other density estimation methods, including L2 penalized likelihood methods. Our objective is to develop a unified approach to total variation penalized density estimation offering methods that are: capable of identifying qualitative features like sharp peaks, extendible to higher dimensions, and computationally tractable. Modern interior point methods for solving convex optimization problems play a critical role in achieving the final objective, as do piecewise linear finite element methods that facilitate the use of sparse linear algebra.
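As a concrete illustration of the kind of penalty being discussed, here is a minimal discretized version (the uniform grid, bin-index data representation, and the choice of penalizing the total variation of the log-density are assumptions for the sketch, not the authors' implementation):

```python
import numpy as np

def total_variation(values):
    # Discrete total variation: sum of absolute successive differences.
    return float(np.sum(np.abs(np.diff(values))))

def penalized_nll(f, sample_bins, lam):
    """Negative log-likelihood of the gridded density f evaluated at the
    bins the samples fall into, plus lam times the total variation of
    log f (one of the penalty variants discussed in the abstract)."""
    nll = -float(np.sum(np.log(f[sample_bins])))
    return nll + lam * total_variation(np.log(f))

# A flat density incurs no roughness penalty, so only the likelihood
# term remains; a spiky density pays for every jump in log f.
flat = np.full(8, 0.125)
print(penalized_nll(flat, np.array([0, 3, 5]), lam=1.5))
```

Minimizing such an objective over all valid densities is the convex program that the interior point and finite element machinery mentioned above is brought in to solve.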
A Primal-Dual Decomposition Algorithm for Multistage Stochastic Convex Programming
 Mathematical Programming 104, 153
, 2000
Abstract

Cited by 7 (3 self)
This paper presents a new and high-performance solution method for multistage stochastic convex programming. Stochastic programming is a quantitative tool developed in the field of optimization to cope with the problem of decision-making under uncertainty. Among others, stochastic programming has found many applications in finance, such as asset-liability and bond-portfolio management. However, many stochastic programming applications still remain computationally intractable because of their overwhelming dimensionality. In this paper we propose a new decomposition algorithm for multistage stochastic programming with a convex objective, based on the path-following interior point method combined with the homogeneous self-dual embedding technique. Our preliminary numerical experiments show that this approach is very promising for solving generic multistage stochastic programming problems, both in its numerical efficiency and in the flexibility it offers for testing and analyzing the model.
An interior point solver for smooth convex optimization with an application to environmental-energy-economic models
, 2000
A variational principle for computing nonequilibrium fluxes and potentials in genome-scale biochemical networks
 Journal of Theoretical Biology
Interior Point Methods for Cone-Linear Optimization: Solvability, Modeling and Engineering Applications
Abstract
Cone-linear optimization (CLO) problems play a crucial role in the theory, algorithms and applications of modern optimization. Large classes of CLO problems can be solved efficiently with software based on modern interior point methods. Moreover, CLO makes it possible to model many engineering optimization problems in a novel way. This paper provides a brief survey of the most important classes of CLO problems, together with useful information about their solvability and the available software. Finally, to illustrate the applicability of CLO, a novel approach to robust optimization is discussed. Keywords: Cone-linear optimization, semidefinite optimization, robust optimization, engineering design. A primal-dual pair of CLO problems can be given as

(P) min cᵀx  s.t.  Ax − b ∈ C₁,  x ∈ C₂
(D) max bᵀy  s.t.  c − Aᵀy ∈ C₂*,  y ...
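In the simplest instance of the primal-dual pair above, taking C₁ and C₂ to be nonnegative orthants reduces CLO to ordinary linear programming, where the two optimal values coincide. A small sketch (the problem data are invented for illustration):

```python
import numpy as np
from scipy.optimize import linprog

# LP as a special case of CLO: C1 = C2 = nonnegative orthant.
c = np.array([1.0, 2.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

# (P) min c^T x  s.t.  Ax - b >= 0,  x >= 0
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * 2)

# (D) max b^T y  s.t.  c - A^T y >= 0,  y >= 0
# (linprog minimizes, so we negate the objective)
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(0, None)])

# For this feasible, bounded LP the optimal values coincide.
print(primal.fun, -dual.fun)
```

For semidefinite or second-order cone choices of C₁ and C₂ the same pairing holds with the dual cones C₁*, C₂*, which is what the interior point software surveyed in the paper exploits.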
A Tutorial on Geometric Programming
Abstract
A geometric program (GP) is a type of mathematical optimization problem characterized by objective and constraint functions that have a special form. Recently developed solution methods can solve even largescale GPs extremely efficiently and reliably; at the same time a number of practical problems, particularly in circuit design, have been found to be equivalent to (or well approximated by) GPs. Putting these two together, we get effective solutions for the practical problems. The basic approach in GP modeling is to attempt to express a practical problem, such as an engineering analysis or design problem, in GP format. In the best case, this formulation is exact; when this isn’t possible, we settle for an approximate formulation. This tutorial paper collects together in one place the basic background material needed to do GP modeling. We start with the basic definitions and facts, and some methods used to transform problems into GP format. We show how to recognize functions and problems compatible with GP, and how to approximate functions or data in a form compatible with GP (when this is possible). We give some simple and representative examples, and also describe some common extensions of GP, along with methods for solving (or approximately solving) them.
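The standard trick for transforming a GP into convex form is the substitution x = eʸ, under which monomials become affine and posynomials become log-sum-exp functions. A tiny sketch on a one-variable posynomial (the example problem is invented for illustration):

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy GP: minimize the posynomial f(x) = x + 1/x over x > 0.
# Substituting x = e^y gives the convex problem
#   minimize  log(e^y + e^(-y)),
# which any smooth convex solver handles.
res = minimize_scalar(lambda y: np.log(np.exp(y) + np.exp(-y)))

x_opt = np.exp(res.x)       # back-transform to the original variable
f_opt = np.exp(res.fun)     # optimal posynomial value
print(x_opt, f_opt)         # optimum is at x = 1 with f = 2
```

The same substitution applied to every variable and constraint is what makes large-scale GPs solvable "extremely efficiently and reliably" by convex interior point methods, as the abstract notes.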