Results 1–10 of 10
A New Self-Dual Embedding Method for Convex Programming
 Journal of Global Optimization
, 2001
Abstract

Cited by 9 (2 self)
In this paper we introduce a conic optimization formulation for inequality-constrained convex programming, and propose a self-dual embedding model for solving the resulting conic optimization problem. The primal and dual cones in this formulation are characterized by the original constraint functions and their corresponding conjugate functions respectively. Hence they are completely symmetric. This allows for a standard primal-dual path-following approach for solving the embedded problem. Moreover, there are two immediate logarithmic barrier functions for the primal and dual cones. We show that these two logarithmic barrier functions are conjugate to each other. The explicit form of the conjugate functions is in fact not required to be known in the algorithm. An advantage of the new approach is that there is no need to assume an initial feasible solution to start with. To guarantee the polynomiality of the path-following procedure, we may apply the self-concordant barrier theory of Nesterov and Nemirovski. For this purpose, as one application, we prove that the barrier functions constructed this way are indeed self-concordant when the original constraint functions are convex and quadratic. Keywords: Convex Programming, Convex Cones, Self-Dual Embedding, Self-Concordant Barrier Functions. # Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Shatin, Hong Kong. Research supported by Hong Kong RGC Earmarked Grants CUHK4181/00E and CUHK4233/01E.
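The claim that the primal and dual log-barriers are conjugate to each other can be illustrated in one dimension. The sketch below is an assumption-laden toy, not the paper's construction: for the cone R_+, the barrier f(x) = -log(x) has Legendre conjugate f*(s) = sup_x { s*x + log(x) } = -1 - log(-s) for s < 0, i.e. again a log barrier up to an additive constant.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def barrier(x):
    # log barrier for the one-dimensional cone R_+
    return -np.log(x)

def conjugate_numeric(s):
    # numerically compute f*(s) = sup_{x>0} ( s*x - f(x) ) = sup ( s*x + log x )
    res = minimize_scalar(lambda x: -(s * x + np.log(x)),
                          bounds=(1e-9, 1e6), method="bounded")
    return -res.fun

for s in (-0.5, -1.0, -3.0):
    closed_form = -1.0 - np.log(-s)   # the conjugate is again a log barrier
    assert abs(conjugate_numeric(s) - closed_form) < 1e-6
    print(f"s={s}: numeric {conjugate_numeric(s):.6f} vs closed form {closed_form:.6f}")
```

The same self-duality (conjugate of a log barrier is a log barrier, up to constants) is what the paper exploits for general convex constraint cones, where the conjugate need not be known explicitly.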
A full-Newton step O(n) infeasible interior-point algorithm for linear optimization
, 2005
Abstract

Cited by 9 (6 self)
We present a primal-dual infeasible interior-point algorithm. As usual, the algorithm decreases the duality gap and the feasibility residuals at the same rate. Assuming that an optimal solution exists, it is shown that at most O(n) iterations suffice to reduce the duality gap and the residuals by the factor 1/e. This implies an O(n log(n/ε)) iteration bound for getting an ε-solution of the problem at hand, which coincides with the best known bound for infeasible interior-point algorithms. The algorithm constructs strictly feasible iterates for a sequence of perturbations of the given problem and its dual problem. A special feature of the algorithm is that it uses only full Newton steps. Two types of full Newton steps are used, so-called feasibility steps and usual (centering) steps. Starting at strictly feasible iterates of a perturbed pair, (very) close to its central path, feasibility steps serve to generate strictly feasible iterates for the next perturbed pair. By accomplishing a few centering steps for the new perturbed pair we obtain strictly feasible iterates close enough to the central path of the new perturbed pair. The algorithm finds an optimal solution or detects infeasibility or unboundedness of the given problem.
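A single full-Newton centering step of the kind this abstract relies on can be sketched on a toy LP. This is an illustrative reconstruction under assumptions, not the paper's algorithm; the data A, b, c and the starting point are made up.

```python
import numpy as np

# Toy LP:  min c^T x,  Ax = b,  x >= 0, with a strictly feasible primal-dual pair.
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0, 3.0])

x = np.array([1/3, 1/3, 1/3])   # primal feasible: Ax = b, x > 0
y = np.array([0.0])             # dual feasible: A^T y + s = c, s > 0
s = c - A.T @ y
mu = x @ s / len(x)             # current barrier parameter

def proximity(x, s, mu):
    # distance of x*s from the mu-center x_i*s_i = mu
    return np.linalg.norm(x * s / mu - 1.0)

# Newton (KKT) system for (dx, dy, ds):
#   A dx = 0,   A^T dy + ds = 0,   S dx + X ds = mu*e - X S e
m, n = A.shape
K = np.zeros((2 * n + m, 2 * n + m))
K[:m, :n] = A
K[m:m + n, n:n + m] = A.T
K[m:m + n, n + m:] = np.eye(n)
K[m + n:, :n] = np.diag(s)
K[m + n:, n + m:] = np.diag(x)
rhs = np.concatenate([np.zeros(m), np.zeros(n), mu - x * s])
d = np.linalg.solve(K, rhs)
dx, dy, ds = d[:n], d[n:n + m], d[n + m:]

x_new, s_new = x + dx, s + ds    # full step: step length 1, no line search
print("proximity before:", proximity(x, s, mu))
print("proximity after: ", proximity(x_new, s_new, mu))   # decreases
```

Starting close enough to the central path, the full step keeps the iterates strictly positive and shrinks the proximity measure, which is the property full-Newton-step IPMs build their iteration bounds on.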
Density estimation by total variation regularization
 Advances in Statistical Modeling and Inference: Essays in Honor of Kjell A. Doksum
, 2007
Abstract

Cited by 8 (2 self)
Abstract. L1 penalties have proven to be an attractive regularization device for nonparametric regression, image reconstruction, and model selection. For function estimation, L1 penalties, interpreted as roughness of the candidate function measured by their total variation, are known to be capable of capturing sharp changes in the target function while still maintaining a general smoothing objective. We explore the use of penalties based on total variation of the estimated density, its square root, and its logarithm – and their derivatives – in the context of univariate and bivariate density estimation, and compare the results to some other density estimation methods including L2 penalized likelihood methods. Our objective is to develop a unified approach to total variation penalized density estimation offering methods that are: capable of identifying qualitative features like sharp peaks, extendible to higher dimensions, and computationally tractable. Modern interior point methods for solving convex optimization problems play a critical role in achieving the final objective, as do piecewise linear finite element methods that facilitate the use of sparse linear algebra.
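A minimal sketch of the idea, under assumptions and not the authors' implementation: penalize the total variation of the log density on a uniform grid, with |d| smoothed as sqrt(d² + eps) so a generic quasi-Newton solver applies (the paper instead uses interior point methods on the exact nonsmooth problem).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# synthetic bimodal sample for illustration
data = np.concatenate([rng.normal(-2, 0.3, 200), rng.normal(1, 0.8, 300)])

grid = np.linspace(-4, 4, 41)
h = grid[1] - grid[0]
# cell index of each observation
bins = np.clip(np.searchsorted(grid, data) - 1, 0, len(grid) - 2)

lam, eps = 1.0, 1e-8   # TV weight and smoothing constant (tuning assumed)

def objective(g):
    # negative log-likelihood for log-density values g, normalization folded in
    lognorm = np.log(np.exp(g).sum() * h)
    nll = -(g[bins].sum() - len(data) * lognorm)
    # smoothed total variation of the log density
    tv = np.sqrt(np.diff(g) ** 2 + eps).sum()
    return nll + lam * tv

res = minimize(objective, np.zeros(len(grid) - 1), method="L-BFGS-B")
f = np.exp(res.x) / (np.exp(res.x).sum() * h)   # estimated density on the cells
print("density integrates to", f.sum() * h)     # 1 by construction
```

The TV term lets the estimate keep sharp peaks (the penalty is linear, not quadratic, in jumps), which is the qualitative behavior the abstract contrasts with L2 penalized likelihood.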
A Primal-Dual Decomposition Algorithm for Multistage Stochastic Convex Programming
 Mathematical Programming 104, 153
, 2000
Abstract

Cited by 7 (3 self)
This paper presents a new and high-performance solution method for multistage stochastic convex programming. Stochastic programming is a quantitative tool developed in the field of optimization to cope with the problem of decision-making under uncertainty. Among others, stochastic programming has found many applications in finance, such as asset-liability and bond-portfolio management. However, many stochastic programming applications still remain computationally intractable because of their overwhelming dimensionality. In this paper we propose a new decomposition algorithm for multistage stochastic programming with a convex objective, based on the path-following interior point method combined with the homogeneous self-dual embedding technique. Our preliminary numerical experiments show that this approach is very promising in many ways for solving generic multistage stochastic programming problems, including its superiority in terms of numerical efficiency, as well as its flexibility in testing and analyzing the model.
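For readers unfamiliar with the problem class, a hedged toy example (vastly smaller than the multistage problems the paper targets): a two-stage stochastic LP solved via its deterministic equivalent. All data here are invented; stage 1 chooses production x at unit cost, stage 2 pays a per-unit shortfall penalty in each demand scenario.

```python
import numpy as np
from scipy.optimize import linprog

c_prod, q_pen = 1.0, 4.0                 # production cost, shortfall penalty
demands = np.array([2.0, 5.0, 8.0])      # demand scenarios
probs = np.array([0.3, 0.5, 0.2])        # scenario probabilities
S = len(demands)

# Variables z = (x, y_1, ..., y_S), all >= 0 (linprog's default bounds).
# Objective: c*x + sum_s p_s * q * y_s  (expected total cost).
cost = np.concatenate([[c_prod], q_pen * probs])
# Shortfall constraints  y_s >= d_s - x, written as  -x - y_s <= -d_s.
A_ub = np.hstack([-np.ones((S, 1)), -np.eye(S)])
b_ub = -demands

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, method="highs")
print("first-stage decision x =", res.x[0])   # 5.0: order up to the median scenario
print("expected total cost    =", res.fun)    # 5 + 4*0.2*(8-5) = 7.4
```

The deterministic equivalent grows linearly in the number of scenarios here, but multistage trees grow exponentially, which is exactly the dimensionality problem the paper's decomposition attacks.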
Interior Point Methods for Cone-Linear Optimization: Solvability, Modeling and Engineering Applications
Abstract
Cone-linear optimization (CLO) problems play a crucial role in the theory, algorithms and applications of modern optimization. Large classes of CLO problems can be solved efficiently using software based on modern interior point methods. Moreover, CLO allows many engineering optimization problems to be modeled in a novel way. This paper provides a brief survey of the most important classes of CLO problems, together with useful information about their solvability and available software. Finally, to illustrate the applicability of CLO, a novel approach to robust optimization is discussed. Keywords: cone-linear optimization, semidefinite optimization, robust optimization, engineering design. Cone-Linear Optimization. Cone-linear optimization (CLO) problems play a crucial role in the theory, algorithms and applications of modern optimization. A primal-dual pair of CLO problems can be given as

(P)  min c^T x   s.t.  Ax − b ∈ C1,  x ∈ C2
(D)  max b^T y   s.t.  c − A^T y ∈ C2*,  y ...
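Specializing the primal-dual CLO pair above to the polyhedral cones C1 = R^m_+ and C2 = R^n_+ gives an ordinary LP pair, so strong duality (equal optimal values) can be checked with an off-the-shelf LP solver. The instance data below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0], [1.0, 2.0]])
b = np.array([3.0, 4.0])
c = np.array([2.0, 3.0])

# (P)  min c^T x  s.t.  Ax - b in C1 (i.e. Ax >= b),  x in C2 (i.e. x >= 0)
primal = linprog(c, A_ub=-A, b_ub=-b, method="highs")
# (D)  max b^T y  s.t.  c - A^T y in C2* (i.e. A^T y <= c),  y in C1* (i.e. y >= 0)
dual = linprog(-b, A_ub=A.T, b_ub=c, method="highs")

print("primal optimum:", primal.fun)    # 7.0, attained at x = (2, 1)
print("dual optimum:  ", -dual.fun)     # equals the primal optimum
```

For non-polyhedral cones (e.g. the semidefinite cone mentioned in the keywords) the same pairing holds with C* the dual cone, which is what makes the CLO framework uniform across problem classes.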
Source Data Perturbation in Statistical Disclosure Control
, 2000
Abstract
When tables of quantitative data are generated from a data file, the release of those tables should not reveal information concerning individual respondents. Disclosure of individual respondents in the microdata file can be prevented by applying disclosure control methods at the table level, but this may create inconsistencies across tables.
A variational principle for computing nonequilibrium fluxes and potentials in genomescale biochemical networks
 Journal of Theoretical Biology
Abstract
An Infeasible Interior-Point Algorithm with Full-Newton Step for Linear Optimization
Abstract
In this paper we present an infeasible interior-point algorithm for solving linear optimization problems. This algorithm is obtained by modifying the search direction in the algorithm of [8]. The analysis of our algorithm is in places much simpler than that of the algorithm in [8]. The iteration bound of the algorithm is as good as the best known iteration bound, O(n log(1/ε)), for IIPMs.
Minimize ∑
, 2009
Abstract
Abstract: In this paper we consider interior-point methods (IPMs) for the nonlinear, convex optimization problem in which the objective function is a weighted sum of reciprocals of variables subject to linear constraints (SOR). This problem appears often in various applications, such as statistical stratified sampling and entropy problems, to mention just a few examples. The SOR is solved using two IPMs. First, a homogeneous IPM is used to solve the Karush-Kuhn-Tucker conditions of the problem, which is a standard approach. Second, a homogeneous conic quadratic IPM is used to solve the SOR as a reformulated conic quadratic problem. As far as we are aware, this is a novel approach not yet considered in the literature. The two approaches are then numerically tested on a set of randomly generated problems using the optimization software MOSEK. They are compared by CPU time and the number of iterations, showing that the second approach works better for problems with higher dimensions. The main reason is that although the first approach increases the number of variables, the IPM exploits the structure of the conic quadratic reformulation much better than the structure of the original problem.
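A hedged sketch of the reformulation idea (the paper's actual MOSEK model is not shown): for x > 0, the epigraph constraint t >= 1/x is equivalent to x*t >= 1, which is representable as the second-order cone constraint ||(2, x − t)||₂ <= x + t, so a sum-of-reciprocals objective becomes a conic quadratic program. The check below verifies the equivalence numerically.

```python
import numpy as np

def in_soc(x, t):
    # second-order cone representation of  x*t >= 1  for x, t >= 0:
    # (x+t)^2 - (x-t)^2 = 4*x*t >= 4  <=>  ||(2, x-t)||_2 <= x + t
    return np.hypot(2.0, x - t) <= x + t

rng = np.random.default_rng(1)
for _ in range(1000):
    x = rng.uniform(0.1, 10.0)
    t = rng.uniform(0.0, 10.0)
    # the cone constraint holds exactly when t is above the reciprocal 1/x
    assert in_soc(x, t) == (t >= 1.0 / x)
print("epigraph of 1/x matches the second-order cone constraint")
```

Minimizing sum_i w_i * t_i over these cone constraints (plus the original linear constraints) is then a standard conic quadratic problem, which is the second approach the abstract reports as faster in higher dimensions.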