Results 1–10 of 51
Complexity of Bézout’s theorem VI: Geodesics in the condition metric
 Found. Comput. Math.
Cited by 22 (6 self)
Abstract. We introduce a new complexity measure of a path of (problems, solutions) pairs in terms of the length of the path in the condition metric, which we define in the article. The measure gives an upper bound for the number of Newton steps sufficient to approximate the path discretely, starting from one end, and thus produce an approximate zero for the endpoint. This motivates the study of short paths, or geodesics, in the condition metric.
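As a minimal sketch of the discrete path-following idea the abstract describes (not the paper's algorithm or its condition metric), the toy homotopy p_t(x) = x² − (1 + t) below is tracked from t = 0 to t = 1 with one Newton correction per parameter step; a short, well-conditioned path needs only a modest number of steps to reach an approximate zero of the endpoint.

```python
# Hedged toy example: Newton path-following along the homotopy
# p_t(x) = x^2 - (1 + t), whose root moves from 1 to sqrt(2) as t goes 0 -> 1.
import math

def newton_step(x, t):
    # One Newton step for p_t(x) = x^2 - (1 + t); derivative is 2x.
    return x - (x * x - (1.0 + t)) / (2.0 * x)

def follow_path(steps):
    x = 1.0                       # exact root of the starting problem (t = 0)
    for k in range(1, steps + 1):
        t = k / steps             # advance the parameter along the path...
        x = newton_step(x, t)     # ...and correct with a single Newton step
    return x

approx = follow_path(20)
print(abs(approx - math.sqrt(2.0)))   # small residual after 20 coarse steps
```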
Advances in convex optimization: Conic programming
 In Proceedings of International Congress of Mathematicians
, 2007
Cited by 22 (0 self)
Abstract. During the last two decades, major developments in convex optimization have focused on conic programming, primarily on linear, conic quadratic, and semidefinite optimization. Conic programming allows one to reveal the rich structure usually possessed by a convex program and to exploit this structure in order to process the program efficiently. In this paper, we overview the major components of the resulting theory (conic duality and primal-dual interior-point polynomial-time algorithms), outline the extremely rich “expressive abilities” of conic quadratic and semidefinite programming, and discuss a number of instructive applications.
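To illustrate the conic duality mentioned in the abstract in its simplest instance (the nonnegative orthant, i.e. linear programming — a hand-picked toy example, not from the paper): for the primal min c·x s.t. Ax = b, x ≥ 0 and its dual max b·y s.t. Aᵀy ≤ c, any feasible primal/dual pair satisfies weak duality c·x ≥ b·y.

```python
# Hedged illustration: weak conic duality over the simplest cone (x >= 0).
A = [[1.0, 1.0]]          # one equality constraint: x1 + x2 = 1
b = [1.0]
c = [1.0, 2.0]

x = [0.5, 0.5]            # primal feasible: x >= 0 and Ax = b
y = [1.0]                 # dual feasible: A^T y = [1, 1] <= c componentwise

primal = sum(ci * xi for ci, xi in zip(c, x))   # c.x = 1.5
dual = sum(bi * yi for bi, yi in zip(b, y))     # b.y = 1.0
print(primal - dual)       # nonnegative duality gap (here 0.5)
```

The optimal primal point x = [1, 0] closes the gap, matching the dual value 1.0 (strong duality for this feasible LP).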
Proximal point methods for quasiconvex and convex functions with Bregman distances on Hadamard manifolds
 J. Convex Anal.
, 2009
Cited by 21 (3 self)
This paper generalizes the proximal point method using Bregman distances to solve convex and quasiconvex optimization problems on noncompact Hadamard manifolds. We prove that the sequence generated by our method is well defined and converges to an optimal solution of the problem. We also obtain the same convergence properties for the classical proximal method applied to a class of quasiconvex problems. Finally, we give some examples of Bregman distances in non-Euclidean spaces.
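For readers unfamiliar with the classical Euclidean method the paper generalizes, here is a minimal sketch (assuming the squared-distance regularizer, the simplest Bregman distance) for the convex function f(z) = z², whose proximal step x_{k+1} = argmin_z f(z) + (1/(2λ))(z − x_k)² has the closed form x_k/(1 + 2λ):

```python
# Hedged sketch: classical proximal point iteration in R with f(z) = z**2.
def prox_point(x0, lam, iters):
    x = x0
    for _ in range(iters):
        x = x / (1.0 + 2.0 * lam)   # exact proximal step for f(z) = z**2
    return x

x_final = prox_point(5.0, 0.5, 50)
print(x_final)    # iterates shrink geometrically toward the minimizer 0
```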
On the curvature of the central path of linear programming theory
 Foundations of Computational Mathematics
, 2003
Cited by 19 (3 self)
Abstract. We prove a linear bound on the average total curvature of the central path of linear programming theory in terms of the number of variables.

1 Introduction. In this paper we study the curvature of the central path of linear programming theory. We establish that for a linear programming problem defined on a compact polytope contained in R^n, the total curvature of the central path is less than or …
Riemannian metric and geometric mean for positive semidefinite matrices of fixed rank
 SIAM Journal on Matrix Analysis and Applications
Cited by 18 (9 self)
Abstract. This paper introduces a new metric and mean on the set of positive semidefinite matrices of fixed rank. The proposed metric is derived from a well-chosen Riemannian quotient geometry that generalizes the reductive geometry of the positive cone and the associated natural metric. The resulting Riemannian space has strong geometrical properties: it is geodesically complete, and the metric is invariant with respect to all transformations that preserve angles (orthogonal transformations, scalings, and pseudoinversion). A meaningful approximation of the associated Riemannian distance is proposed that can be computed efficiently via a simple algorithm based on the SVD. The induced mean preserves the rank, possesses the most desirable characteristics of a geometric mean, and is easy to compute.
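For context on the natural metric of the positive cone that this geometry generalizes (a sketch of the full-rank case only, not the paper's fixed-rank metric): the affine-invariant Riemannian distance on the positive definite cone is d(A, B) = ‖log(A^{-1/2} B A^{-1/2})‖_F, which for commuting (here: diagonal) SPD matrices reduces to the Euclidean distance between the logs of the eigenvalues.

```python
# Hedged sketch: affine-invariant distance between commuting SPD matrices,
# represented by their (positive) eigenvalue lists.
import math

def spd_distance_diag(a, b):
    # a, b: eigenvalues of two commuting SPD matrices of the same size
    return math.sqrt(sum((math.log(ai) - math.log(bi)) ** 2
                         for ai, bi in zip(a, b)))

A = [1.0, 4.0]            # diag(1, 4)
B = [2.0, 8.0]            # diag(2, 8) = 2 * A
print(spd_distance_diag(A, B))    # sqrt(2) * log(2): scaling moves A a fixed distance
```

Note the scale invariance: d(A, cA) depends only on the scalar c and the dimension, one of the angle-preserving invariances the abstract lists.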
A fullNewton step O(n) infeasible interiorpoint algorithm for linear optimization
, 2005
Cited by 15 (7 self)
We present a primal-dual infeasible interior-point algorithm. As usual, the algorithm decreases the duality gap and the feasibility residuals at the same rate. Assuming that an optimal solution exists, it is shown that at most O(n) iterations suffice to reduce the duality gap and the residuals by the factor 1/e. This implies an O(n log(n/ε)) iteration bound for obtaining an ε-solution of the problem at hand, which coincides with the best known bound for infeasible interior-point algorithms. The algorithm constructs strictly feasible iterates for a sequence of perturbations of the given problem and its dual problem. A special feature of the algorithm is that it uses only full-Newton steps. Two types of full-Newton steps are used: so-called feasibility steps and the usual (centering) steps. Starting at strictly feasible iterates of a perturbed pair, (very) close to its central path, feasibility steps serve to generate strictly feasible iterates for the next perturbed pair. By performing a few centering steps for the new perturbed pair, we obtain strictly feasible iterates close enough to the central path of the new perturbed pair. The algorithm finds an optimal solution or detects infeasibility or unboundedness of the given problem.
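A back-of-envelope check of how the two stated bounds compose (assuming, as is standard for such analyses, an initial duality gap of order n and a target gap ε): if each round of O(n) full-Newton steps shrinks the gap by the factor 1/e, then about log(n/ε) rounds reach ε, giving the quoted O(n log(n/ε)) total.

```python
# Hedged arithmetic sketch: counting factor-1/e reduction rounds.
import math

def rounds_needed(n, eps):
    gap = float(n)            # assumed initial duality gap of order n
    rounds = 0
    while gap > eps:
        gap /= math.e         # one round of O(n) full-Newton steps
        rounds += 1
    return rounds

n, eps = 100, 1e-6
print(rounds_needed(n, eps), math.ceil(math.log(n / eps)))   # both equal 19
```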
Consensus in noncommutative spaces
Cited by 15 (6 self)
Abstract — Convergence analysis of consensus algorithms is revisited in the light of the Hilbert distance. The Lyapunov function used in the early analysis by Tsitsiklis is shown to be the Hilbert distance to consensus in log coordinates. Birkhoff's theorem, which proves contraction of the Hilbert metric for any positive homogeneous monotone map, provides an early yet general convergence result for consensus algorithms. Because Birkhoff's theorem holds in arbitrary cones, we extend consensus algorithms to the cone of positive definite matrices. The proposed generalization finds applications in the convergence analysis of quantum stochastic maps, which generalize stochastic maps to noncommutative probability spaces.
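The contraction phenomenon can be seen numerically in the commutative case (a hand-picked example, assuming the positive orthant as the cone): the Hilbert projective metric d(x, y) = log max_i(x_i/y_i) − log min_i(x_i/y_i) strictly shrinks under any strictly positive linear map, such as a row-stochastic averaging matrix, which is what drives iterates toward a common ray (consensus).

```python
# Hedged sketch: Birkhoff contraction of the Hilbert metric on the
# positive orthant under a strictly positive row-stochastic map.
import math

def hilbert_metric(x, y):
    ratios = [xi / yi for xi, yi in zip(x, y)]
    return math.log(max(ratios)) - math.log(min(ratios))

def apply_map(W, x):
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

W = [[0.6, 0.4], [0.3, 0.7]]       # strictly positive, rows sum to 1
x, y = [1.0, 5.0], [4.0, 2.0]

before = hilbert_metric(x, y)
after = hilbert_metric(apply_map(W, x), apply_map(W, y))
print(before, after)               # the map strictly shrinks the distance
```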
SelfConcordant Functions for Optimization on Smooth Manifolds
, 2004
Cited by 10 (3 self)
This paper discusses self-concordant functions on smooth manifolds. In Euclidean space, this class of functions is utilized extensively in interior-point methods for optimization because of the associated low computational complexity. Here, the self-concordant function is carefully defined on a differential manifold. First, generalizations of the properties of self-concordant functions in Euclidean space are derived. Then, the Newton decrement is defined and analyzed on the manifold under consideration. Based on this, a damped Newton algorithm is proposed for the optimization of self-concordant functions; it guarantees that, in a finite number of steps, the solution falls within any given small neighborhood of the optimal solution, whose existence and uniqueness are also proved in this paper. It also ensures quadratic convergence within a neighborhood of the minimal point, which can be specified by the norm of the Newton decrement. The computational complexity bound of the proposed approach is given explicitly: it is O(−ln(ε)), where ε is the desired precision. An interesting optimization problem is given to illustrate the proposed concept and algorithm.
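As a Euclidean-space sketch of the damped Newton iteration with Newton decrement (the paper's setting is a manifold; this toy example is not its algorithm): f(x) = −log(x) − log(1 − x) is self-concordant on (0, 1) and minimized at x = 1/2, and the damped step x⁺ = x − f'(x)/((1 + λ)f''(x)) with decrement λ = |f'(x)|/√f''(x) keeps the iterate inside the domain.

```python
# Hedged sketch: damped Newton on the self-concordant barrier of (0, 1).
import math

def damped_newton(x, iters):
    for _ in range(iters):
        g = -1.0 / x + 1.0 / (1.0 - x)           # f'(x)
        h = 1.0 / x**2 + 1.0 / (1.0 - x)**2      # f''(x) > 0 on (0, 1)
        lam = abs(g) / math.sqrt(h)              # Newton decrement
        x = x - g / ((1.0 + lam) * h)            # damped Newton step
    return x

print(damped_newton(0.05, 60))    # converges to the minimizer 0.5
```

The damping factor 1/(1 + λ) is what guarantees domain feasibility far from the optimum; once λ is small the step is essentially a full Newton step and convergence is quadratic.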
Random Walk Approach to Regret Minimization
Cited by 8 (1 self)
We propose a computationally efficient random walk on a convex body which rapidly mixes to a time-varying Gibbs distribution. In the setting of online convex optimization and repeated games, the algorithm yields low regret and presents a novel efficient method for implementing mixture forecasting strategies.
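A toy version of the general idea (a plain Metropolis walk, not the paper's algorithm): a random walk on the convex body [0, 1] whose stationary law is the Gibbs distribution proportional to exp(−η·f(x)) for a loss f; sampling from such Gibbs measures over the decision set is one standard way to implement mixture/exponential-weights forecasters.

```python
# Hedged sketch: Metropolis random walk targeting a Gibbs distribution
# exp(-eta * f(x)) on the convex body [0, 1].
import math
import random

def metropolis_gibbs(f, eta, steps, step_size=0.1, seed=0):
    rng = random.Random(seed)
    x = 0.5                                       # start in the interior
    for _ in range(steps):
        y = x + rng.uniform(-step_size, step_size)
        if 0.0 <= y <= 1.0:                       # reject moves off the body
            accept = math.exp(-eta * (f(y) - f(x)))
            if rng.random() < min(1.0, accept):
                x = y                             # Metropolis accept/reject
    return x

f = lambda x: (x - 0.25) ** 2                     # loss with minimizer 0.25
samples = [metropolis_gibbs(f, eta=50.0, steps=2000, seed=s) for s in range(50)]
print(sum(samples) / len(samples))                # concentrates near 0.25
```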