Results 1–10 of 54
On the Nesterov–Todd direction in semidefinite programming
 SIAM Journal on Optimization, 1996
"... Nesterov and Todd discuss several pathfollowing and potentialreduction interiorpoint methods for certain convex programming problems. In the special case of semidefinite programming, we discuss how to compute the corresponding directions efficiently, how to view them as Newton directions, and how ..."
Abstract

Cited by 108 (22 self)
Nesterov and Todd discuss several path-following and potential-reduction interior-point methods for certain convex programming problems. In the special case of semidefinite programming, we discuss how to compute the corresponding directions efficiently, how to view them as Newton directions, and how to take Mehrotra predictor-corrector steps in this framework. We also provide some computational results suggesting that our algorithm is more robust than alternative methods.
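As a small illustration (not taken from the paper), the Nesterov–Todd scaling point W for a primal-dual pair (X, Z) of positive definite matrices is characterized by W Z W = X, and can be computed as W = X^{1/2}(X^{1/2} Z X^{1/2})^{-1/2} X^{1/2}. The NumPy sketch below assumes the helper names `sym_pow` and `random_spd`:

```python
import numpy as np

def sym_pow(A, p):
    """Symmetric matrix power via eigendecomposition (A must be SPD)."""
    w, V = np.linalg.eigh(A)
    return (V * w**p) @ V.T

rng = np.random.default_rng(0)

def random_spd(n):
    """Random well-conditioned symmetric positive definite matrix."""
    B = rng.standard_normal((n, n))
    return B @ B.T + n * np.eye(n)

X, Z = random_spd(5), random_spd(5)

# Nesterov-Todd scaling point: W = X^{1/2} (X^{1/2} Z X^{1/2})^{-1/2} X^{1/2}
Xh = sym_pow(X, 0.5)
W = Xh @ sym_pow(Xh @ Z @ Xh, -0.5) @ Xh

# Defining property of the NT scaling: W Z W = X
print(np.allclose(W @ Z @ W, X))  # → True
```

The check W Z W = X follows by substitution: with M = X^{1/2} Z X^{1/2}, the product telescopes to X^{1/2} M^{-1/2} M M^{-1/2} X^{1/2} = X.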
Adaptive Subgradient Methods for Online Learning and Stochastic Optimization
, 2010
"... Stochastic subgradient methods are widely used, well analyzed, and constitute effective tools for optimization and online learning. Stochastic gradient methods ’ popularity and appeal are largely due to their simplicity, as they largely follow predetermined procedural schemes. However, most common s ..."
Abstract

Cited by 42 (0 self)
Stochastic subgradient methods are widely used, well analyzed, and constitute effective tools for optimization and online learning. Stochastic gradient methods' popularity and appeal are largely due to their simplicity, as they largely follow predetermined procedural schemes. However, most common subgradient approaches are oblivious to the characteristics of the data being observed. We present a new family of subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning. The adaptation, in essence, allows us to find needles in haystacks in the form of very predictive but rarely seen features. Our paradigm stems from recent advances in stochastic optimization and online learning which employ proximal functions to control the gradient steps of the algorithm. We describe and analyze an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight. In a companion paper, we validate experimentally our theoretical analysis and show that the adaptive subgradient approach outperforms state-of-the-art, but non-adaptive, subgradient algorithms.
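The diagonal variant of this adaptive scheme (commonly known as AdaGrad) scales each coordinate's step by the accumulated squared gradients seen so far. A minimal sketch, with the function name `adagrad` and the test problem chosen for illustration, not drawn from the paper:

```python
import numpy as np

def adagrad(grad, x0, lr=0.5, eps=1e-8, steps=2000):
    """Diagonal AdaGrad: per-coordinate steps scaled by accumulated squared gradients."""
    x = np.asarray(x0, dtype=float).copy()
    G = np.zeros_like(x)                  # running sum of squared gradients
    for _ in range(steps):
        g = grad(x)
        G += g * g
        x -= lr * g / (np.sqrt(G) + eps)  # rarely-updated coordinates keep larger steps
    return x

# Minimize f(x) = 0.5 * x^T diag(1, 100) x; the gradient is diag(1, 100) x.
d = np.array([1.0, 100.0])
x_star = adagrad(lambda x: d * x, [3.0, -2.0])
print(np.abs(x_star).max())
```

Note that the per-coordinate scaling makes the effective step size on this separable quadratic independent of the curvature d, which is exactly the data-adaptivity the abstract describes.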
Geometries of Quantum States
, 1995
"... The quantum analogue of the Fisher information metric of a probability simplex is searched and several Riemannian metrics on the set of positive definite density matrices are studied. Some of them appeared in the literature in connection with Cram'erRao type inequalities or the generalization ..."
Abstract

Cited by 34 (6 self)
The quantum analogue of the Fisher information metric of a probability simplex is sought, and several Riemannian metrics on the set of positive definite density matrices are studied. Some of them appeared in the literature in connection with Cramér–Rao type inequalities or the generalization of the Berry phase to mixed states. They are shown to be stochastically monotone here. All stochastically monotone Riemannian metrics are characterized by means of operator monotone functions, and it is proven that there exist a maximal and a minimal one among them. A class of metrics can be extended to pure states, and the Fubini–Study metric shows up there.
A Study of Search Directions in Primal-Dual Interior-Point Methods for Semidefinite Programming
, 1998
"... We discuss several di#erent search directions which can be used in primaldual interiorpoint methods for semidefinite programming problems and investigate their theoretical properties, including scale invariance, primaldual symmetry, and whether they always generate welldefined directions. Among ..."
Abstract

Cited by 30 (1 self)
We discuss several different search directions which can be used in primal-dual interior-point methods for semidefinite programming problems and investigate their theoretical properties, including scale invariance, primal-dual symmetry, and whether they always generate well-defined directions. Among the directions satisfying all but at most two of these desirable properties are the Alizadeh–Haeberly–Overton, Helmberg–Rendl–Vanderbei–Wolkowicz/Kojima–Shindoh–Hara/Monteiro, Nesterov–Todd, Gu, and Toh directions, as well as directions we will call the MTW and Half directions. The first five of these appear to be the best in our limited computational testing also. Key words: semidefinite programming, search direction, invariance properties. AMS Subject classification: 90C05. Abbreviated title: Search directions in SDP. 1 Introduction. This paper is concerned with interior-point methods for semidefinite programming (SDP) problems and in particular the various search directions they use and ...
Structured sparsity-inducing norms through submodular functions
 In Advances in Neural Information Processing Systems
, 2010
"... Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection problem is often turnedinto a convex optimization problem byreplacing the cardinality function by its convex en ..."
Abstract

Cited by 30 (9 self)
Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection problem is often turned into a convex optimization problem by replacing the cardinality function by its convex envelope (tightest convex lower bound), in this case the ℓ1-norm. In this paper, we investigate more general set-functions than the cardinality, that may incorporate prior knowledge or structural constraints which are common in many applications: namely, we show that for nonincreasing submodular set-functions, the corresponding convex envelope can be obtained from its Lovász extension, a common tool in submodular analysis. This defines a family of polyhedral norms, for which we provide generic algorithmic tools (subgradients and proximal operators) and theoretical results (conditions for support recovery or high-dimensional inference). By selecting specific submodular functions, we can give a new interpretation to known norms, such as those based on rank statistics or grouped norms with potentially overlapping groups; we also define new norms, in particular ones that can be used as non-factorial priors for supervised learning.
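The Lovász extension itself is cheap to evaluate: sort the components of |w| in decreasing order and telescope the set-function along the resulting chain of sets. A minimal sketch (function names are ours, not the paper's); with F = cardinality it recovers the ℓ1-norm:

```python
import numpy as np

def lovasz_extension(F, w):
    """Lovasz extension of a set-function F at w, via the sorted-chain formula."""
    order = np.argsort(-w)                 # indices with w in decreasing order
    val, prev = 0.0, set()
    for j in order:
        cur = prev | {int(j)}
        val += w[j] * (F(cur) - F(prev))   # telescoping marginal gains
        prev = cur
    return val

def norm_from_submodular(F, w):
    """Polyhedral norm candidate: Lovasz extension of F evaluated at |w|."""
    return lovasz_extension(F, np.abs(w))

w = np.array([0.5, -2.0, 1.5])
# With F = cardinality (here just len on sets), the construction gives the l1-norm.
print(norm_from_submodular(len, w))  # → 4.0
```

Each marginal gain F(cur) − F(prev) equals 1 for the cardinality, so the sum collapses to Σ|w_i|; richer submodular F yields the structured norms the abstract describes.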
VINNIKOV: Noncommutative convexity arises from Linear Matrix Inequalities, pp. 1–85, to appear in J. Functional Analysis
"... Abstract. This paper concerns polynomials in g noncommutative variables x =(x1,...,xg), inverses of such polynomials, and more generally noncommutative “rational expressions ” with real coefficients which are formally symmetric and “analytic near 0”. The focus is on rational expressions r = r(x) whi ..."
Abstract

Cited by 23 (2 self)
Abstract. This paper concerns polynomials in g noncommutative variables x = (x1, ..., xg), inverses of such polynomials, and more generally noncommutative “rational expressions” with real coefficients which are formally symmetric and “analytic near 0”. The focus is on rational expressions r = r(x) which are “matrix convex” on the unit ball; i.e., those rational expressions r such that if X = (X1, ..., Xg) is a g-tuple of n × n symmetric matrices satisfying In − (X1^2 + ··· + Xg^2) is positive definite and Y is also, then the symmetric matrix t r(X) + (1 − t) r(Y) − r(tX + (1 − t)Y) is positive semidefinite for all numbers t, 0 ≤ t ≤ 1. This article gives a complete classification of matrix convex rational expressions (see Theorem 3.3) by representing such r in terms of a symmetric “linear pencil”
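The matrix convexity condition above can be probed numerically. As a sketch (our own example, not the paper's): for the single-variable rational expression r(x) = x^2, the deficit t r(X) + (1 − t) r(Y) − r(tX + (1 − t)Y) equals t(1 − t)(X − Y)^2, a square, so it is always positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_sym(n):
    """Random symmetric matrix."""
    B = rng.standard_normal((n, n))
    return (B + B.T) / 2

def is_psd(A, tol=1e-10):
    return np.linalg.eigvalsh(A).min() >= -tol

X, Y = rand_sym(4), rand_sym(4)

def convexity_gap(t):
    """t r(X) + (1-t) r(Y) - r(tX + (1-t)Y) for r(x) = x^2."""
    Z = t * X + (1 - t) * Y
    return t * X @ X + (1 - t) * Y @ Y - Z @ Z

ok = all(is_psd(convexity_gap(t)) for t in np.linspace(0, 1, 11))
print(ok)  # → True
```

A failure of this PSD test at any t would certify that a candidate expression is not matrix convex; the paper's classification explains exactly which rational expressions pass for all matrix sizes.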
A Minkowski type trace inequality and strong subadditivity of quantum entropy
 Advances in the Mathematical Sciences, AMS Transl., 189 Series 2
, 1999
"... We revisit and prove some convexity inequalities for trace functions conjectured in the earlier part I. The main functional considered is ..."
Abstract

Cited by 12 (3 self)
We revisit and prove some convexity inequalities for trace functions conjectured in the earlier part I. The main functional considered is
Cauchy–Schwarz Inequalities Associated with Positive Semidefinite Matrices
, 2000
"... Using a quasilinear representation for unitarily invariant norms, we prove a basic inequality: Let A = ` L X X ..."
Abstract

Cited by 11 (7 self)
Using a quasi-linear representation for unitarily invariant norms, we prove a basic inequality: Let A = ( L  X ; X ...
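The abstract is cut off mid-formula, but one well-known inequality of this Cauchy–Schwarz type states that for a positive semidefinite block matrix with diagonal blocks L, M and off-diagonal block X, any unitarily invariant norm satisfies |||X|||^2 ≤ |||L||| · |||M|||. A numerical check of the Frobenius-norm instance (our construction, not necessarily the paper's exact inequality):

```python
import numpy as np

rng = np.random.default_rng(2)

# Build a PSD block matrix A = [[L, X], [X.T, M]] as A = B.T @ B with
# B = [B1, B2], so L = B1.T B1, M = B2.T B2, and X = B1.T B2.
B1 = rng.standard_normal((6, 4))
B2 = rng.standard_normal((6, 4))
L, M, X = B1.T @ B1, B2.T @ B2, B1.T @ B2

# Cauchy-Schwarz for the Frobenius norm: ||X||_F^2 <= ||L||_F * ||M||_F
fro = np.linalg.norm
print(fro(X) ** 2 <= fro(L) * fro(M))  # → True
```

Every off-diagonal block of a PSD matrix arises this way (as B1.T B2 from a Gram factorization), so the construction loses no generality.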
Similarity and Other Spectral Relations for Symmetric Cones
 Linear Algebra And Its Applications 312
, 1998
"... A onetoone relation is established between the nonnegative spectral values of a vector in a primitive symmetric cone and the eigenvalues of its quadratic representation. This result is then exploited to derive similarity relations for vectors with respect to a general symmetric cone. For two pos ..."
Abstract

Cited by 9 (1 self)
A one-to-one relation is established between the nonnegative spectral values of a vector in a primitive symmetric cone and the eigenvalues of its quadratic representation. This result is then exploited to derive similarity relations for vectors with respect to a general symmetric cone. For two positive definite matrices X and Y, the square of the spectral geometric mean is similar to the matrix product XY, and it is shown that this property carries over to symmetric cones. We also extend the result that the eigenvalues of a matrix product XY are less dispersed than the eigenvalues of the Jordan product (XY + YX)/2. The paper further contains a number of inequalities that are useful in the context of interior point methods, and an extension of Stein's theorem to symmetric cones. Key words: Symmetric cone, Euclidean Jordan algebra, optimization. There are two symmetric cones that are widely used in almost any area of applied mathematics, namely the nonnegative orthant and the cone o...
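The matrix case of the similarity claim can be checked directly. Taking the Fiedler–Pták spectral geometric mean S = T^{1/2} X T^{1/2} with T = X^{-1} # Y (where # is the metric geometric mean), S^2 should have the same spectrum as XY. A NumPy sketch, with helper names of our choosing:

```python
import numpy as np

def sym_pow(A, p):
    """Symmetric matrix power via eigendecomposition (A must be SPD)."""
    w, V = np.linalg.eigh(A)
    return (V * w**p) @ V.T

def geo_mean(A, B):
    """Metric geometric mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}."""
    Ah, Ahi = sym_pow(A, 0.5), sym_pow(A, -0.5)
    return Ah @ sym_pow(Ahi @ B @ Ahi, 0.5) @ Ah

rng = np.random.default_rng(3)

def random_spd(n):
    C = rng.standard_normal((n, n))
    return C @ C.T + n * np.eye(n)

X, Y = random_spd(4), random_spd(4)

# Spectral geometric mean: S = T^{1/2} X T^{1/2} with T = X^{-1} # Y.
T = geo_mean(np.linalg.inv(X), Y)
Th = sym_pow(T, 0.5)
S = Th @ X @ Th

# The square of S is similar to XY: same eigenvalues.
ev_S2 = np.sort(np.linalg.eigvals(S @ S).real)
ev_XY = np.sort(np.linalg.eigvals(X @ Y).real)
print(np.allclose(ev_S2, ev_XY))  # → True
```

The algebra behind the check: T satisfies T X T = Y, so S^2 = T^{1/2}(X T X)T^{1/2} is conjugate to (X T X)T = X(T X T) = XY.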
A matrix convexity approach to some celebrated quantum inequalities
 Proc. Nat. Acad. Sci. USA
"... Dedicated to Gert Pedersen, who is missed for both his brilliance and his exuberant sense of humor. Abstract. Some of the important inequalities associated with quantum entropy are immediate algebraic consequences of the HansenPedersenJensen inequalities. A general argument is given using matrix p ..."
Abstract

Cited by 7 (0 self)
Dedicated to Gert Pedersen, who is missed for both his brilliance and his exuberant sense of humor. Abstract. Some of the important inequalities associated with quantum entropy are immediate algebraic consequences of the Hansen–Pedersen–Jensen inequalities. A general argument is given using matrix perspectives of operator convex functions. A matrix analogue of Maréchal's extended perspectives provides additional inequalities, including a p + q ≤ 1 result of Lieb.
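A concrete instance of the matrix perspective: for the operator convex function f(x) = x^2, the perspective of (A, B) is B A^{-1} B, which is jointly convex in the pair (a classical Lieb–Ruskai fact). A midpoint-convexity spot check in NumPy (our example, not code from the paper):

```python
import numpy as np

rng = np.random.default_rng(4)

def random_spd(n):
    C = rng.standard_normal((n, n))
    return C @ C.T + n * np.eye(n)

def perspective_sq(A, B):
    """Matrix perspective of f(x) = x^2: P_f(A, B) = B A^{-1} B."""
    return B @ np.linalg.inv(A) @ B

n = 4
A1, A2 = random_spd(n), random_spd(n)
B1, B2 = random_spd(n), random_spd(n)

# Joint midpoint convexity: the average of the perspectives dominates
# the perspective of the averages, in the positive semidefinite order.
gap = (perspective_sq(A1, B1) + perspective_sq(A2, B2)) / 2 \
      - perspective_sq((A1 + A2) / 2, (B1 + B2) / 2)
min_eig = np.linalg.eigvalsh(gap).min()
print(min_eig >= -1e-10)  # → True
```

Joint convexity of such perspectives is the algebraic engine behind the entropy inequalities the abstract mentions, including strong subadditivity.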