Results 11–20 of 263
Logarithmic concave measures with application to . . .
 Acta Scientiarum Mathematicarum, 32 (1971), pp. 301–316
, 1971
Implementation of Interior Point Methods for Large Scale Linear Programming
 in Interior Point Methods in Mathematical Programming
, 1996
Abstract

Cited by 70 (22 self)
In the past 10 years the interior point methods (IPM) for linear programming have gained extraordinary interest as an alternative to the sparse simplex-based methods. This has initiated a fruitful competition between the two types of algorithms, which has led to very efficient implementations on both sides. The significant difference between interior point and simplex-based methods is reflected not only in the theoretical background but also in the practical implementation. In this paper we give an overview of the most important characteristics of advanced implementations of interior point methods. First, we present the infeasible primal-dual algorithm, which is widely considered the most efficient general-purpose IPM. Our discussion includes various algorithmic enhancements of the basic algorithm. The only shortcoming of the "traditional" infeasible primal-dual algorithm is its inability to detect a possible primal or dual infeasibility of the linear program. We discuss how this problem can be solve...
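The infeasible primal-dual algorithm described in this abstract can be sketched, in highly simplified form, as a damped Newton iteration on the perturbed KKT conditions of a standard-form LP. The following toy implementation is an illustrative sketch under that interpretation, not the advanced implementation the paper discusses (no Mehrotra predictor-corrector, no sparse factorization, no infeasibility detection):

```python
import numpy as np

def ipm_lp(A, b, c, iters=50, tol=1e-8):
    """Toy infeasible primal-dual path-following method for
    min c'x  s.t.  Ax = b, x >= 0  (illustrative sketch only)."""
    m, n = A.shape
    x, y, s = np.ones(n), np.zeros(m), np.ones(n)   # infeasible start
    for _ in range(iters):
        rp = A @ x - b               # primal residual
        rd = A.T @ y + s - c         # dual residual
        mu = x @ s / n               # duality measure
        if max(np.linalg.norm(rp), np.linalg.norm(rd), mu) < tol:
            break
        rc = 0.1 * mu - x * s        # complementarity target, sigma = 0.1
        d = x / s
        # Newton system reduced to normal equations: (A D A') dy = rhs.
        M = A @ (d[:, None] * A.T)
        dy = np.linalg.solve(M, -rp - A @ (d * rd + rc / s))
        dx = d * (A.T @ dy + rd) + rc / s
        ds = (rc - s * dx) / x
        # Fraction-to-the-boundary step keeps x, s strictly positive.
        ax = np.where(dx < 0, -x / dx, np.inf).min()
        a_s = np.where(ds < 0, -s / ds, np.inf).min()
        alpha = min(1.0, 0.99 * min(ax, a_s))
        x, y, s = x + alpha * dx, y + alpha * dy, s + alpha * ds
    return x, y, s

# Toy instance: min -x1 - 2*x2  s.t.  x1 + x2 + x3 = 1, x >= 0
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
c = np.array([-1.0, -2.0, 0.0])
x, y, s = ipm_lp(A, b, c)
```

At the optimum all weight goes to the variable with the most negative cost, so x approaches (0, 1, 0).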
Potential Function Methods for Approximately Solving Linear Programming Problems: Theory and Practice
, 2001
Abstract

Cited by 69 (4 self)
After several decades of sustained research and testing, linear programming has evolved into a remarkably reliable, accurate and useful tool for handling industrial optimization problems. Yet, large problems arising from several concrete applications routinely defeat the very best linear programming codes, running on the fastest computing hardware. Moreover, this is a trend that may well continue and intensify, as problem sizes escalate and the need for fast algorithms becomes more stringent. Traditionally, the focus in optimization algorithms, and in particular, in algorithms for linear programming, has been to solve problems "to optimality." In concrete implementations, this has always meant the solution of problems to some finite accuracy (for example, eight digits). An alternative approach would be to explicitly, and rigorously, trade off accuracy for speed. One motivating factor is that in many practical applications, quickly obtaining a partially accurate solution is much preferable to obtaining a very accurate solution very slowly. A secondary (and independent) consideration is that the input data in many practical applications has limited accuracy to begin with. During the last ten years, a new body of research has emerged, which seeks to develop provably good approximation algorithms for classes of linear programming problems. This work both has roots in fundamental areas of mathematical programming and is also framed in the context of the modern theory of algorithms. The result of this work has been a family of algorithms with solid theoretical foundations and with growing experimental success. In this manuscript we will study these algorithms, starting with some of the very earliest examples, and through the latest theoretical and computational developments.
The nonlinear geometry of linear programming IV. Hilbert geometry, in preparation
Abstract

Cited by 67 (0 self)
This series of papers studies a geometric structure underlying Karmarkar’s projective scaling algorithm for solving linear programming problems. A basic feature of the projective scaling algorithm is a vector field depending on the objective function which is defined on the interior of the polytope of feasible solutions of the linear program. The geometric structure we study is the set of trajectories obtained by integrating this vector field, which we call P-trajectories. In order to study P-trajectories we also study a related vector field on the linear programming polytope, which we call the affine scaling vector field, and its associated trajectories, called A-trajectories. The affine scaling vector field is associated to another linear programming algorithm, the affine scaling algorithm. These affine and projective scaling vector fields are each defined for linear programs of a special form, called strict standard form and canonical form, respectively. This paper defines and presents basic properties of P-trajectories and A-trajectories. It reviews the projective and affine scaling algorithms, defines the projective and affine scaling vector fields, and gives differential equations for P-trajectories and A-trajectories. It presents Karmarkar’s interpretation of A-trajectories as steepest descent paths of the objective function 〈c, x〉 with respect to a Riemannian metric ...
Method of centers for minimizing generalized eigenvalues
 Linear Algebra Appl
, 1993
Abstract

Cited by 65 (14 self)
We consider the problem of minimizing the largest generalized eigenvalue of a pair of symmetric matrices, each of which depends affinely on the decision variables. Although this problem may appear specialized, it is in fact quite general, and includes for example all linear, quadratic, and linear fractional programs. Many problems arising in control theory can be cast in this form. The problem is nondifferentiable but quasiconvex, so methods such as Kelley's cutting-plane algorithm or the ellipsoid algorithm of Shor, Nemirovsky, and Yudin are guaranteed to minimize it. In this paper we describe relevant background material and a simple interior point method that solves such problems more efficiently. The algorithm is a variation on Huard's method of centers, using a self-concordant barrier for matrix inequalities developed by Nesterov and Nemirovsky. (Nesterov and Nemirovsky have also extended their potential reduction methods to handle the same problem [NN91b].) Since the problem is quasiconvex but not convex, devising a nonheuristic stopping criterion (i.e., one that guarantees a given accuracy) is more difficult than in the convex case. We describe several nonheuristic stopping criteria that are based on the dual of a related convex problem and a new ellipsoidal approximation that is slightly sharper, in some cases, than a more general result due to Nesterov and Nemirovsky. The algorithm is demonstrated on an example: determining the quadratic Lyapunov function that optimizes a decay rate estimate for a differential inclusion.
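As a small illustration of the problem class (not of the authors' method of centers), the largest generalized eigenvalue of a pair (A, B) with B positive definite can be evaluated with `scipy.linalg.eigh`, and for an affine family A(x) the map x ↦ λ_max(A(x), B) is quasiconvex, so a scalar toy instance can be minimized by a one-dimensional search. The matrices below are hypothetical data chosen for illustration:

```python
import numpy as np
from scipy.linalg import eigh
from scipy.optimize import minimize_scalar

def lam_max(A, B):
    """Largest generalized eigenvalue of the pair (A, B), B > 0."""
    return eigh(A, B, eigvals_only=True)[-1]   # eigenvalues ascending

# Hypothetical affine family A(x) = A0 + x*A1 with B = I:
A0 = np.array([[2.0, 0.0], [0.0, 0.0]])
A1 = np.array([[-1.0, 0.0], [0.0, 1.0]])
B = np.eye(2)

# Here lam_max(A0 + x*A1, B) = max(2 - x, x): quasiconvex, minimized at x = 1.
res = minimize_scalar(lambda x: lam_max(A0 + x * A1, B),
                      bounds=(0.0, 3.0), method='bounded')
```

The one-dimensional search succeeds precisely because quasiconvexity rules out spurious local minima along the line.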
Primal-dual interior methods for nonconvex nonlinear programming
 SIAM Journal on Optimization
, 1998
Abstract

Cited by 59 (5 self)
Abstract. This paper concerns large-scale general (nonconvex) nonlinear programming when first and second derivatives of the objective and constraint functions are available. A method is proposed that is based on finding an approximate solution of a sequence of unconstrained subproblems parameterized by a scalar parameter. The objective function of each unconstrained subproblem is an augmented penalty-barrier function that involves both primal and dual variables. Each subproblem is solved with a modified Newton method that generates search directions from a primal-dual system similar to that proposed for interior methods. The augmented penalty-barrier function may be interpreted as a merit function for values of the primal and dual variables. An inertia-controlling symmetric indefinite factorization is used to provide descent directions and directions of negative curvature for the augmented penalty-barrier merit function. A method suitable for large problems can be obtained by providing a version of this factorization that will treat large sparse indefinite systems.
A robust gradient sampling algorithm for nonsmooth, nonconvex optimization
 SIAM Journal on Optimization
Abstract

Cited by 54 (19 self)
Let f be a continuous function on R^n, and suppose f is continuously differentiable on an open dense subset. Such functions arise in many applications, and very often minimizers are points at which f is not differentiable. Of particular interest is the case where f is not convex, and perhaps not even locally Lipschitz, but whose gradient is easily computed where it is defined. We present a practical, robust algorithm to locally minimize such functions, based on gradient sampling. No subgradient information is required by the algorithm. When f is locally Lipschitz and has bounded level sets, and the sampling radius ε is fixed, we show that, with probability one, the algorithm generates a sequence with a cluster point that is Clarke ε-stationary. Furthermore, we show that if f has a unique Clarke stationary point x̄, then the set of all cluster points generated by the algorithm converges to x̄ as ε is reduced to zero.
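The gradient-sampling idea can be sketched as follows: at each iterate, sample gradients in an ε-ball, take the minimum-norm element of their convex hull as a search direction, and line-search on f. This is a simplified illustration (fixed sampling radius, basic Armijo line search, no radius reduction), not the paper's full algorithm:

```python
import numpy as np
from scipy.optimize import minimize

def gradient_sampling(f, grad, x0, eps=0.1, m=20, iters=100, seed=0):
    """Simplified gradient-sampling sketch with a fixed sampling radius."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(iters):
        # Gradients at x and at m random points within radius eps of x.
        pts = x + eps * rng.uniform(-1.0, 1.0, size=(m, n))
        G = np.vstack([grad(x)] + [grad(p) for p in pts])
        k = G.shape[0]
        # Min-norm element of conv{g_i}: min ||G'w||^2, w >= 0, sum w = 1.
        qp = minimize(lambda w: np.sum((G.T @ w) ** 2), np.ones(k) / k,
                      bounds=[(0.0, 1.0)] * k,
                      constraints=[{'type': 'eq',
                                    'fun': lambda w: w.sum() - 1.0}],
                      method='SLSQP')
        d = -(G.T @ qp.x)
        if np.linalg.norm(d) < 1e-8:
            break                         # approximately eps-stationary
        # Backtracking Armijo line search on f.
        t = 1.0
        while f(x + t * d) > f(x) - 1e-4 * t * np.dot(d, d) and t > 1e-12:
            t *= 0.5
        x = x + t * d
    return x

# Nonsmooth test function f(x) = |x1| + 2|x2|, minimized at the origin.
f = lambda z: abs(z[0]) + 2.0 * abs(z[1])
grad = lambda z: np.array([1.0 if z[0] >= 0 else -1.0,
                           2.0 if z[1] >= 0 else -2.0])
xs = gradient_sampling(f, grad, [3.0, 2.0])
```

With the radius fixed at eps = 0.1 the iterates stall within roughly eps of the minimizer, matching the Clarke ε-stationarity guarantee quoted above.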
Optimal design of a CMOS opamp via geometric programming
 IEEE Transactions on Computer-Aided Design
, 2001
Abstract

Cited by 51 (10 self)
We describe a new method for determining component values and transistor dimensions for CMOS operational amplifiers (opamps). We observe that a wide variety of design objectives and constraints have a special form, i.e., they are posynomial functions of the design variables. As a result the amplifier design problem can be expressed as a special form of optimization problem called geometric programming, for which very efficient global optimization methods have been developed. As a consequence we can efficiently determine globally optimal amplifier designs, or globally optimal tradeoffs among competing performance measures such as power, open-loop gain, and bandwidth. Our method therefore yields completely automated synthesis of (globally) optimal CMOS amplifiers, directly from specifications. In this paper we apply this method to a specific, widely used operational amplifier architecture, showing in detail how to formulate the design problem as a geometric program. We compute globally optimal tradeoff curves relating performance measures such as power dissipation, unity-gain bandwidth, and open-loop gain. We show how the method can be used to synthesize robust designs, i.e., designs guaranteed to meet the specifications for a ...
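The log-variable transformation that makes geometric programs globally solvable can be illustrated on a toy posynomial problem (a hypothetical example, unrelated to the opamp models in the paper): with y = log x, the log of a posynomial sum_k c_k * prod_i x_i^{a_ki} becomes logsumexp(A y + log c), which is convex in y.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def log_posy(y, A, c):
    """log of a posynomial in log-variables y = log x (convex in y)."""
    return logsumexp(A @ y + np.log(c))

# Toy GP: minimize 1/(x1*x2)  subject to  x1 + x2 <= 1,  x > 0.
A_obj = np.array([[-1.0, -1.0]]); c_obj = np.array([1.0])
A_con = np.array([[1.0, 0.0], [0.0, 1.0]]); c_con = np.array([1.0, 1.0])

# Posynomial constraint g(x) <= 1 becomes log_posy(y) <= 0 (convex feasible set).
res = minimize(lambda y: log_posy(y, A_obj, c_obj),
               np.array([-1.0, -1.0]),        # feasible start x = (e^-1, e^-1)
               constraints=[{'type': 'ineq',
                             'fun': lambda y: -log_posy(y, A_con, c_con)}],
               method='SLSQP')
x_opt = np.exp(res.x)   # global optimum, since the transformed problem is convex
```

By symmetry the optimum is x1 = x2 = 0.5, with objective value 4; convexity of the transformed problem is what guarantees the solver's answer is globally, not just locally, optimal.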
A Feature Selection Newton Method for Support Vector Machine Classification
 Computational Optimization and Applications
, 2002
Abstract

Cited by 51 (3 self)
A fast Newton method that suppresses input space features is proposed for a linear programming formulation of support vector machine classifiers. The proposed standalone method can handle classification problems in very high dimensional spaces, such as 28,032 dimensions, and generates a classifier that depends on very few input features, such as 7 out of the original 28,032. The method can also handle problems with a large number of data points and requires no specialized linear programming packages but merely a linear equation solver. For nonlinear kernel classifiers, the method utilizes a minimal number of kernel functions in the classifier that it generates.
Mathematical Programming for Data Mining: Formulations and Challenges
 INFORMS Journal on Computing
, 1998
Abstract

Cited by 47 (0 self)
This paper is intended to serve as an overview of a rapidly emerging research and applications area. In addition to providing a general overview, motivating the importance of data mining problems within the area of knowledge discovery in databases, our aim is to list some of the pressing research challenges, and outline opportunities for contributions by the optimization research communities. Towards these goals, we include formulations of the basic categories of data mining methods as optimization problems. We also provide examples of successful mathematical programming approaches to some data mining problems. Keywords: data analysis, data mining, mathematical programming methods, challenges for massive data sets, classification, clustering, prediction, optimization. To appear: INFORMS Journal on Computing, special issue on Data Mining, A. Basu and B. Golden (guest editors). Also appears as Mathematical Programming Technical Report 9801, Computer Sciences Department, University of Wi...