Results 11–20 of 373
Optimal design of a CMOS opamp via geometric programming
IEEE Transactions on Computer-Aided Design, 2001
Cited by 51 (10 self)
Abstract:
We describe a new method for determining component values and transistor dimensions for CMOS operational amplifiers (opamps). We observe that a wide variety of design objectives and constraints have a special form, i.e., they are posynomial functions of the design variables. As a result the amplifier design problem can be expressed as a special form of optimization problem called geometric programming, for which very efficient global optimization methods have been developed. As a consequence we can efficiently determine globally optimal amplifier designs, or globally optimal tradeoffs among competing performance measures such as power, open-loop gain, and bandwidth. Our method therefore yields completely automated synthesis of (globally) optimal CMOS amplifiers, directly from specifications. In this paper we apply this method to a specific, widely used operational amplifier architecture, showing in detail how to formulate the design problem as a geometric program. We compute globally optimal tradeoff curves relating performance measures such as power dissipation, unity-gain bandwidth, and open-loop gain. We show how the method can be used to synthesize robust designs, i.e., designs guaranteed to meet the specifications for a …
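The special structure this abstract relies on can be sketched in a few lines: a posynomial is a positive sum of monomials, and under the change of variables y = log x its logarithm becomes a log-sum-exp of affine functions, hence convex — which is why geometric programs admit efficient global solution. The coefficients below are purely illustrative, not taken from the paper's opamp model:

```python
import math

# A posynomial: f(x) = sum_k c_k * prod_i x_i^(a_ki), with c_k > 0.
# Illustrative terms only (not from the paper's amplifier formulation).
terms = [(2.0, (1.0, -0.5)), (0.5, (-1.0, 2.0))]  # (c_k, exponents a_k)

def posynomial(x):
    return sum(c * math.prod(xi**a for xi, a in zip(x, exps))
               for c, exps in terms)

# Under y = log x, F(y) = log f(exp(y)) is convex, which is what makes
# geometric programs globally solvable. Spot-check midpoint convexity:
def F(y):
    return math.log(posynomial([math.exp(v) for v in y]))

y1, y2 = [0.3, -0.2], [-0.5, 0.8]
mid = [(a + b) / 2 for a, b in zip(y1, y2)]
assert F(mid) <= 0.5 * (F(y1) + F(y2)) + 1e-12  # convexity inequality holds
```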
Fast Model Predictive Control Using Online Optimization
2008
Cited by 48 (18 self)
Abstract:
A widely recognized shortcoming of model predictive control (MPC) is that it can usually only be used in applications with slow dynamics, where the sample time is measured in seconds or minutes. A well-known technique for implementing fast MPC is to compute the entire control law offline, in which case the online controller can be implemented as a lookup table. This method works well for systems with small state and input dimensions (say, no more than 5) and short time horizons. In this paper we describe a collection of methods for improving the speed of MPC, using online optimization. These custom methods, which exploit the particular structure of the MPC problem, can compute the control action on the order of 100 times faster than a method that uses a generic optimizer. As an example, our method computes the control actions for a problem with 12 states, 3 controls, and a horizon of 30 time steps (which entails solving a quadratic program with 450 variables and 1260 constraints) in around 5 ms, allowing MPC to be carried out at 200 Hz.
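The QP dimensions quoted in this abstract can be reproduced with one plausible accounting (an assumption on my part, not the paper's exact formulation): stack states and inputs over the horizon, and count dynamics equalities plus upper and lower bounds on every stage variable.

```python
n_states, n_inputs, horizon = 12, 3, 30

# Decision variables: one state and one input vector per stage.
n_vars = horizon * (n_states + n_inputs)
assert n_vars == 450  # matches the QP size quoted above

# One accounting that reproduces the quoted constraint count (assumed):
# dynamics equalities plus two-sided bounds on every stage variable.
n_cons = horizon * (n_states + 2 * (n_states + n_inputs))
assert n_cons == 1260
```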
Application of interior-point methods to model predictive control
Journal of Optimization Theory and Applications, 1998
Cited by 46 (6 self)
Abstract:
We present a structured interior-point method for the efficient solution of the optimal control problem in model predictive control (MPC). The cost of this approach is linear in the horizon length, compared with cubic growth for a naive approach. We use a discrete-time Riccati recursion to solve the linear equations efficiently at each iteration of the interior-point method, and show that this recursion is numerically stable. We demonstrate the effectiveness of the approach by applying it to three process control problems.
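The discrete-time Riccati recursion the abstract mentions is the standard backward sweep below; this sketch uses toy system matrices of my own choosing (not from the paper) and iterates to the stationary solution rather than over a finite MPC horizon:

```python
import numpy as np

# Discrete-time Riccati recursion -- the stage-wise elimination used to
# solve the structured linear systems inside each interior-point step.
# Toy double-integrator matrices (illustrative, not from the paper):
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])

P = Q.copy()
for _ in range(200):  # backward sweep, run long enough to converge
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
    P = Q + A.T @ P @ (A - B @ K)                      # Riccati update

# The recursion converges to a symmetric positive-definite solution.
assert np.allclose(P, P.T, atol=1e-8)
assert np.all(np.linalg.eigvalsh(P) > 0)
```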
Buffer Overrun Detection using Linear Programming and Static Analysis
In Proceedings of the 10th ACM Conference on Computer and Communications Security, 2003
Cited by 44 (0 self)
Abstract:
This paper addresses the issue of identifying buffer overrun vulnerabilities by statically analyzing C source code. We demonstrate a lightweight analysis based on modeling C string manipulations as a linear program. We also present fast, scalable solvers based on linear programming, and demonstrate techniques to make the program analysis context-sensitive. Based on these techniques, we built a prototype and used it to identify several vulnerabilities in popular security-critical applications.
Preconditioning indefinite systems in interior point methods for optimization
Computational Optimization and Applications, 2004
Cited by 44 (13 self)
Abstract:
Every Newton step in an interior-point method for optimization requires a solution of a symmetric indefinite system of linear equations. Most of today's codes apply direct solution methods to perform this task. The use of logarithmic barriers in interior-point methods causes unavoidable ill-conditioning of linear systems and, hence, iterative methods fail to provide sufficient accuracy unless appropriately preconditioned. Two types of preconditioners which use some form of incomplete Cholesky factorization for indefinite systems are proposed in this paper. Although they involve significantly sparser factorizations than those used in direct approaches, they still capture most of the numerical properties of the preconditioned system. A spectral analysis of the preconditioned matrix is performed: for convex optimization problems all the eigenvalues of this matrix are strictly positive. Numerical results are given for a set of public-domain large linearly constrained convex quadratic programming problems with sizes reaching tens of thousands of variables. The analysis of these results reveals that the solution times for such problems on a modern PC are measured in minutes when direct methods are used and drop to seconds when iterative methods with appropriate preconditioners are used.
Keywords: interior-point methods, iterative solvers, preconditioners
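The mechanism of a preconditioned iterative solver can be sketched as follows. This minimal preconditioned conjugate-gradient loop uses a simple diagonal (Jacobi) preconditioner on a symmetric positive-definite test matrix as a stand-in for the incomplete-Cholesky preconditioners and indefinite systems the paper actually treats:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients for SPD A; M_inv applies the
    inverse of the preconditioner to a residual vector."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    for _ in range(maxit):
        Ap = A @ p
        alpha = (r @ z) / (p @ Ap)
        x += alpha * p
        r_new = r - alpha * Ap
        if np.linalg.norm(r_new) < tol:
            return x
        z_new = M_inv(r_new)
        beta = (r_new @ z_new) / (r @ z)
        p = z_new + beta * p
        r, z = r_new, z_new
    return x

rng = np.random.default_rng(0)
G = rng.standard_normal((50, 50))
A = G @ G.T + 50 * np.eye(50)      # well-conditioned SPD test matrix
b = rng.standard_normal(50)
x = pcg(A, b, lambda r: r / np.diag(A))  # Jacobi preconditioner
assert np.linalg.norm(A @ x - b) < 1e-8
```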
Interior point methods for massive support vector machines
Data Mining Institute, Computer Sciences Department, University of Wisconsin, 2000
Cited by 43 (1 self)
Abstract:
We investigate the use of interior-point methods for solving quadratic programming problems with a small number of linear constraints, where the quadratic term consists of a low-rank update to a positive semidefinite matrix. Several formulations of the support vector machine fit into this category. An interesting feature of these particular problems is the volume of data, which can lead to quadratic programs with between 10 and 100 million variables and, if written explicitly, a dense Q matrix. Our code is based on OOQP, an object-oriented interior-point code, with the linear algebra specialized for the support vector machine application. For the targeted massive problems, all of the data is stored out of core and we overlap computation and input/output to reduce overhead. Results are reported for several linear support vector machine formulations demonstrating that the method is reliable and scalable.
Keywords: support vector machine, interior-point method, linear algebra
AMS subject classifications: 90C51, 90C20, 62H30
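The low-rank structure this abstract describes, Q = D + V Vᵀ with D diagonal and V tall and skinny, is exactly what the Sherman-Morrison-Woodbury identity exploits: solves cost O(nk²) instead of O(n³). A sketch with illustrative dimensions (not the paper's actual linear algebra kernels):

```python
import numpy as np

# Solve (D + V V^T) x = r without ever forming the n-by-n matrix,
# via Sherman-Morrison-Woodbury. Toy sizes for illustration.
rng = np.random.default_rng(1)
n, k = 1000, 5
d = 1.0 + rng.random(n)            # positive diagonal of D
V = rng.standard_normal((n, k))
r = rng.standard_normal(n)

Dinv_r = r / d
Dinv_V = V / d[:, None]
small = np.eye(k) + V.T @ Dinv_V   # k-by-k "capacitance" matrix
x = Dinv_r - Dinv_V @ np.linalg.solve(small, V.T @ Dinv_r)

# Verify against a direct dense solve of the full system:
Q = np.diag(d) + V @ V.T
assert np.linalg.norm(Q @ x - r) < 1e-8
```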
Parallel Interior-Point Solver for Structured Quadratic Programs: Application to Financial Planning Problems
2003
Cited by 41 (20 self)
Abstract:
Many practical large-scale optimization problems are not only sparse, but also display some form of block structure such as primal or dual block-angular structure. Often these structures are nested: each block of the coarse top-level structure is block-structured itself. Problems with these characteristics appear frequently in stochastic programming but also in other areas such as telecommunication network modelling. We present a linear algebra library tailored for problems with such structure that is used inside an interior-point solver for convex quadratic programming problems. Due to its object-oriented design it can be used to exploit virtually any nested block structure arising in practical problems, eliminating the need for highly specialised linear algebra modules to be written for every type of problem separately. Through a careful implementation we achieve almost automatic parallelisation of the linear algebra. The efficiency of the approach is illustrated on several problems arising in financial planning, namely in asset and liability management. The problems are modelled as …
A superlinearly convergent predictor-corrector method for degenerate LCP in a wide neighborhood of the central path with O(√n L) iteration complexity
2006
Warm-Start Strategies in Interior-Point Methods for Linear Programming
SIAM Journal on Optimization, 2000
Cited by 37 (1 self)
Abstract:
We study the situation in which, having solved a linear program with an interior-point method, we are presented with a new problem instance whose data is slightly perturbed from the original. We describe strategies for recovering a "warm-start" point for the perturbed problem instance from the iterates of the original problem instance. We obtain worst-case estimates of the number of iterations required to converge to a solution of the perturbed instance from the warm-start points, showing that these estimates depend on the size of the perturbation and on the conditioning and other properties of the problem instances.
Condition Measures and Properties of the Central Trajectory of a Linear Program
Mathematical Programming, 1997
Cited by 34 (15 self)
Abstract:
Given a data instance d = (A, b, c) of a linear program, we show that certain properties of solutions along the central trajectory of the linear program are inherently related to the condition number C(d) of the data instance, where C(d) is a scale-invariant reciprocal of a closely related measure ρ(d) called the "distance to ill-posedness." (The distance to ill-posedness essentially measures how close the data instance d = (A, b, c) is to being primal or dual infeasible.) We present lower and upper bounds on the sizes of optimal solutions along the central trajectory, and on rates of change of solutions along the central trajectory, as either the barrier parameter µ or the data d = (A, b, c) of the linear program is changed. These bounds are all linear or polynomial functions of certain natural parameters associated with the linear program, namely the condition number C(d), the distance to ill-posedness ρ(d), the norm of the data ‖d‖, and the dimensions m and n.