Results 1–10 of 19
Optimal design of a CMOS opamp via geometric programming
 IEEE Transactions on Computer-Aided Design
, 2001
Abstract

Cited by 51 (10 self)
We describe a new method for determining component values and transistor dimensions for CMOS operational amplifiers (opamps). We observe that a wide variety of design objectives and constraints have a special form, i.e., they are posynomial functions of the design variables. As a result the amplifier design problem can be expressed as a special form of optimization problem called geometric programming, for which very efficient global optimization methods have been developed. As a consequence we can efficiently determine globally optimal amplifier designs, or globally optimal tradeoffs among competing performance measures such as power, open-loop gain, and bandwidth. Our method therefore yields completely automated synthesis of (globally) optimal CMOS amplifiers, directly from specifications. In this paper we apply this method to a specific, widely used operational amplifier architecture, showing in detail how to formulate the design problem as a geometric program. We compute globally optimal tradeoff curves relating performance measures such as power dissipation, unity-gain bandwidth, and open-loop gain. We show how the method can be used to synthesize robust designs, i.e., designs guaranteed to meet the specifications for a ...
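The key observation above, that posynomial objectives and constraints become convex after a logarithmic change of variables, can be checked numerically. A minimal sketch follows; the posynomial below is invented for illustration and is not a circuit model from the paper:

```python
import itertools
import math

# A toy posynomial in two positive variables (coefficients and exponents
# invented for illustration, not taken from the paper):
#   f(x, y) = 3*x*y + 2*sqrt(y)/x
def posynomial(x, y):
    return 3.0 * x * y + 2.0 * y**0.5 / x

# In log variables u = log x, v = log y, the function
# g(u, v) = log f(exp(u), exp(v)) is a log-sum-exp of affine terms, hence
# convex; this is what makes geometric programs globally solvable.
def g(u, v):
    return math.log(posynomial(math.exp(u), math.exp(v)))

# Numerical midpoint-convexity check over a few point pairs.
pts = [(-1.0, 0.5), (0.3, -0.7), (1.2, 1.1), (-0.4, -1.3)]
for (u1, v1), (u2, v2) in itertools.combinations(pts, 2):
    mid = g((u1 + u2) / 2.0, (v1 + v2) / 2.0)
    chord = (g(u1, v1) + g(u2, v2)) / 2.0
    assert mid <= chord + 1e-12
```

A GP solver performs essentially this change of variables before handing the problem to a convex optimization method, which is why globally optimal designs and tradeoff curves are obtainable.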
A Computational Study of the Homogeneous Algorithm for Large-Scale Convex Optimization
, 1997
Abstract

Cited by 13 (1 self)
Recently the authors have proposed a homogeneous and self-dual algorithm for solving the monotone complementarity problem (MCP) [5]. The algorithm is a single-phase interior-point type method; nevertheless, it yields either an approximate optimal solution or detects a possible infeasibility of the problem. In this paper we specialize the algorithm to the solution of general smooth convex optimization problems that also possess nonlinear inequality constraints and free variables. We discuss an implementation of the algorithm for large-scale sparse convex optimization. Moreover, we present computational results for solving quadratically constrained quadratic programming and geometric programming problems, where some of the problems contain more than 100,000 constraints and variables. The results indicate that the proposed algorithm is also practically efficient. Department of Management, Odense University, Campusvej 55, DK-5230 Odense M, Denmark. Email: eda@busieco.ou.dk ...
Concurrent Logic Restructuring and Placement for Timing Closure
 in Proc. IEEE International Conference on Computer-Aided Design
, 1999
Abstract

Cited by 13 (0 self)
In this paper, an algorithm for simultaneous logic restructuring and placement is presented. This algorithm first constructs a set of supercells along the critical paths and then generates the set of noninferior remapping solutions for each supercell. The best mapping and placement solutions for all supercells are obtained by solving a generalized geometric programming (GGP) problem. The process of identifying and optimizing the critical paths is iterated until timing closure is achieved. Experimental results on a set of MCNC benchmarks demonstrate the effectiveness of our algorithm.
Simultaneous Gate Sizing and Placement
 IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
, 2000
Abstract

Cited by 11 (2 self)
In this paper, we present an algorithm for gate sizing with controlled displacement to improve the overall circuit timing. We use a path-based delay model to capture the timing constraints in the circuit. To reduce the problem size and improve the solution convergence, we iteratively identify and optimize the k-most critical paths in the circuit and their neighboring cells. More precisely, in each iteration, we perform three operations: a) reposition the immediate fanouts of the gates on the k-most critical paths; b) size down the immediate fanouts of the gates on the k-most critical paths; c) simultaneously reposition and resize the gates on the k-most critical paths. Each of these operations is formulated and solved as a mathematical program by using efficient solution techniques. Experimental results on a set of benchmark circuits demonstrate the effectiveness of our approach compared to the conventional approaches which separate gate sizing from gate placement.
A general approach to sparse basis selection: Majorization, concavity, and affine scaling
 in Proceedings of the Twelfth Annual Conference on Computational Learning Theory
, 1997
Abstract

Cited by 6 (3 self)
Measures for sparse best-basis selection are analyzed and shown to fit into a general framework based on majorization, Schur-concavity, and concavity. This framework facilitates the analysis of algorithm performance and clarifies the relationships between existing proposed concentration measures useful for sparse basis selection. It also allows one to define new concentration measures, and several general classes of measures are proposed and analyzed in this paper. Admissible measures are given by the Schur-concave functions, which are the class of functions consistent with the so-called Lorentz ordering (a partial ordering on vectors also known as majorization). In particular, concave functions form an important subclass of the Schur-concave functions which attain their minima at sparse solutions to the best basis selection problem. A general affine scaling optimization algorithm obtained from a special factorization of the gradient function is developed and proved to converge to a sparse solution for measures chosen from within this subclass.
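The link between Schur-concavity and sparsity described above can be illustrated with a tiny numerical example. The specific measure f(x) = sum_i sqrt(x_i) is our illustrative choice of a concave (hence Schur-concave) function, not one singled out by the paper:

```python
import math

# A concave, hence Schur-concave, concentration measure:
#   f(x) = sum_i sqrt(x_i)   for nonnegative x.
def measure(x):
    return sum(math.sqrt(xi) for xi in x)

# Equal total "energy", but the first vector is sparser
# (it majorizes the second in the Lorentz ordering).
sparse = [4.0, 0.0]
spread = [2.0, 2.0]
assert sum(sparse) == sum(spread)

# Schur-concavity: the sparser vector scores strictly lower, so minimizing
# such a measure over candidates of equal energy favors sparse solutions.
assert measure(sparse) < measure(spread)  # 2.0 < 2*sqrt(2)
```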
Probabilistic analytical target cascading: A moment matching formulation for multilevel optimization under uncertainty
 Journal of Mechanical Design
, 2006
Abstract

Cited by 6 (5 self)
Analytical target cascading (ATC) is a methodology for hierarchical multilevel system design optimization. In previous work, the deterministic ATC formulation was extended to account for random variables represented by expected values to be matched among subproblems and thus ensure design consistency. In this work, the probabilistic formulation is augmented to allow the introduction and matching of additional probabilistic characteristics. A particular probabilistic analytical target cascading (PATC) formulation is proposed that matches the first two moments of interrelated responses and linking variables. Several implementation issues are addressed, including representation of probabilistic design targets, matching responses and linking variables under uncertainty, and coordination strategies. Analytical and simulation-based optimal design examples are used to illustrate the new formulation. The accuracy of the proposed PATC formulation is demonstrated by comparing PATC results to those obtained using a probabilistic all-in-one formulation. DOI: 10.1115/1.2205870
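The moment-matching idea can be sketched in a few lines: instead of comparing the full distributions of a linking variable shared by two subproblems, compare only their first two moments. The sample data and the quadratic mismatch penalty below are hypothetical illustrations, not the paper's coordination scheme:

```python
from statistics import mean, pstdev

# Hypothetical samples of one linking variable as seen by two subproblems
# in the hierarchy (data made up for illustration).
upper = [1.9, 2.1, 2.0, 2.2, 1.8]
lower = [2.0, 2.05, 1.95, 2.15, 1.85]

# PATC-style consistency: match only the first two moments of the shared
# quantity rather than its full distribution.
def moment_mismatch(a, b):
    return (mean(a) - mean(b)) ** 2 + (pstdev(a) - pstdev(b)) ** 2

penalty = moment_mismatch(upper, lower)
assert penalty < 0.01  # the two views are nearly consistent in mean and spread
```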
Automated design of operational transconductance amplifiers using reversed geometric programming
 In Proceedings of the 41st IEEE/ACM Design Automation Conference
, 2004
Abstract

Cited by 3 (0 self)
We present a method for designing operational amplifiers using reversed geometric programming, which is an extension of geometric programming that allows both convex and nonconvex constraints. Adding a limited set of nonconvex constraints can improve the accuracy of convex equation-based optimization, without compromising global optimality. These constraints allow increased accuracy for critical modeling equations, such as the relationship between gm and IDS. To demonstrate the design methodology, a folded-cascode amplifier is designed in a 0.18 µm technology for varying speed requirements and is compared with simulations and designs obtained from geometric programming.
Robustness of Posynomial Geometric Programming Optima
 Mathematical Programming
, 1999
Abstract

Cited by 1 (0 self)
This paper develops a simple bounding procedure for the optimal value of a posynomial geometric programming (GP) problem when some of the coefficients for terms in the problem's objective function are estimated with error. The bound may be computed even before the problem is solved, and it is shown analytically that the optimum value is very insensitive to errors in the coefficients; for example, a 20% error could cause the optimum to be wrong by no more than 1.67%. Key Words: Geometric Programming, Posynomials, Sensitivity Analysis. Corresponding author address: Department of Industrial Engineering, 1048 Benedum Hall, University of Pittsburgh, Pittsburgh, PA 15261; email: rajgopal@engrng.pitt.edu; fax: (412) 624-9831. Geometric Programming (GP) is a technique for solving certain classes of algebraic nonlinear optimization problems. Since its original development by Duffin, Peterson and Zener (1967) at the Westinghouse R & D Center, it has been studied extensively and...
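The flavor of this insensitivity result can be reproduced on a toy GP with a closed-form optimum. The problem and the experiment below are ours and do not reproduce the paper's bound: solving with a coefficient that is off by 20% and then evaluating the resulting design under the true coefficient costs well under 1.67% in this toy case:

```python
import math

# Toy GP: minimize f(x) = c/x + x over x > 0.  The minimizer is
# x* = sqrt(c) and the optimal value is 2*sqrt(c).
def f(x, c):
    return c / x + x

c_true = 1.0
opt_true = 2.0 * math.sqrt(c_true)
for err in (-0.20, 0.20):                     # +/- 20% coefficient error
    x_hat = math.sqrt(c_true * (1.0 + err))   # optimize the mis-specified problem
    loss = f(x_hat, c_true) / opt_true - 1.0  # relative suboptimality under truth
    assert 0.0 <= loss < 0.0167               # well under 1.67% here
```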
Flexible Data Fusion (& Fission)
Abstract

Cited by 1 (0 self)
An approach is described for developing methods for "data fusion": given how events A & B occurring by themselves influence some measure, estimate the influence (on that measure) of A and B occurring together. An example is "combine the effects of evidence on the belief (likelihood) of some hypothesis." This approach also deals with the opposite problem of estimating the effects on a measure of A and B by themselves when only their combined effects are known: data fission. The methods developed will both 1) try to make intuitive estimates of information not given, and 2) not conflict with any information given (unless it is inconsistent).
Modular Test Plans for Certification of Software Reliability
Abstract
This paper considers the problem of certifying the reliability of a software system that can be decomposed into a finite number of modules. It uses a Markovian model for the transfer of control between modules in order to develop the system reliability expression in terms of the module reliabilities. A test procedure is considered in which only the individual modules are tested and the system is certified if, and only if, no failures are observed. The minimum number of tests required of each module is determined such that the probability of certifying a system whose reliability falls below a specified value R0 is less than a specified small fraction b. This sample size determination problem is formulated as a two-stage mathematical program and an algorithm is developed for solving this problem. Two examples from the literature are considered to demonstrate the procedure. Keywords: Software reliability; Modular Tests; Sample Size Determination; Mathematical Programming
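For a single module, the zero-failure test plan has a standard closed form: with n tests and no observed failures, a module of true reliability r passes with probability r^n, so choosing the smallest n with R0^n <= b caps the certification probability of any module with r <= R0 at b. A sketch of that single-module special case (the paper's two-stage program generalizes it to systems of many modules):

```python
import math

# Smallest n such that R0**n <= b: a module whose reliability is at most R0
# then survives n failure-free tests with probability at most b.
def zero_failure_sample_size(R0, b):
    return math.ceil(math.log(b) / math.log(R0))

# Example: require that a module with reliability <= 0.99 is certified
# with probability < 5%.
n = zero_failure_sample_size(0.99, 0.05)
assert n == 299
assert 0.99 ** n <= 0.05 < 0.99 ** (n - 1)  # n is minimal
```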