Results 1 – 7 of 7
Optimal design of a CMOS opamp via geometric programming
 IEEE Transactions on Computer-Aided Design
, 2001
"... We describe a new method for determining component values and transistor dimensions for CMOS operational ampli ers (opamps). We observe that a wide variety of design objectives and constraints have a special form, i.e., they are posynomial functions of the design variables. As a result the ampli er ..."
Abstract

Cited by 51 (10 self)
We describe a new method for determining component values and transistor dimensions for CMOS operational amplifiers (op-amps). We observe that a wide variety of design objectives and constraints have a special form, i.e., they are posynomial functions of the design variables. As a result the amplifier design problem can be expressed as a special form of optimization problem called geometric programming, for which very efficient global optimization methods have been developed. As a consequence we can efficiently determine globally optimal amplifier designs, or globally optimal tradeoffs among competing performance measures such as power, open-loop gain, and bandwidth. Our method therefore yields completely automated synthesis of (globally) optimal CMOS amplifiers, directly from specifications. In this paper we apply this method to a specific, widely used operational amplifier architecture, showing in detail how to formulate the design problem as a geometric program. We compute globally optimal tradeoff curves relating performance measures such as power dissipation, unity-gain bandwidth, and open-loop gain. We show how the method can be used to synthesize robust designs, i.e., designs guaranteed to meet the specifications for a ...
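The key observation in the abstract is that a posynomial objective and posynomial constraints become convex after a log change of variables, so a local solver finds the global optimum. The toy sketch below (not the opamp problem itself; the tiny objective and constraint are invented for illustration) shows that transformation on a two-variable geometric program, solved with SciPy:

```python
import numpy as np
from scipy.optimize import minimize

# Toy geometric program (illustrative, not the paper's opamp formulation):
#   minimize f(x, y) = 2/(x*y) + x*y   subject to  x*y >= 1,  x, y > 0.
# Substituting x = exp(u), y = exp(v) makes the log of every posynomial
# convex, so a local method reaches the global optimum.

def log_posynomial(z, coeffs, exponents):
    """log of sum_k c_k * exp(a_k . z) -- convex in z."""
    return np.log(np.sum(coeffs * np.exp(exponents @ z)))

obj_c = np.array([2.0, 1.0])
obj_a = np.array([[-1.0, -1.0],   # 2 * x^-1 * y^-1
                  [ 1.0,  1.0]])  # 1 * x^1  * y^1

# Constraint x*y >= 1  <=>  x^-1 * y^-1 <= 1  <=>  log(...) <= 0
con_c = np.array([1.0])
con_a = np.array([[-1.0, -1.0]])

res = minimize(
    lambda z: log_posynomial(z, obj_c, obj_a),
    x0=np.zeros(2),
    constraints=[{"type": "ineq",
                  "fun": lambda z: -log_posynomial(z, con_c, con_a)}],
)
x, y = np.exp(res.x)
```

Here only the product xy is determined (the optimum has xy = sqrt(2), objective value 2*sqrt(2)); a real opamp program would pin each width, length, and bias current via many such constraints.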
The Statistical Adversary Allows Optimal Money-Making Trading Strategies (Extended Abstract)
, 1993
"... Andrew Chou Jeremy Cooperstock y Ran ElYaniv z Michael Klugerman x Tom Leighton  November, 1993 Abstract The distributional approach and competitive analysis have traditionally been used for the design and analysis of online algorithms. The former assumes a specific distribution on inputs, whil ..."
Abstract

Cited by 22 (4 self)
Andrew Chou, Jeremy Cooperstock, Ran El-Yaniv, Michael Klugerman, Tom Leighton. November, 1993. Abstract: The distributional approach and competitive analysis have traditionally been used for the design and analysis of online algorithms. The former assumes a specific distribution on inputs, while the latter assumes inputs are chosen by an unrestricted adversary. This paper employs the statistical adversary (recently proposed by Raghavan) to analyze and design online algorithms for two-way currency trading. The statistical adversary approach may be viewed as a hybrid of the distributional approach and competitive analysis. By statistical adversary, we mean an adversary that generates input sequences, where each sequence must satisfy certain general statistical properties. The online algorithms presented in this paper have some very attractive properties. For instance, the algorithms are money-making; they are guaranteed to be profitable when the optimal offline ...
A general approach to sparse basis selection: Majorization, concavity, and affine scaling
 IN PROCEEDINGS OF THE TWELFTH ANNUAL CONFERENCE ON COMPUTATIONAL LEARNING THEORY
, 1997
"... Measures for sparse bestâ€“basis selection are analyzed and shown to fit into a general framework based on majorization, Schurconcavity, and concavity. This framework facilitates the analysis of algorithm performance and clarifies the relationships between existing proposed concentration measures use ..."
Abstract

Cited by 6 (3 self)
Measures for sparse best-basis selection are analyzed and shown to fit into a general framework based on majorization, Schur-concavity, and concavity. This framework facilitates the analysis of algorithm performance and clarifies the relationships between existing proposed concentration measures useful for sparse basis selection. It also allows one to define new concentration measures, and several general classes of measures are proposed and analyzed in this paper. Admissible measures are given by the Schur-concave functions, which are the class of functions consistent with the so-called Lorentz ordering (a partial ordering on vectors also known as majorization). In particular, concave functions form an important subclass of the Schur-concave functions which attain their minima at sparse solutions to the best basis selection problem. A general affine scaling optimization algorithm obtained from a special factorization of the gradient function is developed and proved to converge to a sparse solution for measures chosen from within this subclass.
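One well-known member of this affine-scaling family is the FOCUSS-style iteration, which minimizes the concave (hence Schur-concave) measure sum_i |x_i|^p with p < 1 by repeatedly re-weighting a minimum-norm solve. The sketch below is an illustration of that family under those assumptions, not the paper's exact formulation; the problem sizes and function names are invented:

```python
import numpy as np

# FOCUSS-style affine-scaling sketch for sparse basis selection:
# seek a sparse x with A x = b by descending the concave measure
# sum_i |x_i|^p (here p = 0.5), via iterative diagonal re-scaling.

def affine_scaling_sparse(A, b, p=0.5, iters=100, eps=1e-12):
    x = np.linalg.pinv(A) @ b           # start from the minimum-norm solution
    for _ in range(iters):
        # Affine scaling: small entries get small weights and shrink further.
        w = np.abs(x) ** (1 - p / 2) + eps
        AW = A * w                       # A @ diag(w), via column broadcasting
        x = w * (np.linalg.pinv(AW) @ b)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 30))
x_true = np.zeros(30)
x_true[[3, 17, 25]] = [1.0, -2.0, 0.5]  # a 3-sparse ground truth
b = A @ x_true
x_hat = affine_scaling_sparse(A, b)
```

The minimum-norm starting point is dense in all 30 coordinates; the re-weighted iterations drive most of them toward zero while keeping A x = b satisfied.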
Nearly Optimal Competitive Online Replacement
"... This paper studies the following online replacement problem. There is a real function f(t), called the flow rate, defined over a finite time horizon [0; T ]. It is known that m f(t) M for some reals 0 m ! M . At time 0 an online player starts to pay money at the rate f(0). At each time 0 ! t T ..."
Abstract

Cited by 5 (2 self)
This paper studies the following online replacement problem. There is a real function f(t), called the flow rate, defined over a finite time horizon [0, T]. It is known that m ≤ f(t) ≤ M for some reals 0 ≤ m < M. At time 0 an online player starts to pay money at the rate f(0). At each time 0 < t ≤ T the player may change over and continue paying money at the rate f(t). The complication is that each such changeover incurs some fixed penalty. The player is called online as at each time t the player knows f only over the time interval [0, t]. The goal of the player is to minimize the total cost, comprised of the cumulative payment flow plus changeover costs. This formulation of the replacement problem has various interesting applications, among which are: equipment replacement, supplier replacement, the menu cost problem, and mortgage refinancing.
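The cost structure in the abstract can be made concrete with a small simulation. The changeover rule below is a naive greedy heuristic (switch whenever the projected saving over the remaining horizon beats the penalty), invented for illustration; it is not the paper's nearly optimal competitive strategy, and the flow function and parameters are likewise assumptions:

```python
# Simulation of the online replacement game with a simple greedy
# changeover heuristic (illustrative only, not the paper's strategy).

def simulate(flow, T, penalty, dt=0.01):
    rate = flow(0.0)          # start paying at rate f(0)
    cost = 0.0
    t = 0.0
    while t < T:
        f_t = flow(t)
        # Change over if the projected saving over the remaining horizon
        # exceeds the fixed changeover penalty.
        if (rate - f_t) * (T - t) > penalty:
            cost += penalty
            rate = f_t
        cost += rate * dt     # cumulative payment flow
        t += dt
    return cost

# Example: flow rate decays linearly from M = 2 toward m = 0.5 on [0, 10].
flow = lambda t: 0.5 + 1.5 * (1 - t / 10.0)
total = simulate(flow, T=10.0, penalty=1.0)
```

On this steadily decreasing flow the greedy rule switches repeatedly and its total cost ends up close to the never-switch cost of 20, which is exactly the kind of adversarial behavior that motivates the competitive analysis in the paper.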
Incremental Communication for Multilayer Neural Networks: Error Analysis
, 1995
"... Artificial neural networks (ANNs) involve a large amount of internode communications. To reduce the communication cost as well as the time of learning process in ANNs, we earlier proposed an incremental internode communication method. In the incremental communication method, instead of communicati ..."
Abstract

Cited by 2 (1 self)
Artificial neural networks (ANNs) involve a large amount of inter-node communication. To reduce the communication cost as well as the time of the learning process in ANNs, we earlier proposed an incremental inter-node communication method. In the incremental communication method, instead of communicating the full magnitude of the output value of a node, only the increment or decrement to its previous value is sent on a communication link. In this paper, the effects of the limited-precision incremental communication method on the convergence behavior and performance of multilayer neural networks are investigated. The nonlinear aspects of representing the incremental values with reduced (limited) precision for the commonly used error backpropagation training algorithm are analyzed. It is shown that the nonlinear effect of small perturbations in the input(s)/output of a node does not enforce instability. The analysis is supported by simulation studies of two problems. The simulation results ...
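The core idea — transmit only a limited-precision increment, with the sender tracking what the receiver holds so quantization error cannot accumulate — can be sketched as follows. The class, method names, and the 8-bit fixed-point grid are illustrative assumptions, not the paper's exact scheme:

```python
# Sketch of limited-precision incremental communication: only the
# quantized change from the previous value crosses the link, and the
# sender mirrors the receiver's state so errors do not accumulate.

def quantize(v, bits=8, scale=1.0):
    """Round v to a fixed-point grid of 2**bits levels over [-scale, scale]."""
    step = 2 * scale / (2 ** bits)
    return round(v / step) * step

class IncrementalLink:
    def __init__(self, bits=8):
        self.bits = bits
        self.sent = 0.0       # sender's copy of the receiver's state
        self.received = 0.0   # receiver's reconstruction

    def send(self, value):
        delta = quantize(value - self.sent, self.bits)
        self.sent += delta          # track exactly what the receiver holds
        self.received += delta      # "transmit" only the small increment
        return self.received

link = IncrementalLink(bits=8)
outputs = [0.90, 0.92, 0.91, 0.95]          # successive node outputs
recovered = [link.send(v) for v in outputs]
```

Because the sender quantizes against its mirror of the receiver's state, the reconstruction error stays bounded by half a quantization step at every time step, which is the kind of bounded perturbation whose stability the paper analyzes.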
Accelerated Backpropagation Learning: Extended Dynamic Parallel Tangent Optimization Algorithm
 Lecture Notes in Artificial Intelligence 1822
, 2000
"... The backpropagation algorithm is an iterative gradient descent algorithm designed to train multilayer neural networks. Despite its popularity and eectiveness, the orthogonal steps (zigzagging) near the optimum point slows down the convergence of this algorithm. To overcome the ineciency of zigza ..."
Abstract
The backpropagation algorithm is an iterative gradient descent algorithm designed to train multilayer neural networks. Despite its popularity and effectiveness, the orthogonal steps (zigzagging) near the optimum point slow down the convergence of this algorithm. To overcome the inefficiency of zigzagging in the conventional backpropagation algorithm, one of the authors earlier proposed the use of a deflecting gradient technique to improve the convergence of the backpropagation learning algorithm. The proposed method is called the Partan backpropagation learning algorithm [3]. The convergence time of multilayer networks has been further improved through dynamic adaptation of their learning rates [6]. In this paper, an extension to the dynamic parallel tangent learning algorithm is proposed. In the proposed algorithm, each connection has its own learning rate as well as acceleration rate. These individual rates are dynamically adapted as the learning proceeds. Simulation studies are carried out on different learning problems. A faster rate of convergence is achieved for all problems used in the simulations. Keywords: Artificial neural networks, Backpropagation, Gradient descent, Parallel tangent, Dynamic parallel tangent.
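The parallel-tangent (Partan) idea of deflecting the gradient can be illustrated outside the backpropagation setting: after each gradient step, extrapolate along the direction joining the current point to the point two iterations back, which damps zigzagging in narrow valleys. This is a generic Partan sketch on an invented ill-conditioned quadratic, with fixed global rates rather than the paper's per-connection adaptive rates:

```python
import numpy as np

# Parallel-tangent (Partan) acceleration sketch: a gradient step followed
# by an extrapolation step toward the point two iterations back.

def partan(grad, x0, lr=0.1, accel=0.5, steps=200):
    prev2 = x0.copy()
    x = x0 - lr * grad(x0)                # initial plain gradient step
    for _ in range(steps):
        y = x - lr * grad(x)              # gradient step
        x_new = y + accel * (y - prev2)   # tangent (deflecting) step
        prev2, x = x, x_new
    return x

# Ill-conditioned quadratic f(x) = 0.5 * (x0**2 + 25 * x1**2),
# where plain gradient descent zigzags badly across the narrow axis.
grad = lambda x: np.array([x[0], 25.0 * x[1]])
x_star = partan(grad, np.array([5.0, 1.0]), lr=0.03)
```

With a learning rate safe for the stiff coordinate (here 0.03), plain gradient descent contracts the flat coordinate only by a factor 0.97 per step; the extrapolation term speeds this up substantially, and the iterate converges to the minimum at the origin.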