Results 1–10 of 26
Linear recurrences with polynomial coefficients and computation of the Cartier–Manin operator on hyperelliptic curves
In International Conference on Finite Fields and Applications (Toulouse), 2004
"... Abstract. We study the complexity of computing one or several terms (not necessarily consecutive) in a recurrence with polynomial coefficients. As applications, we improve the best currently known upper bounds for factoring integers deterministically and for computing the Cartier–Manin operator of h ..."
Abstract

Cited by 22 (8 self)
Abstract. We study the complexity of computing one or several terms (not necessarily consecutive) in a recurrence with polynomial coefficients. As applications, we improve the best currently known upper bounds for factoring integers deterministically and for computing the Cartier–Manin operator of hyperelliptic curves.
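The baseline such results improve on can be made concrete with a toy sketch (illustrative only, not the paper's method): the recurrence u_k = k·u_{k-1} mod p, whose n-th term is n! mod p, evaluated by the obvious linear-time loop. Baby-step/giant-step matrix-product techniques of the kind studied in the paper bring this down to roughly the square root of n ring operations.

```python
# Illustrative baseline only (not the paper's algorithm): the recurrence
# u_k = k * u_{k-1} (mod p) has u_n = n! mod p, and the loop below costs
# O(n) ring operations -- the cost the paper's baby-step/giant-step
# matrix techniques reduce to roughly O(sqrt(n)).

def term_naive(n, p, u0=1):
    """n-th term of u_k = k * u_{k-1} (mod p), i.e. n! * u0 mod p."""
    u = u0 % p
    for k in range(1, n + 1):
        u = (u * k) % p
    return u

# Wilson's theorem sanity check: (p-1)! = -1 (mod p) for prime p.
assert term_naive(6, 7) == 6
```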
Diagrammatic Derivation of Gradient Algorithms for Neural Networks
In Neural Computation, 1994
"... Deriving gradient algorithms for timedependent neural network structures typically requires numerous chain rule expansions, diligent bookkeeping, and careful manipulation of terms. In this paper, we show how to use the principle of Network Reciprocity to derive such algorithms via a set of simple b ..."
Abstract

Cited by 19 (1 self)
Deriving gradient algorithms for time-dependent neural network structures typically requires numerous chain rule expansions, diligent bookkeeping, and careful manipulation of terms. In this paper, we show how to use the principle of Network Reciprocity to derive such algorithms via a set of simple block diagram manipulation rules. The approach provides a common framework to derive popular algorithms including backpropagation and backpropagation-through-time without a single chain rule expansion. Additional examples are provided for a variety of complicated architectures to illustrate both the generality and the simplicity of the approach. 1 Introduction Deriving the appropriate gradient descent algorithm for a new network architecture or system configuration normally involves brute force derivative calculations. For example, the celebrated backpropagation algorithm for training feedforward neural networks was derived by repeatedly applying chain rule expansions backward through the ne...
Fast algorithms for zero-dimensional polynomial systems using duality
In Applicable Algebra in Engineering, Communication and Computing, 2001
"... Many questions concerning a zerodimensional polynomial system can be reduced to linear algebra operations in the quotient algebra A = k[X1,..., Xn]/I, where I is the ideal generated by the input system. Assuming that the multiplicative structure of the algebra A is (partly) known, we address the q ..."
Abstract

Cited by 16 (3 self)
Many questions concerning a zero-dimensional polynomial system can be reduced to linear algebra operations in the quotient algebra A = k[X1,..., Xn]/I, where I is the ideal generated by the input system. Assuming that the multiplicative structure of the algebra A is (partly) known, we address the question of speeding up the linear algebra phase for the computation of minimal polynomials and rational parametrizations in A. We present new formulæ for the rational parametrizations, extending those of Rouillier, and algorithms extending ideas introduced by Shoup in the univariate case. Our approach is based on the A-module structure of the dual space Â. An important feature of our algorithms is that we do not require Â to be free and of rank 1. The complexities of our algorithms for computing the minimal polynomial and the rational parametrizations are O(2^n D^(5/2)) and O(n 2^n D^(5/2)) respectively, where D is the dimension of A. For fixed n, this is better than algorithms based on linear algebra except when the exponent of the available matrix product is less than 5/2.
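As a point of reference for the univariate case mentioned above, here is a naive sketch (helper names are made up, and this is the brute-force linear-algebra method, not the paper's duality-based one): the minimal polynomial of an element g of A = F_p[x]/(f) is found as the first linear dependency among the powers 1, g, g², ....

```python
# Brute-force "linear algebra phase" in the univariate quotient algebra
# A = F_p[x]/(f), p prime, f monic: find the minimal polynomial of g by
# Gaussian elimination over the powers of g.  Polynomials are coefficient
# lists, lowest degree first.  Not the paper's algorithm -- a baseline.

def polmulmod(a, b, f, p):
    """a * b mod (f, p) for monic f."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    d = len(f) - 1
    for k in range(len(prod) - 1, d - 1, -1):  # reduce degrees >= d
        c = prod[k]
        if c:
            for j in range(d + 1):
                prod[k - d + j] = (prod[k - d + j] - c * f[j]) % p
    return prod[:d]

def minpoly(g, f, p):
    """Monic minimal polynomial of g in F_p[x]/(f), lowest degree first."""
    d = len(f) - 1
    rows = []                        # (pivot, reduced row, combination)
    pw, k = [1] + [0] * (d - 1), 0   # current power g^k in monomial basis
    while True:
        vec = pw[:]
        combo = [0] * (k + 1)        # expresses vec as sum of g^i
        combo[k] = 1
        for piv, r, cmb in rows:     # reduce against stored rows
            c = vec[piv]
            if c:
                for j in range(d):
                    vec[j] = (vec[j] - c * r[j]) % p
                for j in range(len(cmb)):
                    combo[j] = (combo[j] - c * cmb[j]) % p
        if all(v == 0 for v in vec):
            return combo             # first dependency: minimal, monic
        piv = next(i for i, v in enumerate(vec) if v)
        inv = pow(vec[piv], p - 2, p)          # inverse mod prime p
        rows.append((piv, [(v * inv) % p for v in vec],
                     [(c * inv) % p for c in combo]))
        pw = polmulmod(pw, g, f, p)
        k += 1
```

For example, in F_7[x]/(x² + 1) the element x has minimal polynomial x² + 1 itself, and x + 1 satisfies y² - 2y + 2 = 0.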
Relating Real-Time Backpropagation and Backpropagation-Through-Time: An Application of Flow Graph Interreciprocity.
"... We show that signal flow graph theory provides a simple way to relate two popular algorithms used for adapting dynamic neural networks, realtime backpropagation and backpropagationthroughtime. Starting with the flow graph for realtime backpropagation, we use a simple transposition to produce a s ..."
Abstract

Cited by 12 (2 self)
We show that signal flow graph theory provides a simple way to relate two popular algorithms used for adapting dynamic neural networks, real-time backpropagation and backpropagation-through-time. Starting with the flow graph for real-time backpropagation, we use a simple transposition to produce a second graph. The new graph is shown to be interreciprocal with the original and to correspond to the backpropagation-through-time algorithm. Interreciprocity provides a theoretical argument to verify that both flow graphs implement the same overall weight update. Introduction Two adaptive algorithms, real-time backpropagation (RTBP) and backpropagation-through-time (BPTT), are currently used to train multilayer neural networks with output feedback connections. RTBP was first introduced for single layer fully recurrent networks by Williams and Zipser (1989). The algorithm has since been extended to include feedforward networks with output feedback (see, e.g., Narendra 1990). The algorithm is...
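The claimed equivalence can be checked numerically on a one-weight recurrent system (an illustrative toy, not from the paper): y_t = w·y_{t-1} + x_t with squared-error targets d_t. RTBP accumulates the sensitivity dy_t/dw forward in time, while BPTT propagates errors backward through the transposed graph; both yield the same dL/dw.

```python
# Toy numeric check that RTBP (forward sensitivity) and BPTT (backward
# error propagation through the transposed graph) produce the same
# gradient for y_t = w*y_{t-1} + x_t, L = 0.5 * sum (y_t - d_t)^2.

def rtbp_grad(w, xs, ds, y0=0.0):
    y, s, g = y0, 0.0, 0.0
    for x, d in zip(xs, ds):
        s = y + w * s            # sensitivity dy_t/dw, forward in time
        y = w * y + x
        g += (y - d) * s
    return g

def bptt_grad(w, xs, ds, y0=0.0):
    ys = [y0]
    for x in xs:                 # forward pass, store trajectory
        ys.append(w * ys[-1] + x)
    g, delta = 0.0, 0.0
    for t in range(len(xs), 0, -1):
        delta = (ys[t] - ds[t - 1]) + w * delta   # backward error
        g += delta * ys[t - 1]
    return g

# Both agree; e.g. at w = 0.5, xs = [1, 2], ds = [0, 0], each returns 2.5.
```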
Optimization techniques for high-performance digital circuits
In Proc. IEEE Int. Conf. Computer-Aided Design (ICCAD), 1997
"... The relentless push for high performance in custom digital circuits has led to renewed emphasis on circuit optimization or tuning. The parameters of the optimization are typically transistor and interconnect sizes. The design metrics are not just delay, transition times, power and area, but also ..."
Abstract

Cited by 10 (2 self)
The relentless push for high performance in custom digital circuits has led to renewed emphasis on circuit optimization or tuning. The parameters of the optimization are typically transistor and interconnect sizes. The design metrics are not just delay, transition times, power and area, but also signal integrity and manufacturability. This tutorial paper discusses some of the recently proposed methods of circuit optimization, with an emphasis on practical application and methodology impact. Circuit optimization techniques fall into three broad categories. The first is dynamic tuning, based on time-domain simulation of the underlying circuit, typically combined with adjoint sensitivity computation. These methods are accurate but require the specification of input signals, and are best applied to small dataflow circuits and "cross-sections" of larger circuits. Efficient sensitivity computation renders feasible the tuning of circuits with a few thousand transistors. Second, static tuners employ static timing analysis to evaluate the performance of the circuit. All paths through the logic are simultaneously tuned, and no input vectors are required. Large control macros are best tuned by these methods. However, in the context of deep submicron custom design, the inaccuracy of the delay models employed by these methods often limits their utility. Aggressive dynamic or static tuning can push a circuit into a precipitous corner of the manufacturing process space, which is a problem addressed by the third class of circuit optimization tools, statistical tuners. Statistical techniques are used to enhance manufacturability or maximize yield. In addition to surveying the above techniques, topics such as the use of state-of-the-art nonlinear optimization methods and special considerations for interconnect sizing, clock tree optimization and noise-aware tuning will be briefly considered.
Circuit Optimization via Adjoint Lagrangians
In IEEE International Conference on Computer-Aided Design, 1997
"... The circuit tuning problem is best approached by means of gradientbased nonlinear optimization algorithms. For large circuits, gradient computation can be the bottleneck in the optimization procedure. Traditionally, when the number of measurements is large relative to the number of tunable paramete ..."
Abstract

Cited by 8 (4 self)
The circuit tuning problem is best approached by means of gradient-based nonlinear optimization algorithms. For large circuits, gradient computation can be the bottleneck in the optimization procedure. Traditionally, when the number of measurements is large relative to the number of tunable parameters, the direct method [2] is used to repeatedly solve the associated sensitivity circuit to obtain all the necessary gradients. Likewise, when the parameters outnumber the measurements, the adjoint method [1] is employed to solve the adjoint circuit repeatedly for each measurement to compute the sensitivities. In this paper, we propose the adjoint Lagrangian method, which computes all the gradients necessary for augmented-Lagrangian-based optimization in a single adjoint analysis. After the nominal simulation of the circuit has been carried out, the gradients of the merit function are expressed as the gradients of a weighted sum of circuit measurements. The weights are dependent on the nominal solution and on optimizer quantities such as Lagrange multipliers. By suitably choosing the excitations of the adjoint circuit, the gradients of the merit function are computed via a single adjoint analysis, irrespective of the number of measurements and the number of parameters of the optimization. This procedure requires close integration between the nonlinear optimization software and the circuit simulation program. The adjoint ...
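A minimal sketch of the underlying adjoint machinery on a linear resistive circuit (the divider example and all names are illustrative, not from the paper): with nodal equations G v = i and a measurement m = cᵀv, one adjoint solve Gᵀu = c yields the sensitivity dm/dg = -uᵀ(∂G/∂g)v for every conductance at once. The adjoint Lagrangian idea goes one step further: taking c as an optimizer-weighted sum Σᵢ λᵢcᵢ of measurement vectors lets a single adjoint analysis deliver the gradient of the whole merit function.

```python
# Classical adjoint sensitivity for a linear circuit, on a two-node
# divider: current source I into node 1, conductance g1 between nodes
# 1 and 2, g2 from node 2 to ground.  Measurement m = v2.
import numpy as np

def divider_sensitivity(g1, g2, I=1.0):
    """dm/dg2 for m = v2, via one nominal and one adjoint solve."""
    G = np.array([[g1, -g1], [-g1, g1 + g2]])
    i = np.array([I, 0.0])
    c = np.array([0.0, 1.0])           # measurement: pick out v2
    v = np.linalg.solve(G, i)          # nominal analysis
    u = np.linalg.solve(G.T, c)        # single adjoint analysis
    dG_dg2 = np.array([[0.0, 0.0], [0.0, 1.0]])
    return -u @ dG_dg2 @ v

# Closed form: all current flows through g2, so v2 = I/g2 and
# dv2/dg2 = -I/g2**2.
assert abs(divider_sensitivity(2.0, 4.0) - (-1.0 / 16.0)) < 1e-12
```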
Adjoint techniques for sensitivity analysis in high-frequency structure CAD
In IEEE Trans. Microwave Theory Tech., 2004
"... There is a revival of the interest in adjoint sensitivity analysis techniques. This is partly because current computeraideddesign software based on fullwave electromagnetic (EM) solvers remains too slow for the purposes of practical highfrequency structure design despite the increasing capacity ..."
Abstract

Cited by 6 (3 self)
There is a revival of interest in adjoint sensitivity analysis techniques. This is partly because current computer-aided design software based on full-wave electromagnetic (EM) solvers remains too slow for the purposes of practical high-frequency structure design despite the increasing capacity of computers. The adjoint-variable methods for design sensitivity analysis offer computational speed and accuracy. They can be used for efficient gradient-based optimization, and in tolerance and yield analysis. Adjoint-based sensitivity analysis for circuits has been well studied and extensively covered in the microwave literature. In comparison, sensitivities with full-wave analysis techniques have attracted little attention, and there have been few applications in feasible and versatile algorithms. We review adjoint-variable methods used in high-frequency structure design with both circuit analysis techniques and full-wave EM analysis techniques. A brief discussion of adjoint-based sensitivity analysis for nonlinear dynamic systems is also included.
On the complexities of multipoint evaluation and interpolation
In TCS
"... We compare the complexities of multipoint polynomial evaluation and interpolation. We show that, over a field of characteristic zero, both questions have equivalent complexities, up to a constant number of polynomial multiplications. ..."
Abstract

Cited by 6 (5 self)
We compare the complexities of multipoint polynomial evaluation and interpolation. We show that, over a field of characteristic zero, both questions have equivalent complexities, up to a constant number of polynomial multiplications.
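For orientation, the two problems look as follows in their naive quadratic-time form (a plain-Python sketch; the paper's equivalence concerns the fast, softly-linear algorithms built on subproduct trees, not these loops):

```python
# Naive multipoint evaluation (Horner at each point) and Lagrange
# interpolation, both O(n^2); coefficient lists are lowest degree first.

def eval_multi(coeffs, points):
    """Evaluate p(x) = sum c_k x^k at each given point."""
    out = []
    for x in points:
        acc = 0.0
        for c in reversed(coeffs):
            acc = acc * x + c
        out.append(acc)
    return out

def mul_linear(poly, a):
    """Multiply a polynomial by (x - a)."""
    out = [0.0] * (len(poly) + 1)
    for k, c in enumerate(poly):
        out[k + 1] += c
        out[k] -= a * c
    return out

def interp(points, values):
    """Coefficients of the unique degree < n polynomial through the data."""
    n = len(points)
    coeffs = [0.0] * n
    for i in range(n):
        basis, denom = [1.0], 1.0      # i-th Lagrange basis polynomial
        for j in range(n):
            if j != i:
                basis = mul_linear(basis, points[j])
                denom *= points[i] - points[j]
        scale = values[i] / denom
        for k in range(n):
            coeffs[k] += scale * basis[k]
    return coeffs
```

Round-tripping a polynomial through `eval_multi` and `interp` recovers its coefficients, which is the sense in which the two problems are inverse to each other.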
A Signal-Flow-Graph Approach to Online Gradient Calculation
In Neural Computation, 2000
"... A large class of nonlinear dynamic adaptive systems such as dynamic recurrent neural networks can be effectively represented by signal flow graphs (SFGs). By this method, complex systems are described as a general connection of many simple components, each of them implementing a simple oneinput, on ..."
Abstract

Cited by 3 (1 self)
A large class of nonlinear dynamic adaptive systems such as dynamic recurrent neural networks can be effectively represented by signal flow graphs (SFGs). By this method, complex systems are described as a general connection of many simple components, each of them implementing a simple one-input, one-output transformation, as in an electrical circuit. Even if graph representations are popular in the neural network community, they are often used for qualitative description rather than for rigorous representation and computational purposes. In this article, a method for both online and batch backward gradient computation of a system output or cost function with respect to system parameters is derived by the SFG representation theory and its known properties. The system can be any causal, in general nonlinear and time-variant, dynamic system represented by an SFG, in particular any feedforward, time-delay, or recurrent neural network. In this work, we use discrete-time notation, but the same theory holds for the continuous-time case. The gradient is obtained in a straightforward way by the analysis of two SFGs, the original one and its adjoint (obtained from the first by simple transformations), without the complex chain rule expansions of derivatives usually employed. This method can be used for sensitivity analysis and for learning both offline and online. Online learning is particularly important since it is required by many real applications, such as digital signal processing, system identification and control, channel equalization, and predistortion.
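The two-graph recipe can be illustrated on a tiny static SFG (a made-up example, not from the article): run the original graph forward, then inject a unit value at the output of the adjoint graph, whose branch gains are the local partial derivatives recorded during the forward pass.

```python
# Gradient of a toy two-branch SFG by adjoint-graph analysis: the
# forward graph computes y = sin(w1*x), z = w2*y*y; the adjoint graph
# runs the same topology transposed, with branch gains equal to the
# local partial derivatives.  No chain rule expansion is written out.
import math

def forward(w1, w2, x):
    y = math.sin(w1 * x)
    z = w2 * y * y
    return y, z

def grads_adjoint(w1, w2, x):
    y, _ = forward(w1, w2, x)
    dz = 1.0                           # unit injection at the output
    dy = dz * 2.0 * w2 * y             # branch gain dz/dy
    dw2 = dz * y * y                   # branch gain dz/dw2
    dw1 = dy * math.cos(w1 * x) * x    # branch gain dy/dw1
    return dw1, dw2
```

A finite-difference check on `forward` confirms both parameter gradients.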
Adjoint systems for models of cell signaling pathways and their application to parameter fitting
In IEEE/ACM Transactions on Computational Biology and Bioinformatics, 2006
"... Abstract—The paper concerns the problem of fitting mathematical models of cell signaling pathways. Such models frequently take the form of sets of nonlinear ordinary differential equations. While the model is continuous in time, the performance index used in the fitting procedure involves measuremen ..."
Abstract

Cited by 2 (2 self)
Abstract—The paper concerns the problem of fitting mathematical models of cell signaling pathways. Such models frequently take the form of sets of nonlinear ordinary differential equations. While the model is continuous in time, the performance index used in the fitting procedure involves measurements taken at discrete time moments. Adjoint sensitivity analysis is a tool which can be used for finding the gradient of a performance index in the space of parameters of the model. In the paper, a structural formulation of adjoint sensitivity analysis called Generalized Backpropagation Through Time (GBPTT) is used. The method is especially suited for hybrid, continuous-discrete time systems. As an example, we use the mathematical model of the NF-κB regulatory module, which plays a major role in the innate immune response in animals. Index Terms—Biology and genetics, modeling, ordinary differential equations, parameter learning.
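A minimal discrete-adjoint sketch in the GBPTT spirit (the scalar decay model and all names here are illustrative; the paper's example is the NF-κB module): after a forward Euler simulation of dx/dt = -θx, errors are injected at the discrete measurement instants and propagated backward, accumulating dJ/dθ in one backward pass.

```python
# Discrete adjoint (equivalently, BPTT on the Euler-discretized model)
# for fitting theta in dx/dt = -theta * x against measurements taken at
# discrete steps, with J = 0.5 * sum over measured steps of (x_k - d_k)^2.

def simulate(theta, x0, h, N):
    """Forward Euler trajectory x_0 .. x_N, step size h."""
    xs = [x0]
    for _ in range(N):
        xs.append((1.0 - h * theta) * xs[-1])
    return xs

def grad_adjoint(theta, x0, h, N, meas):
    """dJ/dtheta; meas maps step index -> measured datum."""
    xs = simulate(theta, x0, h, N)
    lam, g = 0.0, 0.0
    for k in range(N, 0, -1):
        if k in meas:
            lam += xs[k] - meas[k]        # inject error at measurement times
        g += lam * (-h * xs[k - 1])       # local d x_k / d theta
        lam *= (1.0 - h * theta)          # propagate adjoint backward
    return g
```

A central finite difference on the loss confirms the adjoint gradient.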