Results 1–10 of 11
Linear recurrences with polynomial coefficients and computation of the Cartier–Manin operator on hyperelliptic curves
In International Conference on Finite Fields and Applications (Toulouse), 2004. Cited by 21 (8 self).
Abstract: We study the complexity of computing one or several terms (not necessarily consecutive) in a recurrence with polynomial coefficients. As applications, we improve the best currently known upper bounds for factoring integers deterministically and for computing the Cartier–Manin operator of hyperelliptic curves.
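To make the problem concrete, here is a minimal sketch (illustrative only, not the paper's algorithm): computing the N-th term of a first-order recurrence u_{k+1} = p(k)·u_k with a polynomial coefficient p, working modulo m. The naive loop costs O(N) ring operations; the baby-step/giant-step techniques studied in the paper bring this down to roughly O(√N) polynomial multiplications. The function name and parameters below are hypothetical.

```python
def term_of_recurrence(p, u0, N, m):
    """Return u_N mod m for the recurrence u_{k+1} = p(k) * u_k, u_0 = u0.

    Naive O(N) scan; the paper's methods achieve roughly O(sqrt(N))
    polynomial multiplications for the same task.
    """
    u = u0 % m
    for k in range(N):
        u = (u * p(k)) % m
    return u

# With p(k) = k + 1 and u_0 = 1, the N-th term is N! -- fast modular
# factorials underlie the deterministic-factoring application.
print(term_of_recurrence(lambda k: k + 1, 1, 10, 10**9))  # 3628800 = 10!
```

The same loop handles any polynomial coefficient p, e.g. the terms arising in the Cartier–Manin computation.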
Fast algorithms for zero-dimensional polynomial systems using duality
In Applicable Algebra in Engineering, Communication and Computing, 2001. Cited by 16 (3 self).
Abstract: Many questions concerning a zero-dimensional polynomial system can be reduced to linear algebra operations in the quotient algebra A = k[X1,..., Xn]/I, where I is the ideal generated by the input system. Assuming that the multiplicative structure of the algebra A is (partly) known, we address the question of speeding up the linear algebra phase for the computation of minimal polynomials and rational parametrizations in A. We present new formulæ for the rational parametrizations, extending those of Rouillier, and algorithms extending ideas introduced by Shoup in the univariate case. Our approach is based on the A-module structure of the dual space Â. An important feature of our algorithms is that we do not require Â to be free and of rank 1. The complexities of our algorithms for computing the minimal polynomial and the rational parametrizations are O(2^n D^(5/2)) and O(n 2^n D^(5/2)) respectively, where D is the dimension of A. For fixed n, this is better than algorithms based on linear algebra except when the exponent of the available matrix product is less than 5/2.
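A toy univariate instance of the linear-algebra phase mentioned above (an illustration under assumed notation, not the paper's duality-based algorithm): in the quotient algebra A = Q[x]/(x³ − 2), the minimal polynomial of an element a is the first linear dependency among the powers 1, a, a², ..., which can be found by solving a small linear system. The helper names are hypothetical; NumPy's least-squares solver stands in for exact linear algebra.

```python
import numpy as np

def mulmod(u, v):
    """Multiply two elements of Q[x]/(x^3 - 2); coefficients low-to-high."""
    c = [0.0] * 5
    for i in range(3):
        for j in range(3):
            c[i + j] += u[i] * v[j]
    for i in (4, 3):              # reduce using the relation x^3 = 2
        c[i - 3] += 2 * c[i]
    return c[:3]

a = [0.0, 0.0, 1.0]               # the element a = x^2
powers = [[1.0, 0.0, 0.0]]        # a^0 = 1
for _ in range(3):
    powers.append(mulmod(powers[-1], a))

# a^3 lies in the span of 1, a, a^2: solve M c = a^3 for the dependency.
M = np.array(powers[:3]).T        # columns are the earlier powers
c, *_ = np.linalg.lstsq(M, np.array(powers[3]), rcond=None)
print(np.round(c, 6))             # [4. 0. 0.]: a^3 = 4, min poly y^3 - 4
```

Indeed (x²)³ = x⁶ = 4 in this algebra, so the minimal polynomial of a is y³ − 4; the paper's contribution is doing this kind of computation fast via the dual-space structure.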
Diagrammatic Derivation of Gradient Algorithms for Neural Networks
In Neural Computation, 1994. Cited by 15 (1 self).
Abstract: Deriving gradient algorithms for time-dependent neural network structures typically requires numerous chain rule expansions, diligent bookkeeping, and careful manipulation of terms. In this paper, we show how to use the principle of Network Reciprocity to derive such algorithms via a set of simple block diagram manipulation rules. The approach provides a common framework to derive popular algorithms, including backpropagation and backpropagation-through-time, without a single chain rule expansion. Additional examples are provided for a variety of complicated architectures to illustrate both the generality and the simplicity of the approach. Deriving the appropriate gradient descent algorithm for a new network architecture or system configuration normally involves brute-force derivative calculations. For example, the celebrated backpropagation algorithm for training feedforward neural networks was derived by repeatedly applying chain rule expansions backward through the network...
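The gradients that the paper derives diagrammatically are the same ones the chain rule produces. A minimal numeric sketch (assumed toy model, not from the paper): backpropagation-through-time for the scalar recurrent system y_t = w·y_{t−1} + x_t with loss L = ½ Σ_t (y_t − d_t)², checked against a finite difference. All function names are hypothetical.

```python
def forward(w, xs, y0=0.0):
    """Run the scalar recurrence y_t = w * y_{t-1} + x_t."""
    ys, y = [], y0
    for x in xs:
        y = w * y + x
        ys.append(y)
    return ys

def loss(w, xs, ds):
    return 0.5 * sum((y - d) ** 2 for y, d in zip(forward(w, xs), ds))

def bptt_grad(w, xs, ds):
    """dL/dw via backpropagation-through-time."""
    ys = forward(w, xs)
    grad, delta = 0.0, 0.0
    for t in reversed(range(len(xs))):
        delta = (ys[t] - ds[t]) + w * delta    # error flowing backward
        y_prev = ys[t - 1] if t > 0 else 0.0
        grad += delta * y_prev                 # contribution of step t
    return grad

xs, ds, w = [1.0, 0.5, -0.3], [0.2, 0.1, 0.0], 0.7
g = bptt_grad(w, xs, ds)
eps = 1e-6
g_fd = (loss(w + eps, xs, ds) - loss(w - eps, xs, ds)) / (2 * eps)
print(abs(g - g_fd) < 1e-6)                    # True: the two agree
```

The backward pass here is exactly what the block-diagram transposition rules would produce for this one-weight system.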
Relating Real-Time Backpropagation and Backpropagation-Through-Time: An Application of Flow Graph Interreciprocity
Cited by 10 (2 self).
Abstract: We show that signal flow graph theory provides a simple way to relate two popular algorithms used for adapting dynamic neural networks: real-time backpropagation and backpropagation-through-time. Starting with the flow graph for real-time backpropagation, we use a simple transposition to produce a second graph. The new graph is shown to be interreciprocal with the original and to correspond to the backpropagation-through-time algorithm. Interreciprocity provides a theoretical argument to verify that both flow graphs implement the same overall weight update. Two adaptive algorithms, real-time backpropagation (RTBP) and backpropagation-through-time (BPTT), are currently used to train multilayer neural networks with output feedback connections. RTBP was first introduced for single-layer fully recurrent networks by Williams and Zipser (1989). The algorithm has since been extended to include feedforward networks with output feedback (see, e.g., Narendra 1990). The algorithm is...
On the complexities of multipoint evaluation and interpolation
In TCS. Cited by 6 (5 self).
Abstract: We compare the complexities of multipoint polynomial evaluation and interpolation. We show that, over a field of characteristic zero, both questions have equivalent complexities, up to a constant number of polynomial multiplications.
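The two problems the paper relates are mutual inverses, which a small roundtrip makes concrete (quadratic-time baselines for illustration only; the paper concerns the quasi-linear regime). Exact arithmetic over Q via the standard library's fractions; the helper names are hypothetical.

```python
from fractions import Fraction

def evaluate(coeffs, points):
    """Multipoint evaluation by Horner's rule (quadratic-time baseline)."""
    def horner(x):
        acc = Fraction(0)
        for c in reversed(coeffs):
            acc = acc * x + c
        return acc
    return [horner(x) for x in points]

def interpolate(points, values):
    """Lagrange interpolation; returns coefficients, lowest degree first."""
    n = len(points)
    coeffs = [Fraction(0)] * n
    for i in range(n):
        xi, yi = points[i], values[i]
        basis, denom = [Fraction(1)], Fraction(1)
        for j in range(n):
            if j == i:
                continue
            denom *= xi - points[j]
            basis = [Fraction(0)] + basis              # multiply basis by x
            for k in range(len(basis) - 1):
                basis[k] -= points[j] * basis[k + 1]   # subtract xj * basis
        for k in range(n):
            coeffs[k] += yi * basis[k] / denom
    return coeffs

p = [Fraction(c) for c in (3, -2, 0, 1)]       # p(x) = x^3 - 2x + 3
xs = [Fraction(x) for x in (0, 1, 2, 3)]
assert interpolate(xs, evaluate(p, xs)) == p   # roundtrip recovers p exactly
```

Over characteristic zero the paper shows these two directions cost the same up to a constant number of polynomial multiplications.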
Circuit Optimization via Adjoint Lagrangians
In IEEE International Conference on Computer-Aided Design, 1997. Cited by 6 (3 self).
Abstract: The circuit tuning problem is best approached by means of gradient-based nonlinear optimization algorithms. For large circuits, gradient computation can be the bottleneck in the optimization procedure. Traditionally, when the number of measurements is large relative to the number of tunable parameters, the direct method [2] is used to repeatedly solve the associated sensitivity circuit to obtain all the necessary gradients. Likewise, when the parameters outnumber the measurements, the adjoint method [1] is employed to solve the adjoint circuit repeatedly for each measurement to compute the sensitivities. In this paper, we propose the adjoint Lagrangian method, which computes all the gradients necessary for augmented-Lagrangian-based optimization in a single adjoint analysis. After the nominal simulation of the circuit has been carried out, the gradients of the merit function are expressed as the gradients of a weighted sum of circuit measurements. The weights depend on the nominal solution and on optimizer quantities such as Lagrange multipliers. By suitably choosing the excitations of the adjoint circuit, the gradients of the merit function are computed via a single adjoint analysis, irrespective of the number of measurements and the number of parameters of the optimization. This procedure requires close integration between the nonlinear optimization software and the circuit simulation program. The adjoint...
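The direct-versus-adjoint trade-off described above can be sketched in plain linear algebra (an assumed abstract setup, not the paper's circuit formulation): for a system G x = b(p) with scalar merit m = cᵀx, the direct method solves one linear system per parameter, while the adjoint method solves a single transposed system Gᵀλ = c and then reads off dm/dp_k = λᵀ(∂b/∂p_k) for every parameter at once. All variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_params = 5, 3
G = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned "circuit"
c = rng.standard_normal(n)                        # measurement functional
dbdp = rng.standard_normal((n, n_params))         # db/dp, one column per parameter

# Direct method: one linear solve per parameter.
direct = np.array([c @ np.linalg.solve(G, dbdp[:, k]) for k in range(n_params)])

# Adjoint method: a single solve with G^T, then cheap inner products.
lam = np.linalg.solve(G.T, c)
adjoint = dbdp.T @ lam

print(np.allclose(direct, adjoint))               # True: identical gradients
```

Both come from dm/dp_k = cᵀG⁻¹(∂b/∂p_k) = λᵀ(∂b/∂p_k); the adjoint Lagrangian idea extends this so one adjoint solve serves a whole weighted sum of measurements.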
Adjoint techniques for sensitivity analysis in high-frequency structure CAD
In IEEE Trans. Microwave Theory Tech., 2004. Cited by 5 (2 self).
Abstract: There is a revival of interest in adjoint sensitivity analysis techniques. This is partly because current computer-aided design software based on full-wave electromagnetic (EM) solvers remains too slow for practical high-frequency structure design, despite the increasing capacity of computers. Adjoint-variable methods for design sensitivity analysis offer computational speed and accuracy. They can be used for efficient gradient-based optimization and in tolerance and yield analysis. Adjoint-based sensitivity analysis for circuits has been well studied and extensively covered in the microwave literature. In comparison, sensitivities with full-wave analysis techniques have attracted little attention, and there have been few applications in feasible and versatile algorithms. We review adjoint-variable methods used in high-frequency structure design with both circuit analysis techniques and full-wave EM analysis techniques. A brief discussion of adjoint-based sensitivity analysis for nonlinear dynamic systems is also included.
Interconnection Structures in Physical Systems: A Mathematical Formulation
Abstract: The power-conserving structure of a physical system is known as its interconnection structure. This paper presents a mathematical formulation of the interconnection structure in Hilbert spaces. Some properties of interconnection structures are pointed out, and their three natural representations are treated. The developed theory is illustrated with two examples: an electrical circuit and a one-dimensional transmission line.
Pmax ≈ Pin
2009.
Abstract: Deduce an approximate expression for the power received by a small antenna with a load that includes a resistance R (as well as a possible reactance) when the antenna is in a linearly polarized incident plane wave of wavelength λ and (time-average) power per unit area Pin. Show that the maximum power received is approximately λ...