Results 1–10 of 216
Associated polynomials and uniform methods for the solution of linear problems
SIAM Review, 1966
"... 2. Brief survey of results 279 3. The basic orthonormality relation 279 ..."
Abstract

Cited by 26 (0 self)
Contents excerpt: 2. Brief survey of results; 3. The basic orthonormality relation.
A survey on the continuous nonlinear resource allocation problem
Eur. J. Oper. Res., 2008
"... Our problem of interest consists of minimizing a separable, convex and differentiable function over a convex set, defined by bounds on the variables and an explicit constraint described by a separable convex function. Applications are abundant, and vary from equilibrium problems in the engineering a ..."
Abstract

Cited by 24 (1 self)
Our problem of interest consists of minimizing a separable, convex and differentiable function over a convex set, defined by bounds on the variables and an explicit constraint described by a separable convex function. Applications are abundant, and vary from equilibrium problems in the engineering and economic sciences, through resource allocation and balancing problems in manufacturing, statistics, military operations research, and production and financial economics, to subproblems in algorithms for a variety of more complex optimization models. This paper surveys the history and applications of the problem, as well as algorithmic approaches to its solution. The most common techniques are based on finding the optimal value of the Lagrange multiplier for the explicit constraint, most often through the use of a type of line search procedure. We analyze the most relevant references, especially regarding their originality and numerical findings, summarizing with remarks on possible extensions and future research.
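The multiplier line search the abstract mentions can be sketched on the simplest instance of the problem class, a quadratic program with one budget constraint and box bounds (the function `allocate` and the data below are my own illustrative choices, not taken from the survey):

```python
import numpy as np

def allocate(a, lo, hi, b, tol=1e-10):
    """Minimize sum (x_i - a_i)^2 / 2 subject to sum x_i = b and
    lo_i <= x_i <= hi_i, via bisection on the Lagrange multiplier.
    For a fixed multiplier lam the box-constrained minimizer separates:
    x_i(lam) = clip(a_i - lam, lo_i, hi_i).  The resource usage
    g(lam) = sum x_i(lam) - b is nonincreasing in lam, so a scalar
    root-find (here plain bisection) recovers the optimal multiplier."""
    x = lambda lam: np.clip(a - lam, lo, hi)
    g = lambda lam: x(lam).sum() - b
    # Bracket: a small lam pushes every x_i to its upper bound,
    # a large lam pushes every x_i to its lower bound.
    lam_lo, lam_hi = (a - hi).min(), (a - lo).max()
    while lam_hi - lam_lo > tol:
        mid = 0.5 * (lam_lo + lam_hi)
        if g(mid) > 0:
            lam_lo = mid
        else:
            lam_hi = mid
    return x(0.5 * (lam_lo + lam_hi))

a = np.array([3.0, 1.0, 2.0])
lo = np.zeros(3)
hi = np.full(3, 2.0)
xstar = allocate(a, lo, hi, 4.0)  # optimal allocation of budget 4
```

The separable structure is what makes this work: for each candidate multiplier the inner minimization decomposes into independent one-dimensional problems, so each bisection step costs O(n).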
On optimal multipoint methods for solving nonlinear equations
2009
"... A general class of threepoint iterative methods for solving nonlinear equations is constructed. Its order of convergence reaches eight with only four function evaluations per iteration, which means that the proposed methods possess as high as possible computational efficiency in the sense of the K ..."
Abstract

Cited by 23 (10 self)
A general class of three-point iterative methods for solving nonlinear equations is constructed. Its order of convergence reaches eight with only four function evaluations per iteration, which means that the proposed methods attain the highest possible computational efficiency in the sense of the Kung–Traub hypothesis (1974). Numerical examples are included to demonstrate the fast convergence achieved with only a few function evaluations.
A Basic Family Of Iteration Functions For Polynomial Root Finding And Its Characterizations
J. of Comp. and Appl. Math., 1997
"... Let p(x) be a polynomial of degree n 2 with coefficients in a subfield K of the complex numbers. For each natural number m 2, let Lm (x) be the m2m lower triangular matrix whose diagonal entries are p(x) and for each j = 1; : : : ; m 0 1, its jth subdiagonal entries are p (j) (x)=j!. For i = 1; ..."
Abstract

Cited by 18 (12 self)
Let p(x) be a polynomial of degree n ≥ 2 with coefficients in a subfield K of the complex numbers. For each natural number m ≥ 2, let L_m(x) be the m × m lower triangular matrix whose diagonal entries are p(x) and whose j-th subdiagonal entries, for j = 1, ..., m − 1, are p^(j)(x)/j!. For i = 1, 2, let L_m^(i)(x) be the matrix obtained from L_m(x) by deleting its first i rows and its last i columns, with the convention L_1^(1)(x) ≡ 1. Then the function B_m(x) = x − p(x) · det(L_{m−1}^(1)(x)) / det(L_m^(1)(x)) is a member of S(m, m + n − 2), where for any M ≥ m, S(m, M) is the set of all rational iteration functions g such that for all roots θ of p(x), g(x) = θ + Σ_{i=m..M} γ_i(x) (θ − x)^i, with the γ_i(x) also rational and well-defined at θ. Given g ∈ S(m, M) and a simple root θ of p(x), g^(i)(θ) = 0 for i = 1, ..., m − 1, and γ_m(θ) = (−1)^m g^(m)(θ)/m!. For B_m(x) we obtain γ_m(θ) = (−1)^m det(L_{m+1}^(2)(θ)) / det(L_m^(1)(θ)). For m = 2 and 3, B_m(x) coincides with Newton'...
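The determinantal definition above can be evaluated directly; the following numerical sketch (function names my own, using numpy's polynomial helpers) builds L_m(x), deletes the first row and last column, and takes the ratio of determinants. As the abstract states, m = 2 reproduces a Newton step and m = 3 a Halley step:

```python
import numpy as np
from math import factorial

def basic_family_step(coeffs, x, m):
    """One step x -> B_m(x) of the basic family, straight from the
    determinantal definition: L_m(x) is the m x m lower triangular
    matrix with p(x) on the diagonal and p^(j)(x)/j! on the j-th
    subdiagonal; L_m^(1) deletes the first row and last column.
    `coeffs` are polynomial coefficients, highest degree first."""
    def derivs(k):
        # p^(j)(x)/j! for j = 0..k-1
        c, out = np.array(coeffs, dtype=float), []
        for j in range(k):
            out.append(np.polyval(c, x) / factorial(j))
            c = np.polyder(c)
        return out

    def det_L1(k):
        # det of L_k(x) with first row / last column deleted;
        # by the paper's convention det(L_1^(1)) = 1.
        if k == 1:
            return 1.0
        d = derivs(k)
        L = np.zeros((k, k))
        for j in range(k):
            for i in range(j, k):
                L[i, j] = d[i - j]
        return np.linalg.det(L[1:, :-1])

    p = np.polyval(np.array(coeffs, dtype=float), x)
    return x - p * det_L1(m - 1) / det_L1(m)

# p(x) = x^2 - 2 at x = 1.5:
newton = basic_family_step([1, 0, -2], 1.5, 2)  # Newton step
halley = basic_family_step([1, 0, -2], 1.5, 3)  # Halley step
```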
Review of some iterative root-finding methods from a dynamical point of view
SCIENTIA, 2004
"... ..."
THE COMPLEXITY OF MULTIPLE-PRECISION ARITHMETIC
1976
"... In studying the complexity of iterative processes it is usually assumed that the arithmetic operations of addition, multiplication, and division can be performed in certain constant times. This assumption is invalid if the precision required increases as the computation proceeds. We give upper and l ..."
Abstract

Cited by 15 (2 self)
In studying the complexity of iterative processes it is usually assumed that the arithmetic operations of addition, multiplication, and division can be performed in certain constant times. This assumption is invalid if the precision required increases as the computation proceeds. We give upper and lower bounds on the number of single-precision operations required to perform various multiple-precision operations, and deduce some interesting consequences concerning the relative efficiencies of methods for solving nonlinear equations using variable-length multiple-precision arithmetic.
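The variable-precision effect can be illustrated with a small sketch using Python's decimal module (an illustrative example, not the paper's analysis): because Newton's iteration is self-correcting, the working precision can be doubled at each step, so the total cost is dominated by the final full-precision step rather than by the number of iterations times the full precision:

```python
import math
from decimal import Decimal, getcontext

def sqrt_doubling(a, digits):
    """Newton's iteration x -> (x + a/x)/2 for sqrt(a), run with the
    working precision doubled at every step.  Step k only needs about
    twice the accuracy of step k-1, so the last step dominates the
    total cost of the computation."""
    x = Decimal(repr(math.sqrt(a)))  # ~15 correct digits from hardware
    prec = 16
    while prec < digits:
        prec = min(2 * prec, digits)
        getcontext().prec = prec + 5   # a few guard digits
        x = (x + Decimal(a) / x) / 2
    getcontext().prec = digits
    return +x  # unary plus rounds to the requested precision

s = sqrt_doubling(2, 50)  # sqrt(2) to 50 significant digits
```

If one multiple-precision multiplication at precision p costs roughly M(p) with M superlinear, the doubling schedule costs about M(d) + M(d/2) + M(d/4) + ... = O(M(d)), the same order as the last step alone.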
Generalization Of Taylor's Theorem And Newton's Method Via A New Family Of Determinantal Interpolation Formulas
J. of Comp. and Appl. Math., 1997
"... The general form of Taylor's theorem gives the formula, f = Pn +Rn , where Pn is the Newton 's interpolating polynomial, computed with respect to a confluent vector of nodes, and Rn is the remainder. When f 0 6= 0, for each m = 2; : : : ; n + 1, we describe a "determinantal interpo ..."
Abstract

Cited by 13 (12 self)
The general form of Taylor's theorem gives the formula f = P_n + R_n, where P_n is Newton's interpolating polynomial, computed with respect to a confluent vector of nodes, and R_n is the remainder. When f′ ≠ 0, for each m = 2, ..., n + 1, we describe a "determinantal interpolation formula" f = P_{m,n} + R_{m,n}, where P_{m,n} is a rational function in x and f itself. These formulas play a dual role in the approximation of f or its inverse. For m = 2 the formula is Taylor's, and for m = 3 it gives Halley's iteration function, as well as a Padé approximant. By applying the formulas to P_n, for each m ≥ 2, P_{m,m−1}, ..., P_{m,m+n−2} is a set of n rational approximations that includes P_n, and may provide a better approximation to f than P_n. Thus each Taylor polynomial unfolds into an infinite spectrum of rational approximations. The formulas also give an infinite spectrum of rational inverse approximations, as well as a family of iteration functions for real or complex ...
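The claim that a rational approximation can beat the Taylor polynomial it is built from has a classical illustration (this is a generic Padé example, not the paper's P_{m,n} construction): the [1/1] Padé approximant of exp matches the same order as the degree-2 Taylor polynomial, yet is more accurate nearby:

```python
import math

def taylor2(x):
    # Degree-2 Taylor polynomial of exp at 0: 1 + x + x^2/2
    return 1.0 + x + 0.5 * x * x

def pade11(x):
    # [1/1] Pade approximant of exp at 0: (1 + x/2) / (1 - x/2),
    # a rational function agreeing with exp to second order.
    return (1.0 + 0.5 * x) / (1.0 - 0.5 * x)

x = 0.5
err_taylor = abs(math.exp(x) - taylor2(x))  # polynomial error
err_pade = abs(math.exp(x) - pade11(x))     # rational error, smaller here
```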
Graphic and numerical comparison between iterative methods
The Mathematical Intelligencer
"... Let f be a function f: R → R and ζ a root of f, that is, f(ζ) = 0. It is well known that if we take x0 close to ζ, and under certain conditions that I will not explain here, the Newton method xn+1 = xn − f(xn) ..."
Abstract

Cited by 13 (0 self)
Let f be a function f: ℝ → ℝ and ζ a root of f, that is, f(ζ) = 0. It is well known that if we take x₀ close to ζ, then under certain conditions that I will not explain here, the Newton method x_{n+1} = x_n − f(x_n)/f′(x_n) ...
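For comparisons of the kind this article makes, it helps to keep the whole orbit of iterates rather than just the limit; a minimal sketch (names my own) of the Newton iteration stated above:

```python
def newton_orbit(f, df, x0, steps=6):
    """Plain Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n),
    returning the full orbit so that different starting points x0
    (and hence the basins of attraction of different roots) can be
    compared graphically or numerically."""
    orbit = [x0]
    for _ in range(steps):
        x = orbit[-1]
        orbit.append(x - f(x) / df(x))
    return orbit

# Iterates for f(x) = x^2 - 2 starting from x0 = 1.0.
orbit = newton_orbit(lambda t: t * t - 2.0, lambda t: 2.0 * t, 1.0)
```

Plotting which root each x₀ converges to, over a grid of starting points, gives exactly the graphical comparison between iterative methods that the title refers to.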
OPTIMAL SIZE INTEGER DIVISION CIRCUITS
1988
"... Division is a fundamental problem for arithmetic and algebraic computation. This paper describes Boolean circuits (of bounded fanin) for integer division ( nding reciprocals) that have size O(M (n)) and depth O(log n log log n), where M(n) is the size complexity ofO(log n) depth integer multiplicat ..."
Abstract

Cited by 11 (2 self)
Division is a fundamental problem for arithmetic and algebraic computation. This paper describes Boolean circuits (of bounded fan-in) for integer division (finding reciprocals) that have size O(M(n)) and depth O(log n log log n), where M(n) is the size complexity of O(log n)-depth integer multiplication circuits. Currently, M(n) is known to be O(n log n log log n), but any improvement in this bound that preserves circuit depth will be reflected by a similar improvement in the size complexity of our division algorithm. Previously, no one has been able to derive a division circuit with size O(n log^c n) for any c, and simultaneous depth less than Ω(log² n). The circuit families described in this paper are log-space uniform; that is, they can be constructed by a deterministic Turing machine in space O(log n). The results match the best-known depth bounds for log-space uniform circuits, and are optimal in size. The general method of high-order iterative formulas is of independent interest as a way of efficiently using parallel processors to solve algebraic problems. In particular, this algorithm implies that any rational function can be evaluated in these complexity bounds. As an introduction to high-order iterative methods, a circuit is first presented for finding polynomial reciprocals (where the coefficients come from an arbitrary ring, and ring operations are unit cost in the circuit) in size O(PM(n)) and depth O(log n log log n), where PM(n) is the size complexity of optimal-depth polynomial multiplication.
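The iterative idea behind reciprocal circuits can be sketched in scalar form (an illustration of the underlying iteration, not the paper's circuit construction): Newton's iteration for 1/a doubles the number of correct bits per step, so O(log n) stages reach n-bit precision, and in hardware each stage is a pair of multiplications:

```python
def reciprocal_newton(a, iters=6):
    """Newton's iteration x -> x * (2 - a * x) for 1/a, the classic
    quadratically convergent reciprocal iteration underlying division
    circuits: each step roughly doubles the number of correct bits."""
    assert 0.5 <= a < 1.0, "assume a normalized to [1/2, 1)"
    # Standard linear initial estimate 48/17 - (32/17) a for a in [1/2, 1).
    x = 48.0 / 17.0 - (32.0 / 17.0) * a
    for _ in range(iters):
        x = x * (2.0 - a * x)  # one stage: two multiplications, one subtraction
    return x

r = reciprocal_newton(0.75)  # approximates 1/0.75 = 4/3
```

The circuit version replaces the sequential loop by a cascade of multiplication circuits of growing precision, which is where the O(M(n)) size and O(log n log log n) depth bounds come from.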