Results 1–10 of 77
Solving A Polynomial Equation: Some History And Recent Progress
, 1997
Cited by 85 (16 self)
Abstract
The classical problem of solving an nth degree polynomial equation has substantially influenced the development of mathematics throughout the centuries and still has several important applications to the theory and practice of present-day computing. We briefly recall the history of the algorithmic approach to this problem and then review some successful solution algorithms. We end by outlining some algorithms of 1995 that solve this problem at a surprisingly low computational cost.
Univariate polynomials: nearly optimal algorithms for factorization and rootfinding
 In Proceedings of the International Symposium on Symbolic and Algebraic Computation
, 2001
Cited by 38 (11 self)
Abstract
To approximate all roots (zeros) of a univariate polynomial, we develop two effective algorithms and combine them in a single recursive process. One algorithm computes a basic well-isolated zero-free annulus on the complex plane, whereas another algorithm numerically splits the input polynomial of the nth degree into two factors balanced in the degrees and with the zero sets separated by the basic annulus. Recursive combination of the two algorithms leads to computation of the complete numerical factorization of a polynomial into the product of linear factors and further to the approximation of the roots. The new root-finder incorporates the earlier techniques of Schönhage, Neff/Reif, and Kirrinnis and our old and new techniques, and yields nearly optimal (up to polylogarithmic factors) arithmetic and Boolean cost estimates for the computational complexity of both complete factorization and root-finding. The improvement over our previous record Boolean complexity estimates is by roughly a factor of n for complete factorization and also for the approximation of well-conditioned (well-isolated) roots, whereas the same algorithm is also optimal (under both arithmetic and Boolean models of computing) for the worst-case input polynomial, whose roots can be ill-conditioned, forming ...
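The paper's divide-and-conquer splitting scheme is elaborate; as a much simpler illustration of the underlying task — approximating all roots of a univariate polynomial simultaneously — here is a sketch of the classical Weierstrass (Durand–Kerner) iteration. This is not the authors' algorithm, only a minimal stand-in for the problem it solves:

```python
import numpy as np

def durand_kerner(coeffs, tol=1e-12, max_iter=200):
    """Approximate all complex roots simultaneously (Weierstrass/Durand-Kerner).

    coeffs: polynomial coefficients, highest degree first.
    """
    c = np.array(coeffs, dtype=complex)
    c = c / c[0]                      # normalise to a monic polynomial
    n = len(c) - 1
    # standard choice of distinct initial guesses spread over the plane
    z = (0.4 + 0.9j) ** np.arange(n)
    for _ in range(max_iter):
        p = np.polyval(c, z)
        # divide by the product of differences to the other current iterates
        denom = np.array([np.prod(z[i] - np.delete(z, i)) for i in range(n)])
        dz = p / denom
        z = z - dz
        if np.max(np.abs(dz)) < tol:
            break
    return z

# roots of (x - 1)(x - 2)(x - 3) = x^3 - 6x^2 + 11x - 6
roots = durand_kerner([1, -6, 11, -6])
```

Unlike the nearly optimal method of the abstract, this iteration carries no worst-case cost guarantee, but it conveys the "all roots at once" flavour of the problem.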
Optimal and nearly optimal algorithms for approximating polynomial zeros
 Comput. Math. Appl
, 1996
Cited by 30 (14 self)
Abstract
We substantially improve the known algorithms for approximating all the complex zeros of an nth degree polynomial p(x). Our new algorithms save both Boolean and arithmetic sequential time, versus the previous best algorithms of Schönhage [1], Pan [2], and Neff and Reif [3]. In parallel (NC) implementation, we dramatically decrease the number of processors, versus the parallel algorithm of Neff [4], which was the only NC algorithm known for this problem so far. Specifically, under the simple normalization assumption that the variable x has been scaled so as to confine the zeros of p(x) to the unit disc {x : |x| ≤ 1}, our algorithms (which promise to be practically effective) approximate all the zeros of p(x) within the absolute error bound 2^{-b}, by using order of n arithmetic operations and order of (b + n)n² Boolean (bitwise) operations (in both cases up to within polylogarithmic factors). The algorithms allow their optimal (work-preserving) NC parallelization, so that they can be implemented by using polylogarithmic time and the orders of n arithmetic processors or (b + n)n² Boolean processors. All the cited bounds on the computational complexity are within polylogarithmic factors from the optimum (in terms of n and b) under both arithmetic and Boolean models of computation (in the Boolean case, under the additional (realistic) assumption that n = O(b)).
The convergence rate of the Sandwich algorithm for approximating convex functions
 Computing
, 1992
Guaranteed intervals for Kolmogorov’s theorem (and their possible relation to neural networks)
 Interval Computations
, 1993
Cited by 12 (7 self)
Abstract
In 1987, R. Hecht-Nielsen noticed that a theorem proved by Kolmogorov in 1957 as a solution to one of Hilbert's problems actually shows that an arbitrary function f can be implemented by a 3-layer neural network with appropriate activation functions ψ and Φ. The more accurately we implement these functions, the better approximation to f we get. Kolmogorov's proof can be transformed into a fast iterative algorithm that converges to the description of a network. However, this algorithm does not provide us with a guaranteed approximation accuracy: namely, if we want to approximate a given function f with a given accuracy ε, this algorithm does not tell us after what iteration we can guarantee this accuracy. In 1991, Kurkova proposed a second algorithmic version of Kolmogorov's theorem. Namely, she showed how, for every continuous function f and for every ε > 0, we can construct a neural network that approximates f with a given accuracy ε, i.e., whose output belongs to the interval [f(x1, ..., xn) − ε, f(x1, ..., xn) + ε]. In the original Kolmogorov theorem, the design (and, in particular, the number N_hidden of hidden neurons) does not change with ε. In Kurkova's algorithm, when ε → 0, the number of hidden neurons increases (N_hidden → ∞), and so does the complexity of the approximating network. The natural question is: can we provide a guaranteed approximation property and still keep N_hidden independent of ε? Our answer is "yes". In this paper, we describe algorithms that generate the functions ψ and Φ (from the original ...
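The trade-off the abstract describes — a smaller target accuracy ε forcing a larger number of hidden neurons — can be illustrated with a toy experiment. The sketch below is not Kurkova's construction (it uses random tanh features with a least-squares output layer, and the target function and parameters are illustrative assumptions), but it shows the sup-norm error shrinking as N_hidden grows:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(2 * np.pi * x)   # illustrative target function
x = np.linspace(0.0, 1.0, 200)

def sup_error(n_hidden):
    """Fit a 1-hidden-layer net (random tanh features + linear output)
    and return the sup-norm error eps over the grid."""
    w = rng.normal(scale=10.0, size=n_hidden)      # fixed random input weights
    b = rng.uniform(-10.0, 10.0, size=n_hidden)    # fixed random biases
    H = np.tanh(np.outer(x, w) + b)                # hidden activations, 200 x n_hidden
    coef, *_ = np.linalg.lstsq(H, f(x), rcond=None)  # output layer by least squares
    return np.max(np.abs(H @ coef - f(x)))

errs = [sup_error(n) for n in (5, 20, 80)]  # eps drops as N_hidden grows
```

The qualitative behaviour (more hidden units, smaller guaranteed ε) is the point; whether N_hidden can be kept fixed while still guaranteeing ε is exactly the paper's question.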
Topics on Interpolation Problems in Algebraic Geometry
, 2004
Cited by 9 (3 self)
Abstract
These are notes of the lectures given by the authors during the school/workshop “Polynomial Interpolation and Projective Embeddings”. We mainly focus our attention on the planar case and on the Segre and Harbourne–Hirschowitz Conjectures. We discuss the state of the art, introducing several results and different techniques.
Computational universes
 Chaos, Solitons & Fractals
, 2006
Cited by 9 (5 self)
Abstract
Suspicions that the world might be some sort of a machine or algorithm existing “in the mind” of some symbolic number cruncher have lingered from antiquity. Although popular at times, the most radical forms of this idea never reached the mainstream. Modern developments in physics and computer science have lent support to the thesis, but empirical evidence is needed before it can begin to replace our contemporary world view.
Computers, Reasoning and Mathematical Practice
Cited by 6 (2 self)
Abstract
Abstraction in itself is not the goal: for Whitehead [117], "it is the large generalisation, limited by a happy particularity, which is the fruitful conception." As an example, consider the theorem in ring theory which states that if R is a ring, f(x) is a polynomial over R, and f(r) = 0 for every element r of R, then R is commutative. Special cases of this, for example f(x) = x² − x or x³ − x, can be given a first-order proof in a few lines of symbol manipulation. The usual proof of the general result [20] (which takes a semester's postgraduate course to develop from scratch) is a corollary of other results: we prove that rings satisfying the condition are semisimple Artinian, apply a theorem which shows that all such rings are matrix rings over division rings, and eventually obtain the result by showing that all finite division rings are fields, and hence commutative. This displays von Neumann's architectural qualities: it is "deep" in a way in which the symbol manipulation ...
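For the special case f(x) = x² − x, the "few lines of symbol manipulation" can be written out explicitly; a sketch of the standard argument:

```latex
\begin{aligned}
&\text{Assume } r^2 = r \text{ for all } r \in R.\\
&(r+r)^2 = r+r \;\Rightarrow\; 4r = 2r \;\Rightarrow\; 2r = 0
  \quad(\text{so } -s = s \text{ for all } s \in R).\\
&(a+b)^2 = a+b \;\Rightarrow\; a^2 + ab + ba + b^2 = a+b
  \;\Rightarrow\; ab + ba = 0.\\
&\text{Hence } ab = -ba = ba \text{ for all } a, b \in R,
  \text{ i.e. } R \text{ is commutative.}
\end{aligned}
```

This is precisely the contrast the passage draws: the special case is elementary, while the general theorem requires the full structure theory.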
From Keeping ‘Nature’s Secrets’ to the Institutionalization of ‘Open
 In Ghosh, Rishab Aiyer, CODE: Collaborative ownership and the digital
, 1994
Cited by 6 (1 self)
Abstract
An earlier version of this paper was presented to the University of Sienna Workshop, “Science as an Institution and the Institutions of Science,” held at the Certosa di Pontignano (nr. Sienna), Italy on 25–26th January, 2002. The author is grateful for the comments received on that occasion from Fabio Pammoli, the Workshop’s organizer, and from other participants, especially Richard Nelson and Keith Pavitt. Many other intellectual debts that were incurred in the course of research on the larger corpus of work upon which this paper draws are acknowledged in David (2000). Contact author during 20.09.2003 – 20.03.2004 at:
From Unicode to Typography, a Case Study: the Greek Script
 Proceedings of 14th International Unicode Conference, available from http://omega.enstb.org/yannis/pdf/boston99.pdf
, 1999