Solving a Polynomial Equation: Some History and Recent Progress
, 1997
Cited by 85 (16 self)

Abstract
The classical problem of solving an nth-degree polynomial equation has substantially influenced the development of mathematics throughout the centuries and still has several important applications to the theory and practice of present-day computing. We briefly recall the history of the algorithmic approach to this problem and then review some successful solution algorithms. We end by outlining some algorithms of 1995 that solve this problem at a surprisingly low computational cost.
The Littlewood-Offord problem and invertibility of random matrices
 Adv. Math
Cited by 44 (10 self)

Abstract
We prove two basic conjectures on the distribution of the smallest singular value of random n×n matrices with independent entries. Under minimal moment assumptions, we show that the smallest singular value is of order n^{-1/2}, which is optimal for Gaussian matrices. Moreover, we give an optimal estimate on the tail probability. This comes as a consequence of a new and essentially sharp estimate in the Littlewood-Offord problem: for i.i.d. random variables X_k and real numbers a_k, determine the probability p that the sum Σ_k a_k X_k lies near some number v. For arbitrary coefficients a_k of the same order of magnitude, we show that they essentially lie in an arithmetic progression of length 1/p.
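The n^{-1/2} scaling of the smallest singular value is easy to see empirically. The sketch below (an illustration only, not related to the paper's proof technique; matrix sizes and trial counts are arbitrary choices) samples random ±1 matrices with NumPy and tracks the median smallest singular value:

```python
import numpy as np

rng = np.random.default_rng(0)

def median_smin(n, trials=100):
    """Median smallest singular value over random n x n sign matrices."""
    vals = [np.linalg.svd(rng.choice([-1.0, 1.0], size=(n, n)),
                          compute_uv=False)[-1] for _ in range(trials)]
    return float(np.median(vals))

for n in (20, 80, 160):
    s = median_smin(n)
    # if the smallest singular value is of order n**-0.5,
    # the rescaled value s * sqrt(n) stays roughly constant
    print(n, s, s * np.sqrt(n))
```

The rescaled column staying roughly flat as n grows is consistent with the stated n^{-1/2} order.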
Univariate polynomials: nearly optimal algorithms for factorization and root-finding
 In Proceedings of the International Symposium on Symbolic and Algebraic Computation
, 2001
Cited by 38 (11 self)

Abstract
To approximate all roots (zeros) of a univariate polynomial, we develop two effective algorithms and combine them in a single recursive process. One algorithm computes a basic well-isolated zero-free annulus on the complex plane, whereas another algorithm numerically splits the input polynomial of the nth degree into two factors balanced in the degrees and with the zero sets separated by the basic annulus. Recursive combination of the two algorithms leads to computation of the complete numerical factorization of a polynomial into the product of linear factors and further to the approximation of the roots. The new root-finder incorporates the earlier techniques of Schönhage, Neff/Reif, and Kirrinnis and our old and new techniques and yields nearly optimal (up to polylogarithmic factors) arithmetic and Boolean cost estimates for the computational complexity of both complete factorization and root-finding. The improvement over our previous record Boolean complexity estimates is by roughly the factor of n for complete factorization and also for the approximation of well-conditioned (well-isolated) roots, whereas the same algorithm is also optimal (under both arithmetic and Boolean models of computing) for the worst-case input polynomial, whose roots can be ill-conditioned, forming ...
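For contrast with the nearly optimal recursive splitting described above, the generic baseline for "complete numerical factorization into linear factors" is to compute all roots at once via companion-matrix eigenvalues, which is what NumPy's np.roots does (this is not the paper's algorithm and has no comparable complexity guarantees):

```python
import numpy as np

# Complete factorization of p into linear factors is equivalent to finding
# all roots; np.roots computes them as companion-matrix eigenvalues.
# Example: p(x) = x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3)
coeffs = [1.0, -6.0, 11.0, -6.0]       # highest degree first
roots = np.sort_complex(np.roots(coeffs))
print(roots)                            # approximately 1, 2, 3
```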
Non-asymptotic theory of random matrices: extreme singular values
 Proceedings of the International Congress of Mathematicians
, 2010
Condition numbers of Gaussian random matrices
 SIAM J. Matrix Anal. Appl
, 2005
Cited by 34 (4 self)

Abstract
Let Gm×n be an m × n real random matrix whose elements are independent and identically distributed standard normal random variables, and let κ2(Gm×n) be the 2-norm condition number of Gm×n. We prove that, for any m ≥ 2, n ≥ 2 and x ≥ |n − m| + 1, κ2(Gm×n) satisfies ...
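The flavor of such tail bounds can be illustrated by simulation (a hedged sketch, unrelated to the paper's proof; the matrix size and trial count are arbitrary choices): for square Gaussian matrices, the empirical tail P(κ2/n > x) decays as the threshold x grows.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_kappa(m, n, trials=300):
    """Sample 2-norm condition numbers of m x n standard Gaussian matrices."""
    kappas = np.empty(trials)
    for i in range(trials):
        s = np.linalg.svd(rng.standard_normal((m, n)), compute_uv=False)
        kappas[i] = s[0] / s[-1]   # largest / smallest singular value
    return kappas

kappa = sample_kappa(50, 50)
for x in (1, 4, 16):
    # empirical tail probability of kappa/n exceeding x
    print(x, float(np.mean(kappa / 50 > x)))
```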
Optimal and nearly optimal algorithms for approximating polynomial zeros
 Comput. Math. Appl
, 1996
Cited by 30 (14 self)

Abstract
We substantially improve the known algorithms for approximating all the complex zeros of an nth-degree polynomial p(x). Our new algorithms save both Boolean and arithmetic sequential time, versus the previous best algorithms of Schönhage [1], Pan [2], and Neff and Reif [3]. In parallel (NC) implementation, we dramatically decrease the number of processors, versus the parallel algorithm of Neff [4], which was the only NC algorithm known for this problem so far. Specifically, under the simple normalization assumption that the variable x has been scaled so as to confine the zeros of p(x) to the unit disc {x : |x| < 1}, our algorithms (which promise to be practically effective) approximate all the zeros of p(x) within the absolute error bound 2^{-b}, by using order of n arithmetic operations and order of (b + n)n^2 Boolean (bitwise) operations (in both cases up to within polylogarithmic factors). The algorithms allow their optimal (work-preserving) NC parallelization, so that they can be implemented by using polylogarithmic time and the orders of n arithmetic processors or (b + n)n^2 Boolean processors. All the cited bounds on the computational complexity are within polylogarithmic factors from the optimum (in terms of n and b) under both arithmetic and Boolean models of computation (in the Boolean case, under the additional (realistic) assumption that n = O(b)).
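A much simpler classical method for approximating all zeros simultaneously is the Durand-Kerner (Weierstrass) iteration, sketched below for polynomials with zeros in the unit disc. This is not the paper's algorithm and carries none of its complexity guarantees; the starting points and iteration count are conventional ad hoc choices:

```python
import numpy as np

def durand_kerner(coeffs, iters=100):
    """Approximate all zeros of a monic polynomial simultaneously.

    coeffs: coefficients of a monic polynomial, highest degree first.
    """
    coeffs = np.asarray(coeffs, dtype=complex)
    n = len(coeffs) - 1
    # conventional distinct complex starting points inside the unit disc
    z = (0.4 + 0.9j) ** np.arange(1, n + 1)
    for _ in range(iters):
        for i in range(n):
            others = np.delete(z, i)
            # Weierstrass correction: p(z_i) / prod_{j != i} (z_i - z_j)
            z[i] -= np.polyval(coeffs, z[i]) / np.prod(z[i] - others)
    return z

# zeros of p(x) = x^3 - 0.25x, all inside the unit disc: -0.5, 0, 0.5
print(np.sort_complex(durand_kerner([1.0, 0.0, -0.25, 0.0])))
```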
Random Approximation in Numerical Analysis
 Proceedings of the Conference "Functional Analysis" Essen
, 1994
Cited by 29 (22 self)

Abstract
The aim of this paper is twofold. In the first part (sections 2-6) I want to give a survey on recent developments of Monte Carlo complexity. This will include techniques to derive sharp lower bounds as well as the construction of concrete numerical methods which attain these optimal bounds. The field covered here lies at the frontiers of several disciplines, among them theoretical computer science, numerical analysis, probability theory, approximation theory and, to a large extent, functional analysis. I want to stress the latter aspect and show how new techniques from Banach space and operator theory can be applied to Monte Carlo complexity. In the second part I want to present new results: the solution to a problem concerning the Monte Carlo complexity of Fredholm integral equations. This will demonstrate in detail the general approach outlined in part one. We develop a new, fast algorithm: it is a combination of Monte Carlo methods with the Galerkin technique, an approach which seems to be new to this field. The basis functions used for the Galerkin discretization are orthogonal splines of minimal smoothness. They lead to an implementable procedure of minimal computational cost.

The paper is organized as follows. In section 2, the main notions of information-based complexity theory are explained. We cover both the deterministic and the stochastic setting in detail, also for the sake of later comparisons. Some relations to s-number theory are presented in section 3. The role of the average case in proofs of lower bounds for Monte Carlo methods is explained in Section 4. In the following three sections, we analyse the complexity of basic numerical problems: Section 5 deals with numerical integration and contains classical results on the complexity of Monte Carlo quadrature, toge...
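The classical Monte Carlo quadrature behavior referred to above is the n^{-1/2} error rate of plain Monte Carlo integration. A minimal sketch (the integrand x^2 and the sample sizes are arbitrary illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

def mc_integrate(f, n):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    return float(np.mean(f(rng.random(n))))

f = lambda x: x * x                    # exact integral over [0, 1] is 1/3
for n in (100, 10_000, 1_000_000):
    # the error typically shrinks like n**-0.5
    print(n, abs(mc_integrate(f, n) - 1 / 3))
```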
Inverse Littlewood-Offord theorems and the condition number of random discrete matrices
 Annals of Mathematics
Cited by 29 (12 self)

Abstract
Consider a random sum η_1 v_1 + ... + η_n v_n, where η_1, ..., η_n are i.i.d. random signs and v_1, ..., v_n are integers. The Littlewood-Offord problem asks to maximize concentration probabilities such as P(η_1 v_1 + ... + η_n v_n = 0) subject to various hypotheses on the v_1, ..., v_n. In this paper we develop an inverse Littlewood-Offord theory (somewhat in the spirit of Freiman's inverse theory in additive combinatorics), which starts with the hypothesis that a concentration probability is large, and concludes that almost all of the v_1, ..., v_n are efficiently contained in a generalized arithmetic progression. As an application we give a new bound on the magnitude of the least singular value of a random Bernoulli matrix, which in turn provides upper tail estimates on the condition number.
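In the extreme case v_1 = ... = v_n = 1 (all coefficients in the shortest possible arithmetic progression), the concentration probability P(η_1 + ... + η_n = 0) can be computed exactly and matches the classical (2/(πn))^{1/2} asymptotic. A small sketch of this standard fact (not from the paper):

```python
from math import comb, pi, sqrt

def p_zero(n):
    """Exact P(eta_1 + ... + eta_n = 0) for i.i.d. random signs eta_i.

    This is the Littlewood-Offord concentration probability when every
    coefficient v_i equals 1: the walk returns to 0 iff exactly half
    the signs are +1.
    """
    if n % 2:
        return 0.0            # an odd number of signs cannot sum to zero
    return comb(n, n // 2) / 2 ** n

for n in (10, 100, 1000):
    print(n, p_zero(n), sqrt(2 / (pi * n)))   # exact vs. asymptotic
```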