Results 1–10 of 20
Smoothed analysis of the condition numbers and growth factors of matrices
SIAM J. Matrix Anal. Appl.
, 2002
Abstract

Cited by 43 (3 self)
Let Ā be an arbitrary matrix and let A be a slight random perturbation of Ā. We prove that it is unlikely that A has a large condition number. Using this result, we prove it is unlikely that A has a large growth factor under Gaussian elimination without pivoting. By combining these results, we show that the smoothed precision necessary to solve Ax = b, for any b, using Gaussian elimination without pivoting is logarithmic. Moreover, when Ā is the all-zero square matrix, our results significantly improve the average-case analysis of Gaussian elimination without pivoting performed by Yeung and Chan (SIAM J. Matrix Anal. Appl., 1997). Partially supported by NSF grant CCR-0112487.
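The phenomenon in this abstract is easy to observe numerically. The following sketch (our own illustration, not the paper's analysis) perturbs the all-zero matrix, the worst case in the comparison with Yeung and Chan, and measures the resulting condition numbers; the dimension, variance, and trial count are arbitrary choices.

```python
import numpy as np

# Illustrative sketch: even starting from the all-zero base matrix,
# a slight Gaussian perturbation typically has a modest condition number.
rng = np.random.default_rng(0)
n, sigma = 50, 0.1

A_bar = np.zeros((n, n))          # arbitrary (here: all-zero) base matrix
conds = []
for _ in range(200):
    # slight Gaussian perturbation of A_bar with standard deviation sigma
    A = A_bar + sigma * rng.standard_normal((n, n))
    conds.append(np.linalg.cond(A))

print(f"median condition number over 200 trials: {np.median(conds):.1f}")
```

The median stays far from the unbounded condition number of Ā itself, which is the qualitative content of the smoothed bound.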
Smoothed analysis of Renegar’s condition number for linear programming
, 2003
Abstract

Cited by 22 (6 self)
We perform a smoothed analysis of Renegar’s condition number for linear programming. In particular, we show that for every n-by-d matrix Ā, n-vector b̄ and d-vector c̄ satisfying ∥(Ā, b̄, c̄)∥_F ≤ 1 and every σ ≤ 1/√(dn), the expectation of the logarithm of C(A, b, c) is O(log(nd/σ)), where A, b and c are Gaussian perturbations of Ā, b̄ and c̄ of variance σ². From this bound, we obtain a smoothed analysis of Renegar’s interior point algorithm. By combining this with the smoothed analysis of finite termination of Spielman and Teng (Math. Prog. Ser. B, 2003), we show that the smoothed complexity of linear programming is O(n³ log(nd/σ)).
Approximation schemes for packing with item fragmentation
Theory Comput. Syst.
Abstract

Cited by 10 (2 self)
We consider two variants of the classical bin packing problem in which items may be fragmented. This can potentially reduce the total number of bins needed for packing the instance. However, since fragmentation incurs overhead, we attempt to avoid it as much as possible. In bin packing with size-increasing fragmentation (BPSIF), fragmenting an item increases the input size (due to a header/footer of fixed size that is added to each fragment). In bin packing with size-preserving fragmentation (BPSPF), there is a bound on the total number of fragmented items. These two variants of bin packing capture many practical scenarios, including message transmission in community TV networks, VLSI circuit design and preemptive scheduling on parallel machines with setup times/setup costs. While both BPSPF and BPSIF do not belong to the class of problems that admit a polynomial time approximation scheme (PTAS), we show in this paper that both problems admit a dual PTAS and an asymptotic PTAS. We also develop for each of the problems a dual asymptotic fully polynomial time approximation scheme (AFPTAS). Our AFPTASs are based on a non-standard transformation of the mixed packing and covering linear program formulations of our problems into pure covering programs, which enables us to solve these programs efficiently.
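To make the BPSIF model concrete, here is a toy first-fit heuristic (our own sketch, not the paper's dual PTAS/AFPTAS) in which every fragment placed in a bin pays a fixed header; `BIN_CAP`, `HEADER`, and the function name are assumed values for illustration.

```python
# Toy first-fit for bin packing with size-increasing fragmentation (BPSIF):
# splitting an item adds a fixed header to each fragment, so fragmenting
# increases the total size packed. Illustration of the model only.

BIN_CAP = 1.0
HEADER = 0.05  # assumed fixed per-fragment overhead

def pack_bpsif(items):
    """Place each item first-fit; if it fits nowhere whole, fragment it
    across existing free space and new bins, paying HEADER per fragment."""
    bins = []  # free space remaining in each open bin
    for size in items:
        # try to place the whole item as a single fragment (one header)
        placed = False
        for i, free in enumerate(bins):
            if free >= size + HEADER:
                bins[i] -= size + HEADER
                placed = True
                break
        if placed:
            continue
        # otherwise fragment: fill existing free space, then open new bins
        remaining = size
        for i, free in enumerate(bins):
            if remaining <= 0:
                break
            usable = free - HEADER
            if usable > 0:
                piece = min(usable, remaining)
                bins[i] -= piece + HEADER
                remaining -= piece
        while remaining > 0:
            piece = min(BIN_CAP - HEADER, remaining)
            bins.append(BIN_CAP - piece - HEADER)
            remaining -= piece
    return len(bins)

# Three items of size 0.6 need 3 bins without fragmentation,
# but fragmentation (at header cost) packs them into 2 bins.
print(pack_bpsif([0.6, 0.6, 0.6]))  # → 2
```

The example shows the tension the paper studies: fragmentation saves bins, but each fragment inflates the instance by a header.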
THE PROBABILITY THAT A SLIGHTLY PERTURBED NUMERICAL ANALYSIS PROBLEM IS DIFFICULT
, 2008
Abstract

Cited by 8 (8 self)
We prove a general theorem providing smoothed analysis estimates for conic condition numbers of problems of numerical analysis. Our probability estimates depend only on geometric invariants of the corresponding sets of ill-posed inputs. Several applications to linear and polynomial equation solving show that the estimates obtained in this way are easy to derive and quite accurate. The main theorem is based on a volume estimate of ε-tubular neighborhoods around a real algebraic subvariety of a sphere, intersected with a spherical disk of radius σ. Besides ε and σ, this bound depends only on the dimension of the sphere and on the degree of the defining equations.
Smoothed Analysis of Condition Numbers and Complexity Implications for Linear Programming
, 2009
Abstract

Cited by 7 (0 self)
We perform a smoothed analysis of Renegar’s condition number for linear programming by analyzing the distribution of the distance to ill-posedness of a linear program subject to a slight Gaussian perturbation. In particular, we show that for every n-by-d matrix Ā, n-vector b̄, and d-vector c̄ satisfying ∥(Ā, b̄, c̄)∥_F ≤ 1 and every σ ≤ 1, E_{A,b,c}[log C(A, b, c)] = O(log(nd/σ)), where A, b and c are Gaussian perturbations of Ā, b̄ and c̄ of variance σ² and C(A, b, c) is the condition number of the linear program defined by (A, b, c). From this bound, we obtain a smoothed analysis of interior point algorithms. By combining this with the smoothed analysis of finite termination of Spielman and Teng (Math. Prog. Ser. B, 2003), we show that the smoothed complexity of interior point algorithms for linear programming is O(n³ log(nd/σ)).
Smoothed Analysis of Condition Numbers
Abstract

Cited by 6 (5 self)
The running time of many iterative numerical algorithms is dominated by the condition number of the input, a quantity measuring the sensitivity of the solution to small perturbations of the input. Examples are iterative methods of linear algebra, interior-point methods of linear and convex optimization, as well as homotopy methods for solving systems of polynomial equations. Thus a probabilistic analysis of these algorithms can be reduced to the analysis of the distribution of the condition number for a random input. This approach was elaborated for average-case complexity by many researchers. The goal of this survey is to explain how average-case analysis can be naturally refined in the sense of smoothed analysis. The latter concept, introduced by Spielman and Teng in 2001, aims at showing that for all real inputs (even ill-posed ones) and all slight random perturbations of such an input, it is unlikely that the running time will be large. A recent general result of Bürgisser, Cucker and Lotz (2008) gives smoothed analysis estimates for a variety of applications. Its proof boils down to local bounds on the volume of tubes around a real algebraic hypersurface in a sphere. This is achieved by bounding the integrals of absolute curvature of smooth hypersurfaces in terms of their degree via the principal kinematic formula of integral geometry and Bézout’s theorem.
Robust smoothed analysis of a condition number of linear programming
, 2009
Abstract

Cited by 6 (6 self)
We perform a smoothed analysis of the GCC condition number C(A) of the linear programming feasibility problem ∃x ∈ ℝ^{m+1}: Ax < 0. Suppose that Ā is any matrix with rows āᵢ of Euclidean norm 1 and, independently for all i, let aᵢ be a random perturbation of āᵢ following the uniform distribution in the spherical disk in S^m of angular radius arcsin σ centered at āᵢ. We prove that E(ln C(A)) = O(mn/σ). A similar result was shown for Renegar’s condition number and Gaussian perturbations by Dunagan, Spielman, and Teng [arXiv:cs.DS/0302011]. Our result is robust in the sense that it easily extends to radially symmetric probability distributions supported on a spherical disk of radius arcsin σ, whose density may even have a singularity at the center of the perturbation. Our proofs combine ideas from a recent paper of Bürgisser, Cucker, and Lotz (Math. Comp. 77, No. 263, 2008) with techniques of Dunagan et al.
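The perturbation model in this abstract can be sketched directly: a uniform draw from the spherical disk of angular radius arcsin σ around a unit vector, here via simple rejection sampling (our own illustration; the function name and parameters are not from the paper).

```python
import numpy as np

rng = np.random.default_rng(1)

def perturb_in_cap(a_bar, sigma):
    """Uniform sample from the spherical cap of angular radius
    arcsin(sigma) centered at the unit vector a_bar (rejection sampling;
    fine for moderate sigma, slow as the cap shrinks)."""
    theta = np.arcsin(sigma)
    while True:
        x = rng.standard_normal(a_bar.shape)
        x /= np.linalg.norm(x)               # uniform point on the sphere
        if np.dot(x, a_bar) >= np.cos(theta):
            return x                          # inside the cap: accept

a_bar = np.array([1.0, 0.0, 0.0])            # a row of Ā, norm 1
a = perturb_in_cap(a_bar, sigma=0.8)
angle = np.arccos(np.clip(np.dot(a, a_bar), -1.0, 1.0))
print(angle <= np.arcsin(0.8) + 1e-12)  # → True
```

Applying this independently to each row āᵢ yields the perturbed matrix A whose condition number the abstract bounds in expectation.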
Smoothed analysis of probabilistic roadmaps
 In Fourth SIAM Conference of Analytic Algorithms and Computational Geometry
, 2007
Abstract

Cited by 5 (0 self)
The probabilistic roadmap algorithm is a leading heuristic for robot motion planning. It is extremely efficient in practice, yet its worst-case convergence time is unbounded as a function of the input’s combinatorial complexity. We prove a smoothed polynomial upper bound on the number of samples required to produce an accurate probabilistic roadmap, and thus on the running time of the algorithm, in an environment of simplices. This sheds light on its widespread empirical success.
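For readers unfamiliar with the algorithm being analyzed, here is a minimal probabilistic roadmap sketch (our own toy setup with one disk obstacle, not the paper's simplicial environment): sample free configurations, connect nearby mutually visible samples, and search the resulting graph.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(2)
OBSTACLE_C, OBSTACLE_R = np.array([0.5, 0.5]), 0.2  # one disk obstacle

def free(p):
    return np.linalg.norm(p - OBSTACLE_C) > OBSTACLE_R

def segment_free(p, q, steps=20):
    # crude visibility check by sampling points along the segment
    return all(free(p + t * (q - p)) for t in np.linspace(0.0, 1.0, steps))

def prm_path_exists(start, goal, n_samples=200, radius=0.3):
    """Build a roadmap over uniform free samples and BFS from start to goal."""
    nodes = [start, goal] + [p for p in rng.random((n_samples, 2)) if free(p)]
    adj = {i: [] for i in range(len(nodes))}
    for i in range(len(nodes)):
        for j in range(i + 1, len(nodes)):
            if (np.linalg.norm(nodes[i] - nodes[j]) < radius
                    and segment_free(nodes[i], nodes[j])):
                adj[i].append(j)
                adj[j].append(i)
    seen, queue = {0}, deque([0])     # node 0 = start, node 1 = goal
    while queue:
        u = queue.popleft()
        if u == 1:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

print(prm_path_exists(np.array([0.1, 0.1]), np.array([0.9, 0.9])))
```

The smoothed bound in the abstract concerns exactly the `n_samples` knob: how many samples suffice, after a slight perturbation of the environment, for the roadmap to be accurate.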
On the Hardness and Smoothed Complexity of Quasi-Concave Minimization
Abstract

Cited by 4 (2 self)
In this paper, we resolve the smoothed and approximative complexity of low-rank quasi-concave minimization, providing both upper and lower bounds. As an upper bound, we provide the first smoothed analysis of quasi-concave minimization. The analysis is based on a smoothed bound for the number of extreme points of the projection of the feasible polytope onto a k-dimensional subspace, where k is the rank (informally, the dimension of non-convexity) of the quasi-concave function. Our smoothed bound is polynomial in the original dimension of the problem n and the perturbation size ρ, and it is exponential in the rank of the function k. From this, we obtain the first randomized fully polynomial-time approximation scheme for low-rank quasi-concave minimization under broad conditions. In contrast with this, we prove (log n)-hardness of approximation for general quasi-concave minimization. This shows that our smoothed bound is essentially tight, in that no polynomial smoothed bound is possible for quasi-concave functions of general rank k. The tools that we introduce for the smoothed analysis may be of independent interest. All previous smoothed analyses of polytopes analyzed projections onto two-dimensional subspaces and studied them using trigonometry to examine the angles between vectors and 2-planes in ℝⁿ. In this paper, we provide what is, to our knowledge, the first smoothed analysis of the projection of polytopes onto higher-dimensional subspaces. To do this, we replace the trigonometry with tools from random matrix theory and differential geometry on the Grassmannian. Our hardness reduction is based on entirely different proofs that may also be of independent interest: we show that the stochastic 2-stage minimum spanning tree problem has a supermodular objective and that su