Results 1–10 of 24
Solving Polynomial Systems Using a Branch and Prune Approach
SIAM Journal on Numerical Analysis, 1997
"... This paper presents Newton, a branch & prune algorithm to find all isolated solutions of a system of polynomial constraints. Newton can be characterized as a global search method which uses intervals for numerical correctness and for pruning the search space early. The pruning in Newton consists in ..."
Abstract

Cited by 101 (7 self)
 Add to MetaCart
This paper presents Newton, a branch & prune algorithm to find all isolated solutions of a system of polynomial constraints. Newton can be characterized as a global search method which uses intervals for numerical correctness and for pruning the search space early. The pruning in Newton consists of enforcing at each node of the search tree a unique local consistency condition, called box-consistency, which approximates the notion of arc-consistency well known in artificial intelligence. Box-consistency is parametrized by an interval extension of the constraint and can be instantiated to produce the Hansen-Sengupta narrowing operator (used in interval methods) as well as new operators which are more effective when the computation is far from a solution. Newton has been evaluated on a variety of benchmarks from kinematics, chemistry, combustion, economics, and mechanics. On these benchmarks, it outperforms the interval methods we are aware of and compares well with state-of-the-art continuation methods. Limitations of Newton (e.g., a sensitivity to the size of the initial intervals on some problems) are also discussed. Of particular interest is the mathematical and programming simplicity of the method.
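The interval-pruning idea behind branch & prune can be sketched in a few lines. The `Interval` class, the test constraint f(x) = x² − 2, and the plain bisection loop below are illustrative assumptions, not the paper's box-consistency operator: a box is discarded as soon as an interval evaluation of the constraint excludes zero.

```python
# Minimal sketch of interval pruning in a branch-and-prune root search.
# Everything here (the Interval class, the bisection strategy, f) is an
# illustrative assumption, not the operator defined in the paper.

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def contains_zero(self):
        return self.lo <= 0.0 <= self.hi

def prune_roots(f, box, tol=1e-7):
    """Return small boxes that may contain a zero of f; prune all others."""
    stack, results = [box], []
    while stack:
        b = stack.pop()
        if not f(b).contains_zero():      # interval test excludes a root: prune
            continue
        if b.hi - b.lo < tol:             # box is small enough: report it
            results.append((b.lo, b.hi))
            continue
        mid = 0.5 * (b.lo + b.hi)         # otherwise branch by bisection
        stack += [Interval(b.lo, mid), Interval(mid, b.hi)]
    return results

# f(x) = x^2 - 2, whose real roots are +/- sqrt(2)
f = lambda x: x * x + Interval(-2.0, -2.0)
roots = prune_roots(f, Interval(-3.0, 3.0))
```

The interval test is what distinguishes this from plain bisection: whole subtrees of the search are cut off once the evaluation proves no solution can lie inside.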
Complete search in continuous global optimization and constraint satisfaction
Acta Numerica 13, 2004
"... A chapter for ..."
Fast Concurrent Access to Parallel Disks
In 11th ACM-SIAM Symposium on Discrete Algorithms, 1999
"... High performance applications involving large data sets require the efficient and flexible use of multiple disks. In an external memory machine with D parallel, independent disks, only one block can be accessed on each disk in one I/O step. This restriction leads to a load balancing problem that is ..."
Abstract

Cited by 52 (11 self)
 Add to MetaCart
High performance applications involving large data sets require the efficient and flexible use of multiple disks. In an external memory machine with D parallel, independent disks, only one block can be accessed on each disk in one I/O step. This restriction leads to a load balancing problem that is perhaps the main inhibitor for adapting single-disk external memory algorithms to multiple disks. This paper shows that this problem can be solved efficiently using a combination of randomized placement, redundancy, and an optimal scheduling algorithm. A buffer of O(D) blocks suffices to support efficient writing of arbitrary blocks if blocks are distributed uniformly at random to the disks (e.g., by hashing). If two randomly allocated copies of each block exist, N arbitrary blocks can be read within ⌈N/D⌉ + 1 I/O steps with high probability. In addition, the redundancy can be reduced from 2 to 1 + 1/r for any integer r. These results can be used to emulate the simple and powerful "single-disk multi-head" model of external computing [1] on the physically more realistic independent-disk model [33] with small constant overhead. This is faster than a lower bound for deterministic emulation [3].
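The effect of duplicate allocation on reads can be sketched as follows. The greedy "fetch each block from the currently less-loaded of its two copies" rule below is a simplification of the paper's optimal scheduler, and the block/disk counts are made-up parameters for illustration:

```python
# Sketch of duplicate allocation: every block is stored on two distinct,
# randomly chosen disks, and a reader greedily fetches each block from the
# disk that currently has fewer pending reads. This greedy rule is an
# assumption standing in for the paper's optimal scheduling algorithm.
import random

def schedule_reads(num_blocks, num_disks, seed=0):
    rng = random.Random(seed)
    load = [0] * num_disks                      # pending reads per disk
    for _ in range(num_blocks):
        a, b = rng.sample(range(num_disks), 2)  # two copies on distinct disks
        load[a if load[a] <= load[b] else b] += 1
    return max(load)                            # parallel I/O steps needed

steps = schedule_reads(num_blocks=1000, num_disks=10)
# ceil(1000 / 10) = 100 is the trivial lower bound; with two random copies
# per block, the greedy schedule stays very close to it.
```

With a single random copy per block, the most-loaded disk would exceed N/D by a Θ(√(N log D / D)) term; the two-choice placement collapses that gap to a small additive constant, which is the phenomenon the paper quantifies exactly.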
A Primal-Relaxed Dual Global Optimization Approach, 1993
"... A deterministic global optimization approach is proposed for nonconvex constrained nonlinear programming problems. Partitioning of the variables, along with the introduction of transformation variables, if necessary, convert the original problem into primal and relaxed dual subproblems that provide ..."
Abstract

Cited by 41 (19 self)
 Add to MetaCart
A deterministic global optimization approach is proposed for nonconvex constrained nonlinear programming problems. Partitioning of the variables, along with the introduction of transformation variables if necessary, converts the original problem into primal and relaxed dual subproblems that provide valid upper and lower bounds, respectively, on the global optimum. Theoretical properties are presented which allow for a rigorous solution of the relaxed dual problem. Proofs of ε-finite convergence and ε-global optimality are provided. The approach is shown to be particularly suited to (a) quadratic programming problems, (b) quadratically constrained problems, and (c) unconstrained and constrained optimization of polynomial and rational polynomial functions. The theoretical approach is illustrated through a few example problems. Finally, some further developments in the approach are briefly discussed.
Reconciling Simplicity and Realism in Parallel Disk Models
Parallel Computing, 2001
"... For the design and analysis of algorithms that process huge data sets, a machine model is needed that handles parallel disks. There seems to be a dilemma between simple and flexible use of such a model and accurate modelling of details of the hardware. This paper explains how many aspects of this pr ..."
Abstract

Cited by 16 (3 self)
 Add to MetaCart
For the design and analysis of algorithms that process huge data sets, a machine model is needed that handles parallel disks. There seems to be a dilemma between simple and flexible use of such a model and accurate modelling of details of the hardware. This paper explains how many aspects of this problem can be resolved. The programming model implements one large logical disk allowing concurrent access to arbitrary sets of variable-size blocks. This model can be implemented efficiently on multiple independent disks even if zones with different speeds, communication bottlenecks, and failed disks are allowed. These results not only provide useful algorithmic tools but also imply a theoretical justification for studying external memory algorithms using simple abstract models.
The Cluster Problem in Multivariate Global Optimization
Journal of Global Optimization, 1994
"... . We consider branch and bound methods for enclosing all unconstrained global minimizers of a nonconvex nonlinear twicecontinuously differentiable objective function. In particular, we consider bounds obtained with interval arithmetic, with the "midpoint test," but no acceleration procedures. Unles ..."
Abstract

Cited by 15 (4 self)
 Add to MetaCart
We consider branch and bound methods for enclosing all unconstrained global minimizers of a nonconvex nonlinear twice continuously differentiable objective function. In particular, we consider bounds obtained with interval arithmetic, with the "midpoint test," but no acceleration procedures. Unless the lower bound is exact, the algorithm without acceleration procedures in general gives an undesirable cluster of boxes around each minimizer. In a previous paper, we analyzed this problem for univariate objective functions. In this paper, we generalize that analysis to multidimensional objective functions. As in the univariate case, the results show that the problem is highly related to the behavior of the objective function near the global minimizers and to the order of the corresponding interval extension. 1. Introduction and Basic Concepts. Our underlying problem is: (1) find all global minimizers of f(x) subject to x ∈ X, where X ⊂ R^m is a compact right parallelepiped with face...
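A one-dimensional toy run (an assumed sketch, not the paper's setting) makes the cluster visible: with a first-order interval lower bound and the midpoint test, dozens of tiny boxes around the minimizer of f(x) = (x − 1)² survive pruning, because the bound's overestimation near the minimizer cannot separate those boxes from the true minimum.

```python
# Illustrative 1-D branch and bound with an interval lower bound and the
# "midpoint test": a box is discarded when its lower bound exceeds the best
# midpoint value found so far. The first-order (natural) interval extension
# assumed below overestimates near the minimizer, producing the cluster.

def obj(x):
    return (x - 1.0) ** 2              # f(x) = x^2 - 2x + 1, minimum at x = 1

def lower_bound(lo, hi):
    # natural interval extension of x^2 - 2x + 1: the x*x and -2x terms are
    # bounded independently, so the bound is loose near x = 1
    sq_lo = min(lo * lo, lo * hi, hi * hi)
    return sq_lo - 2.0 * hi + 1.0

def branch_and_bound(lo, hi, tol=1e-3):
    best = float("inf")                # best f(midpoint) seen so far
    work, kept = [(lo, hi)], []
    while work:
        a, b = work.pop()
        if lower_bound(a, b) > best:   # midpoint test: prune this box
            continue
        best = min(best, obj(0.5 * (a + b)))
        if b - a < tol:                # small enough: retain as an enclosure
            kept.append((a, b))
        else:
            m = 0.5 * (a + b)
            work += [(a, m), (m, b)]
    return kept

boxes = branch_and_bound(0.0, 2.0)
# Many adjacent tiny boxes near x = 1 survive: the cluster problem.
```

Replacing `lower_bound` with the exact range of f over the box shrinks the output to the boxes actually touching x = 1, which is the "unless the lower bound is exact" caveat in the abstract.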
Reliable Two-Dimensional Graphing Methods for Mathematical Formulae with Two Free Variables, 2001
"... present s a series of new algorit hms for reliably graphingt wodimensional implicit equat ions and inequalit ies. A clear st andard for int erpret ingt he graphs generat ed byt wodimensional graphing soft ware is int roduced and used t o evaluat et he present ed algorit hms. The first approach pr ..."
Abstract

Cited by 11 (0 self)
 Add to MetaCart
Presents a series of new algorithms for reliably graphing two-dimensional implicit equations and inequalities. A clear standard for interpreting the graphs generated by two-dimensional graphing software is introduced and used to evaluate the presented algorithms. The first approach presented uses a standard interval arithmetic library. This approach is shown to be faulty; an analysis of the failure reveals a limitation of standard interval arithmetic. Subsequent algorithms are developed in parallel with improvements and extensions to the interval arithmetic used by the graphing algorithms. Graphs exhibiting a variety of mathematical and artistic phenomena are shown to be graphed correctly by the presented algorithms. A brief comparison of the final algorithm presented to other graphing algorithms is included.
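The basic cell-classification step behind interval graphing can be sketched as follows; the grid size, the test curve x² + y² = 1, and the exact-square interval extension are assumptions for illustration, not the thesis's algorithms. A cell is kept as a candidate pixel only if the interval evaluation of f over the cell contains zero.

```python
# Sketch of interval-based implicit graphing: a grid cell may be painted only
# if the interval evaluation of f(x, y) over the cell contains zero, so cells
# provably off the curve are rejected. The curve and grid are assumptions.

def interval_f(xlo, xhi, ylo, yhi):
    """Interval extension of f(x, y) = x^2 + y^2 - 1 over a cell."""
    def sq(lo, hi):                     # exact range of t^2 for t in [lo, hi]
        if lo <= 0.0 <= hi:
            return 0.0, max(lo * lo, hi * hi)
        return min(lo * lo, hi * hi), max(lo * lo, hi * hi)
    xl, xh = sq(xlo, xhi)
    yl, yh = sq(ylo, yhi)
    return xl + yl - 1.0, xh + yh - 1.0

def candidate_cells(n=64, span=1.5):
    """Cells of an n x n grid over [-span, span]^2 that may meet the circle."""
    h = 2.0 * span / n
    cells = []
    for i in range(n):
        for j in range(n):
            xlo, ylo = -span + i * h, -span + j * h
            lo, hi = interval_f(xlo, xlo + h, ylo, ylo + h)
            if lo <= 0.0 <= hi:         # zero not excluded: keep the cell
                cells.append((i, j))
    return cells

cells = candidate_cells()               # the kept cells form a ring
```

For this particular f the extension is exact, so every kept cell genuinely meets the curve; the thesis's point is that for other formulae plain interval arithmetic keeps far more cells than necessary, motivating its extended arithmetics.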
Global Optimization in Control System Analysis and Design
Control and Dynamic Systems: Advances in Theory and Applications, 1992
"... Many problems in control system analysis and design can be posed in a setting where a system with a fixed model structure and nominal parameter values is affected by parameter variations. An example is parametric robustness analysis, where the parameters might represent physical quantities that are ..."
Abstract

Cited by 11 (2 self)
 Add to MetaCart
Many problems in control system analysis and design can be posed in a setting where a system with a fixed model structure and nominal parameter values is affected by parameter variations. An example is parametric robustness analysis, where the parameters might represent physical quantities that are known only to within a certain accuracy, or vary depending on operating conditions etc. Frequently asked questions here deal with performance issues: "How bad can a certain performance measure of the system be over all possible values of the parameters?" Another example is parametric controller design, where the parameters represent degrees of freedom available to the control system designer. A typical question here would be: "What is the best choice of parameters, one that optimizes a certain design objective?" Many of the questions above may be directly restated as optimization problems: If q denotes the vector of parameters, Q
Optimization and Regularization of Nonlinear Least Squares Problems, 1996
"... An important branch in scientific computing is parameter estimation. Given a mathematical model and observation data, parameters are sought to explain physical properties as well as possible. In order to find these parameters an optimization problem is often formed, frequently a nonlinear least squa ..."
Abstract

Cited by 10 (2 self)
 Add to MetaCart
An important branch in scientific computing is parameter estimation. Given a mathematical model and observation data, parameters are sought to explain physical properties as well as possible. In order to find these parameters an optimization problem is often formed, frequently a nonlinear least squares problem. This thesis mainly contributes to the development of tools, techniques, and theories for nonlinear least squares problems that lack a well-defined solution. Specifically, the intention is to generalize regularization methods for linear inverse problems to also handle nonlinear inverse problems. The investigation started by considering an exactly rank-deficient problem, i.e., a problem with a dependency among the parameters. It turns out that such a problem can be formulated as a nonlinear minimum norm problem. To solve this optimization problem two regularization methods are proposed: a Gauss-Newton Tikhonov-regularized method and a minimum norm Gauss-Newton method. It is shown t...
An Introduction to Affine Arithmetic, 2003
"... Affine arithmetic (AA) is a model for selfvalidated computation which, like standard interval arithmetic (IA), produces guaranteed enclosures for computed quantities, taking into account any uncertainties in the input data as well as all internal truncation and roundoff errors. Unlike standard I ..."
Abstract

Cited by 8 (0 self)
 Add to MetaCart
Affine arithmetic (AA) is a model for self-validated computation which, like standard interval arithmetic (IA), produces guaranteed enclosures for computed quantities, taking into account any uncertainties in the input data as well as all internal truncation and roundoff errors. Unlike standard IA, the quantity representations used by AA are first-order approximations, whose error is generally quadratic in the width of the input intervals. In many practical applications, the higher asymptotic accuracy of AA more than compensates for the increased cost of its operations.
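The difference from standard IA is easiest to see on the expression x − x for x ∈ [−1, 1]: IA forgets that the two operands are the same quantity and returns [−2, 2], while an affine form tracks the shared noise symbol and returns [0, 0]. The `AffineForm` class below is a minimal illustrative model (affine operations only, no nonlinear terms), not a full AA library.

```python
# Tiny sketch contrasting interval arithmetic (IA) with affine arithmetic (AA)
# on x - x for x in [-1, 1]. AffineForm is an assumed minimal model: it
# represents a quantity as center + sum(coeff_i * eps_i), eps_i in [-1, 1].

class AffineForm:
    def __init__(self, center, coeffs=None):
        self.center = center
        self.coeffs = dict(coeffs or {})   # noise symbol -> partial deviation

    def __sub__(self, other):
        coeffs = dict(self.coeffs)
        for k, v in other.coeffs.items():
            coeffs[k] = coeffs.get(k, 0.0) - v   # shared symbols cancel here
        return AffineForm(self.center - other.center, coeffs)

    def to_interval(self):
        rad = sum(abs(v) for v in self.coeffs.values())
        return self.center - rad, self.center + rad

# x in [-1, 1]: center 0, one noise symbol e1 with deviation 1
x = AffineForm(0.0, {"e1": 1.0})
aa_lo, aa_hi = (x - x).to_interval()        # AA keeps the correlation: [0, 0]

# Plain IA treats the two operands as independent intervals:
ia_lo, ia_hi = (-1.0) - 1.0, 1.0 - (-1.0)   # [-2, 2]
```

The same cancellation of shared noise symbols is what gives AA its quadratic approximation error on longer computations, at the cost of carrying a coefficient vector per quantity instead of two endpoints.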