Solving Polynomial Systems Using a Branch and Prune Approach
 SIAM Journal on Numerical Analysis
, 1997
Abstract

Cited by 102 (7 self)
This paper presents Newton, a branch & prune algorithm to find all isolated solutions of a system of polynomial constraints. Newton can be characterized as a global search method which uses intervals for numerical correctness and for pruning the search space early. The pruning in Newton consists of enforcing at each node of the search tree a unique local consistency condition, called box consistency, which approximates the notion of arc consistency well known in artificial intelligence. Box consistency is parametrized by an interval extension of the constraint and can be instantiated to produce the Hansen-Sengupta narrowing operator (used in interval methods) as well as new operators which are more effective when the computation is far from a solution. Newton has been evaluated on a variety of benchmarks from kinematics, chemistry, combustion, economics, and mechanics. On these benchmarks, it outperforms the interval methods we are aware of and compares well with state-of-the-art continuation methods. Limitations of Newton (e.g., a sensitivity to the size of the initial intervals on some problems) are also discussed. Of particular interest is the mathematical and programming simplicity of the method.
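The branch & prune idea can be illustrated on a single polynomial equation. The sketch below is ours, not the paper's box-consistency operator: it uses an exact interval extension of the hypothetical example f(x) = x² − 2 to discard boxes that provably contain no root, and bisects the rest.

```python
# A minimal branch-and-prune sketch in our own notation (not the paper's
# box-consistency operator): an exact interval extension of f(x) = x^2 - 2
# discards boxes that provably contain no root; the rest are bisected.

def sq_iv(lo, hi):
    """Exact interval extension of x -> x^2 on [lo, hi]."""
    cands = [lo * lo, hi * hi]
    if lo <= 0.0 <= hi:
        cands.append(0.0)
    return min(cands), max(cands)

def f_iv(lo, hi):
    """Interval extension of f(x) = x^2 - 2."""
    a, b = sq_iv(lo, hi)
    return a - 2.0, b - 2.0

def branch_and_prune(lo, hi, tol=1e-9):
    """Return a list of boxes of width < tol that may contain a root of f."""
    a, b = f_iv(lo, hi)
    if a > 0.0 or b < 0.0:
        return []                # prune: 0 is not in f([lo, hi]), no root here
    if hi - lo < tol:
        return [(lo, hi)]        # small enough: report the enclosure
    mid = 0.5 * (lo + hi)
    return branch_and_prune(lo, mid, tol) + branch_and_prune(mid, hi, tol)

enclosures = branch_and_prune(-3.0, 3.0)   # tight boxes around -sqrt(2) and sqrt(2)
```

The pruning test is the same soundness argument the abstract relies on: if the interval evaluation of f over a box excludes zero, no root can lie in that box.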
Complete search in continuous global optimization and constraint satisfaction
 Acta Numerica 13
, 2004
"... A chapter for ..."
Fast Concurrent Access to Parallel Disks
Abstract

Cited by 51 (12 self)
High performance applications involving large data sets require the efficient and flexible use of multiple disks. In an external memory machine with D parallel, independent disks, only one block can be accessed on each disk in one I/O step. This restriction leads to a load balancing problem that is perhaps the main inhibitor for the efficient adaptation of single-disk external memory algorithms to multiple disks. We solve this problem for arbitrary access patterns by randomly mapping blocks of a logical address space to the disks. We show that a shared buffer of O(D) blocks suffices to support efficient writing. The analysis uses the properties of negative association to handle dependencies between the random variables involved. This approach might be of independent interest for probabilistic analysis in general. If two randomly allocated copies of each block exist, N arbitrary blocks can be read within ⌈N/D⌉ + 1 I/O steps with high probability. The redundancy can be further reduced from 2 to 1 + 1/r for any integer r without a big impact on reading efficiency. From the point of view of external memory models, these results rehabilitate Aggarwal and Vitter's "single-disk multi-head" model [1] that allows access to D arbitrary blocks in each I/O step. This powerful model can be emulated on the physically more realistic independent disk model [2] with small constant overhead factors. Parallel disk external memory algorithms can therefore be developed in the multi-head model first. The emulation result can then be applied directly or further refinements can be added.
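The effect of duplicate allocation can be seen in a toy two-choice experiment; this is our simplification with hypothetical names, not the paper's scheduling analysis. Each block gets r random candidate disks and is greedily read from the currently least-loaded one; the number of I/O steps is the maximum per-disk load, which stays close to the pigeonhole lower bound ⌈N/D⌉.

```python
# A toy two-choice experiment (our simplification, hypothetical names): each of
# N blocks gets r random candidate disks and is read from the currently
# least-loaded one. The resulting number of I/O steps equals the maximum
# per-disk load, which stays close to the pigeonhole lower bound ceil(N/D).
import random

def schedule_reads(n_blocks, n_disks, r=2, seed=0):
    rng = random.Random(seed)
    load = [0] * n_disks
    for _ in range(n_blocks):
        copies = rng.sample(range(n_disks), r)        # r random copies per block
        best = min(copies, key=lambda d: load[d])     # greedy: pick the lighter disk
        load[best] += 1
    return max(load)                                  # I/O steps for this schedule
```

A greedy pass is only a sketch; an exact schedule (and the ⌈N/D⌉ + 1 bound) needs the flow-based argument the paper develops, but the experiment already shows how a second random copy flattens the load.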
A Primal-Relaxed Dual Global Optimization Approach
, 1993
Abstract

Cited by 42 (19 self)
A deterministic global optimization approach is proposed for nonconvex constrained nonlinear programming problems. Partitioning of the variables, along with the introduction of transformation variables, if necessary, converts the original problem into primal and relaxed dual subproblems that provide valid upper and lower bounds, respectively, on the global optimum. Theoretical properties are presented which allow for a rigorous solution of the relaxed dual problem. Proofs of ε-finite convergence and ε-global optimality are provided. The approach is shown to be particularly suited to (a) quadratic programming problems, (b) quadratically constrained problems, and (c) unconstrained and constrained optimization of polynomial and rational polynomial functions. The theoretical approach is illustrated through a few example problems. Finally, some further developments in the approach are briefly discussed.
The Cluster Problem In Multivariate Global Optimization
 Journal of Global Optimization
, 1994
Abstract

Cited by 17 (4 self)
We consider branch and bound methods for enclosing all unconstrained global minimizers of a nonconvex nonlinear twice-continuously differentiable objective function. In particular, we consider bounds obtained with interval arithmetic, with the "midpoint test," but no acceleration procedures. Unless the lower bound is exact, the algorithm without acceleration procedures in general gives an undesirable cluster of boxes around each minimizer. In a previous paper, we analyzed this problem for univariate objective functions. In this paper, we generalize that analysis to multidimensional objective functions. As in the univariate case, the results show that the problem is highly related to the behavior of the objective function near the global minimizers and to the order of the corresponding interval extension. The underlying problem is: find all global minimizers of f(x) subject to x ∈ X, where X ⊂ R^m is a compact right parallelepiped...
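The cluster phenomenon shows up already in a tiny one-dimensional sketch (ours, not the paper's algorithm): interval branch and bound with the midpoint test on the hypothetical objective f(x) = (x − 1)², using an exact interval extension. Small boxes near the minimizer cannot be rejected while the upper bound is still inexact, so they survive as a cluster around x = 1.

```python
# A one-dimensional sketch (ours, not the paper's algorithm) of interval branch
# and bound with the midpoint test on f(x) = (x - 1)^2. The boxes that survive
# form a small cluster around the minimizer x = 1.

def f_iv(lo, hi):
    """Exact interval extension of f(x) = (x - 1)^2."""
    a, b = lo - 1.0, hi - 1.0
    cands = [a * a, b * b]
    if a <= 0.0 <= b:
        cands.append(0.0)
    return min(cands), max(cands)

def interval_bnb(lo, hi, tol=1e-3):
    ub = float("inf")                     # best known upper bound on the minimum
    work, results = [(lo, hi)], []
    while work:
        a, b = work.pop()
        m = 0.5 * (a + b)
        ub = min(ub, f_iv(m, m)[1])       # midpoint test: f(m) is a valid upper bound
        if f_iv(a, b)[0] > ub:
            continue                      # lower bound exceeds ub: cannot hold the minimum
        if b - a < tol:
            results.append((a, b))        # undecided small box: part of the cluster
        else:
            work += [(a, m), (m, b)]
    return results
```

Running it on [−2, 3] returns more than one tol-sized box near x = 1, which is exactly the cluster the abstract analyzes.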
Reconciling Simplicity and Realism in Parallel Disk Models
 Parallel Computing
, 2001
Abstract

Cited by 16 (4 self)
For the design and analysis of algorithms that process huge data sets, a machine model is needed that handles parallel disks. There seems to be a dilemma between simple and flexible use of such a model and accurate modelling of details of the hardware. This paper explains how many aspects of this problem can be resolved. The programming model implements one large logical disk allowing concurrent access to arbitrary sets of variable-size blocks. This model can be implemented efficiently on multiple independent disks even if zones with different speeds, communication bottlenecks, and failed disks are allowed. These results not only provide useful algorithmic tools but also imply a theoretical justification for studying external memory algorithms using simple abstract models.
Reliable Two-Dimensional Graphing Methods for Mathematical Formulae with Two Free Variables
, 2001
Abstract

Cited by 12 (0 self)
Presents a series of new algorithms for reliably graphing two-dimensional implicit equations and inequalities. A clear standard for interpreting the graphs generated by two-dimensional graphing software is introduced and used to evaluate the presented algorithms. The first approach presented uses a standard interval arithmetic library. This approach is shown to be faulty; an analysis of the failure reveals a limitation of standard interval arithmetic. Subsequent algorithms are developed in parallel with improvements and extensions to the interval arithmetic used by the graphing algorithms. Graphs exhibiting a variety of mathematical and artistic phenomena are shown to be graphed correctly by the presented algorithms. A brief comparison of the final algorithm presented to other graphing algorithms is included.
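The interval-based classification step behind reliable implicit graphing can be sketched as follows; this is our toy version with the unit circle x² + y² = 1 standing in for the formula. A cell of the plotting grid is provably off the curve if the interval evaluation of the formula over the cell excludes zero.

```python
# Our toy version of interval-based cell classification for reliable implicit
# graphing, with the unit circle x^2 + y^2 = 1 standing in for the formula.

def sq_iv(lo, hi):
    """Exact interval extension of x -> x^2 on [lo, hi]."""
    cands = [lo * lo, hi * hi]
    if lo <= 0.0 <= hi:
        cands.append(0.0)
    return min(cands), max(cands)

def classify(cell):
    """cell = ((x0, x1), (y0, y1)); return 'off' if it certainly misses the
    curve x^2 + y^2 = 1, else 'maybe' (it may intersect the curve)."""
    (x0, x1), (y0, y1) = cell
    xlo, xhi = sq_iv(x0, x1)
    ylo, yhi = sq_iv(y0, y1)
    flo, fhi = xlo + ylo - 1.0, xhi + yhi - 1.0
    return 'off' if (flo > 0.0 or fhi < 0.0) else 'maybe'
```

A reliable plotter shades only the 'maybe' cells and subdivides them down to pixel size; the abstract's point is that a naive interval library can leave far too many cells undecided.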
Global Optimization in Control System Analysis and Design
 CONTROL AND DYNAMIC SYSTEMS: ADVANCES IN THEORY AND APPLICATIONS
, 1992
Abstract

Cited by 11 (2 self)
Many problems in control system analysis and design can be posed in a setting where a system with a fixed model structure and nominal parameter values is affected by parameter variations. An example is parametric robustness analysis, where the parameters might represent physical quantities that are known only to within a certain accuracy, or vary depending on operating conditions, etc. Frequently asked questions here deal with performance issues: "How bad can a certain performance measure of the system be over all possible values of the parameters?" Another example is parametric controller design, where the parameters represent degrees of freedom available to the control system designer. A typical question here would be: "What is the best choice of parameters, one that optimizes a certain design objective?" Many of the questions above may be directly restated as optimization problems: If q denotes the vector of parameters, Q ...
Optimization and Regularization of Nonlinear Least Squares Problems
, 1996
Abstract

Cited by 10 (2 self)
An important branch of scientific computing is parameter estimation. Given a mathematical model and observation data, parameters are sought to explain physical properties as well as possible. In order to find these parameters, an optimization problem is often formed, frequently a nonlinear least squares problem. This thesis mainly contributes to the development of tools, techniques, and theories for nonlinear least squares problems that lack a well-defined solution. Specifically, the intention is to generalize regularization methods for linear inverse problems to also handle nonlinear inverse problems. The investigation started by considering an exactly rank-deficient problem, i.e., a problem with a dependency among the parameters. It turns out that such a problem can be formulated as a nonlinear minimum norm problem. To solve this optimization problem two regularization methods are proposed: a Gauss-Newton Tikhonov-regularized method and a minimum-norm Gauss-Newton method. It is shown t...
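A Tikhonov-regularized Gauss-Newton step can be sketched in one dimension; this is our toy, with the hypothetical model y = exp(p·t) and made-up data, not the thesis's method. The damping term λ keeps the step well defined even when JᵀJ is tiny or singular, which is the role regularization plays for rank-deficient problems.

```python
# A scalar sketch (ours) of a Tikhonov-regularized Gauss-Newton iteration for
# the hypothetical model y = exp(p * t): p <- p + (J^T J + lam)^-1 J^T r.
import math

def gn_tikhonov(ts, ys, p0, lam=1e-3, iters=50):
    p = p0
    for _ in range(iters):
        r = [y - math.exp(p * t) for t, y in zip(ts, ys)]   # residuals
        J = [t * math.exp(p * t) for t in ts]               # d(model)/dp
        g = sum(Ji * ri for Ji, ri in zip(J, r))            # J^T r
        H = sum(Ji * Ji for Ji in J)                        # J^T J (a scalar here)
        p += g / (H + lam)                                  # regularized GN step
    return p
```

On zero-residual data the regularized fixed point coincides with the true parameter; for genuinely ill-posed problems the choice of λ trades bias against stability, which is the trade-off the thesis studies.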
Use of a Real-Valued Local Minimum in Parallel Interval Global Optimization
 Interval Computations
, 1993
Abstract

Cited by 9 (0 self)
We consider a parallel method for finding the global minimum (and all of the global minimizers) of a continuous nonlinear function f: D → R, where D is an n-dimensional interval. The method combines one of the well-known branch-and-bound interval search methods of Skelboe, Moore, and Hansen with a real-valued optimization method. Initially we use a standard real-valued optimization method to find a local minimizer xp (or rather: a prediction of a local minimizer). Then the interval Newton method is applied to an interval Ip containing xp as its midpoint. Ip is chosen as large as possible under the restriction that the interval Newton method must converge when Ip is used as the starting interval. In this way the original problem has been reduced to the problem of searching a domain D \ Ip which does not contain the local (and perhaps global) minimizer. The remaining domain is searched by the branch-and-bound interval method, starting by splitting the remaining domain into 2n intervals and hence avoiding Ip. This branch-and-bound search then either verifies that the point xp is the global minimizer, or the opposite is detected and it finds the global minimum (and the global minimizers) in the usual way. The combined method parallelizes well. On one test case the combined method is faster than the branch-and-bound method itself. However, for another test case we get the opposite result. This is explained.
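The hybrid idea can be sketched in one variable; this is our stand-in with the hypothetical objective f(x) = (x − 1)², not the paper's parallel method. A real-valued local search predicts a minimizer xp, and an interval Newton step on f′(x) over a box around xp verifies it; the rest of the domain would then go to branch and bound (omitted here).

```python
# A univariate sketch (ours) of the hybrid: local search predicts xp, and one
# interval Newton step on f'(x) = 2(x - 1) over Ip = [xp - r, xp + r] verifies
# that Ip contains a unique critical point. f(x) = (x - 1)^2 is a stand-in.

def local_min_predict(lo, hi, steps=20):
    """Plain gradient descent from the midpoint -- a stand-in local solver."""
    x = 0.5 * (lo + hi)
    for _ in range(steps):
        x -= 0.1 * 2.0 * (x - 1.0)     # step along -f'(x)
    return x

def interval_newton_verifies(xp, radius=0.5):
    """One interval Newton step on f'(x) = 2(x - 1). Since f'' is the constant
    interval [2, 2], N(Ip) = xp - f'(xp)/[2, 2] collapses to a point; if it
    lands inside Ip, the box provably contains a unique critical point."""
    n = xp - (2.0 * (xp - 1.0)) / 2.0
    return (xp - radius) <= n <= (xp + radius), n
```

When the verification succeeds, Ip can be excluded from the branch-and-bound search, which is the source of the speedup the abstract reports on its first test case.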