Results 1–10 of 79
Qualitative and Quantitative Simulation: Bridging the Gap
 Artificial Intelligence
, 1997
Abstract

Cited by 43 (1 self)
Shortcomings of qualitative simulation and of quantitative simulation motivate combining them to do simulations exhibiting strengths of both. The resulting class of techniques is called semiquantitative simulation. One approach to semiquantitative simulation is to use numeric intervals to represent incomplete quantitative information. In this research we demonstrate semiquantitative simulation using intervals in an implemented semiquantitative simulator called Q3. Q3 progressively refines a qualitative simulation, providing increasingly specific quantitative predictions which can converge to a numerical simulation in the limit while retaining important correctness guarantees from qualitative and interval simulation techniques. Q3's simulations are based on a technique we call step size refinement. While a pure qualitative simulation has a very coarse step size, representing the state of a system trajectory at relatively few qualitatively distinct states, Q3 interpolates newly expl...
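The interval idea behind this approach can be illustrated with a toy sketch (not Q3 itself, and omitting the truncation-error bounding a validated solver would also need): propagate an interval-valued state for x' = -k·x with uncertain k and x0, and observe that refining the step size tightens the quantitative enclosure.

```python
# Toy sketch of interval-based semiquantitative simulation (NOT the Q3
# system): integrate x' = -k*x with k and x0 held as [lo, hi] intervals,
# using interval-arithmetic Euler steps.  All names here are illustrative.

def imul(x, y):
    """Interval product: [min, max] over all endpoint products."""
    products = [a * b for a in x for b in y]
    return (min(products), max(products))

def interval_euler(x0, k, h, n):
    """n interval Euler steps of x' = -k*x (k >= 0, h > 0)."""
    factor = (1.0 - k[1] * h, 1.0 - k[0] * h)   # interval for 1 - k*h
    x = x0
    for _ in range(n):
        x = imul(x, factor)
    return x

# Same horizon t = 1, two step sizes: the finer simulation yields a
# tighter enclosure, while every pointwise Euler trajectory with
# k in [0.9, 1.1] and x0 in [1.0, 1.2] stays enclosed.
coarse = interval_euler((1.0, 1.2), (0.9, 1.1), h=0.5, n=2)
fine = interval_euler((1.0, 1.2), (0.9, 1.1), h=0.125, n=8)
```

Step size refinement in this spirit trades more arithmetic for increasingly specific quantitative predictions, converging toward a numerical simulation as h shrinks.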
The complexity of analog computation
 Math. and Computers in Simulation, 28 (1986)
Abstract

Cited by 36 (0 self)
We ask if analog computers can solve NP-complete problems efficiently. Regarding this as unlikely, we formulate a strong version of Church's Thesis: that any analog computer can be simulated efficiently (in polynomial time) by a digital computer. From this assumption and the assumption that P ≠ NP we can draw conclusions about the operation of physical devices used for computation. An NP-complete problem, 3SAT, is reduced to the problem of checking whether a feasible point is a local optimum of an optimization problem. A mechanical device is proposed for the solution of this problem. It encodes variables as shaft angles and uses gears and smooth cams. If we grant Strong Church's Thesis, that P ≠ NP, and a certain "Downhill Principle" governing the physical behavior of the machine, we conclude that it cannot operate successfully while using only polynomial resources. We next prove Strong Church's Thesis for a class of analog computers described by well-behaved ordinary differential equations, which we can take as representing part of classical mechanics. We conclude with a comment on the recently discovered connection between spin glasses and combinatorial optimization.
Church's thesis meets the N-body problem
, 1999
Abstract

Cited by 23 (0 self)
THIS IS A REVISION-IN-PROGRESS! NOT QUITE FINAL YET! "Church's thesis" is at the foundation of computer science. It is pointed out that with any particular set of physical laws, Church's thesis need not merely be postulated; in fact, it may be decidable. Trying to do so is valuable. In Newton's laws of physics with point masses, we outline a proof that Church's thesis is false. But with certain more realistic laws of motion, incorporating some relativistic effects, the Extended Church's thesis is true. Along the way we prove a useful theorem: a wide class of ordinary differential equations may be integrated with "polynomial slowdown." Warning: we cannot give careful definitions and caveats in this abstract, and interpreting our results is difficult. Keywords: Newtonian N-body problem, Church's thesis, computability, numerical methods for ordinary differential equations.
A Posteriori Error Estimates for Variable Time-Step Discretizations of Nonlinear Evolution Equations
Abstract

Cited by 19 (2 self)
We study the backward Euler method with variable time-steps for abstract evolution equations in Hilbert spaces. Exploiting convexity of the underlying potential or the angle-bounded condition, thereby assuming no further regularity, we derive novel a posteriori estimates of the discretization error in terms of computable quantities related to the amount of energy dissipation or monotonicity residual. These estimators solely depend on the discrete solution and data and impose no constraints between consecutive time-steps. We also prove that they converge to zero with an optimal rate with respect to the regularity of the solution. We apply the abstract results to a number of concrete strongly nonlinear problems of parabolic type with degenerate or singular character.
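As a toy illustration of the abstract setup (under strong simplifying assumptions: a scalar gradient flow u' = -phi'(u) with phi(u) = u²/2, and a crude increment-based indicator standing in for the paper's dissipation-based estimators), one can compute the backward Euler solution on variable steps together with a quantity that depends only on the discrete solution and the data:

```python
import math

def backward_euler(u0, steps):
    """Backward Euler for the scalar gradient flow u' = -u (phi(u) = u**2/2).
    Each implicit step u_new = u - h*u_new solves to u_new = u/(1 + h)."""
    us, u = [u0], u0
    for h in steps:
        u = u / (1.0 + h)
        us.append(u)
    return us

def increment_indicator(us):
    """A computable quantity built solely from the discrete solution: the
    l2 norm of the solution increments.  This is a crude stand-in for the
    dissipation/monotonicity-residual estimators of the paper."""
    return math.sqrt(sum((b - a) ** 2 for a, b in zip(us, us[1:])))

coarse_us = backward_euler(1.0, [0.5, 0.5])   # two large steps to t = 1
fine_us = backward_euler(1.0, [0.25] * 4)     # four smaller steps to t = 1
```

Note that the step lists need not be uniform: the indicator is well defined for any sequence of variable time-steps, mirroring the paper's lack of constraints between consecutive steps.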
Consistency Techniques in Ordinary Differential Equations
, 2000
Abstract

Cited by 16 (1 self)
This paper takes a fresh look at the application of interval analysis to ordinary differential equations and studies how consistency techniques can help address the accuracy problems typically exhibited by these methods, while trying to preserve their efficiency. It proposes to generalize interval techniques into a two-step process: a forward process that computes an enclosure and a backward process that reduces this enclosure. Consistency techniques apply naturally to the backward (pruning) step but can also be applied to the forward phase. The paper describes the framework, studies the various steps in detail, proposes a number of novel techniques, and gives some preliminary experimental results to indicate the potential of this new research avenue.
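The two-phase idea can be sketched on a one-line example (our illustration, not the paper's actual operators: x' = -x, a hypothetical fixed padding constant in place of a real truncation-error bound, and Euler-step consistency used for pruning):

```python
def intersect(x, y):
    """Intersection of two intervals; assumed non-empty in this sketch."""
    lo, hi = max(x[0], y[0]), min(x[1], y[1])
    assert lo <= hi, "empty intersection"
    return (lo, hi)

def forward_phase(x0, h, pad=0.05):
    """Forward phase: a deliberately padded Euler enclosure for x' = -x.
    The fixed pad stands in for the truncation-error bound a validated
    solver would compute."""
    return (x0[0] * (1.0 - h) - pad, x0[1] * (1.0 - h) + pad)

def backward_prune(x0, x1, h):
    """Backward (pruning) phase: keep only the values of x1 consistent
    with some Euler step starting in x0, i.e. x1 intersected with
    (1 - h) * x0."""
    return intersect(x1, (x0[0] * (1.0 - h), x0[1] * (1.0 - h)))

x0, h = (1.0, 1.2), 0.1
x1 = forward_phase(x0, h)               # a valid but loose enclosure
x1_pruned = backward_prune(x0, x1, h)   # consistency tightens it
```

The pruning step can only shrink the forward enclosure, so correctness of the enclosure is preserved while accuracy improves.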
Adaptive Weak Approximation Of Stochastic Differential Equations
, 2001
Abstract

Cited by 15 (4 self)
Adaptive time-stepping methods based on the Monte Carlo Euler method for weak approximation of Itô stochastic differential equations are developed. The main result is new expansions of the computational error, with computable leading-order term in a posteriori form, based on stochastic flows and discrete dual backward problems. The expansions lead to efficient and accurate computation of error estimates. Adaptive algorithms for either stochastic time steps or deterministic time steps are described. Numerical examples illustrate when stochastic and deterministic adaptive time steps are superior to constant time steps and when adaptive stochastic steps are superior to adaptive deterministic steps. Stochastic time steps use Brownian bridges and require more work for a given number of time steps. Deterministic time steps may yield more time steps but require less work; for example, in the limit of vanishing error tolerance, the ratio of the computational error and its computable estimate tends to 1 with negligible additional work to determine the adaptive deterministic time steps.
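A minimal fixed-step version of the underlying scheme (plain Euler-Maruyama for geometric Brownian motion with standard-library random numbers; the paper's adaptive machinery and dual-problem error expansions are not reproduced here, and all parameter choices below are illustrative):

```python
import math
import random

def monte_carlo_euler(a, b, x0, t_end, n_steps, n_paths, seed=0):
    """Weak approximation of dX = a*X dt + b*X dW: estimate E[X(t_end)]
    with the Euler(-Maruyama) scheme averaged over independent paths."""
    rng = random.Random(seed)
    h = t_end / n_steps
    total = 0.0
    for _ in range(n_paths):
        x = x0
        for _ in range(n_steps):
            dw = rng.gauss(0.0, math.sqrt(h))   # Brownian increment
            x += a * x * h + b * x * dw
        total += x
    return total / n_paths

estimate = monte_carlo_euler(a=0.05, b=0.2, x0=1.0, t_end=1.0,
                             n_steps=50, n_paths=2000)
exact = math.exp(0.05)   # E[X(1)] = x0 * exp(a * t_end) for this SDE
```

The adaptive methods of the paper refine this baseline by choosing the steps (stochastically or deterministically) from a computable leading-order error expansion rather than fixing them in advance.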
TIME-STEPPING AND PRESERVING ORTHONORMALITY
, 1997
Abstract

Cited by 15 (0 self)
Certain applications produce initial value ODEs whose solutions, regarded as time-dependent matrices, preserve orthonormality. Such systems arise in the computation of Lyapunov exponents and the construction of smooth singular value decompositions of parametrized matrices. For some special problem classes, there exist time-stepping methods that automatically inherit the orthonormality preservation. However, a more widely applicable approach is to apply a standard integrator and regularly replace the approximate solution by an orthonormal matrix. Typically, the approximate solution is replaced by the factor Q from its QR decomposition (computed, for example, by the modified Gram-Schmidt method). However, the optimal replacement, the one that is closest in the Frobenius norm, is given by the orthonormal polar factor. Quadratically convergent iteration schemes can be used to compute this factor. In particular, there is a matrix multiplication based iteration that is ideally suited to modern computer architectures. Hence, we argue that perturbing towards the orthonormal polar factor is an attractive choice, and we consider performing a fixed number of iterations. Using the optimality property we show that the perturbations improve the departure from orthonormality without significantly degrading the finite-time global error bound for the ODE solution. Our analysis allows for adaptive time-stepping, where a local error control process is driven by a user-supplied tolerance. Finally, using a recent result of Sun, we show how the global error bound carries through to the case where the orthonormal QR factor is used instead of the orthonormal polar factor.
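The multiplication-only iteration toward the orthonormal polar factor is presumably of Newton-Schulz type; here is a small dependency-free sketch (our construction, assuming the iterate is already close to orthonormal, which is the regime after a few integrator steps):

```python
def matmul(A, B):
    """Dense matrix product for small list-of-lists matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def newton_schulz_step(X):
    """One multiplication-only step X <- X (3I - X^T X)/2, which converges
    quadratically to the orthonormal polar factor of X when X is already
    close to orthonormal."""
    n = len(X[0])
    G = matmul(transpose(X), X)
    M = [[(3.0 if i == j else 0.0) - G[i][j] for j in range(n)]
         for i in range(n)]
    return [[0.5 * v for v in row] for row in matmul(X, M)]

def departure(X):
    """Frobenius norm of X^T X - I: the departure from orthonormality."""
    G = matmul(transpose(X), X)
    return sum((G[i][j] - (1.0 if i == j else 0.0)) ** 2
               for i in range(len(G)) for j in range(len(G))) ** 0.5

X = [[1.01, 0.02], [0.0, 0.99]]   # a mildly non-orthonormal iterate
Y = newton_schulz_step(X)         # one fixed iteration, as the paper suggests
```

Because the step uses only matrix multiplications, it maps well onto modern architectures, and a fixed number of steps already reduces the departure from orthonormality substantially.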
A Survey of the Explicit Runge-Kutta Method
, 1995
Abstract

Cited by 14 (2 self)
Research in explicit Runge-Kutta methods is producing continual improvements to the original algorithms, and the aim of this survey is to relate the current state of the art. In drawing attention to recent advances, we hope to provide useful information for those who apply numerical methods. We describe recent work in the derivation of Runge-Kutta coefficients: "classical" general-purpose formulas, "special" formulas for high order and Hamiltonian problems, and "continuous" formulas for dense output. We also give a thorough review of implementation details. Modern techniques are described for the tasks of controlling the local error in a step-by-step integration, computing reliable estimates of the global error, detecting stiffness, and detecting and handling discontinuities and singularities. We also discuss some important software issues.
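Local error control with an embedded pair, one of the implementation topics such surveys cover, can be sketched with the simplest pair (Heun/Euler, orders 2 and 1; the controller constants below are conventional choices, not taken from this survey):

```python
def heun_euler_adaptive(f, t, y, t_end, h, tol):
    """Integrate y' = f(t, y) with the embedded Heun/Euler pair.  The gap
    between the 2nd-order (Heun) and 1st-order (Euler) updates estimates
    the local error and drives a standard step-size controller."""
    while t < t_end:
        h = min(h, t_end - t)              # do not step past t_end
        k1 = f(t, y)
        k2 = f(t + h, y + h * k1)
        y_heun = y + 0.5 * h * (k1 + k2)   # order 2 update
        err = abs(y_heun - (y + h * k1))   # vs the order 1 Euler update
        if err <= tol:                     # accept the step
            t, y = t + h, y_heun
        # shrink on rejection, grow cautiously after easy steps
        h *= 0.9 * (tol / max(err, 1e-14)) ** 0.5
    return y

# y' = -y, y(0) = 1: the result at t = 1 should approximate exp(-1)
y_end = heun_euler_adaptive(lambda t, y: -y, 0.0, 1.0, 1.0, h=0.1, tol=1e-5)
```

Production codes use higher-order pairs (e.g. orders 5 and 4) and more sophisticated controllers, but the accept/reject logic and the power-law step update follow this same pattern.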
The Dynamics of Runge-Kutta Methods
 Int. J. Bifurcation and Chaos
, 1992
Abstract

Cited by 13 (4 self)
In this paper, we attempt to elucidate the dynamics of the most commonly used family of numerical integration schemes, Runge-Kutta methods, by the application of the techniques of dynamical systems theory to the maps produced in the numerical analysis. QMW preprint DYN #919; Int. J. Bifurcation and Chaos, 2, 427-449, 1992.
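A standard example of the phenomenon studied here (our illustration, not taken from the paper): forward Euler applied to the logistic ODE x' = x(1 - x) produces a map conjugate to the logistic map, so a large step size yields spurious oscillations that the flow itself never exhibits.

```python
def euler_map(x, h):
    """The map produced by one forward Euler step for x' = x(1 - x)."""
    return x + h * x * (1.0 - x)

def iterate(x, h, n):
    for _ in range(n):
        x = euler_map(x, h)
    return x

# The ODE drives every x0 in (0, 1] monotonically to the fixed point 1.
small_h = iterate(0.5, h=0.5, n=1000)   # tracks the flow: converges to 1
large_h = iterate(0.5, h=2.5, n=1000)   # spurious oscillation, never settles
```

The fixed point x = 1 of the map has multiplier 1 - h, so the numerical dynamics agree with the ODE only for h < 2; beyond that, period-doubling sets in even though the underlying flow is perfectly tame.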
Stability in Possibilistic Linear Programming With Continuous Fuzzy Number Parameters
 Mathematical Sciences Dept. Tech. Rep. 307, The Johns Hopkins University
, 1984
Abstract

Cited by 12 (6 self)
We prove that possibilistic linear programming problems (introduced by Buckley in [2]) are well-posed, i.e. small changes of the membership function of the parameters may cause only a small deviation in the possibilistic distribution of the objective function.
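The well-posedness statement can be illustrated at the level of alpha-cuts (a sketch with triangular fuzzy coefficients and a fixed feasible point; this simplified setup is ours, not Buckley's full model):

```python
def tri_cut(a, b, c, alpha):
    """alpha-cut of the triangular fuzzy number (a, b, c): an interval."""
    return (a + alpha * (b - a), c - alpha * (c - b))

def objective_cut(c1_cut, c2_cut, x1, x2):
    """alpha-cut of the objective c1*x1 + c2*x2 for fixed x1, x2 >= 0,
    by interval arithmetic on the coefficient cuts."""
    return (c1_cut[0] * x1 + c2_cut[0] * x2,
            c1_cut[1] * x1 + c2_cut[1] * x2)

alpha, x1, x2 = 0.5, 2.0, 3.0
base = objective_cut(tri_cut(1, 2, 3, alpha), tri_cut(0, 1, 2, alpha), x1, x2)

# Perturb the first coefficient's membership function by a small shift eps:
# the objective's alpha-cut endpoints move by at most eps * x1.
eps = 0.01
pert = objective_cut(tri_cut(1 + eps, 2 + eps, 3 + eps, alpha),
                     tri_cut(0, 1, 2, alpha), x1, x2)
```

In this toy version, a small perturbation of a coefficient's membership function shifts every alpha-cut of the objective's possibility distribution by a proportionally small amount, which is the flavor of stability the abstract asserts.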