Results 1–10 of 140
Internal set theory: A new approach to nonstandard analysis
 Bull. Amer. Math. Soc.
, 1977
Abstract

Cited by 105 (0 self)
1. Internal set theory. We present here a new approach to Abraham Robinson's nonstandard analysis [10] with the aim of making these powerful methods readily available to the working mathematician. This approach to nonstandard analysis is based on a theory which we call internal set theory (IST). We start with axiomatic set theory, say ZFC (Zermelo-Fraenkel set theory with the axiom of choice [1]). In addition to the usual undefined binary predicate ∈ of set theory we adjoin a new undefined unary predicate standard. The axioms of IST are the usual axioms of ZFC plus three others, which we will state below. All theorems of conventional mathematics remain valid. No change in terminology is required. What is new in internal set theory is only an addition, not a change. We choose to call certain sets standard (and we recall that in ZFC every mathematical object, a real number, a function, etc., is a set), but the theorems of conventional mathematics apply to all sets, nonstandard as well as standard.
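The three additional axiom schemes referred to above are Idealization, Standardization, and Transfer; the excerpt does not state them, so a standard formulation is supplied here for orientation (not quoted from the paper; ∀^st abbreviates "for all standard", ∀^st fin "for all standard finite"):

```latex
% Transfer: for internal \varphi with standard parameters t_1,\dots,t_k,
\forall^{\mathrm{st}} t_1 \cdots \forall^{\mathrm{st}} t_k\,
  \bigl( \forall^{\mathrm{st}} x\, \varphi(x, t_1,\dots,t_k)
     \rightarrow \forall x\, \varphi(x, t_1,\dots,t_k) \bigr)

% Idealization: for internal \varphi,
\forall^{\mathrm{st\,fin}} z\, \exists y\, \forall x \in z\, \varphi(x, y)
  \;\leftrightarrow\; \exists y\, \forall^{\mathrm{st}} x\, \varphi(x, y)

% Standardization: for arbitrary \varphi,
\forall^{\mathrm{st}} x\, \exists^{\mathrm{st}} y\, \forall^{\mathrm{st}} z\,
  \bigl( z \in y \leftrightarrow ( z \in x \wedge \varphi(z) ) \bigr)
```

Transfer is what makes every conventional theorem apply to nonstandard sets as well as standard ones, as the abstract asserts.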
Appraising and Amending Theories: The Strategy of Lakatosian Defence and Two Principles That Warrant It
 Psychological Inquiry
, 1990
Abstract

Cited by 66 (15 self)
In social science, everything is somewhat correlated with everything (“crud factor”), so whether H0 is refuted depends solely on statistical power. In psychology, the directional counternull of interest, H*, is not equivalent to the substantive theory T, there being many plausible alternative explanations of a mere directional trend (weak use of significance tests). Testing against a predicted point value (the strong use of significance tests) can discorroborate T by refuting H*. If used thus to abandon T forthwith, it is too strong, not allowing for theoretical verisimilitude as distinguished from truth. Defense and amendment of an apparently falsified T are appropriate strategies only when T has accumulated a good track record (“money in the bank”) by making successful or near-miss predictions of low prior probability (Salmon’s “damn strange coincidences”). Two rough indexes are proposed for numerifying the track record, by considering jointly how intolerant (risky) and how close (accurate) its predictions are. For almost three quarters of a century, the received doctrine about appraising psychological theories has been to perform a statistical significance test. In the “soft” areas (clinical, counseling, developmental, personality, and social psychology),
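The “crud factor” point is easy to check numerically: if everything is slightly correlated with everything, whether H0 is rejected is decided entirely by sample size. A minimal sketch (the correlation 0.05 and the large-sample critical value 1.96 are illustrative choices, not values from the paper):

```python
import math

def t_stat(r, n):
    """t statistic for testing H0: rho = 0, given a sample correlation r
    from n pairs: t = r * sqrt(n - 2) / sqrt(1 - r^2)."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# A "crud" correlation of 0.05: substantively trivial, but not zero.
# The same effect is "significant" or not depending only on n.
for n in (100, 1_000, 10_000):
    t = t_stat(0.05, n)
    print(n, round(t, 2), "reject H0" if abs(t) > 1.96 else "retain H0")
```

With n = 100 the trivial correlation is retained as "no effect"; with n = 10,000 it is confidently rejected, which is exactly the abstract's point about power.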
Information-theoretic Limitations of Formal Systems
 JOURNAL OF THE ACM
, 1974
Abstract

Cited by 50 (7 self)
An attempt is made to apply information-theoretic computational complexity to metamathematics. The paper studies the number of bits of instructions that must be given to a computer for it to perform finite and infinite tasks, and also the amount of time that it takes the computer to perform these tasks. This is applied to measuring the difficulty of proving a given set of theorems, in terms of the number of bits of axioms that are assumed, and the size of the proofs needed to deduce the theorems from the axioms.
Modular Data Structure Verification
 EECS DEPARTMENT, MASSACHUSETTS INSTITUTE OF TECHNOLOGY
, 2007
Abstract

Cited by 44 (21 self)
This dissertation describes an approach for automatically verifying data structures, focusing on techniques for automatically proving formulas that arise in such verification. I have implemented this approach with my colleagues in a verification system called Jahob. Jahob verifies properties of Java programs with dynamically allocated data structures. Developers write Jahob specifications in classical higher-order logic (HOL); Jahob reduces the verification problem to deciding the validity of HOL formulas. I present a new method for proving HOL formulas by combining automated reasoning techniques. My method consists of 1) splitting formulas into individual HOL conjuncts, 2) soundly approximating each HOL conjunct with a formula in a more tractable fragment, and 3) proving the resulting approximation using a decision procedure or a theorem prover. I present three concrete logics; for each logic I show how to use it to approximate HOL formulas, and how to decide the validity of formulas in this logic. First, I present an approximation of HOL based on a translation to first-order logic, which enables the use of existing resolution-based theorem provers. Second, I present an approximation of HOL based on field constraint analysis, a new technique that enables
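The three-step method the abstract describes (split, approximate, dispatch) can be sketched as a pipeline. This is a hypothetical illustration only: formulas are plain strings, the "provers" are stubs with a tiny table of valid facts, and none of the names come from Jahob's actual implementation.

```python
def split_conjuncts(formula):
    """Step 1: split a top-level conjunction into individual conjuncts."""
    return [c.strip() for c in formula.split("&")]

def approximate(conjunct, fragment):
    """Step 2: soundly approximate a HOL conjunct in a more tractable
    fragment (e.g. first-order logic or field constraints). Stubbed as a
    tagged pair; a real translation may strengthen the formula."""
    return (fragment, conjunct)

def prove(approximated):
    """Step 3: hand the approximation to a decision procedure or prover.
    Stubbed with a hard-coded set of 'valid' formulas for illustration."""
    _fragment, conjunct = approximated
    return conjunct in {"x = x", "len(xs) >= 0"}

def verify(formula, fragments=("first-order", "field-constraints")):
    """The formula is verified iff every conjunct is proved in at least
    one fragment; otherwise report the first stuck conjunct."""
    for conjunct in split_conjuncts(formula):
        if not any(prove(approximate(conjunct, f)) for f in fragments):
            return ("failed on", conjunct)
    return ("verified",)

print(verify("x = x & len(xs) >= 0"))   # → ('verified',)
print(verify("x = x & y < x"))          # → ('failed on', 'y < x')
```

Because each approximation is sound (valid approximation implies valid original conjunct), a "verified" answer is trustworthy even though each fragment prover only handles part of HOL.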
Higher Order Logic
 In Handbook of Logic in Artificial Intelligence and Logic Programming
, 1994
Abstract

Cited by 24 (0 self)
Contents:
1 Introduction
2 The expressive power of second-order logic
  2.1 The language of second-order logic
  2.2 Expressing size
  2.3 Defining data types
  2.4 Describing processes
  2.5 Expressing convergence using second-order validity
  2.6 Truth definitions: the analytical hierarchy
  2.7 Inductive definitions
3 Canonical semantics of higher-order logic
  3.1 Tarskian semantics of second-order logic
  3.2 Function and re
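As an illustration of what "Expressing size" covers, here is a standard second-order sentence (an example supplied for orientation, not taken from the chapter) asserting that the domain is Dedekind-infinite, i.e. admits an injective but non-surjective self-map:

```latex
\exists f\, \bigl[\, \forall x\, \forall y\, ( f(x) = f(y) \rightarrow x = y )
  \;\wedge\; \exists z\, \forall x\, ( f(x) \neq z ) \,\bigr]
```

No first-order sentence can pin down finiteness or infinity of the domain (by compactness), which is exactly the extra expressive power that quantification over functions provides.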
Program logic and equivalence in the presence of garbage collection
, 2001
Abstract

Cited by 21 (0 self)
Abstract. It is generally thought that reasoning about programs in memory-safe, garbage-collected languages is much easier than in languages where the programmer has more explicit control over memory. Paradoxically, existing program logics are based on a low-level view of storage that is sensitive to the presence or absence of unreachable cells, and Reynolds has pointed out that the Hoare triples derivable in these logics are even incompatible with garbage collection. We present a study of a small language whose operational semantics includes a rule for reclaiming garbage. Our main results include an analysis of propositions that are garbage-insensitive, and full abstraction results connecting partial and total correctness to two natural notions of observational equivalence between programs.

1 Introduction

Garbage collection is an essential method used to reclaim heap-allocated objects whose lifetime cannot be easily predicted at compile time. It is most strongly associated with high-level languages such as Lisp, ML and Java, where heap allocation is the norm. It can also be used in a lower-level language like C, coexisting with explicit deallocation primitives [10]. In any case, garbage collection relieves the programmer of the burden of explicitly managing dynamically allocated memory. This generally leads to simpler programs, and removes or lessens errors that result from incorrect attempts to access disposed memory, errors that are often difficult to diagnose or even reproduce.
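The "unreachable cells" the abstract turns on are exactly what a tracing collector computes: a cell is garbage precisely when no chain of pointers from the roots reaches it. A toy sketch of that reachability computation (the heap representation here is invented for illustration):

```python
def reachable(heap, roots):
    """Mark phase of a tracing collector: heap maps each cell id to the
    cell ids it points to; returns the set of cells reachable from roots."""
    seen = set()
    stack = list(roots)
    while stack:
        cell = stack.pop()
        if cell not in seen:
            seen.add(cell)
            stack.extend(heap.get(cell, ()))
    return seen

heap = {"a": ["b"], "b": [], "c": ["a"]}   # "c" points into live data...
live = reachable(heap, {"a"})
garbage = set(heap) - live                 # ...but is itself unreachable
print(sorted(garbage))                     # → ['c']
```

A "garbage-insensitive" proposition, in the abstract's sense, is one whose truth does not change when cells like "c" are reclaimed.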
Undecidability and incompleteness in classical mechanics
 Internat. J. Theoret. Physics
, 1991
Abstract

Cited by 21 (4 self)
We describe Richardson's functor from the Diophantine equations and Diophantine problems into elementary real-valued functions and problems. We then derive a general undecidability and incompleteness result for elementary functions within ZFC set theory, and apply it to some problems in Hamiltonian mechanics and dynamical systems theory. Our examples deal with the algorithmic impossibility of deciding whether a given Hamiltonian can be integrated by quadratures and related questions; they lead to a version of Gödel's incompleteness theorem within Hamiltonian mechanics. A similar application to the unsolvability of the decision problem for chaotic dynamical systems is also obtained.
A new applied approach for executing computations with infinite and infinitesimal quantities
 Informatica
, 2008
Abstract

Cited by 19 (7 self)
A new computational methodology for executing calculations with infinite and infinitesimal quantities is described in this paper. It is based on the principle ‘The part is less than the whole’, introduced by the Ancient Greeks and applied here to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers with a finite number of symbols as particular cases of a unique framework. The new methodology has allowed us to introduce the Infinity Computer working with such numbers (its simulator has already been implemented). Examples dealing with divergent series, infinite sets, and limits are given.
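The claim that finite, infinite, and infinitesimal numbers can all be written with finitely many symbols can be mimicked with a toy representation: a number is a finite sum of terms c·G^p, where G is the infinite unit and the exponents p are finite (possibly negative). This sketch only illustrates the idea of such a positional numeral system; it is not the actual Infinity Computer the paper introduces.

```python
def add(x, y):
    """Add two numbers represented as {exponent: coefficient} dicts,
    i.e. finite sums of c * G**p; G**0 terms are the finite part and
    negative exponents are infinitesimal parts."""
    out = dict(x)
    for p, c in y.items():
        out[p] = out.get(p, 0) + c
        if out[p] == 0:
            del out[p]       # drop vanished terms to keep the sum finite
    return out

# (3*G + 2) + (5 - G**-1): an infinite number plus a finite number
# minus an infinitesimal, all written with finitely many symbols.
a = {1: 3, 0: 2}
b = {0: 5, -1: -1}
print(add(a, b))   # → {1: 3, 0: 7, -1: -1}
```

Addition works componentwise per exponent, just as digits combine per position in ordinary positional notation.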
Cardinal arithmetic for skeptics
 Bull. Amer. Math. Soc. New Series
, 1992
Abstract

Cited by 14 (4 self)
When modern set theory is applied to conventional mathematical problems, it has a disconcerting tendency to produce independence results rather than theorems in the usual sense. The resulting preoccupation with “consistency” rather than “truth” may be felt to give the subject an air of unreality. Even elementary questions about the basic arithmetical operation of exponentiation in the context of infinite cardinalities, like the value of 2^ℵ0, cannot be settled on the basis of the usual axioms of set theory (ZFC). Although much can be said in favor of such independence results, rather than undertaking to challenge such prejudices, we have a more modest goal; we wish to point out an area of contemporary set theory in which theorems are abundant, although the conventional wisdom views the subject as dominated by independence results, namely, cardinal arithmetic. To see the subject in this light it will be necessary to carry out a substantial shift in our point of view. To make a very rough analogy with another generalization of ordinary arithmetic, the natural response to the loss of unique factorization caused