Results 1–10 of 84
Interactive Foundations of Computing, 1997
Abstract

Cited by 48 (5 self)
The claim that interactive systems have richer behavior than algorithms is surprisingly easy to prove: Turing machines cannot model interaction machines because interaction is not expressible by a finite initial input string. Interaction machines extend the Chomsky hierarchy, are modeled by interaction grammars, and precisely capture fuzzy concepts like open systems and empirical computer science. Part I of this paper examines extensions to interactive models for algorithms, machines, grammars, and semantics, while Part II considers the expressiveness of different forms of interaction. Interactive identity machines are already more powerful than Turing machines, while non-interactive parallelism and distribution are algorithmic. The extension of Turing to interaction machines parallels that of the lambda to the pi calculus, but the ability to model shared state allows interaction machines to express more powerful behavior than calculi. Asynchronous and non-serializable interaction ar...
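The contrast the abstract draws between one-shot algorithms and interaction machines can be sketched informally (this is an illustrative analogy, not Wegner's formal model):

```python
# Hedged sketch: an algorithm consumes one finite input string fixed in
# advance, while an interaction machine is a persistent transducer over
# an unbounded stream, where each output can influence the environment's
# next input -- so no finite initial string captures its behavior.
def algorithm(finite_input):
    """One-shot: the entire input is given before execution starts."""
    return sum(finite_input)

def interaction_machine():
    """Runs indefinitely; each output depends on the whole history."""
    state = 0
    while True:
        msg = yield state      # emit current state, wait for next input
        state += msg           # history accumulates across exchanges

m = interaction_machine()
next(m)                        # prime the coroutine
print(m.send(3))               # 3
print(m.send(4))               # 7 -- depends on the earlier exchange
```

The generator carries state between exchanges, which is exactly what a single finite input string cannot express.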
Partial realizations of Hilbert’s program
 Journal of Symbolic Logic, 1988
Abstract

Cited by 38 (8 self)
JSTOR is a not-for-profit service that helps scholars, researchers, and students discover, use, and build upon a wide range of content in a trusted digital archive. We use information technology and tools to increase productivity and facilitate new forms of scholarship. For more information about JSTOR, please contact support@jstor.org. Association for Symbolic Logic is collaborating with JSTOR to digitize, preserve and extend access to The
Modular Data Structure Verification
 EECS Department, Massachusetts Institute of Technology, 2007
Abstract

Cited by 36 (21 self)
This dissertation describes an approach for automatically verifying data structures, focusing on techniques for automatically proving formulas that arise in such verification. I have implemented this approach with my colleagues in a verification system called Jahob. Jahob verifies properties of Java programs with dynamically allocated data structures. Developers write Jahob specifications in classical higher-order logic (HOL); Jahob reduces the verification problem to deciding the validity of HOL formulas. I present a new method for proving HOL formulas by combining automated reasoning techniques. My method consists of 1) splitting formulas into individual HOL conjuncts, 2) soundly approximating each HOL conjunct with a formula in a more tractable fragment, and 3) proving the resulting approximation using a decision procedure or a theorem prover. I present three concrete logics; for each logic I show how to use it to approximate HOL formulas, and how to decide the validity of formulas in this logic. First, I present an approximation of HOL based on a translation to first-order logic, which enables the use of existing resolution-based theorem provers. Second, I present an approximation of HOL based on field constraint analysis, a new technique that enables
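The split-approximate-dispatch pipeline described in the abstract can be sketched as follows (a toy illustration with hypothetical names, not Jahob's actual API; the string "backends" stand in for real provers and decision procedures):

```python
# Sketch of the method: split a goal into its conjuncts, then discharge
# each conjunct with the first back-end that succeeds. A real system
# would first soundly approximate each conjunct in a tractable fragment.
def split_conjuncts(formula):
    # formulas are nested tuples: ('and', f1, f2) or an atom string
    if isinstance(formula, tuple) and formula[0] == 'and':
        return split_conjuncts(formula[1]) + split_conjuncts(formula[2])
    return [formula]

def prove(formula, backends):
    # the goal is proved iff every conjunct is proved by some backend
    return all(any(backend(c) for backend in backends)
               for c in split_conjuncts(formula))

backends = [lambda f: f.startswith('p'),   # stands in for an FOL prover
            lambda f: f.startswith('q')]   # stands in for a decision procedure
print(prove(('and', 'p1', ('and', 'q2', 'p3')), backends))  # True
```

Splitting is what makes the combination sound and modular: each conjunct can go to whichever logic approximates it best.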
Computability and Evolutionary Complexity: Markets as Complex Adaptive Systems
 Economic Journal 115 (504) (2005), F159–F192. Available online at SSRN: http://ssrn.com/abstract=745578
Abstract

Cited by 26 (9 self)
Few will argue that the epiphenomena of biological systems and socioeconomic systems are anything but complex. The purpose of this Feature is to examine critically and contribute to the burgeoning multidisciplinary literature on markets as complex adaptive systems (CAS). The new sciences of complexity, the principles of self-organisation and emergence, along with the methods of evolutionary computation and artificially intelligent agent models, have been developed in a multidisciplinary fashion. The cognoscenti here consider that complex systems, whether natural or artificial, physical, biological or socioeconomic, can be characterised by a unifying set of principles. Further, it is held that these principles mark a paradigm shift from earlier ways of viewing such phenomena. The articles in this Feature aim to provide detailed insights and examples of both the challenges and the prospects for economics that are offered by the new methods of the complexity sciences. The applicability or not of the optimisation framework of conventional economics depends on the domain of the problem, and in particular the modern theories behind non-computability are outlined to explain why adaptive or emergent methods of computation and agent-based
The Garden of Knowledge as a Knowledge Manifold - A Conceptual Framework for Computer Supported Subjective Education
 CID-17, TRITA-NA-D9708, Department of Numerical Analysis and Computing Science, 1997
Abstract

Cited by 22 (14 self)
This work presents a unified pattern-based epistemological framework, called a Knowledge Manifold, for the description and extraction of knowledge from information. Within this framework it also presents the metaphor of the Garden Of Knowledge as a constructive example. Any type of KM is defined in terms of its objective calibration protocols: procedures that are implemented on top of the participating subjective knowledge patches. They are the procedures of agreement and obedience that characterize the coherence of any type of interaction, and which are used here in order to formalize the concept of participator consciousness in terms of the inverse-direct limit duality of Category Theory.
Proof checking the RSA public key encryption algorithm
 American Mathematical Monthly, 1984
Abstract

Cited by 22 (9 self)
The authors describe the use of a mechanical theorem-prover to check the published proof of the invertibility of the public key encryption algorithm of Rivest, Shamir and Adleman: (M^e mod n)^d mod n = M, provided n is the product of two distinct primes p and q, M < n, and e and d are multiplicative inverses in the ring of integers modulo (p-1)*(q-1). Among the lemmas proved mechanically and used in the main proof are many familiar theorems of number theory, including Fermat's theorem: M^(p-1) mod p = 1, when p does not divide M. The axioms underlying the proofs are those of Peano arithmetic and ordered pairs. "The development of mathematics toward greater precision has led, as is well known, to the formalization of large tracts of it, so that one can prove any theorem using nothing but a few mechanical rules." Gödel [11] "But formalized mathematics cannot in practice be written down in full, and therefore we must have confidence in what might be called the common sense of the mathematician... We shall therefore very quickly abandon formalized mathematics..." Bourbaki [1]
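The invertibility statement and Fermat's theorem can both be checked numerically with the standard small textbook parameters (illustrative only; real RSA moduli are hundreds of digits long):

```python
# Numeric check of (M^e mod n)^d mod n == M with small primes.
p, q = 61, 53
n = p * q                        # 3233
phi = (p - 1) * (q - 1)          # 3120
e = 17                           # public exponent, coprime to phi
d = pow(e, -1, phi)              # 2753, since 17 * 2753 = 1 (mod 3120)

# Every message M < n round-trips through encryption and decryption:
assert all(pow(pow(M, e, n), d, n) == M for M in range(n))

# Fermat's theorem, one of the lemmas proved mechanically in the paper:
# M^(p-1) mod p == 1 whenever the prime p does not divide M.
assert all(pow(M, p - 1, p) == 1 for M in range(1, p))
print(d)
```

Three-argument `pow` performs modular exponentiation, and `pow(e, -1, phi)` (Python 3.8+) computes the modular inverse, so the check runs in well under a second.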
Infinitary Self Reference in Learning Theory, 1994
Abstract

Cited by 18 (6 self)
Kleene's Second Recursion Theorem provides a means for transforming any program p into a program e(p) which first creates a quiescent self copy and then runs p on that self copy together with any externally given input. e(p), in effect, has complete (low level) self knowledge, and p represents how e(p) uses its self knowledge (and its knowledge of the external world). Infinite regress is not required since e(p) creates its self copy outside of itself. One mechanism to achieve this creation is a self-replication trick isomorphic to that employed by single-celled organisms. Another is for e(p) to look in a mirror to see which program it is. In 1974 the author published an infinitary generalization of Kleene's theorem which he called the Operator Recursion Theorem. It provides a means for obtaining an (algorithmically) growing collection of programs which, in effect, share a common (also growing) mirror from which they can obtain complete low level models of themselves and the other prog...
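The "quiescent self copy" created without infinite regress can be illustrated with the familiar quine trick (a sketch of the general idea, not the paper's formal construction of e(p)):

```python
# Quine trick: the program's text contains a template of itself, and
# formatting the template with its own repr yields a complete external
# copy of the program -- no infinite regress is needed, because the
# copy is built outside the program rather than nested inside it.
template = 'template = %r\nself_copy = template %% template'
self_copy = template % template

# The copy is a faithful fixed point: executing it regenerates itself.
ns = {}
exec(self_copy, ns)
assert ns['self_copy'] == self_copy
print(self_copy)
```

A program p handed `self_copy` along with external input would then have the complete low-level self knowledge the abstract describes.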
A Strategy for Constructing New Predicates in First Order Logic
 In Proceedings of the Third European Working Session on Learning, 1988
Abstract

Cited by 16 (6 self)
There is increasing interest within the Machine Learning community in systems which automatically reformulate their problem representation by defining and constructing new predicates. A previous paper discussed such a system, called CIGOL, and gave a derivation for the mechanism of inverting individual steps in first-order resolution proofs. In this paper we describe an enhancement to CIGOL's learning strategy which strongly constrains the formation of new concepts and hypotheses. The new strategy is based on results from algorithmic information theory. Using these results it is possible to compute the probability that the simplifications produced by adopting new concepts or hypotheses are not based on chance regularities within the examples. This can be derived from the amount of information compression produced by replacing the examples with the hypothesised concepts. CIGOL's improved performance, based on an approximation of this strategy, is demonstrated by way of the automatic "di...
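The compression-based significance idea can be sketched as follows (an assumed simplification of the algorithmic-information argument, with hypothetical numbers, not CIGOL's exact computation):

```python
# If adopting a new predicate shrinks the encoding of the examples by
# k bits, a standard coding argument bounds the probability that the
# saving arises from chance regularities in the examples by 2**-k.
def chance_probability_bound(bits_without, bits_with):
    compression = bits_without - bits_with      # k bits saved
    return 2.0 ** (-compression)

# e.g. a hypothesised concept that saves 20 bits of description length
# is overwhelmingly unlikely to reflect mere coincidence:
print(chance_probability_bound(120.0, 100.0))   # 2**-20, about 9.5e-07
```

Under this view, a new predicate is only kept when the compression it buys makes chance an implausible explanation.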
A framework for comparing conceptual models
 Workshop on Enterprise Modelling and Information Systems Architectures (EMISA 2005), 2005
Abstract

Cited by 15 (8 self)
Conceptual models are a widely used means for documenting software systems as well as for describing organisational structures. The trend towards integrated and flexible information systems has encouraged research on the comparison of conceptual models. Current approaches to the identification of similarities between conceptual models often adopt an automation perspective only. In this paper we will present strong arguments that a fully automatic model comparison process is not feasible. Furthermore, we will show that only a semi-automatic process can perform the comparison of conceptual models at the semantic level. On this theoretical basis, we will develop a framework which identifies all necessary and sufficient components for comparing conceptual models. We will show that this framework includes all the requirements that a semi-automatic model comparison process must meet.