Results 1 - 6 of 6
The Church-Turing Thesis over Arbitrary Domains
, 2008
Abstract

Cited by 12 (9 self)
The Church-Turing Thesis has been the subject of many variations and interpretations over the years. Specifically, there are versions that refer only to functions over the natural numbers (as Church and Kleene did), while others refer to functions over arbitrary domains (as Turing intended). Our purpose is to formalize and analyze the thesis when referring to functions over arbitrary domains. First, we must handle the issue of domain representation. We show that, prima facie, the thesis is not well defined for arbitrary domains, since the choice of representation of the domain might have a nontrivial influence. We overcome this problem in two steps: (1) phrasing the thesis for entire computational models, rather than for a single function; and (2) proving a “completeness” property of the recursive functions and Turing machines with respect to domain representations. In the second part, we propose an axiomatization of an “effective model of computation” over an arbitrary countable domain. This axiomatization is based on Gurevich’s postulates for sequential algorithms. A proof is provided showing that all models satisfying these axioms, regardless of underlying data structure, are of equivalent computational power to, or weaker than, Turing machines.
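The representation issue the abstract raises can be made concrete: before a function over a countable domain (say, binary strings) can be computed by a model that works over the naturals, the domain must first be encoded. A minimal sketch, assuming a standard bijective base-2 encoding (the function names here are illustrative, not from the paper):

```python
# Computing a function over an arbitrary countable domain (binary strings)
# via a chosen representation of that domain as natural numbers.

def encode(s: str) -> int:
    """Bijectively encode a binary string as a natural number:
    prepend '1', read as binary, subtract 1, so that
    '' -> 0, '0' -> 1, '1' -> 2, '00' -> 3, ..."""
    return int("1" + s, 2) - 1

def decode(n: int) -> str:
    """Inverse of encode: strip the '0b1' prefix of bin(n + 1)."""
    return bin(n + 1)[3:]

def reverse_via_encoding(n: int) -> int:
    """A function over the string domain (string reversal), computed
    entirely through the numeric representation."""
    return encode(decode(n)[::-1])

print(decode(reverse_via_encoding(encode("011"))))  # -> "110"
```

Different injective encodings of the same domain can, prima facie, change which functions are computable, which is exactly why the abstract phrases the thesis for entire computational models rather than single functions.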
Comparing computational power
 Logic Journal of the IGPL
Abstract

Cited by 6 (4 self)
All models are wrong but some are useful. —George E. P. Box, “Robustness in the strategy of scientific model building” (1979) It is common practice to compare the computational power of different models of computation. For example, the recursive functions are strictly more powerful than the primitive recursive functions, because the latter are a proper subset of the former (which includes Ackermann’s function). Side-by-side with this “containment” method of measuring power, it is standard to use an approach based on “simulation”. For example, one says that the (untyped) lambda calculus is as powerful—computationally speaking—as the partial recursive functions, because the lambda calculus can simulate all partial recursive functions by encoding the natural numbers as Church numerals. The problem is that unbridled use of these two ways of comparing power allows one to show that some computational models are strictly stronger than themselves! We argue that a better definition is that model A is strictly stronger than B if A can simulate B via some encoding, whereas B cannot simulate A under any encoding. We then show that the recursive functions are strictly stronger in this sense than the primitive recursive. We also prove that the recursive functions, partial recursive functions, and Turing machines are “complete”, in the sense that no injective encoding can make them equivalent to any “hypercomputational” model.
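Both comparison methods the abstract mentions can be illustrated in a few lines; this is an informal sketch, not code from the paper. Ackermann's function is the standard witness that the containment of the primitive recursive functions in the recursive functions is proper, and Church numerals are the encoding by which the lambda calculus simulates arithmetic over the naturals:

```python
# Containment: Ackermann's function is total and definable by general
# recursion, but grows faster than any primitive recursive function.
def ackermann(m: int, n: int) -> int:
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# Even small arguments show the explosive growth:
print([ackermann(m, 2) for m in range(4)])  # [3, 4, 7, 29]

# Simulation: Church numerals encode the natural number n as the
# higher-order function that applies its argument n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
to_int = lambda n: n(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three))  # 3
```

Only small arguments are practical here: the naive recursion already exhausts Python's stack for inputs much beyond `ackermann(3, n)`.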
Why Church's thesis still holds: Some notes on Peter Wegner's tracts on interaction and computability
 Computer Journal
, 1998
Abstract

Cited by 4 (0 self)
Peter Wegner’s definition of computability differs markedly from the classical notion as established by Church, Kleene, Markov, Post, Turing et al. Wegner identifies interaction as the main feature of today’s systems which is lacking in the classical treatment of computability. We compare the different approaches and discuss whether Wegner’s criticism is appropriate. Taking into account the major arguments from the literature, we show that Church’s thesis still holds.
On computing minimal and perfect model membership
 Data and Knowledge Engineering
, 1996
Abstract

Cited by 3 (3 self)
The computational complexity of a number of problems relating to minimal models of non-Horn deductive databases is considered. In particular, the problem of determining minimal model membership is shown to be NP-complete for non-recursive propositional databases. The structure of minimal models is also examined using the notion of a cyclic tree, and methods of determining minimal model membership, minimality of models and compiling the GCWA are presented. The handling of negative premises is also considered using perfect model semantics, and methods for computing perfect model membership are presented.
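The minimal-model notion at the heart of this abstract is easy to state operationally: a model is minimal when no proper subset of its true atoms also satisfies the database. A brute-force sketch for small propositional (possibly non-Horn) databases — illustrative only, not the paper's method; clauses are sets of literals, with negative literals prefixed by '-':

```python
from itertools import combinations

def satisfies(model: frozenset, clauses) -> bool:
    """A model (set of true atoms) satisfies a clause if it makes
    at least one of its literals true."""
    return all(
        any((lit[1:] not in model) if lit.startswith("-") else (lit in model)
            for lit in clause)
        for clause in clauses
    )

def minimal_models(atoms, clauses):
    """Enumerate all models, then keep those with no satisfying
    proper subset (minimality under set inclusion)."""
    models = [frozenset(c) for k in range(len(atoms) + 1)
              for c in combinations(sorted(atoms), k)
              if satisfies(frozenset(c), clauses)]
    return [m for m in models if not any(n < m for n in models)]

# The non-Horn clause {a, b} has TWO minimal models, {a} and {b} --
# the source of the extra complexity compared with Horn databases,
# which have a unique minimal model.
print(minimal_models({"a", "b"}, [{"a", "b"}]))
```

This enumeration is exponential in the number of atoms, which is consistent with the NP-completeness result the abstract reports for minimal model membership.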
Validity of models and classes of models in semantic composability
 In: Proceedings of the Fall Simulation Interoperability Workshop
, 2003
Abstract

Cited by 3 (0 self)
Composability is the capability to select and assemble simulation components in various combinations into simulation systems. The defining characteristic of composability is the ability to combine and recombine components. Composability exists in two forms, syntactic and semantic (also known as engineering and modeling). Syntactic composability is the implementation of components so that they can be combined. Semantic composability is the question of whether the models embodied by the composed components can be meaningfully composed. A theory of semantic composability has been developed that examines the semantic composability of models using formal definitions and reasoning. In this paper results of semantic composability theory concerned with validity are presented. After briefly restating formal definitions of model and simulation, labeled transition systems are defined and introduced as models of the computation of models and compositions. Bisimulation, which is a general relation between the states of labeled transition systems, is specialized with the addition of a validity metric, and shown to serve as a formal definition of validity. The power of different validity metrics to represent application-specific validity is explained. Classes of models are defined and compared with the models used in simulation. Certain classes of models and validity metrics for which validity is (or is not) preserved under composition are defined and their validity (or lack thereof) under composition is proven.
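The labeled-transition-system and bisimulation machinery the abstract builds on can be sketched concretely. A hedged, minimal example — this is the textbook greatest-fixpoint check for plain bisimilarity, without the paper's validity-metric specialization; all state and label names are illustrative:

```python
def bisimilar(states, labels, step, s, t):
    """step(state, label) -> set of successor states.
    Start from the full relation on states and repeatedly discard pairs
    that fail the back-and-forth condition, until a fixpoint is reached
    (the greatest bisimulation)."""
    rel = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(rel):
            ok = all(
                all(any((p2, q2) in rel for q2 in step(q, a)) for p2 in step(p, a))
                and
                all(any((p2, q2) in rel for p2 in step(p, a)) for q2 in step(q, a))
                for a in labels
            )
            if not ok:
                rel.discard((p, q))
                changed = True
    return (s, t) in rel

# Two components with the same observable behavior: coin, then coffee, repeat.
trans = {
    ("p0", "coin"): {"p1"}, ("p1", "coffee"): {"p0"},
    ("q0", "coin"): {"q1"}, ("q1", "coffee"): {"q0"},
}
step = lambda state, a: trans.get((state, a), set())

print(bisimilar({"p0", "p1", "q0", "q1"}, {"coin", "coffee"}, step, "p0", "q0"))  # True
```

In the paper's setting this exact-match relation is relaxed: a validity metric scores how closely matched states must agree, yielding a formal, application-specific notion of valid composition.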