Results 1 – 10 of 82
Partial Functions in ACL2
 Journal of Automated Reasoning
Abstract

Cited by 31 (7 self)
We describe a macro for introducing "partial functions" into ACL2, i.e., functions not defined everywhere. The function "definitions" are actually admitted via the encapsulation principle. We discuss the basic issues surrounding partial functions in ACL2 and illustrate theorems that can be proved about such functions.
The Structure of Complete Degrees
, 1990
Abstract

Cited by 29 (3 self)
This paper surveys investigations into how strong these commonalities are. More concretely, we are concerned with: What do NP-complete sets look like? To what extent are the properties of particular NP-complete sets, e.g., SAT, shared by all NP-complete sets? If there are structural differences between NP-complete sets, what are they, and what explains the differences? We make these questions, and the analogous questions for other complexity classes, more precise below. We first need to formalize NP-completeness. There are a number of competing definitions of NP-completeness. (See [Har78a, p. 7] for a discussion.) The most common, and the one we use, is based on the notion of m-reduction, also known as polynomial-time many-one reduction and Karp reduction. A set A is m-reducible to B if and only if there is a (total) polynomial-time computable function f such that for all x, x ∈ A ⟺ f(x) ∈ B. (1)
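As a side note to the definition quoted in the abstract above, a toy many-one reduction can make the equivalence concrete. The sets and the reduction function below are hypothetical illustrations chosen for simplicity, not examples from the paper:

```python
# Toy illustration of an m-reduction (Karp reduction): A is m-reducible
# to B iff there is a total polynomial-time computable f such that
# for all x,  x in A  <=>  f(x) in B.
# Hypothetical example sets (not from the paper):

def in_A(x: int) -> bool:
    """A = the even natural numbers."""
    return x % 2 == 0

def in_B(x: int) -> bool:
    """B = the multiples of 4."""
    return x % 4 == 0

def f(x: int) -> int:
    """Total, polynomial-time reduction: x is even iff 2*x is divisible by 4."""
    return 2 * x

# Check the defining equivalence on a range of inputs.
assert all(in_A(x) == in_B(f(x)) for x in range(100))
```

The same shape of argument, with much more elaborate reductions, underlies the structural comparisons between NP-complete sets that the survey discusses.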
An analog characterization of the Grzegorczyk hierarchy
 Journal of Complexity
, 2002
Abstract

Cited by 29 (15 self)
We study a restricted version of Shannon's General . . .
Decidability and undecidability results for planning with numerical state variables
 Proceedings of the Sixth International Conference on Artificial Intelligence Planning and Scheduling
, 2002
Abstract

Cited by 28 (1 self)
These days, propositional planning can be considered a quite well-understood problem. Good algorithms are known that can solve a wealth of very different and sometimes challenging planning tasks, and theoretical computational properties of both general STRIPS-style planning and the best-known benchmark problems have been established. However, propositional planning has a major drawback: The formalism is too weak to allow for the easy encoding of many genuinely interesting planning problems, specifically those involving numbers. A recent effort to enhance the PDDL planning language to cope with (among other additions) numerical state variables, to be used at the third international planning competition, has increased interest in these issues. In this contribution, we analyze "STRIPS with numbers" from a theoretical point of view. Specifically, we show that the introduction of numerical state variables makes the planning problem undecidable in the general case and in many restrictions thereof, and we identify special cases for which we can provide decidability results.
Computability and Evolutionary Complexity: Markets as Complex Adaptive Systems
 CAS). Economic Journal 115 (504) (2005), F159–F192. Available online at SSRN: http://ssrn.com/abstract=745578
Abstract

Cited by 26 (9 self)
Few will argue that the epiphenomena of biological systems and socioeconomic systems are anything but complex. The purpose of this Feature is to examine critically and contribute to the burgeoning multidisciplinary literature on markets as complex adaptive systems (CAS). The new sciences of complexity, the principles of self-organisation and emergence, along with the methods of evolutionary computation and artificially intelligent agent models, have been developed in a multidisciplinary fashion. The cognoscenti here consider that complex systems, whether natural or artificial, physical, biological or socioeconomic, can be characterised by a unifying set of principles. Further, it is held that these principles mark a paradigm shift from earlier ways of viewing such phenomena. The articles in this Feature aim to provide detailed insights and examples of both the challenges and the prospects for economics that are offered by the new methods of the complexity sciences. The applicability or not of the optimisation framework of conventional economics depends on the domain of the problem, and in particular the modern theories behind non-computability are outlined to explain why adaptive or emergent methods of computation and agent-based ...
Lowness properties and approximations of the jump
 Proceedings of the Twelfth Workshop of Logic, Language, Information and Computation (WoLLIC 2005). Electronic Lecture Notes in Theoretical Computer Science 143
, 2006
Arbitrary Precision Real Arithmetic: Design and Algorithms
, 1996
Abstract

Cited by 19 (0 self)
... this article the second representation mentioned above. We first recall the main properties of computable real numbers. We deduce from one definition, among the three definitions of this notion, a representation of these numbers as a sequence of finite B-adic numbers, and then we describe algorithms for rational operations and transcendental functions for this representation. Finally we describe briefly the prototype written in Caml.
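The representation described in this abstract can be sketched in a few lines: a computable real is given by a function that, for each n, returns a rational within B^-n of the true value. The base, the example number (√2), and the function name below are illustrative assumptions, not the paper's actual design:

```python
from fractions import Fraction
from math import isqrt

# Minimal sketch (not the paper's implementation) of a computable real as a
# sequence of finite B-adic approximations: the n-th approximation is a
# rational k / B**n within B**-n of the real number being represented.

B = 10  # hypothetical choice of base

def sqrt2_approx(n: int) -> Fraction:
    """n-th B-adic approximation of sqrt(2): floor(sqrt(2) * B**n) / B**n."""
    scale = B ** n
    k = isqrt(2 * scale * scale)   # exact integer arithmetic, no rounding error
    return Fraction(k, scale)      # |k/scale - sqrt(2)| < B**-n

# Operations act approximation-wise; e.g. squaring the n-th approximation
# converges to 2 as n grows.
approx = sqrt2_approx(30)
assert abs(approx * approx - 2) < Fraction(1, B ** 29)
```

Rational operations and transcendental functions on such representations then reduce to error-bound bookkeeping: to output an approximation at precision n, request the arguments at a precision sufficient for the result's error to stay below B^-n.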
Computation and Hypercomputation
 MINDS AND MACHINES
, 2003
Abstract

Cited by 15 (4 self)
Does Nature permit the implementation of behaviours that cannot be simulated computationally? We consider the meaning of physical computationality in some detail, and present arguments in favour of physical hypercomputation: for example, modern scientific method does not allow the specification of any experiment capable of refuting hypercomputation. We consider the implications of relativistic algorithms capable of solving the (Turing) Halting Problem. We also reject as a fallacy the argument that hypercomputation has no relevance because non-computable values are indistinguishable from sufficiently close computable approximations. In addition to ...
Routines, hierarchies of problems, procedural behavior: some evidence from experiments
 THE RATIONAL FOUNDATIONS OF ECONOMIC BEHAVIOR, MACMILLAN, IN
, 1994
Abstract

Cited by 13 (3 self)
A laboratory experiment was performed as a replication of the original one created by M. Cohen and P. Bacdayan at the University of Michigan. It consists of a two-person card game played by a large number of pairs, whose actions are stored in a computer's memory. In order to achieve the final goal each player must discover his subgoals, and must coordinate his actions with his partner's. The game therefore involves the division of knowledge and cooperation among players, and gives rise to the emergence of organizational routines. It is suggested that the organizational routines, i.e., the sequences of patterned actions which lead to the realization of the final goal, cannot be fully memorized because of their variety and number. It is shown that players do not possess all the knowledge needed by a hypothetical supervisor to play the best strategy: they generally explore only a limited part of the space of potential rules, and therefore learn and memorize a simple, bounded set of "personal" meta-rules. These meta-rules, also called "production rules" in the standard language of Cognitive Science, are of the form "Condition → Action". Each "Condition" can concern either the game configuration or the partner's action. In the former case the identification of an appropriate "Action" depends on the exploration of subgoals. In the latter it depends on the recognition (or discovery) of interaction rules; in this case the production rule embodies a dynamic and possibly cooperative reaction to the partner's action. Organizational procedures (routines) therefore emerge as the outcome of a distributed process generated by "personal" production rules. These routines, as in von Hayek's view, "can be understood as if it were made according to a single plan, although nobody has planned it" (Hayek, 1980, p. 54). Empirical evidence is provided to support the above statements.
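The Condition → Action scheme described in this abstract can be sketched as a first-match production-rule interpreter. The specific conditions, actions, and state keys below are hypothetical placeholders, not the rules actually observed in the experiment:

```python
# Minimal sketch of a "production rule" (Condition -> Action) system as
# described in the abstract above. Conditions may refer either to the game
# configuration or to the partner's last action; all rules here are invented
# for illustration only.
from typing import Callable

Condition = Callable[[dict], bool]   # predicate over the observed game state
Action = str

def make_rules() -> list[tuple[Condition, Action]]:
    return [
        # Condition on the partner's action:
        (lambda s: s.get("partner_action") == "pass", "play_low_card"),
        # Condition on the game configuration:
        (lambda s: s.get("target_card_visible", False), "take_target_card"),
        # Catch-all default: explore when no learned rule applies.
        (lambda s: True, "explore"),
    ]

def choose_action(state: dict, rules: list[tuple[Condition, Action]]) -> Action:
    # First matching rule fires: a small, bounded set of "personal"
    # meta-rules rather than a full best-strategy search.
    for cond, action in rules:
        if cond(state):
            return action
    raise RuntimeError("no rule matched")

assert choose_action({"partner_action": "pass"}, make_rules()) == "play_low_card"
```

Routines then emerge from the interaction of two such bounded rule sets, without either player holding the supervisor's global plan.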
The Church-Turing Thesis over Arbitrary Domains
, 2008
Abstract

Cited by 12 (9 self)
The Church-Turing Thesis has been the subject of many variations and interpretations over the years. Specifically, there are versions that refer only to functions over the natural numbers (as Church and Kleene did), while others refer to functions over arbitrary domains (as Turing intended). Our purpose is to formalize and analyze the thesis when referring to functions over arbitrary domains. First, we must handle the issue of domain representation. We show that, prima facie, the thesis is not well defined for arbitrary domains, since the choice of representation of the domain might have a nontrivial influence. We overcome this problem in two steps: (1) phrasing the thesis for entire computational models, rather than for a single function; and (2) proving a "completeness" property of the recursive functions and Turing machines with respect to domain representations. In the second part, we propose an axiomatization of an "effective model of computation" over an arbitrary countable domain. This axiomatization is based on Gurevich's postulates for sequential algorithms. A proof is provided showing that all models satisfying these axioms, regardless of underlying data structure, are of equivalent computational power to, or weaker than, Turing machines.