Results 1–10 of 35
Agent-based computational models and generative social science
 Complexity
, 1999
Abstract

Cited by 64 (0 self)
This article argues that the agent-based computational model permits a distinctive approach to social science for which the term “generative” is suitable. In defending this terminology, features distinguishing the approach from both “inductive” and “deductive” science are given. Then, the following specific contributions to social science are discussed: The agent-based computational model is a new tool for empirical research. It offers a natural environment for the study of connectionist phenomena in social science. Agent-based modeling provides a powerful way to address certain enduring—and especially interdisciplinary—questions. It allows one to subject certain core theories—such as neoclassical microeconomics—to important types of stress (e.g., the effect of evolving preferences). It permits one to study how rules of individual behavior give rise—or “map up”—to macroscopic regularities and organizations. In turn, one can employ laboratory behavioral research findings to select among competing agent-based (“bottom-up”) models. The agent-based approach may well have the important effect of decoupling individual rationality from macroscopic equilibrium and of separating decision science from social science more generally. Agent-based modeling offers powerful new forms of hybrid theoretical-computational work; these are particularly relevant to the study of nonequilibrium systems. The agent-based approach invites the interpretation of society as a distributed computational device, and in turn the interpretation of social dynamics as a type of computation. This interpretation raises important foundational issues in social science—some related to intractability, and some to undecidability proper. Finally, since “emergence” figures prominently in this literature, I take up the connection between agent-based modeling and classical emergentism, criticizing the latter and arguing that the two are incompatible.
On the Semantics of “Now” in Databases
 ACM Transactions on Database Systems
, 1997
Abstract

Cited by 42 (16 self)
Although “now” is expressed in SQL as CURRENT_TIMESTAMP within queries, this value cannot be stored in the database. However, this notion of an ever-increasing current-time value has been reflected in some temporal data models by the inclusion of database-resident variables, such as “now”, “until-changed”, “∞”, “@”, and “–”. Time variables are very desirable, but their use also leads to a new type of database, consisting of tuples with variables, termed a variable database. This article proposes a framework for defining the semantics of the variable databases of the relational and temporal relational data models. A framework is presented because several reasonable meanings may be given to databases that use some of the specific temporal variables that have appeared in the literature. Using the framework, the article defines a useful semantics for such databases. Because situations occur where the existing time variables are inadequate, two new types of modeling entities that address these shortcomings, timestamps that we call now-relative and now-relative indeterminate, are introduced and defined within the framework. Moreover, the article provides a foundation, using algebraic
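The core idea behind such database-resident time variables—a stored “now” that denotes a different concrete value each time the database is consulted—can be sketched as follows. This is a minimal illustration of the general notion, not the paper's formal framework; the NOW sentinel and evaluate function are hypothetical names chosen for the example.

```python
from datetime import date

# Sentinel standing in for the database-resident variable "now".
NOW = object()

def evaluate(timestamp, reference_date):
    """Bind the variable "now" to a concrete date at query time."""
    return reference_date if timestamp is NOW else timestamp

# A tuple whose validity interval ends at the ever-increasing current time.
row = {"name": "Alice", "valid_from": date(2020, 1, 1), "valid_to": NOW}

# The same stored tuple denotes different concrete intervals on different days.
print(evaluate(row["valid_to"], date(2021, 6, 1)))  # 2021-06-01
print(evaluate(row["valid_to"], date(2022, 6, 1)))  # 2022-06-01
```

Storing the variable rather than a concrete timestamp is what creates the “variable database” of the abstract: the tuple's meaning is only fixed relative to the time at which it is evaluated.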
Higher Order Logic
 In Handbook of Logic in Artificial Intelligence and Logic Programming
, 1994
Abstract

Cited by 19 (0 self)
Contents
1 Introduction (2)
2 The expressive power of second-order logic (3)
 2.1 The language of second-order logic (3)
 2.2 Expressing size (4)
 2.3 Defining data types (6)
 2.4 Describing processes (8)
 2.5 Expressing convergence using second-order validity (9)
 2.6 Truth definitions: the analytical hierarchy (10)
 2.7 Inductive definitions (13)
3 Canonical semantics of higher-order logic (15)
 3.1 Tarskian semantics of second-order logic (15)
 3.2 Function and re
Non-Orthomodular Models for Both Standard Quantum Logic and Standard Classical Logic: Repercussions for Quantum Computers
 Helv. Phys. Acta
, 1999
Abstract

Cited by 17 (12 self)
Abstract. It is shown that the propositional calculi of both quantum and classical logic are non-categorical. We find that quantum logic is modeled not only by an orthomodular lattice but also by a weakly orthomodular lattice, and that classical logic is modeled not only by a Boolean algebra but also by a weakly distributive lattice. Both new models turn out to be non-orthomodular. We prove the soundness and completeness of the calculi for these models. We also prove that all the operations in an orthomodular lattice are fivefold defined. Finally, we discuss possible repercussions of our results for quantum computation and quantum computers.
Types in logic and mathematics before 1940
 Bulletin of Symbolic Logic
, 2002
Abstract

Cited by 11 (6 self)
Abstract. In this article, we study the prehistory of type theory up to 1910 and its development between Russell and Whitehead’s Principia Mathematica ([71], 1910–1912) and Church’s simply typed λ-calculus of 1940. We first argue that the concept of types has always been present in mathematics, though types were not incorporated explicitly as such before the end of the 19th century. Then we describe how the logical paradoxes entered the formal systems of Frege, Cantor, and Peano, concentrating on Frege’s Grundgesetze der Arithmetik, to which Russell applied his famous paradox, which led him to introduce the first theory of types, the Ramified Type Theory (rtt). We present rtt formally using modern notation for type theory, and we discuss how Ramsey, Hilbert, and Ackermann removed the orders from rtt, leading to the simple theory of types stt. We present stt and Church’s own simply typed λ-calculus (λ→C), and we finish by comparing rtt, stt, and λ→C. §1. Introduction. Nowadays, type theory has many applications and is used in many different disciplines. Even within logic and mathematics, there are many different type systems. They serve several purposes and are formulated in various ways. But before 1903, when Russell first introduced
Visual Algorithm Simulation
, 2003
Abstract

Cited by 11 (6 self)
Understanding data structures and algorithms, both of which are abstract concepts, is an integral part of software engineering and elementary computer science education. However, people usually have difficulty understanding abstract concepts and processes such as the procedural encoding of algorithms and data structures. One way to improve their understanding is to provide visualizations that make the abstract concepts more concrete. This thesis presents the design, implementation, and evaluation of the Matrix application framework, which occupies a unique niche between the following two domains. In the first domain, called algorithm animation, abstractions of the behavior of fundamental computer program operations are visualized. In the second domain, called algorithm simulation, a framework for exploring and understanding algorithms and data structures is exhibited. First, an overview and theoretical basis for the application framework are presented. Second, the different roles involved in realizing the idea of algorithm
A Formal Theory for KnowledgeBased Product Model Representation
, 1996
Abstract

Cited by 9 (6 self)
The field of design science attempts to place engineering design on a more formal, rigorous footing. This paper introduces recent work by the author in this area. Artifact-Centered Modeling (ACM) is a general framework intended to partition the design endeavor into manageable sections. A fundamental part of ACM is the representation of information about products being designed. The Axiomatic Information Model for Design (AIMD) is a formal theory about product information based on axiomatic set theory. AIMD provides formal bases for quantities, features, parts and assemblies, systems, and subassemblies; these are all notions essential to design. It is not a product modeling system per se, but rather a logic of product structure whose axioms define criteria for determining the logical validity of product models. A previous version of the theory has been found to contain logical inconsistencies; the version presented herein addresses those problems. A complete axiomatization of the new th...
On the Simplicity and Speed of Programs for Computing Infinite Sets of Natural Numbers
 J. ASSOC. COMPUT. MACH
, 1969
Abstract

Cited by 8 (1 self)
It is suggested that there are infinite computable sets of natural numbers with the property that no infinite subset can be computed more simply or more quickly than the whole set. Attempts to establish this without restricting in any way the computer involved in the calculations are not entirely successful. A hypothesis concerning the computer makes it possible to exhibit sets without simpler subsets. A second and analogous hypothesis then makes it possible to prove that these sets are also without subsets that can be computed more rapidly than the whole set. It is then demonstrated that there are computers which satisfy both hypotheses. The general theory is momentarily set aside and a particular Turing machine is studied. Lastly, it is shown that the second hypothesis is more restrictive than requiring the computer to be capable of calculating all infinite computable sets of natural numbers.
Algorithm Animation and Simulation
, 2000
Abstract

Cited by 3 (2 self)
Understanding data structures and algorithms, both of which are abstract concepts, is an integral part of elementary computer science education. On the other hand, people usually have difficulty understanding abstract concepts and processes such as the procedural encoding of algorithms and data structures. One way to improve their understanding is to provide visual examples that make the abstract concepts more concrete. This thesis presents the design and implementation of an application framework that occupies a unique niche between the following two domains. In the first domain, called algorithm animation, abstractions of the behavior of fundamental computer program operations are visualized...
A formalization of the Ramified Type Theory
, 1994
Abstract

Cited by 3 (1 self)
In "Principia Mathematica" [17], B. Russell and A.N. Whitehead propose a type system for higher-order logic. This system has become known under the name "ramified type theory". It was invented to avoid the paradoxes that could be derived from Frege's "Begriffsschrift" [7]. We give a formalization of the ramified type theory as described in the Principia Mathematica, trying to keep it as close as possible to the ideas of the Principia. As an alternative, distancing ourselves from the Principia, we express notions from the ramified type theory in a lambda-calculus style, thus clarifying the type system of Russell and Whitehead in a contemporary setting. Both formalizations are inspired by current developments in research on type theory and typed lambda calculus; see e.g. [3]. In these formalizations, and also when defining "truth", we will need the notion of substitution. As substitution is not formally defined in the Principia, we have to define it ourselves. Finally, the reaction by Hilbert and Ackermann in [10] on the
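The "contemporary setting" mentioned above—expressing a type system in a lambda-calculus style—can be illustrated with a minimal type checker for the simply typed λ-calculus. This is a generic sketch of that style, not the paper's formalization of ramified types; the term and type encodings are illustrative choices.

```python
# Terms: ('var', name) | ('lam', name, arg_type, body) | ('app', fun, arg)
# Types: 'iota' (a base type) | ('arrow', domain, codomain)

def typecheck(term, env=None):
    """Return the type of `term` under environment `env`, or raise TypeError."""
    env = env or {}
    kind = term[0]
    if kind == 'var':
        return env[term[1]]          # look up the variable's declared type
    if kind == 'lam':
        _, name, arg_ty, body = term
        body_ty = typecheck(body, {**env, name: arg_ty})
        return ('arrow', arg_ty, body_ty)
    if kind == 'app':
        fun_ty = typecheck(term[1], env)
        arg_ty = typecheck(term[2], env)
        # An application is well typed only when the function's domain
        # matches the argument's type exactly.
        if fun_ty[0] != 'arrow' or fun_ty[1] != arg_ty:
            raise TypeError("ill-typed application")
        return fun_ty[2]
    raise ValueError(f"unknown term kind: {kind}")

# The identity function at the base type: λx:ι. x  has type  ι → ι.
identity = ('lam', 'x', 'iota', ('var', 'x'))
print(typecheck(identity))  # ('arrow', 'iota', 'iota')
```

Self-application such as applying `identity` (of type ι → ι) to itself is rejected, which is the typed discipline's way of blocking the circularities behind Russell-style paradoxes.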