Results 11 – 20 of 616
Tractable Reasoning via Approximation
Artificial Intelligence, 1995
"... Problems in logic are wellknown to be hard to solve in the worst case. Two different strategies for dealing with this aspect are known from the literature: language restriction and theory approximation. In this paper we are concerned with the second strategy. Our main goal is to define a semantical ..."
Abstract

Cited by 93 (0 self)
Problems in logic are well-known to be hard to solve in the worst case. Two different strategies for dealing with this aspect are known from the literature: language restriction and theory approximation. In this paper we are concerned with the second strategy. Our main goal is to define a semantically well-founded logic for approximate reasoning, which is justifiable from the intuitive point of view, and to provide fast algorithms for dealing with it even when using expressive languages. We also want our logic to be useful to perform approximate reasoning in different contexts. We define a method for the approximation of decision reasoning problems based on multi-valued logics. Our work expands and generalizes in several directions ideas presented by other researchers. The major features of our technique are: 1) approximate answers give semantically clear information about the problem at hand; 2) approximate answers are easier to compute than answers to the original problem; 3) approxim...
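To make the flavor of approximate answers concrete, here is a minimal sketch that is not the paper's multi-valued construction: unit propagation yields a sound but incomplete propositional entailment test, so a "yes" is always semantically reliable, while an "unknown" can be refined with more computational effort.

```python
# Sound-but-incomplete approximate entailment via unit propagation.
# Literals are nonzero ints; a negative literal is the negation of its
# positive counterpart. This is an illustrative stand-in, not the
# paper's S-3/multi-valued logics.

def unit_propagate(clauses):
    """Repeatedly assert unit clauses; return None on conflict."""
    clauses = [set(c) for c in clauses]
    assigned = set()
    changed = True
    while changed:
        changed = False
        for c in clauses:
            live = {l for l in c if -l not in assigned}
            if not live:
                return None              # clause falsified: conflict
            if len(live) == 1:
                (lit,) = live
                if lit not in assigned:
                    assigned.add(lit)
                    changed = True
        # drop clauses already satisfied by the current assignment
        clauses = [c for c in clauses if not (c & assigned)]
    return assigned

def approx_entails(kb, query_lit):
    """Answer 'yes' only if KB plus the negated query propagates to a
    conflict; otherwise answer 'unknown' (never a wrong 'yes')."""
    return "yes" if unit_propagate(kb + [[-query_lit]]) is None else "unknown"

# KB: p, and p -> q (as the clause ¬p ∨ q). Atoms: 1 = p, 2 = q.
kb = [[1], [-1, 2]]
print(approx_entails(kb, 2))   # → yes
print(approx_entails(kb, 3))   # → unknown (KB says nothing about atom 3)
```

The approximation is "anytime" in spirit: a stronger (and costlier) propagation engine shrinks the set of "unknown" answers without ever invalidating a "yes".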
The calculi of emergence: Computation, dynamics, and induction
Physica D, 1994
"... Defining structure and detecting the emergence of complexity in nature are inherently subjective, though essential, scientific activities. Despite the difficulties, these problems can be analyzed in terms of how modelbuilding observers infer from measurements the computational capabilities embedded ..."
Abstract

Cited by 78 (14 self)
Defining structure and detecting the emergence of complexity in nature are inherently subjective, though essential, scientific activities. Despite the difficulties, these problems can be analyzed in terms of how model-building observers infer from measurements the computational capabilities embedded in nonlinear processes. An observer’s notion of what is ordered, what is random, and what is complex in its environment depends directly on its computational resources: the amount of raw measurement data, of memory, and of time available for estimation and inference. The discovery of structure in an environment depends more critically and subtly, though, on how those resources are organized. The descriptive power of the observer’s chosen (or implicit) computational model class, for example, can be an overwhelming determinant in finding regularity in data. This paper presents an overview of an inductive framework — hierarchical machine reconstruction — in which the emergence of complexity is associated with the innovation of new computational model classes. Complexity metrics for detecting structure and quantifying emergence, along with an analysis of the constraints on the dynamics of innovation, are outlined. Illustrative examples are drawn from the onset of unpredictability in nonlinear systems, finitary nondeterministic processes, and ...
Equivalence of Measures of Complexity Classes
"... The resourcebounded measures of complexity classes are shown to be robust with respect to certain changes in the underlying probability measure. Specifically, for any real number ffi ? 0, any uniformly polynomialtime computable sequence ~ fi = (fi 0 ; fi 1 ; fi 2 ; : : : ) of real numbers (biases ..."
Abstract

Cited by 71 (19 self)
The resource-bounded measures of complexity classes are shown to be robust with respect to certain changes in the underlying probability measure. Specifically, for any real number δ > 0, any uniformly polynomial-time computable sequence β⃗ = (β₀, β₁, β₂, …) of real numbers (biases) βᵢ ∈ [δ, 1 − δ], and any complexity class C (such as P, NP, BPP, P/Poly, PH, PSPACE, etc.) that is closed under positive, polynomial-time, truth-table reductions with queries of at most linear length, it is shown that the following two conditions are equivalent. (1) C has p-measure 0 (respectively, measure 0 in E, measure 0 in E₂) relative to the coin-toss probability measure given by the sequence β⃗. (2) C has p-measure 0 (respectively, measure 0 in E, measure 0 in E₂) relative to the uniform probability measure. The proof introduces three techniques that may be useful in other contexts, namely, (i) the transformation of an efficient martingale for one probability measu...
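For reference, the biased-coin martingale condition both statements rely on can be written out explicitly (standard resource-bounded measure notation, assumed here rather than quoted from the paper):

```latex
% d is a martingale for the coin-toss measure with bias sequence
% \vec\beta = (\beta_0, \beta_1, \ldots), where \beta_i = \Pr[\text{bit } i = 1]:
d(w) = (1-\beta_{|w|})\, d(w0) + \beta_{|w|}\, d(w1)
       \qquad \text{for all } w \in \{0,1\}^*,
% and d succeeds on a sequence A when
\limsup_{n \to \infty} d(A \upharpoonright n) = \infty .
```

With every βᵢ = 1/2 this reduces to the uniform-measure condition d(w) = (d(w0) + d(w1))/2; a class has p-measure 0 when a single polynomial-time-computable martingale succeeds on all of its elements.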
Model checking one million lines of C code
In Proceedings of the 11th Annual Network and Distributed System Security Symposium (NDSS), 2004
"... Implementation bugs in securitycritical software are pervasive. Several authors have previously suggested model checking as a promising means to detect improper use of system interfaces and thereby detect a broad class of security vulnerabilities. In this paper, we report on our practical experienc ..."
Abstract

Cited by 70 (2 self)
Implementation bugs in security-critical software are pervasive. Several authors have previously suggested model checking as a promising means to detect improper use of system interfaces and thereby detect a broad class of security vulnerabilities. In this paper, we report on our practical experience using MOPS, a tool for software model checking of security-critical applications. As examples of security vulnerabilities that can be analyzed using model checking, we pick five important classes of vulnerabilities and show how to codify them as temporal safety properties, and then we describe the results of checking them on several significant Unix applications using MOPS. After analyzing over one million lines of code, we found more than a dozen new security weaknesses in important, widely deployed applications. This demonstrates for the first time that model checking is practical and useful for detecting security weaknesses at large scale in real, legacy systems.
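The idea of codifying a vulnerability class as a temporal safety property can be sketched as a small finite-state automaton run over an abstract syscall trace. The states, event names, and trace format below are illustrative assumptions, not MOPS's actual property language; the property is in the spirit of the unsafe-chroot-jail class: after chroot(), the program must chdir("/") before touching the filesystem.

```python
# A temporal safety property as a finite-state monitor (illustrative).
VIOLATION = "violation"
TRANSITIONS = {
    ("start",           "chroot"):     "jailed_no_chdir",
    ("jailed_no_chdir", "chdir_root"): "safe",
    ("jailed_no_chdir", "fs_op"):      VIOLATION,   # jail can be escaped
}

def check(trace, state="start"):
    """Run the safety automaton over a trace; report the first violation."""
    for i, event in enumerate(trace):
        # Events with no matching transition leave the state unchanged.
        state = TRANSITIONS.get((state, event), state)
        if state == VIOLATION:
            return f"violation at event {i}: {event!r} after chroot without chdir('/')"
    return "ok"

print(check(["chroot", "chdir_root", "fs_op"]))  # → ok
print(check(["chroot", "fs_op"]))                # → violation at event 1 ...
```

A model checker then asks whether any feasible program path drives such an automaton into the error state, rather than monitoring a single concrete run.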
Agent-based computational models and generative social science
Complexity, 1999
"... This article argues that the agentbased computational model permits a distinctive approach to social science for which the term “generative ” is suitable. In defending this terminology, features distinguishing the approach from both “inductive ” and “deductive ” science are given. Then, the followi ..."
Abstract

Cited by 64 (0 self)
This article argues that the agent-based computational model permits a distinctive approach to social science for which the term “generative” is suitable. In defending this terminology, features distinguishing the approach from both “inductive” and “deductive” science are given. Then, the following specific contributions to social science are discussed: The agent-based computational model is a new tool for empirical research. It offers a natural environment for the study of connectionist phenomena in social science. Agent-based modeling provides a powerful way to address certain enduring—and especially interdisciplinary—questions. It allows one to subject certain core theories—such as neoclassical microeconomics—to important types of stress (e.g., the effect of evolving preferences). It permits one to study how rules of individual behavior give rise—or “map up”—to macroscopic regularities and organizations. In turn, one can employ laboratory behavioral research findings to select among competing agent-based (“bottom up”) models. The agent-based approach may well have the important effect of decoupling individual rationality from macroscopic equilibrium and of separating decision science from social science more generally. Agent-based modeling offers powerful new forms of hybrid theoretical-computational work; these are particularly relevant to the study of non-equilibrium systems. The agent-based approach invites the interpretation of society as a distributed computational device, and in turn the interpretation of social dynamics as a type of computation. This interpretation raises important foundational issues in social science—some related to intractability, and some to undecidability proper. Finally, since “emergence” figures prominently in this literature, I take up the connection between agent-based modeling and classical emergentism, criticizing the latter and arguing that the two are incompatible. © 1999 John Wiley & Sons, Inc.
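As a minimal sketch of the “map up” point, here is a toy agent-based model (local-majority opinion dynamics on a ring; the rule and initial configuration are illustrative, not taken from the article): simple individual rules generate a macroscopic pattern, a few large homogeneous domains, that no single agent's rule encodes.

```python
# Local-majority opinion dynamics on a ring of agents holding "A" or "B".

def step(agents):
    """Each agent adopts the majority opinion of its 3-cell neighborhood."""
    n = len(agents)
    out = []
    for i in range(n):
        tri = (agents[i - 1], agents[i], agents[(i + 1) % n])
        out.append(max(set(tri), key=tri.count))   # no ties with 3 cells
    return out

def domains(agents):
    """Count maximal same-opinion runs around the ring (macro structure)."""
    n = len(agents)
    return sum(agents[i] != agents[i - 1] for i in range(n)) or 1

agents = list("ABAABBABBBAA")
print("initial:", "".join(agents), "domains:", domains(agents))  # 6 domains
while (nxt := step(agents)) != agents:                           # iterate to a fixed point
    agents = nxt
print("settled:", "".join(agents), "domains:", domains(agents))  # 2 domains
```

The run settles into two large blocks: a macroscopic regularity "grown" from purely local rules, which is the generative move in miniature.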
Biometric identification
Communications of the ACM, 2000
"... Identification of grammars (r. e. indices) for recursively enumerable languages from positive data by algorithmic devices is a well studied problem in learning theory. The present paper considers identification of r. e. languages by machines that have access to membership oracles for noncomputable s ..."
Abstract

Cited by 60 (4 self)
Identification of grammars (r.e. indices) for recursively enumerable languages from positive data by algorithmic devices is a well-studied problem in learning theory. The present paper considers identification of r.e. languages by machines that have access to membership oracles for noncomputable sets. It is shown that for any set A there exists another set B such that the collection of r.e. languages that can be identified by machines with access to a membership oracle for B is strictly larger than the collection of r.e. languages that can be identified by machines with access to a membership oracle for A. In other words, there is no maximal inference degree for language identification.
Quantum Algorithm for Hilbert's Tenth Problem
Int. J. Theor. Phys., 2003
"... We explore in the framework of Quantum Computation the notion of Computability, which holds a central position in Mathematics and Theoretical Computer Science. A quantum algorithm for Hilbert’s tenth problem, which is equivalent to the Turing halting problem and is known to be mathematically noncomp ..."
Abstract

Cited by 60 (10 self)
We explore in the framework of Quantum Computation the notion of Computability, which holds a central position in Mathematics and Theoretical Computer Science. A quantum algorithm for Hilbert’s tenth problem, which is equivalent to the Turing halting problem and is known to be mathematically noncomputable, is proposed, where quantum continuous variables and quantum adiabatic evolution are employed. If this algorithm could be physically implemented, as much as it is valid in principle—that is, if a certain Hamiltonian and its ground state can be physically constructed according to the proposal—quantum computability would surpass classical computability as delimited by the Church-Turing thesis. It is thus argued that computability, and with it the limits of Mathematics, ought to be determined not solely by Mathematics itself but also by Physical Principles.
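For context on the classical side (an illustration, not part of the paper's quantum proposal): Hilbert's tenth problem is semi-decidable, and the brute-force search below halts exactly when the given Diophantine equation has an integer solution, which is why solvability reduces to the halting problem.

```python
from itertools import count, product

def has_root(poly, bound):
    """Exhaustively check integer tuples with |x_i| <= bound for a zero."""
    nargs = poly.__code__.co_argcount
    rng = range(-bound, bound + 1)
    return any(poly(*xs) == 0 for xs in product(rng, repeat=nargs))

def semi_decide(poly, give_up=None):
    """Halts iff poly = 0 has an integer solution; without give_up this
    runs forever on unsolvable instances (it is only a semi-decision)."""
    for b in count():
        if give_up is not None and b > give_up:
            return None          # no verdict: the true procedure never halts
        if has_root(poly, b):
            return b             # smallest search radius containing a root

print(semi_decide(lambda x, y: x*x + y*y - 25))    # → 4 (root (3, 4))
print(semi_decide(lambda x: x*x - 2, give_up=10))  # → None (no integer root)
```

By the MRDP theorem there is no algorithm deciding the "no solution" side, so any procedure that settles it, as the paper's proposal aims to, would go beyond Turing computability.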
Verification of Concurrent Programs: The AutomataTheoretic Framework
Annals of Pure and Applied Logic, 1987
"... We present an automatatheoretic framework to the verification of concurrent and nondeterministic programs. The basic idea is that to verify that a program P is correct one writes a program A that receives the computation of P as input and diverges only on incorrect computations of P . Now P is c ..."
Abstract

Cited by 47 (3 self)
We present an automata-theoretic framework for the verification of concurrent and nondeterministic programs. The basic idea is that to verify that a program P is correct one writes a program A that receives the computation of P as input and diverges only on incorrect computations of P. Now P is correct if and only if a program P_A, obtained by combining P and A, terminates. We formalize this idea in a framework of ω-automata with a recursive set of states. This unifies previous works on verification of fair termination and verification of temporal properties.

1 Introduction

In this paper we present an automata-theoretic framework that unifies several trends in the area of concurrent program verification. The trends are temporal logic, model checking, automata theory, and fair termination. Let us start with a survey of these trends. In 1977 Pnueli suggested the use of temporal logic in the verification of concurrent programs [Pn77]. The basic motivation is that in the verificat...
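A toy rendering of the reduction described above, under the simplifying assumption of a finite state space (the graphs and state names are illustrative): P is correct iff P_A has no infinite run, and for a finite product graph that is just reachable-cycle detection.

```python
# Correctness-as-termination: P_A terminates iff its transition graph
# has no cycle reachable from the start state.

def has_infinite_run(edges, start):
    """Depth-first search for a reachable cycle in a finite graph."""
    seen, on_path = set(), set()
    def dfs(s):
        if s in on_path:
            return True          # back edge: an infinite run exists
        if s in seen:
            return False
        seen.add(s)
        on_path.add(s)
        if any(dfs(t) for t in edges.get(s, [])):
            return True
        on_path.discard(s)
        return False
    return dfs(start)

# Two product graphs of P with a checker A: in the first, A eventually
# blocks every run (P_A terminates, so P is correct); in the second, an
# incorrect computation of P lets the product run forever.
correct_PA = {"s0": ["s1"], "s1": ["s2"], "s2": []}
buggy_PA   = {"s0": ["s1"], "s1": ["s2"], "s2": ["s1"]}
print(has_infinite_run(correct_PA, "s0"))  # → False: P_A terminates
print(has_infinite_run(buggy_PA, "s0"))    # → True: P is incorrect
```

The paper's framework handles infinite-state programs via ω-automata with recursive state sets; the finite case above is only meant to show why the reduction to termination is natural.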
Degrees of random sets
1991
"... An explicit recursiontheoretic definition of a random sequence or random set of natural numbers was given by MartinLöf in 1966. Other approaches leading to the notions of nrandomness and weak nrandomness have been presented by Solovay, Chaitin, and Kurtz. We investigate the properties of nrando ..."
Abstract

Cited by 46 (4 self)
An explicit recursion-theoretic definition of a random sequence or random set of natural numbers was given by Martin-Löf in 1966. Other approaches leading to the notions of n-randomness and weak n-randomness have been presented by Solovay, Chaitin, and Kurtz. We investigate the properties of n-random and weakly n-random sequences with an emphasis on the structure of their Turing degrees. After an introduction and summary, in Chapter II we present several equivalent definitions of n-randomness and weak n-randomness, including a new definition in terms of a forcing relation analogous to the characterization of n-generic sequences in terms of Cohen forcing. We also prove that, as conjectured by Kurtz, weak n-randomness is indeed strictly weaker than n-randomness. Chapter III is concerned with intrinsic properties of n-random sequences. The main results are that an (n+1)-random sequence A satisfies the condition A^(n) ≡_T A ⊕ 0^(n) (strengthening a result due originally to Sacks) and that n-random sequences satisfy a number of strong independence properties, e.g., if A ⊕ B is n-random then A is n-random relative to B. It follows that any countable distributive lattice can be embedded ...
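For orientation, the definition the abstract builds on can be stated compactly (standard formulation, assumed here rather than quoted from the thesis):

```latex
% Martin-Löf randomness: A \in 2^{\omega} is 1-random iff it avoids every
% effective measure-zero set, i.e. for every uniformly c.e. sequence
% (U_k)_{k \in \omega} of open sets with \mu(U_k) \le 2^{-k},
A \notin \bigcap_{k \in \omega} U_k .
% n-randomness relativizes this: A is n-random iff A is 1-random
% relative to the oracle \emptyset^{(n-1)}.
```

Under this relativization, the quoted result A^(n) ≡_T A ⊕ 0^(n) says that the first n jumps of an (n+1)-random sequence carry no information beyond the sequence itself together with the unrelativized jumps.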