Results 11–20 of 852
Computability Classes for Enforcement Mechanisms
ACM Transactions on Programming Languages and Systems, 2003
Abstract
Cited by 79 (18 self)
A precise characterization of those security policies enforceable by program rewriting is given. This characterization exposes and rectifies problems in prior work on execution monitoring, yielding a more precise characterization of those security policies enforceable by execution monitors and a taxonomy of enforceable security policies. Some but not all classes can be identified with known classes from computational complexity theory.
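To make the notion of an execution monitor concrete, here is a minimal sketch of a security automaton that enforces a safety policy by truncating a run at the first violation. The policy, event names, and trace below are purely illustrative and are not taken from the paper.

```python
# Minimal sketch of an execution monitor: replay a trace of events,
# halting before the first event the policy disallows. Safety policies
# of this "truncation" kind are the classic target of such monitors.

def monitor(trace, is_allowed):
    """Replay `trace`, truncating it just before the first disallowed event."""
    history = []
    for event in trace:
        if not is_allowed(history, event):
            return history  # truncate: the bad event never executes
        history.append(event)
    return history

# Illustrative policy: no "send" may occur after a "read"
# (a no-send-after-read confidentiality policy).
def no_send_after_read(history, event):
    return not (event == "send" and "read" in history)

run = ["open", "read", "compute", "send", "close"]
print(monitor(run, no_send_after_read))  # ['open', 'read', 'compute']
```

Note that the monitor can only ever cut a run short; it cannot rewrite events, which is one intuition for why program rewriting enforces strictly more policies than execution monitoring.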
Special Purpose Parallel Computing
Lectures on Parallel Computation, 1993
Abstract
Cited by 77 (5 self)
A vast amount of work has been done in recent years on the design, analysis, implementation and verification of special purpose parallel computing systems. This paper presents a survey of various aspects of this work. A long, but by no means complete, bibliography is given.
1. Introduction
Turing [365] demonstrated that, in principle, a single general purpose sequential machine could be designed which would be capable of efficiently performing any computation which could be performed by a special purpose sequential machine. The importance of this universality result for subsequent practical developments in computing cannot be overstated. It showed that, for a given computational problem, the additional efficiency advantages which could be gained by designing a special purpose sequential machine for that problem would not be great. Around 1944, von Neumann produced a proposal [66, 389] for a general purpose stored-program sequential computer which captured the fundamental principles of...
Evolving Algebras: An Attempt To Discover Semantics
1993
Abstract
Cited by 74 (12 self)
Machine (a virtual machine model which underlies most of the current Prolog implementations and incorporates crucial optimization techniques) starting from a more abstract EA for Prolog developed by Börger in [Bo1–Bo3].
Q: How do you tailor an EA machine to the abstraction level of an algorithm whose individual steps are complicated algorithms all by themselves? For example, the algorithm may be written in a high level language that allows, say, multiplying integer matrices in one step.
A: You model the given algorithm modulo those algorithms needed to perform single steps. In your case, matrix multiplication will be built in as an operation.
Q: Coming back to Turing, there could be a good reason for him to speak about computable functions rather than algorithms. We don't really know what algorithms are.
A: I agree. Notice, however, that there are different notions of algorithm. On the one hand, an algorithm is an intuitive idea which you have in your head before writing code. Th...
Minimum Description Length Induction, Bayesianism, and Kolmogorov Complexity
IEEE Transactions on Information Theory, 1998
Abstract
Cited by 67 (7 self)
The relationship between the Bayesian approach and the minimum description length approach is established. We sharpen and clarify the general modeling principles MDL and MML, abstracted as the ideal MDL principle and defined from Bayes's rule by means of Kolmogorov complexity. The basic condition under which the ideal principle should be applied is encapsulated as the Fundamental Inequality, which in broad terms states that the principle is valid when the data are random relative to every contemplated hypothesis, and these hypotheses are in turn random relative to the (universal) prior. Basically, the ideal principle states that the prior probability associated with the hypothesis should be given by the algorithmic universal probability, and the sum of the log universal probability of the model plus the log of the probability of the data given the model should be minimized. If we restrict the model class to the finite sets then application of the ideal principle turns into Kolmogorov's mi...
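The two-part objective the abstract describes can be sketched in a few lines: choose the hypothesis minimizing (code length of the model) plus (code length of the data given the model). The candidate models and their probabilities below are made up for illustration; ideal MDL would use incomputable universal probabilities in their place.

```python
# Toy sketch of two-part MDL model selection. Code lengths are
# idealized as -log2 of illustrative probabilities; the candidates
# are hypothetical, not from the paper.
import math

def two_part_codelength(model_prior, data_likelihood):
    """Total description length: -log2 P(H) - log2 P(D | H), in bits."""
    return -math.log2(model_prior) - math.log2(data_likelihood)

# Hypothetical candidates: (prior P(H), likelihood P(D | H)).
candidates = {
    "simple":  (0.5, 0.05),   # short model, fits the data poorly
    "medium":  (0.25, 0.40),  # moderate model, fits well
    "complex": (0.05, 0.45),  # long model, fits only slightly better
}

best = min(candidates, key=lambda h: two_part_codelength(*candidates[h]))
print(best)  # "medium": its total description length is shortest
```

The "complex" model's tiny gain in fit does not pay for its extra model bits, which is the trade-off the principle formalizes.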
The fully informed particle swarm: Simpler, maybe better
IEEE Transactions on Evolutionary Computation, 2004
Abstract
Cited by 66 (3 self)
The canonical particle swarm algorithm is a new approach to optimization, drawing inspiration from group behavior and the establishment of social norms. It is gaining popularity, especially because of the speed of convergence and the fact that it is easy to use. However, we feel that each individual is not simply influenced by the best performer among his neighbors. We thus decided to make the individuals “fully informed.” The results are very promising, as informed individuals seem to find better solutions in all the benchmark functions.
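A one-dimensional sketch of the "fully informed" velocity update: each particle is attracted toward the previous-best positions of all its neighbors rather than a single best one. The constriction constants and the neighborhood values below are illustrative defaults, not the paper's exact code.

```python
# Sketch of a fully-informed velocity update: the pull is averaged
# over every neighbor's best position, each weighted by a random
# coefficient. chi (constriction) and phi are illustrative constants.
import random

def fips_velocity(v, x, neighbor_bests, chi=0.7298, phi=4.1):
    """v' = chi * (v + sum_k U(0, phi/K) * (p_k - x)) over K neighbors."""
    k = len(neighbor_bests)
    pull = sum(random.uniform(0, phi / k) * (p - x) for p in neighbor_bests)
    return chi * (v + pull)

random.seed(0)
v_new = fips_velocity(v=0.1, x=2.0, neighbor_bests=[1.5, 1.8, 2.4])
print(v_new)  # a velocity nudged by all three neighbors, not just the best
```

Contrast this with the canonical update, where only the single best neighbor (plus the particle's own best) contributes a pull term.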
Optimal Ordered Problem Solver
2002
Abstract
Cited by 62 (20 self)
We present a novel, general, optimally fast, incremental way of searching for a universal algorithm that solves each task in a sequence of tasks. The Optimal Ordered Problem Solver (OOPS) continually organizes and exploits previously found solutions to earlier tasks, efficiently searching not only the space of domain-specific algorithms, but also the space of search algorithms. Essentially we extend the principles of optimal non-incremental universal search to build an incremental universal learner that is able to improve itself through experience.
Trivial Reals
Abstract
Cited by 57 (31 self)
Solovay showed that there are noncomputable reals α such that H(α ↾ n) ≤ H(1^n) + O(1), where H is prefix-free Kolmogorov complexity. Such H-trivial reals are interesting due to the connection between algorithmic complexity and effective randomness. We give a new, easier construction of an H-trivial real. We also analyze various computability-theoretic properties of the H-trivial reals, showing for example that no H-trivial real can compute the halting problem. Therefore, our construction of an H-trivial computably enumerable set is an easy, injury-free construction of an incomplete computably enumerable set. Finally, we relate the H-trivials to other classes of "highly nonrandom" reals that have been previously studied.
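Restating the definition from the abstract in display form (this is just the abstract's condition made explicit, with the O(1) unfolded into a constant c):

```latex
% A real \alpha is H-trivial iff its initial segments have essentially
% minimal prefix-free complexity: knowing only the length n already
% accounts for H(\alpha \upharpoonright n) up to an additive constant.
\[
  \alpha \text{ is } H\text{-trivial} \iff
  \exists c \; \forall n \;\;
  H(\alpha \upharpoonright n) \le H(1^n) + c .
\]
```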
Automating Software Feature Verification
BELL LABS TECHNICAL JOURNAL, 2000
Abstract
Cited by 56 (15 self)
A significant part of the call processing software for Lucent's new PathStar access server [FSW98] was checked with automated formal verification techniques. The verification system we built for this purpose, named FeaVer, maintains a database of feature requirements which is accessible via a web browser. Via the browser the user can invoke verification runs. The verifications are performed by the system with the help of a standard logic model checker that runs in the background, invisibly to the user. Requirement violations are reported as C execution traces and stored in the database for user perusal and correction. The main strength of the system is in the detection of undesired feature interactions at an early stage of systems design, the type of problem that is notoriously difficult to detect with traditional testing techniques. Error reports
The Computational Power and Complexity of Constraint Handling Rules
In Second Workshop on Constraint Handling Rules, at ICLP05, 2005
Abstract
Cited by 50 (21 self)
Constraint Handling Rules (CHR) is a high-level rule-based programming language which is increasingly used for general purposes. We introduce the CHR machine, a model of computation based on the operational semantics of CHR. Its computational power and time complexity properties are compared to those of the well-understood Turing machine and Random Access Memory machine. This allows us to prove the interesting result that every algorithm can be implemented in CHR with the best known time and space complexity. We also investigate the practical relevance of this result and the constant factors involved. Finally we expand the scope of the discussion to other (declarative) programming languages.
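To give a flavor of the rule-based multiset rewriting that CHR is built on, here is a toy Python simulation of the classic CHR gcd program, which repeatedly fires the simplification rule "gcd(N) \ gcd(M) <=> M >= N > 0 | gcd(M mod N)" on a store of constraints until no rule applies. This is an illustration of the programming style only, not the paper's CHR machine.

```python
# Toy multiset-rewriting interpreter for one CHR-style rule:
# a store of gcd/1 constraints is rewritten until a fixpoint,
# at which point the surviving positive value is the gcd.

def chr_gcd(store):
    """Exhaustively apply: gcd(N) \\ gcd(M) <=> M >= N > 0 | gcd(M mod N)."""
    store = list(store)
    changed = True
    while changed:
        changed = False
        for i in range(len(store)):
            for j in range(len(store)):
                n, m = store[i], store[j]
                if i != j and 0 < n <= m:
                    store[j] = m % n  # replace gcd(M) by gcd(M mod N)
                    changed = True
    return [x for x in store if x > 0]  # drop the gcd(0) constraints

print(chr_gcd([12, 8, 30]))  # [2]
```

In real CHR the rule selection and matching are handled by the runtime; the point here is only that computation proceeds by exhaustively rewriting a constraint store.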
A lambda calculus for quantum computation
 SIAM Journal on Computing
Abstract
Cited by 49 (1 self)
The classical lambda calculus may be regarded both as a programming language and as a formal algebraic system for reasoning about computation. It provides a computational model equivalent to the Turing machine, and continues to be of enormous benefit in the classical theory of computation. We propose that quantum computation, like its classical counterpart, may benefit from a version of the lambda calculus suitable for expressing and reasoning about quantum algorithms. In this paper we develop a quantum lambda calculus as an alternative model of quantum computation, which combines some of the benefits of both the quantum Turing machine and the quantum circuit models. The calculus turns out to be closely related to the linear lambda calculi used in the study of Linear Logic. We set up a computational model and an equational proof system for this calculus, and we argue that it is equivalent to the quantum Turing machine.