Results 1–10 of 46
Randomized Algorithms
, 1995
"... Randomized algorithms, once viewed as a tool in computational number theory, have by now found widespread application. Growth has been fueled by the two major benefits of randomization: simplicity and speed. For many applications a randomized algorithm is the fastest algorithm available, or the simp ..."
Abstract

Cited by 1923 (39 self)
 Add to MetaCart
Randomized algorithms, once viewed as a tool in computational number theory, have by now found widespread application. Growth has been fueled by the two major benefits of randomization: simplicity and speed. For many applications a randomized algorithm is the fastest algorithm available, or the simplest, or both. A randomized algorithm is an algorithm that uses random numbers to influence the choices it makes in the course of its computation. Thus its behavior (typically quantified as running time or quality of output) varies from
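The core idea in the abstract, an algorithm whose choices are driven by random bits, can be illustrated with Freivalds' algorithm for verifying a matrix product. This is a standard textbook example, not taken from the survey itself, and the function name is my own:

```python
import random

def freivalds(A, B, C, trials=30):
    """Probabilistically check whether A @ B == C for n x n matrices.

    Each trial multiplies by a random 0/1 vector, costing O(n^2) time
    instead of the O(n^3) needed to recompute A @ B.  A wrong C slips
    through a single trial with probability at most 1/2, so the overall
    error probability is at most 2**-trials.
    """
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # certificate: A @ B != C for sure
    return True  # probably A @ B == C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]  # the correct product
D = [[19, 22], [43, 51]]  # one wrong entry
print(freivalds(A, B, C))  # True
print(freivalds(A, B, D))  # False, with overwhelming probability
```

This shows both benefits the abstract names: the verifier is simpler and asymptotically faster than any known deterministic check, at the price of a tunably small error probability.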
The Power of Vacillation in Language Learning
, 1992
"... Some extensions are considered of Gold's influential model of language learning by machine from positive data. Studied are criteria of successful learning featuring convergence in the limit to vacillation between several alternative correct grammars. The main theorem of this paper is that there ..."
Abstract

Cited by 48 (11 self)
 Add to MetaCart
Some extensions are considered of Gold's influential model of language learning by machine from positive data. Studied are criteria of successful learning featuring convergence in the limit to vacillation between several alternative correct grammars. The main theorem of this paper is that there are classes of languages that can be learned if convergence in the limit to up to (n+1) exactly correct grammars is allowed, but which cannot be learned if convergence in the limit is to no more than n grammars, where the no more than n grammars can each make finitely many mistakes. This contrasts sharply with results of Barzdin and Podnieks and, later, Case and Smith, for learnability from both positive and negative data. A subset principle from a 1980 paper of Angluin is extended to the vacillatory and other criteria of this paper. This principle provides a necessary condition for circumventing overgeneralization in learning from positive data. It is applied to prove another theorem to the eff...
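Gold's notion of identification in the limit, which this paper extends to vacillatory convergence, can be sketched on a toy class of languages. The class L_n = {0, 1, ..., n} and the learner below are my own illustration, not from the paper:

```python
def limit_learner(stream):
    """Gold-style learner for the toy class L_n = {0, 1, ..., n}.

    Reads positive examples one at a time and after each emits a
    conjecture (here: the largest element seen so far, naming L_n).
    On any text for L_n the conjectures converge: after the first
    occurrence of n the learner never changes its mind again.
    """
    conjecture = None
    for x in stream:
        if conjecture is None or x > conjecture:
            conjecture = x
        yield conjecture

# A text (positive presentation) for L_3 = {0, 1, 2, 3}:
text = [1, 0, 3, 2, 3, 1, 3]
print(list(limit_learner(text)))  # [1, 1, 3, 3, 3, 3, 3]
```

The paper's vacillatory criteria relax the final step: instead of settling on one grammar, the learner may forever oscillate among at most n+1 correct ones.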
Hypercomputation: computing more than the Turing machine
, 2002
"... In this report I provide an introduction to the burgeoning field of hypercomputation – the study of machines that can compute more than Turing machines. I take an extensive survey of many of the key concepts in the field, tying together the disparate ideas and presenting them in a structure which al ..."
Abstract

Cited by 35 (5 self)
 Add to MetaCart
In this report I provide an introduction to the burgeoning field of hypercomputation – the study of machines that can compute more than Turing machines. I take an extensive survey of many of the key concepts in the field, tying together the disparate ideas and presenting them in a structure that allows comparison of the many approaches and results. To this I add several new results and draw out some interesting consequences of hypercomputation for several different disciplines. I begin with a succinct introduction to the classical theory of computation and its place amongst some of the negative results of the 20th century. I then explain how the Church-Turing Thesis is commonly misunderstood and present new theses which better describe the possible limits on computability. Following this, I introduce ten different hypermachines (including three of my own) and discuss in some depth the manners in which they attain their power and the physical plausibility of each method. I then compare the powers of the different models using a device from recursion theory. Finally, I examine the implications of hypercomputation for mathematics, physics, computer science and philosophy. Perhaps the most important of these implications is that the negative mathematical results of Gödel, Turing and Chaitin each depend upon the nature of physics. This both weakens these results and provides strong links between mathematics and physics. I conclude that hypercomputation is of serious academic interest within many disciplines, opening new possibilities that were previously ignored because of long-held misconceptions about the limits of computation.
Using random sets as oracles
"... Let R be a notion of algorithmic randomness for individual subsets of N. We say B is a base for R randomness if there is a Z �T B such that Z is R random relative to B. We show that the bases for 1randomness are exactly the Ktrivial sets and discuss several consequences of this result. We also sho ..."
Abstract

Cited by 34 (15 self)
 Add to MetaCart
(Show Context)
Let R be a notion of algorithmic randomness for individual subsets of N. We say B is a base for R randomness if there is a Z ≥_T B such that Z is R-random relative to B. We show that the bases for 1-randomness are exactly the K-trivial sets and discuss several consequences of this result. We also show that the bases for computable randomness include every ∆^0_2 set that is not diagonally noncomputable, but no set of PA degree. As a consequence, we conclude that an n-c.e. set is a base for computable randomness iff it is Turing incomplete.
Algorithmic Entropy Of Sets
 M. Ferbus-Zanda and S. Grigorieff. Is randomness "native" to Computer Science? Logic in Computer Science Column, Bulletin of the EATCS, vol. 74
, 1976
"... In a previous paper a theory of program size formally identical to information theory was developed. The entropy of an individual finite object was defined to be the size in bits of the smallest program for calculating it. It was shown that this is \Gamma log 2 of the probability that the object is ..."
Abstract

Cited by 17 (5 self)
 Add to MetaCart
In a previous paper a theory of program size formally identical to information theory was developed. The entropy of an individual finite object was defined to be the size in bits of the smallest program for calculating it. It was shown that this is −log₂ of the probability that the object is obtained by means of a program whose successive bits are chosen by flipping an unbiased coin. Here a theory of the entropy of recursively enumerable sets of objects is proposed which includes the previous theory as the special case of sets having a single element. The primary concept in the generalized theory is the probability that a computing machine enumerates a given set when its program is manufactured by coin flipping. The entropy of a set is defined to be −log₂ of this probability. 1. Introduction In a classical paper on computability by probabilistic machines [1], de Leeuw et al. showed that if a machine with a random element can enumerate a specific set o...
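The identity entropy = −log₂(probability) from the abstract can be checked on a toy prefix-free machine. The machine and helper names below are hypothetical illustrations of mine, not Chaitin's universal machine:

```python
import math

def toy_machine(flips):
    """A toy prefix-free machine: it reads coin flips until the first 0
    and outputs the number of 1s seen before it.  The (self-delimiting)
    program for output k is the string 1^k 0, of length k + 1 bits."""
    k = 0
    for bit in flips:
        if bit == 0:
            return k
        k += 1
    return None  # ran out of flips: still waiting for input

def entropy(k):
    """H(k) = -log2 P(k), where P(k) = 2**-(k + 1) is the chance that
    fair coin flips spell out the program 1^k 0."""
    return -math.log2(2.0 ** -(k + 1))

print(toy_machine([1, 1, 1, 0]))  # 3
for k in range(4):
    print(k, entropy(k))  # the entropy equals the program length k + 1
```

For this machine, each output has exactly one program, so −log₂ of the coin-flipping probability coincides with the shortest program length, mirroring (in miniature) the relation the abstract states for the general theory.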
The Value of Information
 in Monotone Decision Problems, MIT Working Paper
, 2001
"... To the memory of Andrei Kolmogorov, In the 100th year since his birth. There appears to be a gap between usual interpretations of Godel Theorem and what is actually proven. Closing this gap does not seem obvious and involves complexity theory. (This is unrelated to, well studied before, complexity q ..."
Abstract

Cited by 13 (1 self)
 Add to MetaCart
To the memory of Andrei Kolmogorov, in the 100th year since his birth. There appears to be a gap between the usual interpretations of Gödel's Theorem and what is actually proven. Closing this gap does not seem obvious and involves complexity theory. (This is unrelated to the well-studied complexity quantifications of the usual Gödel effects.) Similar problems and answers apply to other unsolvability results for tasks where the required solutions are not unique, such as, e.g., non-recursive tilings. 1. Introduction. D. Hilbert asked if formal arithmetic can be consistently extended to a complete theory. The question was somewhat vague, since an obvious answer was "yes": just add to the axioms of Peano Arithmetic (PA) a maximal consistent set, which clearly exists albeit is hard to find. K. Gödel formalized this question as the existence among such extensions of recursively enumerable ones and gave it a
Error-Bounded Probabilistic Computations Between MA and AM
 In Proceedings of the 28th Mathematical Foundations of Computer Science
, 2002
"... We introduce the probabilistic class SBP which is defined in a BPPlike manner. This class emerges from BPP by keeping the promise of a probability gap but decreasing the probability limit from 1/2 to exponentially small values. We show ..."
Abstract

Cited by 7 (1 self)
 Add to MetaCart
We introduce the probabilistic class SBP which is defined in a BPP-like manner. This class emerges from BPP by keeping the promise of a probability gap but decreasing the probability limit from 1/2 to exponentially small values. We show
Quantum computers that can be simulated classically in polynomial time
 In: Proceedings of the Thirty-Third Annual ACM Symposium on Theory of Computing. ACM
, 2001
"... A model of quantum computation based on unitary matrix operations was introduced by Feynman and Deutsch. It has been asked whether the power of this model exceeds that of classical Turing machines. We show here that a signi cant class of these quantum computations can be simulated classically in p ..."
Abstract

Cited by 6 (0 self)
 Add to MetaCart
(Show Context)
A model of quantum computation based on unitary matrix operations was introduced by Feynman and Deutsch. It has been asked whether the power of this model exceeds that of classical Turing machines. We show here that a significant class of these quantum computations can be simulated classically in polynomial time. In particular we show that two-bit operations characterized by 4 × 4 matrices in which the sixteen entries obey a set of five polynomial relations can be composed according to certain rules to yield a class of circuits that can be simulated classically in polynomial time. This contrasts with the known universality of two-bit operations, and demonstrates that efficient quantum computation of restricted classes is reconcilable with the Polynomial Time Turing Hypothesis. In other words it is possible that quantum phenomena can be used in a scalable fashion to make computers but that they do not have superpolynomial speedups compared to Turing machines for any problem. The techniques introduced bring the quantum computational model within the realm of algebraic complexity theory. In a manner consistent with one view of quantum physics, the wave function is simulated deterministically, and randomization arises only in the course of making measurements. The results generalize the quantum model in that they do not require the matrices to be unitary. In a different direction these techniques also yield deterministic polynomial time algorithms for the decision and parity problems for certain classes of read-twice Boolean formulae. All our results are based on the use of gates that are defined in terms of their graph matching properties. 1. BACKGROUND The now classical theory of computational complexity is based on the computational model proposed by Turing [30], augmented in two ways: on the one hand random oper
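The abstract's closing observation, that the wave function can be simulated deterministically with randomness arising only at measurement, can be sketched for a toy two-qubit circuit. This is a generic state-vector simulation of my own, not the paper's graph-matching technique (which achieves polynomial time for restricted gate classes; the brute-force approach below is exponential in the number of qubits):

```python
import math
import random

# The wave function of 2 qubits is just a length-4 vector; gates are
# matrix-vector products.  Everything is deterministic until we measure.

def apply(matrix, state):
    return [sum(matrix[i][j] * state[j] for j in range(len(state)))
            for i in range(len(matrix))]

h = 1 / math.sqrt(2)
H0 = [[h, 0, h, 0],   # Hadamard on qubit 0 (the high-order bit)
      [0, h, 0, h],
      [h, 0, -h, 0],
      [0, h, 0, -h]]
CNOT = [[1, 0, 0, 0],  # control = qubit 0, target = qubit 1
        [0, 1, 0, 0],
        [0, 0, 0, 1],
        [0, 0, 1, 0]]

state = [1, 0, 0, 0]                    # |00>
state = apply(CNOT, apply(H0, state))   # Bell state (|00> + |11>)/sqrt(2)

probs = [abs(a) ** 2 for a in state]    # Born rule
outcome = random.choices(range(4), weights=probs)[0]  # only random step
print(probs)                    # approximately [0.5, 0, 0, 0.5]
print(format(outcome, "02b"))   # '00' or '11', each with probability 1/2
```

Note that nothing in the deterministic part requires the matrices to be unitary, echoing the abstract's remark that the results generalize beyond unitary gates.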
Relational properties expressible with one universal quantifier are testable
 Stochastic Algorithms: Foundations and Applications, 5th International Symposium, SAGA 2009
"... Abstract. In property testing a small, random sample of an object is taken and one wishes to distinguish with high probability between the case where it has a desired property and the case where it is far from having the property. Much of the recent work has focused on graphs. In the present paper t ..."
Abstract

Cited by 5 (5 self)
 Add to MetaCart
(Show Context)
In property testing a small, random sample of an object is taken and one wishes to distinguish with high probability between the case where it has a desired property and the case where it is far from having the property. Much of the recent work has focused on graphs. In the present paper three generalized models for testing relational structures are introduced and relationships between these variations are shown. Furthermore, the logical classification problem for testability is considered and, as the main result, it is shown that Ackermann’s class with equality is testable. Key words: property testing, logic
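The sampling idea the abstract describes can be sketched with a classical tester for sortedness of a sequence. This is a standard example from the property-testing literature, assuming distinct values; the code and function name are my own:

```python
import random

def sorted_tester(a, trials=50):
    """One-sided property tester for sortedness (distinct values assumed).

    For a random index i, binary-search for a[i]: in a sorted list the
    search must land exactly at index i.  A list that is eps-far from
    sorted fails a trial with probability roughly eps, so a constant
    number of trials uses only O(trials * log n) probes instead of
    reading all n entries."""
    n = len(a)
    for _ in range(trials):
        i = random.randrange(n)
        lo, hi, found = 0, n - 1, False
        while lo <= hi:
            mid = (lo + hi) // 2
            if a[mid] == a[i]:
                found = (mid == i)
                break
            elif a[mid] < a[i]:
                lo = mid + 1
            else:
                hi = mid - 1
        if not found:
            return False  # witness that the list is not sorted
    return True  # a truly sorted list is always accepted

print(sorted_tester(list(range(100))))        # True
print(sorted_tester(list(range(100))[::-1]))  # False, with overwhelming probability
```

The asymmetry in the guarantee (never reject a sorted input, reject far-from-sorted inputs with high probability) is exactly the distinguishing task the abstract describes; the paper studies which properties, defined logically over relational structures, admit such testers.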
Strong Determinism vs. Computability
 The Foundational Debate, Complexity and Constructivity in Mathematics and
, 1995
"... Are minds subject to laws of physics? Are the laws of physics computable? Are conscious thought processes computable? Currently there is little agreement as to what are the right answers to these questions. Penrose ([41], p. 644) goes one step further and asserts that: a radical new theory is indeed ..."
Abstract

Cited by 4 (1 self)
 Add to MetaCart
(Show Context)
Are minds subject to laws of physics? Are the laws of physics computable? Are conscious thought processes computable? Currently there is little agreement as to what are the right answers to these questions. Penrose ([41], p. 644) goes one step further and asserts that: a radical new theory is indeed needed, and I am suggesting, moreover, that this theory, when it is found, will be of an essentially noncomputational character. The aim of this paper is threefold: 1) to examine the incompatibility between the hypothesis of strong determinism and computability, 2) to give new examples of uncomputable physical laws, and 3) to discuss the relevance of Gödel’s Incompleteness Theorem in refuting the claim that an algorithmic theory—like strong AI—can provide an adequate theory of mind. Finally, we question the adequacy of the theory of computation to discuss physical laws and thought processes.