Results 1–10 of 32
A Case Study in Real-Time Parallel Computation: Correcting Algorithms
 Journal of Parallel and Distributed Computing
, 2001
"... A correcting algorithm is one that receives an endless stream of corrections to its initial input data and terminates when all the corrections received have been taken into account. We give a characterization of correcting algorithms based on the theory of dataaccumulating algorithms. In particular ..."
Abstract

Cited by 21 (19 self)
A correcting algorithm is one that receives an endless stream of corrections to its initial input data and terminates when all the corrections received have been taken into account. We give a characterization of correcting algorithms based on the theory of data-accumulating algorithms. In particular, it is shown that any correcting algorithm exhibits superunitary behavior in a parallel computation setting if and only if the static counterpart of that correcting algorithm manifests a strictly superunitary speedup. Since both classes of correcting and data-accumulating algorithms are included in the more general class of real-time algorithms, we show in fact that many problems from this class manifest superunitary behavior. Moreover, we give an example of a real-time parallel computation that pertains to neither of the two classes studied (namely, correcting and data-accumulating algorithms), but still manifests superunitary behavior. Because of the aforementioned results, the usual measures of performance for parallel algorithms (that is, speedup and efficiency) lose much of their ability to convey effectively the nature of the phenomenon taking place in the real-time case. We propose therefore a more expressive measure that captures all the relevant parameters of the computation. Our proposal is made in terms of a graphical representation. We state as an open problem the investigation of such a measure, including finding an analytical form for it.
From Heisenberg to Gödel via Chaitin
, 2008
"... In 1927 Heisenberg discovered that the “more precisely the position is determined, the less precisely the momentum is known in this instant, and vice versa”. Four years later Gödel showed that a finitely specified, consistent formal system which is large enough to include arithmetic is incomplete. A ..."
Abstract

Cited by 11 (9 self)
In 1927 Heisenberg discovered that the “more precisely the position is determined, the less precisely the momentum is known in this instant, and vice versa”. Four years later Gödel showed that a finitely specified, consistent formal system which is large enough to include arithmetic is incomplete. As both results express some kind of impossibility it is natural to ask whether there is any relation between them, and, indeed, this question has been repeatedly asked for a long time. The main interest seems to have been in possible implications of incompleteness to physics. In this note we will take interest in the converse implication and will offer a positive answer to the question: Does uncertainty imply incompleteness? We will show that algorithmic randomness is equivalent to a “formal uncertainty principle” which implies Chaitin’s information-theoretic incompleteness. We also show that the derived uncertainty relation, for many computers, is physical. This fact supports the conjecture that uncertainty implies randomness not only in mathematics, but also in physics.
Is Complexity a Source of Incompleteness?
, 2004
"... ..."
Mathematical proofs at a crossroad
 Theory Is Forever, Lecture Notes in Comput. Sci. 3113
, 2004
"... Abstract. For more than 2000 years, from Pythagoras and Euclid to Hilbert and Bourbaki, mathematical proofs were essentially based on axiomaticdeductive reasoning. In the last decades, the increasing length and complexity of many mathematical proofs led to the expansion of some empirical, experimen ..."
Abstract

Cited by 7 (7 self)
For more than 2000 years, from Pythagoras and Euclid to Hilbert and Bourbaki, mathematical proofs were essentially based on axiomatic-deductive reasoning. In the last decades, the increasing length and complexity of many mathematical proofs led to the expansion of some empirical, experimental, psychological and social aspects, yesterday only marginal, but now changing radically the very essence of proof. In this paper, we try to organize this evolution, to distinguish its different steps and aspects, and to evaluate its advantages and shortcomings. Axiomatic-deductive proofs are not a posteriori work, a luxury we can marginalize; nor are computer-assisted proofs bad mathematics. There is hope for integration!
Complexity, Deconstruction, and Relativism
 Theory, Culture & Society
, 2005
Cited by 7 (2 self)
Entropic Principles
, 2008
"... We discuss the evolution of radiation and BekensteinHawking entropies in expanding isotropic universes. We establish a general relation which shows why it is inevitable that there is currently a huge difference in the numerical values of these two entropies. Some anthropic constraints on their valu ..."
Abstract

Cited by 2 (0 self)
We discuss the evolution of radiation and Bekenstein-Hawking entropies in expanding isotropic universes. We establish a general relation which shows why it is inevitable that there is currently a huge difference in the numerical values of these two entropies. Some anthropic constraints on their values are given, and other aspects of the cosmological ‘entropy gap’ problem are discussed. The coincidence of the classical and quantum entropies for black holes with Hawking lifetime equal to the age of the universe, and hence of radius equal to the proton size, is shown to be identical to the condition that we observe the universe at the main-sequence lifetime.
Library of Congress Cataloging-in-Publication Data
"... Institute, a federally funded research and development center supported by the ..."
Abstract
Institute, a federally funded research and development center supported by the
A Case Study in Real-Time Parallel Computation: Correcting Algorithms
"... Abstract A correcting algorithm is one that receives an endless stream of corrections to its initial input data and terminates when all the corrections received have been taken into account. We give a characterization of correcting algorithms based on the theory of dataaccumulating algorithms. In p ..."
Abstract
A correcting algorithm is one that receives an endless stream of corrections to its initial input data and terminates when all the corrections received have been taken into account. We give a characterization of correcting algorithms based on the theory of data-accumulating algorithms. In particular, it is shown that any correcting algorithm exhibits superunitary behavior in a parallel computation setting if and only if the static counterpart of that correcting algorithm manifests a strictly superunitary speedup. Since both classes of correcting and data-accumulating algorithms are included in the more general class of real-time algorithms, we show in fact that many problems from this class manifest superunitary behavior. Moreover, we give an example of a real-time parallel computation that pertains to neither of the two classes studied (namely, correcting and data-accumulating algorithms), but still manifests superunitary behavior. Because of the aforementioned results, the usual measures of performance for parallel algorithms (that is, speedup and efficiency) lose much of their ability to convey effectively the nature of the phenomenon taking place in the real-time case. We propose therefore a more expressive measure that captures all the relevant parameters of the computation. Our proposal is made in terms of a graphical representation. We state as an open problem the investigation of such a measure, including finding an analytical form for it.

1 Introduction

A well-known result concerning the limits of parallel computation, called the speedup theorem, states that when two or more processors are applied to solve a given computational problem, the decrease in running time is at most proportional to the increase in the number of processors [4, 13].
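The speedup theorem and the performance measures named in the abstract can be stated compactly. The symbols below (T_1, T_p, S(p), E(p)) are standard notation introduced here for illustration, not taken from the paper itself:

```latex
% T_1: running time on one processor; T_p: running time on p processors.
S(p) = \frac{T_1}{T_p} \le p \qquad \text{(speedup theorem)}
\qquad\qquad
E(p) = \frac{S(p)}{p} \le 1 \qquad \text{(efficiency)}
```

On this reading, the superunitary behavior discussed in the abstract is precisely the regime S(p) > p (equivalently E(p) > 1), which the speedup theorem rules out for conventional static computations but which, the authors argue, arises naturally in the real-time setting.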
Is complexity a source of incompleteness?
, 2004
"... In this paper we prove Chaitin’s “heuristic principle, ” the theorems of a finitelyspecified theory cannot be significantly more complex than the theory itself, for an appropriate measure of complexity. We show that the measure is invariant under the change of the Gödel numbering. For this measure, ..."
Abstract
In this paper we prove Chaitin’s “heuristic principle”, the theorems of a finitely-specified theory cannot be significantly more complex than the theory itself, for an appropriate measure of complexity. We show that the measure is invariant under the change of the Gödel numbering. For this measure, the theorems of a finitely-specified, sound, consistent theory strong enough to formalize arithmetic which is arithmetically sound (like Zermelo–Fraenkel set theory with choice or Peano Arithmetic) have bounded complexity, hence every sentence of the theory which is significantly more complex than the theory is unprovable. Previous results showing that incompleteness is not accidental, but ubiquitous are here reinforced in probabilistic terms: the probability that a true sentence of length n is provable in the theory tends to zero when n tends to infinity, while the probability that a sentence of length n is true is strictly positive.
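Schematically, the bounded-complexity result this abstract describes can be written as follows; here C stands for the (unspecified in this snippet) complexity measure, T for the theory, and c_T for a constant depending on T — notation chosen for illustration:

```latex
\exists\, c_T \ \forall x:\quad T \vdash x \;\Longrightarrow\; C(x) \le c_T,
\qquad\text{hence}\qquad
C(x) > c_T \;\Longrightarrow\; T \nvdash x .
```

The probabilistic reinforcement then says that, among true sentences of length n, the fraction provable in T tends to 0 as n tends to infinity, even though the fraction of true sentences stays strictly positive.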