Results 1–10 of 10
A Theory of Program Size Formally Identical to Information Theory
, 1975
"... A new definition of programsize complexity is made. H(A;B=C;D) is defined to be the size in bits of the shortest selfdelimiting program for calculating strings A and B if one is given a minimalsize selfdelimiting program for calculating strings C and D. This differs from previous definitions: (1) ..."
Abstract

Cited by 405 (17 self)
A new definition of program-size complexity is made. H(A;B/C;D) is defined to be the size in bits of the shortest self-delimiting program for calculating strings A and B if one is given a minimal-size self-delimiting program for calculating strings C and D. This differs from previous definitions: (1) programs are required to be self-delimiting, i.e. no program is a prefix of another, and (2) instead of being given C and D directly, one is given a program for calculating them that is minimal in size. Unlike previous definitions, this one has precisely the formal properties of the entropy concept of information theory. For example, H(A;B) = H(A) + H(B/A) + O(1). Also, if a program of length k is assigned measure 2^(-k), then H(A) = -log_2 (the probability that the standard universal computer will calculate A) + O(1). Key Words and Phrases: computational complexity, entropy, information theory, instantaneous code, Kraft inequality, minimal program, probab...
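The self-delimiting requirement in this abstract (no program is a prefix of another) is exactly the prefix-free property of instantaneous codes, which forces the Kraft inequality that underlies the measure 2^(-k). A small sketch, not taken from the paper, with illustrative helper names:

```python
# Sketch (not from the paper): a self-delimiting (prefix-free) set of
# "programs" and the Kraft inequality it must satisfy.

def is_prefix_free(codewords):
    """True if no codeword is a proper prefix of another."""
    return not any(
        a != b and b.startswith(a) for a in codewords for b in codewords
    )

def kraft_sum(codewords):
    """Sum of 2^(-len(w)) over all codewords; <= 1 for any prefix-free set."""
    return sum(2.0 ** -len(w) for w in codewords)

programs = ["0", "10", "110", "111"]      # ends are detectable: prefix-free
assert is_prefix_free(programs)
assert kraft_sum(programs) <= 1.0         # Kraft inequality holds

assert not is_prefix_free(["0", "01"])    # "0" is a prefix of "01"
```

It is this inequality that lets program lengths be assigned the probability measure 2^(-k) used in the abstract's final formula.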
Finite differencing of computable expressions
, 1980
"... Finite differencing is a program optimization method that generalizes strength reduction, and provides an efficient implementation for a host of program transformations including "iterator inversion." Finite differencing is formally specified in terms of more basic transformations shown to ..."
Abstract

Cited by 133 (6 self)
Finite differencing is a program optimization method that generalizes strength reduction and provides an efficient implementation for a host of program transformations, including "iterator inversion." Finite differencing is formally specified in terms of more basic transformations shown to preserve program semantics. Estimates of the speedup that the technique yields are given. A full illustrative example of algorithm derivation is presented.
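The strength-reduction idea that finite differencing generalizes can be shown in miniature: instead of recomputing an expression from scratch on each loop iteration, maintain its value incrementally from the previous iteration. A minimal sketch with illustrative names, not an example from the paper:

```python
# Sketch of the strength-reduction special case of finite differencing:
# replace per-iteration recomputation of i*i with an incremental update.

def squares_naive(n):
    """Recomputes i*i on every iteration (one multiplication each)."""
    return [i * i for i in range(n)]

def squares_differenced(n):
    """Maintains sq == i*i incrementally: (i+1)^2 = i^2 + 2i + 1."""
    out, sq = [], 0
    for i in range(n):
        out.append(sq)
        sq += 2 * i + 1   # additive update instead of a fresh multiplication
    return out

assert squares_naive(10) == squares_differenced(10)
```

Full finite differencing applies the same update-instead-of-recompute discipline to set-valued expressions, which is what makes transformations like iterator inversion efficient.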
Information-theoretic Limitations of Formal Systems
 JOURNAL OF THE ACM
, 1974
"... An attempt is made to apply informationtheoretic computational complexity to metamathematics. The paper studies the number of bits of instructions that must be a given to a computer for it to perform finite and infinite tasks, and also the amount of time that it takes the computer to perform these ..."
Abstract

Cited by 50 (7 self)
An attempt is made to apply information-theoretic computational complexity to metamathematics. The paper studies the number of bits of instructions that must be given to a computer for it to perform finite and infinite tasks, and also the amount of time that it takes the computer to perform these tasks. This is applied to measuring the difficulty of proving a given set of theorems, in terms of the number of bits of axioms that are assumed and the size of the proofs needed to deduce the theorems from the axioms.
Experience with the SETL optimizer
 ACM Transactions on Programming Languages and Systems
, 1983
"... The structure of an existing optimizer for the very highlevel, set theoretically oriented programming language SETL is described, and its capabilities are illustrated. The use of novel techniques (supported by stateoftheart interprocedural program analysis methods) enables the optimizer to accom ..."
Abstract

Cited by 29 (0 self)
The structure of an existing optimizer for the very high-level, set-theoretically oriented programming language SETL is described, and its capabilities are illustrated. The use of novel techniques (supported by state-of-the-art interprocedural program analysis methods) enables the optimizer to accomplish various sophisticated optimizations, the most significant of which are the automatic selection of data representations and the systematic elimination of superfluous copying operations. These techniques allow quite sophisticated data-structure choices to be made automatically. Categories and Subject Descriptors: D.3.2 [Programming Languages]: Language Classifications: very high-level languages; SETL; D.3.4 [Programming Languages]: Processors: compilers; optimization; I.2.2 [Artificial Intelligence]: Automatic Programming: automatic analysis of algorithms; program modification; program transformation
Program derivation with verified transformations  A case study
 Comm. Pure Appl. Math
, 1995
"... ..."
The formal reconstruction and speedup of the linear time fragment of Willard’s relational calculus subset
 In Proceedings of the IFIP TC 2 WG 2.1 international workshop on Algorithmic languages and calculi
, 1997
"... ..."
unknown title
"... Abstract: This paper reviews algorithmic information theory, which is an attempt to apply informationtheoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probab ..."
Abstract
Abstract: This paper reviews algorithmic information theory, which is an attempt to apply information-theoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that a program whose bits are chosen by coin flipping produces a given output. During the past few years the definitions of algorithmic information theory have been reformulated. The basic features of the new formalism are presented here and certain results of R. M. Solovay are reported. Historical introduction: To our knowledge, the first publication of the ideas of algorithmic information theory was the description of R. J. Solomonoff's ideas given in 1962 by M. L. Minsky in his paper, "Problems of formulation for artificial intelligence" [1]: "Consider a slightly different form of inductive inference problem. Suppose that we are given a very long 'data' sequence of symbols; the problem is to make a prediction about the future of the sequence. This is a ...
at IBM
"... This paper traces the evolution of IBM RlSC stringent realtime response requirements, the architecture from its origins in the 1970s at the performance target was 12 million instructions per IBM Thomas J. Watson Research Center to the second (MIPS) [ 11. This specialized application required prese ..."
Abstract
This paper traces the evolution of IBM RISC architecture from its origins in the 1970s at the IBM Thomas J. Watson Research Center to the present-day IBM RISC System/6000* computer. The acronym RISC, for Reduced Instruction-Set Computer, is used in this paper to describe the 801 and subsequent architectures. However, RISC in this context does not strictly imply a reduced number of instructions, but rather a set of primitives carefully chosen to exploit the fastest component of the storage hierarchy and provide instructions that can be generated easily by compilers. We describe how these goals were embodied in the 801 architecture and how they have since evolved on the basis of experience and new technologies. The effect of this evolution is illustrated with the results of several benchmark tests of CPU performance.

... stringent real-time response requirements, the performance target was 12 million instructions per second (MIPS) [1]. This specialized application required a very fast processor, but did not have to perform complicated instructions and had little demand for floating-point calculations. Other than moving data between registers and memory, the machine had to be able to add, combine fields extracted from several registers, perform branches, and carry out input/output operations. When the telephone project was terminated in 1975, the machine itself had not been built, but the design had progressed to the point where it seemed to be an excellent basis for a general-purpose, high-performance miniprocessor. The attractiveness of the processor design stemmed from projections that it would be able to compute at high speed relative to its cost in a variety of application areas. The most important features of the telephone ...