Results 1–10 of 11
A Theory of Program Size Formally Identical to Information Theory
, 1975
Abstract

Cited by 333 (16 self)
A new definition of program-size complexity is made. H(A,B/C,D) is defined to be the size in bits of the shortest self-delimiting program for calculating strings A and B if one is given a minimal-size self-delimiting program for calculating strings C and D. This differs from previous definitions: (1) programs are required to be self-delimiting, i.e. no program is a prefix of another, and (2) instead of being given C and D directly, one is given a program for calculating them that is minimal in size. Unlike previous definitions, this one has precisely the formal properties of the entropy concept of information theory. For example, H(A,B) = H(A) + H(B/A) + O(1). Also, if a program of length k is assigned measure 2^{-k}, then H(A) = -log2 (the probability that the standard universal computer will calculate A) + O(1). Key Words and Phrases: computational complexity, entropy, information theory, instantaneous code, Kraft inequality, minimal program, probab...
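The self-delimiting requirement is what makes the 2^{-k} measure well behaved: by the Kraft inequality, the codeword lengths of any prefix-free set satisfy sum 2^{-k} <= 1. A minimal sketch of checking both properties (toy codewords and function names, purely illustrative):

```python
# Check the Kraft inequality for a prefix-free (self-delimiting) set of
# codewords: no codeword is a prefix of another, and sum(2^-len) <= 1.

def is_prefix_free(codes):
    """True if no codeword is a proper prefix of another."""
    return not any(a != b and b.startswith(a) for a in codes for b in codes)

def kraft_sum(codes):
    """Sum of 2^-len(c) over all codewords."""
    return sum(2.0 ** -len(c) for c in codes)

codes = ["0", "10", "110", "111"]   # a complete prefix-free code
assert is_prefix_free(codes)
assert kraft_sum(codes) == 1.0      # equality: the code is complete
assert not is_prefix_free(["0", "01"])   # "0" is a prefix of "01"
```

A complete code (Kraft sum exactly 1) corresponds to the measure in the abstract summing to a well-defined probability.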
Algorithmic information theory
 IBM JOURNAL OF RESEARCH AND DEVELOPMENT
, 1977
Abstract

Cited by 320 (19 self)
This paper reviews algorithmic information theory, which is an attempt to apply information-theoretic and probabilistic ideas to recursive function theory. Typical concerns in this approach are, for example, the number of bits of information required to specify an algorithm, or the probability that a program whose bits are chosen by coin flipping produces a given output. During the past few years the definitions of algorithmic information theory have been reformulated. The basic features of the new formalism are presented here and certain results of R. M. Solovay are reported.
Improvements to Graph Coloring Register Allocation
 ACM Transactions on Programming Languages and Systems
, 1994
Abstract

Cited by 173 (8 self)
This paper describes both the techniques themselves and our experience building and using register allocators that incorporate them. It provides a detailed description of optimistic coloring and rematerialization. It presents experimental data to show the performance of several versions of the register allocator on a suite of FORTRAN programs. It discusses several insights that we discovered only after repeated implementation of these allocators. Categories and Subject Descriptors: D.3.4 [Programming Languages]: Processors – compilers, optimization. General Terms: Languages. Additional Key Words and Phrases: Register allocation, code generation, graph coloring. 1. INTRODUCTION The relationship between run-time performance and effective use of a machine's register set is well understood. In a compiler, the process of deciding which values to keep in registers at each point in the generated code is called register allocation. Value
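As a rough illustration of the optimistic coloring the paper discusses, the sketch below implements a simplify/select loop in Python. The graph representation and names are assumptions for illustration, not the paper's implementation: high-degree nodes are pushed on the stack optimistically rather than spilled immediately, and a spill happens only when no color is actually free at select time.

```python
# Minimal sketch of optimistic graph coloring for register allocation.
# `graph` maps each node to its set of interference neighbors; `k` is the
# number of available registers. Names are illustrative.

def optimistic_color(graph, k):
    degrees = {n: len(graph[n]) for n in graph}
    stack, work = [], set(graph)
    while work:
        # Prefer a node of degree < k; otherwise push a high-degree node
        # optimistically instead of marking it spilled right away.
        node = next((n for n in work if degrees[n] < k), None)
        if node is None:
            node = max(work, key=lambda n: degrees[n])  # spill candidate
        work.remove(node)
        stack.append(node)
        for m in graph[node]:
            if m in work:
                degrees[m] -= 1
    colors, spilled = {}, []
    while stack:
        node = stack.pop()
        used = {colors[m] for m in graph[node] if m in colors}
        free = [c for c in range(k) if c not in used]
        if free:
            colors[node] = free[0]
        else:
            spilled.append(node)   # only now is a real spill needed
    return colors, spilled

# A 4-cycle is 2-colorable even though every node has degree 2, so the
# pessimistic heuristic would spill here while the optimistic one does not:
g = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"c", "a"}}
colors, spilled = optimistic_color(g, 2)
assert spilled == []
assert all(colors[n] != colors[m] for n in g for m in g[n])
```

The 4-cycle example is exactly the situation the abstract alludes to: every node has "high" degree, yet colors exist once assignment is deferred to the select phase.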
Register Allocation via Graph Coloring
, 1992
Abstract

Cited by 135 (4 self)
Chaitin and his colleagues at IBM in Yorktown Heights built the first global register allocator based on graph coloring. This thesis describes a series of improvements and extensions to the Yorktown allocator. There are four primary results:
Optimistic coloring: Chaitin's coloring heuristic pessimistically assumes any node of high degree will not be colored and must therefore be spilled. By optimistically assuming that nodes of high degree will receive colors, I often achieve lower spill costs and faster code; my results are never worse.
Coloring pairs: The pessimism of Chaitin's coloring heuristic is emphasized when trying to color register pairs. My heuristic handles pairs as a natural consequence of its optimism.
Rematerialization: Chaitin et al. introduced the idea of rematerialization to avoid the expense of spilling and reloading certain simple values. By propagating rematerialization information around the SSA graph using a simple variation of Wegman and Zadeck's constant propag...
Decision Procedures for Multisets with Cardinality Constraints
Abstract

Cited by 11 (7 self)
Abstract. Applications in software verification and interactive theorem proving often involve reasoning about sets of objects. Cardinality constraints on such collections also arise in these applications. Multisets arise in these applications for analogous reasons as sets: abstracting the content of a linked data structure with duplicate elements leads to multisets. Interactive theorem provers such as Isabelle specify theories of multisets and prove a number of theorems about them to enable their use in interactive verification. However, the decidability and complexity of constraints on multisets is much less understood than for constraints on sets. The first contribution of this paper is a polynomial-space algorithm for deciding expressive quantifier-free constraints on multisets with cardinality operators. Our decision procedure reduces in polynomial time constraints on multisets to constraints in an extension of quantifier-free Presburger arithmetic with certain “unbounded sum” expressions. We prove bounds on solutions of the resulting constraints and describe a polynomial-space decision procedure for these constraints. The second contribution of this paper is a proof that adding quantifiers to a constraint language containing subset and cardinality operators yields undecidable constraints. The result follows by reduction from Hilbert's 10th problem.
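As a concrete (if trivial) illustration of the cardinality identities such constraints express, Python's collections.Counter can model finite multisets; the helper `card` below is hypothetical and only demonstrates the semantics, not the paper's symbolic decision procedure:

```python
# Toy illustration of multiset operations with cardinality, using
# collections.Counter as the multiset representation.
from collections import Counter

def card(m):                 # |M| = sum of multiplicities
    return sum(m.values())

A = Counter("abbc")          # {a:1, b:2, c:1}
B = Counter("bcd")           # {b:1, c:1, d:1}

union_max = A | B            # pointwise max of multiplicities
inter_min = A & B            # pointwise min of multiplicities
disjoint_sum = A + B         # pointwise sum (multiset sum)

# Two cardinality identities that hold for all multisets:
assert card(disjoint_sum) == card(A) + card(B)
assert card(union_max) + card(inter_min) == card(A) + card(B)
```

The decision procedure in the paper reasons about such identities symbolically, for unknown multisets, which is where the reduction to Presburger arithmetic with sums comes in.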
On Linear Arithmetic with Stars
Abstract

Cited by 8 (6 self)
Abstract. We consider an extension of integer linear arithmetic with a star operator that takes the closure under vector addition of the solution set of a linear arithmetic subformula. We show that the satisfiability problem for this language is in NP (and therefore NP-complete). Our proof uses a generalization of a recent result on sparse solutions of integer linear programming problems. We present two consequences of our result. The first one is an optimal decision procedure for a logic of sets, multisets, and cardinalities that has applications in verification, interactive theorem proving, and description logics. The second is NP-completeness of the reachability problem for a class of “homogeneous” transition systems whose transitions are defined using integer linear arithmetic formulas.
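To make the star operator concrete, the toy sketch below brute-forces membership in the additive closure S* of a small solution set by exhaustive search over bounded vectors. This only illustrates the semantics; it is not the paper's NP decision procedure, and all names are hypothetical:

```python
# t is in S* iff t is a finite sum of vectors from S (the empty sum, the
# zero vector, is always included). Brute force over nonnegative vectors
# bounded componentwise by t.

def star_member(S, t):
    """Is target vector t a sum of (repeatable) vectors from S?"""
    zero = (0,) * len(t)
    reachable = {zero}
    frontier = [zero]
    while frontier:
        v = frontier.pop()
        for s in S:
            w = tuple(a + b for a, b in zip(v, s))
            if all(x <= y for x, y in zip(w, t)) and w not in reachable:
                reachable.add(w)
                frontier.append(w)
    return tuple(t) in reachable

# S = solutions we pretend came from a linear subformula, e.g. x = 2y:
S = [(2, 1)]
assert star_member(S, (6, 3))       # (2,1) + (2,1) + (2,1)
assert not star_member(S, (5, 3))   # not a multiple of (2,1)
```

This brute force is exponential in general; the point of the paper is that sparse-solution bounds make the real problem decidable within NP.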
Serializing Parallel Programs by Removing Redundant Computation
 Master's thesis, MIT
, 1994
Abstract

Cited by 8 (0 self)
Programs often exhibit more parallelism than is actually available in the target architecture. This thesis introduces and evaluates three methods (loop unrolling, loop common expression elimination, and loop differencing) for automatically transforming a parallel algorithm into a less parallel one that takes advantage of only the parallelism available at run time. The resulting program performs less computation to produce its results; the running time is not just improved via second-order effects such as improving use of the memory hierarchy or reducing overhead (such optimizations can further improve performance). The asymptotic complexity is not usually reduced, but the constant factors can be lowered significantly, often by a factor of 4 or more. The basis for these methods is the detection of loop common expressions, or common subexpressions in different iterations of a parallel loop. The loop differencing method also permits computation of just the change in an expression from iteration to iteration.
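The differencing idea can be illustrated on a sliding-window sum, where consecutive iterations share most of their work and each new value differs from the previous one by a single addition and subtraction. A minimal sketch (function names hypothetical, not from the thesis):

```python
# Instead of recomputing an expression from scratch each iteration,
# compute only its change. Here a window sum over a list is updated in
# O(1) per step rather than O(w).

def window_sums_naive(xs, w):
    return [sum(xs[i:i + w]) for i in range(len(xs) - w + 1)]

def window_sums_differenced(xs, w):
    s = sum(xs[:w])                      # first window, computed once
    out = [s]
    for i in range(w, len(xs)):
        s += xs[i] - xs[i - w]           # only the change between windows
        out.append(s)
    return out

xs = [3, 1, 4, 1, 5, 9, 2, 6]
assert window_sums_naive(xs, 3) == window_sums_differenced(xs, 3)
```

Both versions have the same asymptotic output size, but the differenced loop does a constant amount of arithmetic per iteration, matching the abstract's claim that constant factors, not asymptotic complexity, are what improve.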
Using Graph Coloring in an Algebraic Compiler
 Acta Informatica
, 1996
Abstract

Cited by 1 (1 self)
An algebraic compiler allows incremental development of the source program and builds its target image by composing the target images of the program components. In this paper we describe the general structure of an algebraic compiler, focusing on compositional code generation. We show that the mathematical model for register management by an algebraic compiler is a graph coloring problem in which an optimally colored graph is obtained by composing optimally colored subgraphs. More precisely, we define the clique-composition of graphs G1 and G2 as the graph obtained by joining all the vertices in a clique in G1 with all the vertices in a clique in G2, and show that optimal register management by an algebraic compiler is achieved by performing clique-composition operations. Thus, an algebraic compiler automatically provides adequate clique separation of the global register management graph. We present a linear-time algorithm that takes as input optimally colored graphs G1 and G2 and...
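A minimal sketch of the clique-composition operation itself, assuming graphs stored as adjacency dicts with disjoint vertex names (an illustrative reconstruction, not the paper's linear-time coloring algorithm):

```python
# Clique-composition: join every vertex of a chosen clique in g1 to every
# vertex of a chosen clique in g2. Graphs are adjacency dicts mapping a
# vertex to its set of neighbors; vertex names are assumed disjoint.

def clique_compose(g1, clique1, g2, clique2):
    g = {v: set(nbrs) for v, nbrs in list(g1.items()) + list(g2.items())}
    for u in clique1:
        for v in clique2:
            g[u].add(v)
            g[v].add(u)
    return g

g1 = {"a": {"b"}, "b": {"a"}}                 # edge a-b (a 2-clique)
g2 = {"x": {"y"}, "y": {"x"}}                 # edge x-y (a 2-clique)
g = clique_compose(g1, ["a", "b"], g2, ["x", "y"])
# a, b, x, y now form a 4-clique in the composed graph:
assert all(u in g[v] for u in "abxy" for v in "abxy" if u != v)
```

Composing along cliques is what lets colorings of the parts be merged: the joined cliques pin down which colors must differ across the boundary.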
A LANGUAGE DESIGN FOR VECTOR MACHINES
, 1975
Abstract
This paper deals with a programming language under development at NASA's Langley Research Center for the CDC STAR-100. The design goals for the language are that it be basic in design and able to be extended as deemed necessary to serve the user community, capable of the expression of efficient algorithms by forcing the user to make the maximum use of the specialized hardware design, and easy to implement so that the language and compiler could be developed with a minimum of effort. The key to the language was in choosing the basic data types and data structures. Scalars, vectors, and strings are available data types in the language. Each basic data type has an associated set of operators which consist primarily of the operations provided by the hardware. The only data structure in the language is a restricted form of the array. Only vector and string data types may be stored in arrays, forcing the user to vectorize scalar data when it is necessary to structure it. This permits the most effective use of the machine for entities such as real arrays since the high level vector machine instructions may be used to deal with them directly. This paper is a result of work started under NASA Grant NGR 47102001 while the authors were in residence at ICASE, NASA Langley Research Center.
Hyperset Approach to Semistructured Databases and the Experimental Implementation of the Query Language Delta
Abstract
This thesis presents practical suggestions towards the implementation of the hyperset approach to semistructured databases and the associated query language ∆ (Delta). This work can be characterised as part of a top-down approach to semistructured databases, from theory to practice. Over the last decade the rise of the World Wide Web has led to the suggestion of a shift from structured relational databases to semistructured databases, which can query distributed and heterogeneous data having unfixed/non-rigid structure, in contrast to ordinary relational databases. In principle, the World Wide Web can be considered as a large distributed semistructured database where arbitrary hyperlinking between Web pages can be interpreted as graph edges (inspiring the synonym ‘Web-like’ for ‘semistructured’ databases, also called here WDB). In fact, most approaches to semistructured databases are based on graphs, whereas the hyperset approach presented here represents such graphs as systems of set equations. This is more than just a style of notation, but rather a style of thought, and the corresponding mathematical background leads to considerable differences with other approaches to
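As an illustrative sketch of the hyperset view, the code below represents a small graph as a system of set equations and decides equality of the denoted hypersets by a naive bisimulation fixpoint. The representation and names are assumptions for illustration, not the thesis's implementation of ∆:

```python
# A graph (e.g. Web pages and their links) is a system of set equations
# x = {children...}; two vertices denote the same hyperset iff they are
# bisimilar. Naive fixpoint: assume all pairs equal, refine until stable.

def bisimilar(eqs, a, b):
    """eqs maps each name to the list of names it links to (its elements)."""
    names = list(eqs)
    eq = {(x, y): True for x in names for y in names}
    changed = True
    while changed:
        changed = False
        for x in names:
            for y in names:
                if not eq[(x, y)]:
                    continue
                ok = (all(any(eq[(c, d)] for d in eqs[y]) for c in eqs[x]) and
                      all(any(eq[(c, d)] for c in eqs[x]) for d in eqs[y]))
                if not ok:
                    eq[(x, y)] = False
                    changed = True
    return eq[(a, b)]

# Two different cyclic systems both denote the same hyperset Omega = {Omega}:
eqs = {"p": ["q"], "q": ["p"], "r": ["r"]}
assert bisimilar(eqs, "p", "r")
```

This is the sense in which set equations are "more than just a style of notation": equality is semantic (bisimulation), not syntactic identity of the graphs.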