Results 1 - 9 of 9
Resource Control Graphs
Abstract

Cited by 3 (0 self)
Resource Control Graphs are an abstract representation of programs. Each state of the program is abstracted by its size, and each instruction is abstracted by the effect it has on the state size whenever it is executed. The abstractions of instruction effects are then used as weights on the arcs of a program’s Control Flow Graph. Termination is proved by finding decreases in a well-founded order on state size, in line with other termination analyses, resulting in proofs similar in spirit to those produced by Size Change Termination analysis. However, the size of states may also be used to measure the amount of space consumed by the program at each point of execution. This leads to an alternative characterisation of the Non-Size-Increasing programs, i.e. of programs that can compute without allocating new memory. This new tool is able to encompass several existing analyses, and similarities with other studies suggest that even more analyses might be expressible in this framework, giving hope for a generic tool for studying programs.
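The weighted-CFG idea summarized above can be sketched concretely. The toy model below is a hypothetical encoding, not the paper’s formalism: each arc of a control flow graph carries the net change an instruction makes to the state size, and a program is non-size-increasing when no cycle has strictly positive total weight, which a Bellman-Ford-style relaxation can detect.

```python
# Toy resource-control-graph check (hypothetical encoding, not the
# paper's formalism): nodes are program points, each arc carries the
# net effect of one instruction on the state size.
def has_positive_cycle(nodes, arcs):
    """Bellman-Ford on negated weights: a positive-weight cycle in
    `arcs` becomes a negative cycle, which Bellman-Ford detects."""
    dist = {n: 0 for n in nodes}  # virtual source reaching every node
    for _ in range(len(nodes) - 1):
        for u, v, w in arcs:
            if dist[u] - w < dist[v]:
                dist[v] = dist[u] - w
    # One more pass: any further improvement witnesses a negative
    # cycle in the negated graph, i.e. a positive cycle in the original.
    return any(dist[u] - w < dist[v] for u, v, w in arcs)

points = ["entry", "loop", "exit"]
# A loop that allocates one cell per iteration (+1 on the back arc):
growing = [("entry", "loop", 0), ("loop", "loop", +1), ("loop", "exit", 0)]
# A loop that frees what it allocates (net 0 on the back arc):
steady = [("entry", "loop", 0), ("loop", "loop", 0), ("loop", "exit", 0)]

print(has_positive_cycle(points, growing))  # True: not non-size-increasing
print(has_positive_cycle(points, steady))   # False: space stays bounded
```

The same weights support both readings mentioned in the abstract: summing them along a path bounds space consumption, while showing they decrease along every cycle gives a termination argument.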
Formalizing Turing Machines
Abstract

Cited by 2 (1 self)
We discuss the formalization, in the Matita Theorem Prover, of a few basic results on Turing Machines, up to the existence of a (certified) Universal Machine. The work is meant to be a preliminary step towards the creation of a formal repository in Complexity Theory, and is a small piece of our Reverse Complexity program, which aims at a comfortable, machine-independent axiomatization of the field.
A formal proof of Borodin-Trakhtenbrot’s gap theorem
 In Certified Programs and Proofs, Third International Conference, CPP 2013
Abstract

Cited by 1 (1 self)
In this paper, we discuss the formalization of the well-known Gap Theorem of Complexity Theory, asserting the existence of arbitrarily large gaps between complexity classes. The proof is done at an abstract, machine-independent level, and is particularly aimed at identifying the minimal set of assumptions required to prove the result (smaller than expected, actually). The work is part of a long-term reverse complexity program, whose goal is to obtain, via a reverse methodological approach, a formal treatment of Complexity Theory at a comfortable level of abstraction and logical rigor.
The Speedup Theorem in a Primitive Recursive Framework
"... Blum’s speedup theorem is a major theorem in computational complexity, showing the existence of computable functions for which no optimal program can exist: for any speedup function r there exists a function fr such that for any program computing fr we can find an alternative program computing it ..."
Abstract
Blum’s speedup theorem is a major theorem in computational complexity, showing the existence of computable functions for which no optimal program can exist: for any speedup function r there exists a function f_r such that for any program computing f_r we can find an alternative program computing it with the desired speedup r. The main corollary is that algorithmic problems do not, in general, have an inherent complexity. Traditional proofs of the speedup theorem make essential use of Kleene’s fixed point theorem to close a suitable diagonal argument. As a consequence, very little is known about its validity in subrecursive settings, where there is no universal machine and no fixed points. In this article we discuss an alternative, formal proof of the speedup theorem that allows us to spare the invocation of the fixed point theorem and sheds more light on the actual complexity of the function f_r.
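Spelled out in the standard machinery of Blum complexity measures (an assumed presentation; the abstract itself fixes no notation), the statement reads roughly as follows.

```latex
% One common phrasing of Blum's speedup theorem, assuming an acceptable
% programming system (phi_i) with associated Blum complexity measure (Phi_i).
% For every total computable speedup function r there is a computable f_r with:
\forall i \,\bigl(\varphi_i = f_r \implies
    \exists j \,\bigl(\varphi_j = f_r \;\wedge\;
        r(\Phi_j(n)) \le \Phi_i(n) \ \text{for almost all } n\bigr)\bigr)
```

The "almost all n" qualifier (all but finitely many inputs) is essential: on any finite set a lookup table makes the fast program trivially cheap.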
D.2.8 [Software Engineering]: Metrics—complexity measures,
Abstract
Functions defined by

f(0, y) = g(y)
f(x + 1, y) = h(x, y, f(j1(x), y), ..., f(jk(x), y))

where g, h, j1, ..., jk are primitive recursive and ji(x) ≤ x for i ∈ {1, ..., k}, are themselves primitive recursive. A similar remark holds for recursion with parameter substitution.
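The scheme in this snippet can be exercised directly. The sketch below is illustrative Python (the combinator and the Fibonacci instance are my own, not taken from the paper): it computes f bottom-up in a table, which makes visible why the side condition ji(x) ≤ x keeps the definition primitive recursive; every value needed at stage x + 1 is already available.

```python
# Course-of-values recursion: f(0, y) = g(y) and
# f(x+1, y) = h(x, y, f(j1(x), y), ..., f(jk(x), y)) with ji(x) <= x.
def scheme(g, h, js):
    """Build f from g, h and the index functions j1, ..., jk."""
    def f(x, y):
        table = [g(y)]  # table[i] holds f(i, y)
        for i in range(x):
            # Because j(i) <= i, every lookup hits an existing entry.
            table.append(h(i, y, *(table[j(i)] for j in js)))
        return table[x]
    return f

# Assumed instance (not from the snippet): Fibonacci, with k = 2,
# j1(x) = x and j2(x) = max(x - 1, 0); the x = 0 case is handled in h.
fib = scheme(
    g=lambda y: 0,                                  # f(0, y) = 0
    h=lambda x, y, a, b: a + b if x > 0 else 1,     # f(1, y) = 1
    js=[lambda x: x, lambda x: max(x - 1, 0)],
)
print([fib(n, 0) for n in range(8)])  # [0, 1, 1, 2, 3, 5, 8, 13]
```

Plain primitive recursion is the special case k = 1, j1(x) = x; the theorem cited here says allowing any earlier stages ji(x) ≤ x adds no power.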
Licensed under a Creative Commons Attribution License
Abstract
It is essentially always possible to find a program solving any decision problem a factor of 2 faster. This result is a classical theorem in computing, but also one of the most debated. The main ingredient of the typical proof of the linear speedup theorem is tape compression, where a fast machine is constructed with a tape alphabet or a number of tapes far greater than that of the original machine. In this paper, we prove that limiting Turing machines to a fixed alphabet and a fixed number of tapes rules out linear speedup. Specifically, we describe a language that can be recognized in linear time (e.g., 1.51n), and provide a proof, based on Kolmogorov complexity, that the computation cannot be sped up (e.g., below 1.49n). Without the tape and alphabet limitation, the linear speedup theorem does hold and yields machines of time complexity of the form (1 + ε)n for arbitrarily small ε > 0. Earlier results negating linear speedup in alternative models of computation have often been based on the existence of very efficient universal machines. In the vernacular of programming language theory: these models have very efficient self-interpreters. As the second contribution of this paper, we define a class, PICSTI, of computation models that exactly captures this property, and we disprove the Linear Speedup Theorem for every model in this class, thus generalizing all similar, model-specific proofs.
Generalizations of Rice’s Theorem, Applicable to Executable and Non-Executable Formalisms
Abstract
We formulate and prove two Rice-like theorems that characterize limitations on the nameability of properties within a given naming scheme for partial functions. Such a naming scheme can, but need not, be an executable formalism. A programming language is an example of an executable naming scheme, where the program text names the partial function it implements. Halting is an example of a property that is not nameable in that naming scheme. The proofs reveal requirements on the naming scheme needed to make the characterization work. Universal programming languages satisfy these requirements, but other formalisms can satisfy them as well. We present some non-universal programming languages and a non-executable specification language satisfying these requirements. Our theorems have Turing’s well-known Halting Theorem and Rice’s Theorem as special cases, obtained by applying them to a universal programming language or to Turing Machines as the naming scheme. Thus, our proofs separate the nature of the naming scheme (which can, but need not, coincide with computability) from the diagonal argument. This sheds further light on how far-reaching and simple the ‘diagonal’ argument is in itself.
Modularity of the Quasi-interpretations Synthesis and an Application to Higher-Order Programs
, 2007