Results 1–10 of 26
Resource Control for Synchronous Cooperative Threads
In CONCUR, volume 3170 of LNCS, 2004
Cited by 43 (5 self)
Abstract: We develop new methods to statically bound the resources needed for the execution of systems of concurrent, interactive threads.
Resource analysis by sup-interpretation
In FLOPS 2006, volume 3945 of LNCS, 2006
Cited by 26 (12 self)
Abstract: We propose a new method to control memory resources by static analysis. For this, we introduce the notion of sup-interpretation, which bounds from above the size of function outputs. This method applies to first-order functional programming with pattern matching. This work is related to quasi-interpretations, but we are now able to determine the resources of more algorithms, and performing an analysis is easier with this new tool.
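The core idea, assigning each function symbol a monotone bound on the size of its output in terms of the sizes of its inputs, can be illustrated with a toy sketch. This is only an illustration of the concept, not the paper's inference method; all names (`size`, `theta_append`) are made up for the example.

```python
# Toy illustration of the sup-interpretation idea: each function symbol f
# gets a monotone bound theta_f such that
#   |f(v1, ..., vn)| <= theta_f(|v1|, ..., |vn|)  for all values.

def size(v):
    """Size of a value: one unit per constructor (list cells and leaves)."""
    if isinstance(v, list):
        return 1 + sum(size(x) for x in v)
    return 1

# Program under analysis: list append.
def append(xs, ys):
    return xs + ys

# Candidate sup-interpretation for append: theta(x, y) = x + y.
def theta_append(x, y):
    return x + y

# Empirically check the bound on a few inputs. A real analysis proves it
# for all inputs by structural reasoning, not by testing.
samples = [([], []), ([1], [2, 3]), ([1, [2]], [3, 4, 5])]
ok = all(size(append(xs, ys)) <= theta_append(size(xs), size(ys))
         for xs, ys in samples)
print(ok)
```

Because `append` never duplicates its input cells, the additive bound holds; a function that copied its argument would need a larger polynomial, and a function with exponential output growth admits no polynomial sup-interpretation at all.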
Analysing memory resource bounds for low-level programs
In ISMM '08, 2008
Cited by 22 (1 self)
Abstract: Embedded systems are becoming more widely used, but these systems are often resource-constrained. Programming models for these systems should take resources such as stack and heap into formal consideration. In this paper, we show how memory resource bounds can be inferred for assembly-level programs. Our inference process captures the memory needs of each method in terms of the symbolic values of its parameters. For better precision, we infer path-sensitive information through a novel guarded expression format. Our current proposal relies on a Presburger solver to capture memory requirements symbolically, and to perform fixpoint analysis for loops and recursion. Apart from safety in memory adequacy, our proposal can provide estimates of memory costs for embedded devices and improve performance via fewer runtime checks against memory bounds.
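Path-sensitive bounds of this flavour can be pictured as a list of guard/bound pairs over a method's parameters. The sketch below is an assumed simplification for illustration only, not the paper's guarded expression format or its Presburger-based inference; the method and bound are hypothetical.

```python
# Toy sketch of a path-sensitive memory bound as a guarded expression:
# the method's memory need is given by the first guard that holds.
# Example (hypothetical method): mem(n) = 2n if n > 0, else 0.

def eval_guarded(guarded_bound, n):
    """Evaluate a list of (guard, bound) pairs at parameter value n."""
    for guard, bound in guarded_bound:
        if guard(n):
            return bound(n)
    raise ValueError("no guard matched")

mem_bound = [
    (lambda n: n > 0, lambda n: 2 * n),  # loop path: 2 cells per iteration
    (lambda n: True,  lambda n: 0),      # early-exit path: no allocation
]

print(eval_guarded(mem_bound, 5))   # 10
print(eval_guarded(mem_bound, -3))  # 0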
On the modularity of quasi-interpretations
Cited by 10 (9 self)
Abstract: Quasi-interpretations have proved useful for dealing with the complexity of programming languages with rewriting semantics. For instance, with the help of Product Path Orderings, we characterize all polynomial-time functions. Secondly, finding quasi-interpretations is decidable, so there are automatic methods to certify the complexity of programs and extract resource upper bounds. The current proposition deals with the question of the modularity of quasi-interpretations. We show that in the case of constructor-sharing and hierarchical unions, the existence of quasi-interpretations is not a modular property. However, we can still certify the complexity of programs. As a consequence, modularity helps to augment the intensionality of quasi-interpretations. Another consequence is that modularity improves quasi-interpretation synthesis.
A characterization of alternating log time by first-order functional programs
In LPAR 2006, volume 4246 of LNAI, 2006
Cited by 7 (5 self)
Abstract: We give an intrinsic characterization of the class of functions which are computable in NC^1, that is, by a uniform family of circuits of logarithmic depth and polynomial size. Recall that the class of functions in ALogTime, that is, computable in logarithmic time on an Alternating Turing Machine, is NC^1. Our characterization is in terms of first-order functional programming languages. We define measure tools called sup-interpretations, which allow us to give space and time bounds and also to capture a wide range of program schemas. This study is part of research on static analysis aimed at predicting program resources. It is related to the notion of quasi-interpretations and belongs to the implicit computational complexity line of research.
Analyzing the Implicit Computational Complexity of object-oriented programs
In Foundations of Software Technology and Theoretical Computer Science (FSTTCS), 2008, India
Cited by 4 (1 self)
Abstract: A sup-interpretation is a tool which provides upper bounds on the size of the values computed by the function symbols of a program. Sup-interpretations have proved useful for dealing with the complexity of first-order functional programs. This paper is an attempt to adapt the framework of sup-interpretations to a fragment of object-oriented programs, including loop and while constructs and methods with side effects. We give a criterion, called the brotherly criterion, which uses the notion of sup-interpretation to ensure that each brotherly program computes objects whose size is polynomially bounded by the input sizes. Moreover, we give some heuristics for computing the sup-interpretation of a given method.
Resource Control Graphs
Cited by 3 (0 self)
Abstract: Resource Control Graphs are an abstract representation of programs. Each state of the program is abstracted by its size, and each instruction is abstracted by the effect it has on the state size whenever it is executed. The abstractions of instruction effects are then used as weights on the arcs of a program's Control Flow Graph. Termination is proved by finding decreases in a well-founded order on state size, in line with other termination analyses, resulting in proofs similar in spirit to those produced by Size Change Termination analysis. However, the size of states may also be used to measure the amount of space consumed by the program at each point of execution. This leads to an alternative characterisation of the Non-Size-Increasing programs, i.e. of programs that can compute without allocating new memory. This new tool is able to encompass several existing analyses, and similarities with other studies suggest that even more analyses might be expressible in this framework, thus giving hope for a generic tool for studying programs.
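The non-size-increasing reading of such a weighted control-flow graph can be sketched concretely: if no cycle has strictly positive total weight, no loop causes net allocation. The sketch below uses a Bellman-Ford-style positive-cycle check as an assumed stand-in; it illustrates the idea, not the paper's actual construction.

```python
# Toy sketch: a control-flow graph whose arcs carry each instruction's
# effect on state size. A program is non-size-increasing if no cycle has
# strictly positive total weight (no net allocation per loop iteration).

def has_positive_cycle(nodes, arcs):
    """arcs: list of (src, dst, weight). True if some cycle has
    strictly positive total weight, detected by longest-path relaxation."""
    dist = {n: 0 for n in nodes}
    for _ in range(len(nodes) - 1):
        for u, v, w in arcs:
            if dist[u] + w > dist[v]:
                dist[v] = dist[u] + w
    # If any arc can still be relaxed, weights grow without bound: positive cycle.
    return any(dist[u] + w > dist[v] for u, v, w in arcs)

# Loop that allocates one cell then frees it: net weight 0 per iteration.
balanced = [("entry", "loop", 0), ("loop", "loop_body", +1),
            ("loop_body", "loop", -1), ("loop", "exit", 0)]
print(has_positive_cycle(["entry", "loop", "loop_body", "exit"], balanced))  # False

# Loop that allocates without freeing: positive cycle, not non-size-increasing.
leaky = [("entry", "loop", 0), ("loop", "loop", +1), ("loop", "exit", 0)]
print(has_positive_cycle(["entry", "loop", "exit"], leaky))  # True
```

The same graph supports the termination reading: instead of asking that cycle weights be non-positive on size, one asks for a strict decrease in a well-founded order along every cycle, as in Size Change Termination.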
The Racket Virtual Machine and Randomized Testing
Cited by 2 (0 self)
Abstract: We present a PLT Redex model of a substantial portion of the Racket virtual machine and bytecode verifier (formerly known as MzScheme), along with lessons learned in developing the model. Inspired by the “warts-and-all” approach of the VLISP project, in which Wand et al. produced a verified implementation of Scheme, our model reflects many of the realities of a production system. Our methodology departs from the VLISP project's in its approach to validation; instead of producing a proof of correctness, we explore the use of QuickCheck-style randomized testing, finding it a cheap and effective technique for discovering a variety of errors in the model, from simple typos to more fundamental design mistakes.
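The validation style used here, running a stated property on many randomly generated inputs rather than proving it, is easy to sketch. The following is a minimal illustrative tester, not the paper's Redex/Racket harness; real tools such as QuickCheck or Hypothesis add generator combinators, shrinking of counterexamples, and failure replay.

```python
import random

# A minimal QuickCheck-style property tester (illustrative sketch only).

def quickcheck(prop, gen, trials=200, seed=0):
    """Run prop on `trials` random inputs drawn from gen;
    return a counterexample if one is found, else None."""
    rng = random.Random(seed)
    for _ in range(trials):
        x = gen(rng)
        if not prop(x):
            return x  # counterexample found
    return None

# Generator: random integer lists of length 0..8.
gen_list = lambda rng: [rng.randint(-10, 10) for _ in range(rng.randint(0, 8))]

# True property: reversing twice is the identity (no counterexample exists).
print(quickcheck(lambda xs: list(reversed(list(reversed(xs)))) == xs, gen_list))

# False property: sorting equals reversing; random testing quickly finds
# a counterexample (any non-descending list of length >= 2 breaks it).
cex = quickcheck(lambda xs: sorted(xs) == list(reversed(xs)), gen_list)
print(cex)
```

The appeal, as the abstract notes, is cost: a property plus a generator is far cheaper to write than a correctness proof, yet still catches both shallow typos and deeper design errors.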
Memory consumption analysis of Java smart cards
Cited by 2 (0 self)
Abstract: Memory is a scarce resource in Java smart cards. Developers and card suppliers alike want to make sure, at compile or load-time, that a Java Card applet will not overflow memory when performing dynamic class instantiations. Although there are good solutions to the general problem, the challenge remains to produce a static analyser that is certified and could execute on-card. We provide a constraint-based algorithm which determines potential loops and (mutually) recursive methods. The algorithm operates on the bytecode of an applet and is written as a set of rules associating one or more constraints to each bytecode instruction. The rules are designed so that a certified analyser could be extracted from their proof of correctness. By keeping a clear separation between the rules dealing with the inter- and intra-procedural aspects of the analysis, we are able to reduce the space complexity of a previous algorithm.
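The inter-procedural part of such an analysis boils down to finding (mutually) recursive methods, i.e. methods lying on a cycle of the call graph, since those are where dynamic instantiation can repeat unboundedly. The sketch below shows that kernel on an abstract call graph; it is illustrative only, where the actual algorithm operates on Java Card bytecode via per-instruction constraints.

```python
# Toy sketch: flag methods that are (mutually) recursive, i.e. that can
# reach themselves through the call graph.

def recursive_methods(calls):
    """calls: dict mapping method name -> set of callee names.
    Returns the set of methods that lie on a call cycle."""
    result = set()
    for start in calls:
        # DFS from start; if we can reach start again, it is recursive.
        stack, seen = [start], set()
        while stack:
            m = stack.pop()
            for callee in calls.get(m, ()):
                if callee == start:
                    result.add(start)
                if callee not in seen:
                    seen.add(callee)
                    stack.append(callee)
    return result

# 'a' and 'b' call each other (mutual recursion); 'c' and 'd' do not recurse.
graph = {"a": {"b"}, "b": {"a"}, "c": {"d"}, "d": set()}
print(sorted(recursive_methods(graph)))  # ['a', 'b']
```

An applet whose instantiation sites all lie outside such cycles allocates a statically boundable amount of memory, which is the safety property the analyser targets.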