Results 1–10 of 12
Safe measurement-based WCET estimation
2005
Cited by 14 (0 self)
This paper explores the issues to be addressed to provide safe worst-case execution time (WCET) estimation methods based on measurements. We suggest using structural testing for the exhaustive exploration of paths in a program. Since test data generation is in general too complex to be used in practice for most real-size programs, we propose to generate test data for program segments only, using program clustering. Moreover, to be able to combine the execution times of program segments and to obtain the WCET of the whole program, we advocate the use of compiler techniques to reduce (ideally eliminate) the timing variability of program segments and to make the times of program segments independent of one another.
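The combination step this abstract alludes to can be sketched in a few lines. The following is an illustrative toy, not the paper's method (the function names and the flat segment sequence are assumptions): per-segment bounds come from measurements of each segment, and the whole-program bound is their sum, which is only safe if the compiler has made segment timings independent, as the authors advocate.

```python
# Illustrative toy, not the paper's method: combine per-segment
# measured maxima into a whole-program WCET bound. Names are invented.

def segment_wcet(measured_times):
    """Per-segment bound: the maximum over the segment's measurements
    (safe only if the segment's paths were explored exhaustively)."""
    return max(measured_times)

def program_wcet(segments):
    """Whole-program bound: sum of per-segment maxima.

    Sound only if each segment's timing is independent of which
    segments ran before it (no shared cache/pipeline state), which is
    what the advocated compiler techniques aim to ensure."""
    return sum(segment_wcet(times) for times in segments)

# Three segments, each measured over its clustered test data.
measurements = [
    [12, 15, 14],  # segment A
    [30, 28, 31],  # segment B
    [7, 9, 8],     # segment C
]
print(program_wcet(measurements))  # 15 + 31 + 9 = 55
```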
Faster WCET Flow Analysis by Program Slicing
Cited by 10 (3 self)
Static Worst-Case Execution Time (WCET) analysis is a technique to derive upper bounds on the execution times of programs. Such bounds are crucial when designing and verifying real-time systems. WCET analysis needs a program flow analysis to derive constraints on the possible execution paths of the analysed program, such as iteration bounds for loops and dependences between conditionals. Current WCET analysis tools typically obtain flow information through manual annotations. Better support for automatic flow analysis would eliminate much of the need for this laborious work. However, automatically deriving high-quality flow information is hard, and solution techniques with large time and space complexity are often required. In this paper we describe how to use program slicing to reduce the computational needs of flow analysis methods. The slicing identifies statements and variables that are guaranteed not to influence the program flow. When these are removed, the calculation time of our different flow analyses decreases, in some cases considerably. We also show how program slicing can be used to identify the input variables and globals that control the outcome of a particular loop or conditional. This should be a valuable aid when performing WCET analysis and systematic testing of large and complex real-time programs.
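The slicing idea can be illustrated with a minimal sketch. This is not the paper's algorithm: it is a naive backward slice over straight-line def/use pairs (no control flow, each assignment kills its variable), just to show how statements that cannot influence a flow-controlling variable are discarded.

```python
# Naive backward slice over straight-line def/use pairs -- not the
# paper's algorithm, just the core idea: drop statements that cannot
# influence a flow-controlling variable (here, a loop counter `i`).

def backward_slice(stmts, criterion_vars):
    """stmts: list of (defined_var, used_vars) in program order.
    Returns the indices of statements that may influence the
    criterion variables."""
    relevant = set(criterion_vars)
    kept = []
    for i in reversed(range(len(stmts))):
        defined, used = stmts[i]
        if defined in relevant:
            kept.append(i)
            relevant.discard(defined)  # this definition satisfies it...
            relevant.update(used)      # ...but its operands now matter
    return sorted(kept)

# 0: n = input()   1: x = expensive(n)   2: i = 0   3: i = i + n
stmts = [("n", set()), ("x", {"n"}), ("i", set()), ("i", {"i", "n"})]
print(backward_slice(stmts, {"i"}))  # [0, 2, 3]: statement 1 is sliced away
```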
Optimal Speed Assignment for Probabilistic Execution Times
In 2nd Workshop on Power-Aware Real-Time Computing (PARC'05), NJ, 2005
Cited by 5 (3 self)
The problem of reducing energy consumption dominates the design and implementation of embedded real-time systems. For this reason, a new generation of processors allows the voltage and operating frequency to be varied to balance computational speed against energy consumption. The policies that exploit this feature are called Dynamic Voltage Scheduling (DVS). In real-time systems, a DVS technique must also provide for the worst-case computational requirement. However, it is well known that the probability of a task executing for its longest possible time is very low. Hence, DVS policies can exploit probabilistic information about the execution times of tasks to reduce the energy consumed by the processor. In this paper we provide the foundations for integrating probabilistic timing analysis with energy minimization techniques, starting from the simple case of a single task.
Symbolic simulation on complicated loops for WCET path analysis
In EMSOFT, 2011
Cited by 5 (2 self)
We address the Worst-Case Execution Time (WCET) path analysis problem for bounded programs, formalized as discovering a tight upper bound on a resource variable. A key challenge is posed by complicated loops whose iterations exhibit non-uniform behavior. We adopt a brute-force strategy by simply unrolling them, and show how to make this scalable while preserving accuracy. Our algorithm performs symbolic simulation of the program. It maintains accuracy because it preserves path-sensitivity at critical points; in other words, the simulation detects infeasible paths. Scalability, on the other hand, is achieved by using summarizations: compact representations of the analyses of loop iterations. They are obtained by a judicious use of abstraction which preserves critical information flowing from one iteration to another. These summarizations can be compounded so that the simulation has linear complexity: the symbolic execution can in fact be asymptotically shorter than a concrete execution. Finally, we present a comprehensive experimental evaluation using a standard benchmark suite. We show that our algorithm is fast and, importantly, that we often obtain not just accurate but exact results.
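A toy rendering of the unroll-with-summarization idea (entirely illustrative; the loop, cost model, and abstraction are invented, not the paper's): unroll each iteration, skip the branch that is infeasible for that iteration, and merge states that agree on a small abstraction while keeping the worst accumulated cost, so the state table stays bounded instead of growing with the number of paths.

```python
# Invented example of unroll-and-summarize: a loop whose even
# iterations take a cheap branch (cost 3) and odd iterations an
# expensive one (cost 5). The infeasible branch is pruned each
# iteration, and states are merged under a tiny abstraction.

def simulate(n):
    # abstract state: parity of the loop counter; value: worst cost
    states = {0: 0}  # before the loop: counter is 0 (even), cost 0
    for i in range(n):
        nxt = {}
        for _parity, cost in states.items():
            # Even iterations take the cheap branch, odd ones the
            # expensive branch; the other branch is infeasible in
            # this iteration and is pruned rather than explored.
            step = 3 if i % 2 == 0 else 5
            key = (i + 1) % 2
            nxt[key] = max(nxt.get(key, 0), cost + step)
        states = nxt  # at most 2 summarized entries, never 2**i paths
    return max(states.values())

print(simulate(4))  # 3 + 5 + 3 + 5 = 16
```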
Optimal Two-Levels Speed Assignment for Real-Time Systems
2006
Cited by 1 (0 self)
Reducing energy consumption is one of the main concerns in the design and implementation of embedded real-time systems. For this reason, the current generation of processors allows the voltage and operating frequency to be varied to balance computational speed against energy consumption. This technique is called Dynamic Voltage Scaling (DVS). When applying DVS to hard real-time systems, it is important to provide for the worst-case computational requirement; otherwise a task may miss some timing constraint. However, the probability of a task executing for its worst-case execution time is very low. In this paper, we show how to exploit probabilistic information about the execution time of a task in order to reduce the energy consumed by the processor. Optimal speed assignments and transition points are found using a very general model of the processor. The model accounts for the processor's idle power and for both the time and the energy overheads due to frequency transitions. We also show how these results can be applied to some significant cases.
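A numeric sketch of the two-level idea under invented parameters (the cubic power model, the speeds, and the distribution are assumptions; the paper's processor model is more general and also covers idle power and transition overheads): run the first part of the job at a low speed and switch to a high speed at a transition point that keeps even the worst case within the deadline, choosing the point that minimizes expected energy over the execution-time distribution.

```python
# Numeric sketch under invented parameters -- not the paper's general
# model (no idle power or transition overheads here). Pick the
# transition point theta: the first theta cycles run at the low
# speed, the rest at the high speed; theta must keep the worst case
# within the deadline, and we minimize expected energy.

def power(speed):
    return speed ** 3  # assumed cubic dynamic-power model

def best_transition(dist, wcec, deadline, s_lo, s_hi):
    """dist: list of (cycles, probability); speeds in cycles per ms.
    Returns (theta, expected_energy) or None if infeasible."""
    best = None
    for theta in range(wcec + 1):
        # Worst case: theta cycles at s_lo, the remainder at s_hi.
        if theta / s_lo + (wcec - theta) / s_hi > deadline:
            continue  # this transition point can miss the deadline
        energy = 0.0
        for cycles, prob in dist:
            lo = min(cycles, theta)  # cycles executed at low speed
            hi = cycles - lo         # cycles executed at high speed
            energy += prob * (power(s_lo) * lo / s_lo +
                              power(s_hi) * hi / s_hi)
        if best is None or energy < best[1]:
            best = (theta, energy)
    return best

dist = [(100, 0.9), (400, 0.1)]  # most jobs finish well before the WCET
best = best_transition(dist, wcec=400, deadline=1.5, s_lo=200, s_hi=400)
print(best)
```

With these numbers the search settles on theta = 200: later transition points would save more energy but can miss the 1.5 ms deadline in the worst case.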
Path-Sensitive Resource Analysis Compliant with Assertions
Abstract. We consider the problem of bounding the worst-case resource usage of loop-bounded programs, where assertions about valid program executions may be enforced at selected program points. It is folklore that, to be precise, path-sensitivity is needed. This entails unrolling loops in the manner of symbolic simulation. This in turn suggests that the treatment of the individual loop iterations must be greedy, in the sense that once analysis is finished on one iteration, we cannot backtrack to change it. We show that under these conditions, enforcing assertions produces unsound results. We then present a two-phase algorithm which first uses a greedy strategy in the unrolling of loops. This phase explores what is conceptually a symbolic execution tree of enormous size, while eliminating infeasible paths and dominated paths that are guaranteed not to contribute to the worst-case bound. A compact representation is produced at the end of this phase. The second phase then attacks the remaining problem: determining the worst-case path in the simplified tree, excluding from the bound calculation all paths that violate the assertions. Scalability is achieved via an adaptation of a dynamic programming algorithm.
Exact WCET Analysis over Symbolic Explicit Paths
We consider the problem of exact WCET analysis of a loop-bounded program executed on a given microarchitecture. Here the complexity of analyzing a user program is compounded by the microarchitecture's specification. In principle, this problem is easily addressed by generating a representation of each of its possible traces and locally determining the resource consumption in each trace representation. The problem, of course, is that the number of traces is generally exponential in the program's (run-time) length. The starting point is a general framework for symbolically representing the traces arising from both the program and the microarchitecture's characteristics. This representation succinctly and precisely captures the way any resource, such as execution time, is consumed in each trace. The main result is an algorithm for using the optimal value obtained from the analysis of one set of traces to determine the value in another set, without re-analysis. The key steps are to discover whether (a) the distinguishing features between the two sets are in fact redundant, and (b) the path producing the optimal value in the first set is feasible in the second. Consequently we can determine, and not just estimate, the optimal value without explicitly examining the trace that gives rise to it. We finally demonstrate the efficiency of our algorithm.
PREFACE
In parallel with the ETR'05 real-time summer school, the first meetings of young researchers in real-time computing (RJCITR'05) are being organized. This event is an excellent opportunity for us, the young researchers, to present our work to the real-time community, to encourage the exchange of ideas, experience, and information between researchers in the field of real-time computing, and thus to learn what the ETR 2005 participants think of our work. The topics addressed by the young researchers during these meetings are the following: • component-based approaches, • evaluation, validation, and verification of real-time applications and systems, • real-time scheduling, • worst-case execution times and low-power design. We would like to extend our most sincere thanks to the local organizing committee of ETR'05, which made these meetings possible and thus gave doctoral students the opportunity to present their work. Welcome to the 2005 meetings of young researchers in real-time computing.
A Useful Bounded Resource Functional Language
Abstract. Real-time software, particularly that used in embedded systems, has unique resource and verification requirements. While embedded software may not have great need for processor and memory resources, the need to prove that computations are performed correctly and within hard time and space constraints is very great. Improvements in hardware and compiler technology mean that functional programming languages are increasingly practical for embedded settings. We present a functional programming language, Ca, built on catamorphisms instead of general recursion, intended for use in static analysis. Ca is not Turing-complete (every program must terminate), but it still provides an excellent framework for building static analysis techniques. Catamorphisms are a general tool which encompasses bounded iteration and allows traversal of any algebraic data structure. We discuss the computational properties of this language and provide a framework for future work in static analysis.
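Ca itself is a functional language built on catamorphisms; the following Python sketch only mimics the idea (the helper name `cata_list` is invented): a list catamorphism replaces the list's constructors with supplied operations, so a run takes exactly one step per element and termination is guaranteed by construction.

```python
# Invented Python analogue of a catamorphism over lists: replace the
# constructors [] and (x : rest) with supplied operations `nil` and
# `cons`. A run takes exactly one step per element -- bounded
# iteration and guaranteed termination, no general recursion.

def cata_list(nil, cons, xs):
    """Fold xs right-to-left: [] becomes `nil`, (x : rest) becomes
    cons(x, folded_rest)."""
    acc = nil
    for x in reversed(xs):
        acc = cons(x, acc)
    return acc

# Two different "algebras" for the same catamorphism:
length = cata_list(0, lambda _x, n: n + 1, [10, 20, 30])
total = cata_list(0, lambda x, n: x + n, [10, 20, 30])
print(length, total)  # 3 60
```

The point of restricting to such folds is exactly what the abstract claims: a static analyzer can bound the resource usage of every program, because every traversal is bounded by the size of the data it consumes.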