Results 1–10 of 25
Modeling Complex Flows for Worst-Case Execution Time Analysis
2000
Cited by 57 (7 self)
Knowing the Worst-Case Execution Time (WCET) of a program is necessary when designing and verifying real-time systems. The WCET depends both on the program flow (like loop iterations and function calls) and on hardware factors like caches and pipelines.
SPEED: Precise and efficient static estimation of program computational complexity
In POPL ’09, 2009
Cited by 41 (3 self)
This paper describes an interprocedural technique for computing symbolic bounds on the number of statements a procedure executes in terms of its scalar inputs and user-defined quantitative functions of input data structures. Such computational complexity bounds for even simple programs are usually disjunctive, nonlinear, and involve numerical properties of heaps. We address the challenges of generating these bounds using two novel ideas. We introduce a proof methodology based on multiple counter instrumentation (each counter can be initialized and incremented at potentially multiple program locations) that allows a given linear invariant generation tool to compute linear bounds individually on these counter variables. The bounds on these counters are then composed together to generate total bounds that are nonlinear and disjunctive. We also give an algorithm for automating this proof methodology.
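The multiple-counter idea can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation; the procedure and counter names are invented. Each loop gets its own counter, each counter is bounded by a linear invariant, and the linear bounds compose into a nonlinear total.

```python
def traverse(n, m):
    """A two-loop procedure instrumented with one counter per loop."""
    c1 = 0            # outer-loop counter: linear invariant c1 <= n
    c2_total = 0
    i = 0
    while i < n:
        c1 += 1
        c2 = 0        # inner counter, re-initialized at each outer iteration
        j = 0
        while j < m:
            c2 += 1   # linear invariant per reset: c2 <= m
            j += 1
        c2_total += c2
        i += 1
    return c1, c2_total

# Composing the two linear bounds (c1 <= n, c2 <= m per reset) yields the
# nonlinear total bound n * m on inner-loop statements.
assert traverse(3, 4) == (3, 12)
```

The point of the composition is that the invariant tool only ever reasons about one counter at a time, linearly; the nonlinear product appears only when the per-counter bounds are multiplied together afterwards.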
Parametric timing analysis
In Workshop on Languages, Compilers, and Tools for Embedded Systems, 2001
Cited by 39 (8 self)
Embedded systems often have real-time constraints. Traditional timing analysis statically determines the maximum execution time of a task or a program in a real-time system. These systems typically depend on the worst-case execution time of tasks in order to make static scheduling decisions so that tasks can meet their deadlines. Static determination of worst-case execution times imposes numerous restrictions on real-time programs, including that the maximum number of iterations of each loop must be known statically. These restrictions can significantly limit the class of programs that would be suitable for a real-time embedded system. This paper describes work in progress that uses static timing analysis to aid in making dynamic scheduling decisions. For instance, different algorithms with varying levels of accuracy may be selected based on the algorithm's predicted worst-case execution time and the time allotted for the task. We represent the worst-case execution time of a function or a loop as a formula, where the unknown values affecting the execution time are parameterized. This parametric timing analysis produces formulas that can then be quickly evaluated at run time so dynamic scheduling decisions can be made with little overhead. Benefits of this work include expanding the class of applications that can be used in a real-time system, improving the accuracy of dynamic scheduling decisions, and more effective utilization of system resources.
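The run-time use of such parametric formulas might look like the following sketch. The cycle costs, formulas, and function names are invented for illustration and are not from the paper; the point is only that a closed-form WCET formula in the parameter n is cheap to evaluate when scheduling.

```python
# Hypothetical per-iteration cycle costs, as a static analysis might derive.
def wcet_fast(n):      # less accurate algorithm: 50 + 12n cycles
    return 50 + 12 * n

def wcet_precise(n):   # more accurate algorithm: 80 + 45n cycles
    return 80 + 45 * n

def choose_algorithm(n, budget_cycles):
    """Evaluate the parametric WCET formulas at run time and pick the most
    accurate variant whose worst case still fits the time budget."""
    if wcet_precise(n) <= budget_cycles:
        return "precise"
    if wcet_fast(n) <= budget_cycles:
        return "fast"
    return "reject"    # no variant can be guaranteed to meet the deadline

assert choose_algorithm(10, 1000) == "precise"  # 80 + 450 = 530 <= 1000
assert choose_algorithm(10, 400) == "fast"      # 530 > 400; 50 + 120 = 170 fits
assert choose_algorithm(100, 400) == "reject"   # even the fast variant overruns
```

Evaluating two polynomials at a point is a handful of instructions, which is what makes dynamic decisions based on static analysis cheap enough to do at run time.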
Static checking of interrupt-driven software
In Proc. of the 23rd Intl. Conf. on Software Engineering (ICSE), 2001
Cited by 28 (9 self)
Resource-constrained devices are becoming ubiquitous. Examples include cell phones, palm pilots, and digital thermostats. It can be difficult to fit required functionality into such a device without sacrificing the simplicity and clarity of the software. Increasingly complex embedded systems require extensive brute-force testing, making development and maintenance costly. This is particularly true for system components that are written in assembly language. Static checking has the potential of alleviating these problems, but until now there has been little tool support for programming at the assembly level.
Control-flow refinement and progress invariants for bound analysis
In PLDI, 2009
Cited by 25 (5 self)
Symbolic complexity bounds help programmers understand the performance characteristics of their implementations. Existing work provides techniques for statically determining bounds of procedures with simple control-flow. However, procedures with nested loops or multiple paths through a single loop are challenging. In this paper we describe two techniques, control-flow refinement and progress invariants, that together enable estimation of precise bounds for procedures with nested and multi-path loops. Control-flow refinement transforms a multi-path loop into a semantically equivalent code fragment with simpler loops by making the structure of path interleaving explicit. We show that this enables non-disjunctive invariant generation tools to find a bound on many procedures for which previous techniques were unable to prove termination. Progress invariants characterize relationships between …
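Control-flow refinement can be illustrated on a tiny multi-path loop. This is a hypothetical example, not one taken from the paper: each iteration either advances the induction variable or merely flips a flag, so the loop as written needs a disjunctive bound, while the refined version splits the interleaving into two simple loops.

```python
def original(n):
    """Multi-path loop: one path makes progress on i, the other only sets b."""
    i, b, count = 0, False, 0
    while i < n:
        count += 1
        if b:
            i += 1
        else:
            b = True          # non-progress path, taken at most once
    return count

def refined(n):
    """Refined form: the path interleaving is made explicit, so each loop
    is simple enough for a non-disjunctive invariant generator."""
    i, count = 0, 0
    if i < n:                 # phase 1: the single flag-setting iteration
        count += 1
    while i < n:              # phase 2: a plain loop bounded by n
        count += 1
        i += 1
    return count

# Both versions execute the same number of iterations: n + 1 when n > 0.
assert original(5) == refined(5) == 6
assert original(0) == refined(0) == 0
```

The refinement does not change what the code computes; it only rewrites the control flow so that each resulting loop has a single, linearly bounded path.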
Chronos: A timing analyzer for embedded software
Science of Computer Programming, 2007
Cited by 21 (7 self)
Estimating the Worst-Case Execution Time (WCET) of real-time embedded software is an important problem. WCET is defined as the upper bound b on the execution time of a program P on a processor X such that for any input the execution time of P on X is guaranteed not to exceed b. Such WCET estimates are crucial for schedulability analysis of real-time systems. In this paper, we present Chronos, a static analysis tool for generating WCET estimates of C programs. It performs detailed micro-architectural modeling to capture the timing effects of the underlying processor platform. Consequently, we can provide a safe but tight WCET estimate of a given C program running on a complex modern processor. Chronos is an open-source distribution specifically suited to the needs of the research community. We support processor models captured by the popular SimpleScalar architectural simulator rather than targeting specific commercial processors. This makes Chronos flexible, extensible, and easily accessible to researchers.
Bounding worst-case data cache behavior by analytically deriving cache reference patterns
In IEEE Real-Time and Embedded Technology and Applications Symposium, 2005
Cited by 18 (6 self)
While caches have become invaluable for higher-end architectures due to their ability to hide, in part, the gap between processor speed and memory access times, caches (and particularly data caches) limit the timing predictability for data accesses that may reside in memory or in cache. This is a significant problem for real-time systems. The objective of our work is to provide accurate predictions of the data cache behavior of scalar and non-scalar references whose reference patterns are known at compile time. Such knowledge about cache behavior provides the basis for significant improvements in bounding the worst-case execution time (WCET) of real-time programs, particularly for hard-to-analyze data caches. We exploit the power of the Cache Miss Equations (CME) framework but lift a number of limitations of traditional CME to generalize the analysis to more arbitrary programs. We further devised a transformation, coined “forced” loop fusion, which facilitates the analysis across sequential loops. Our contributions result in exact data cache reference patterns, in contrast to the approximate cache miss behavior of prior work. Experimental results indicate improvements in the accuracy of worst-case data cache behavior of up to two orders of magnitude over the original approach. In fact, our results closely bound and sometimes even exactly match those obtained by trace-driven simulation for worst-case inputs. The resulting WCET bounds of timing analysis confirm these findings in terms of providing tight bounds. Overall, our contributions lift analytical approaches to predicting data cache behavior to a level suitable for efficient static timing analysis and, subsequently, real-time schedulability of tasks with predictable WCET.
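What "a reference pattern known at compile time" means can be shown with a toy model. This sketch assumes an invented direct-mapped cache (4 lines of 4 words) and simply enumerates the hit/miss pattern of a sequential array traversal rather than solving Cache Miss Equations; it is illustrative only.

```python
LINE_WORDS = 4   # assumed words per cache line (hypothetical parameters)
NUM_LINES = 4    # assumed number of lines in a direct-mapped cache

def reference_pattern(n):
    """Classify each access a[0..n-1] as 'miss' or 'hit' for a cold
    direct-mapped cache; the resulting pattern is exact, not approximate."""
    tags = [None] * NUM_LINES
    pattern = []
    for i in range(n):
        line = (i // LINE_WORDS) % NUM_LINES   # which cache line i maps to
        tag = i // (LINE_WORDS * NUM_LINES)    # which memory block is resident
        if tags[line] != tag:
            tags[line] = tag
            pattern.append("miss")
        else:
            pattern.append("hit")
    return pattern

# A sequential traversal misses exactly once per line: n/4 misses here.
assert reference_pattern(8) == ["miss", "hit", "hit", "hit"] * 2
```

An exact pattern like this, rather than a conservative miss count, is what lets a WCET analysis charge the memory latency only to the accesses that actually miss.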
A numerical abstract domain based on expression abstraction and max operator with application in timing analysis
In CAV, 2008
Cited by 14 (4 self)
This paper describes a precise numerical abstract domain for use in timing analysis. The numerical abstract domain is parameterized by a linear abstract domain and is constructed by means of two domain lifting operations. One domain lifting operation is based on the principle of expression abstraction (which involves defining a set of expressions and specifying their semantics using a collection of directed inference rules) and has a more general applicability. It lifts any given abstract domain to include reasoning about a given set of expressions whose semantics is abstracted using a set of axioms. The other domain lifting operation extends the domain via introduction of max expressions. We present experimental results demonstrating the potential of the new numerical abstract domain to discover a wide variety of timing bounds (including polynomial, disjunctive, logarithmic, exponential, etc.) for small C programs.
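The benefit of a max-lifted domain can be suggested with a minimal sketch. This is not the paper's domain; the representation and names are invented. A bound is kept as the max over a set of linear terms in an input n, so joining the bounds of two branches unions the terms instead of widening to a single, looser linear bound.

```python
def make_bound(*linear_terms):
    """Each term is (a, b), meaning a*n + b; the bound is max over all terms."""
    return list(linear_terms)

def join(b1, b2):
    # The max-lifted join simply unions the candidate linear terms,
    # representing max(b1, b2) exactly instead of over-approximating it.
    return b1 + b2

def evaluate(bound, n):
    return max(a * n + b for (a, b) in bound)

then_bound = make_bound((2, 0))    # one branch takes 2n steps
else_bound = make_bound((0, 10))   # the other takes a constant 10 steps
b = join(then_bound, else_bound)   # represents max(2n, 10), exact for both
assert evaluate(b, 1) == 10
assert evaluate(b, 50) == 100
```

A purely linear domain would have to pick a single term dominating both branches (e.g. 2n + 10), losing precision that the max representation keeps for free.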
Tight Timing Estimation with the Newton-Gregory Formulae
In Proceedings of CPC, 2003
Cited by 9 (6 self)
Parametric worst-case execution time (WCET) bounds are critical in removing restrictions, such as known loop bounds, on algorithms for important applications such as scheduling for real-time embedded systems. Current parametric approaches have difficulties with loop nests that include non-rectangular loops, zero-trip loops, and/or loops with non-unit strides. This paper presents a novel approach to parametric WCET estimation based on numeric/symbolic manipulation of polynomial representations for the timing of rectangular and non-rectangular loop nests, including those with zero-trip loops, non-unit strides, and multiple critical paths.
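The kind of parametric polynomial such manipulation produces can be illustrated on a triangular loop nest. This is a hypothetical example only; the Newton-Gregory machinery in the paper handles far more general strides and bounds.

```python
def triangular_iterations(n):
    """Count iterations of a non-rectangular nest: the inner bound
    depends on the outer index i, so the nest is triangular."""
    count = 0
    for i in range(n):
        for j in range(i + 1):   # inner loop runs i + 1 times
            count += 1
    return count

def closed_form(n):
    # Summing the inner bound (i + 1) over i = 0..n-1 gives the polynomial
    # n(n + 1)/2 -- the parametric formula a symbolic approach would emit.
    return n * (n + 1) // 2

# The closed-form polynomial matches the actual iteration count exactly.
assert all(triangular_iterations(n) == closed_form(n) for n in range(20))
```

The closed form is what makes the bound parametric: it can be evaluated for any n without re-running the analysis, including n = 0, the zero-trip case.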
TuBound - A Conceptually New Tool for Worst-Case Execution Time Analysis
Cited by 9 (4 self)
TuBound is a conceptually new tool for the worst-case execution time (WCET) analysis of programs. A distinctive feature of TuBound is the seamless integration of a WCET analysis component and a compiler in a uniform tool. TuBound enables the programmer to provide hints that improve the precision of the WCET computation on the high-level program source code, while preserving the advantages of using an optimizing compiler and the accuracy of a WCET analysis performed on the low-level machine code. In this way, TuBound ideally serves the needs of both the programmer and the WCET analysis by providing each an interface at the abstraction level that is most appropriate and convenient. In this paper we present the system architecture of TuBound, discuss the internal workflow of the tool, and report on first measurements using benchmarks from Mälardalen University. TuBound also took part in the WCET Tool Challenge 2008.