Results 11–20 of 112
A Scenario-Aware Data Flow Model for Combined Long-Run Average and Worst-Case Performance Analysis
Proceedings of MEMOCODE, pp. 185–194, 2006
"... Data flow models are used for specifying and analysing signal processing and streaming applications. However, traditional data flow models are either not capable of expressing the dynamic aspects of modern streaming applications or they do not support relevant analysis techniques. The dynamism in mo ..."
Abstract

Cited by 31 (13 self)
Data flow models are used for specifying and analysing signal processing and streaming applications. However, traditional data flow models are either not capable of expressing the dynamic aspects of modern streaming applications or they do not support relevant analysis techniques. The dynamism in modern streaming applications often originates from different modes of operation (scenarios) in which data production and consumption rates and/or execution times may differ. This paper introduces a scenario-aware generalisation of the Synchronous Data Flow model, which uses a stochastic approach to model the order in which scenarios occur. The formally defined operational semantics of a Scenario-Aware Data Flow model implies a Markov chain, which can be analysed for both long-run average and worst-case performance metrics using existing exhaustive or simulation-based techniques. The potential of using Scenario-Aware Data Flow models for performance analysis of modern streaming applications is illustrated with an MPEG-4 decoder example.
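The long-run average versus worst-case distinction the abstract draws can be illustrated with a two-scenario Markov chain. The scenario times and transition probabilities below are invented for illustration; they are not taken from the paper:

```python
# Sketch: long-run average performance of a two-scenario, SADF-style model.
# Scenario A and B alternate according to a 2-state Markov chain; the rates
# and per-scenario iteration times are hypothetical.

def stationary_two_state(p, q):
    """Stationary distribution of a 2-state Markov chain with
    P[A -> B] = p and P[B -> A] = q (0 < p, q <= 1)."""
    pi_a = q / (p + q)
    return pi_a, 1.0 - pi_a

# Per-scenario iteration times (e.g. frame decode times in ms).
time_a, time_b = 4.0, 9.0

pi_a, pi_b = stationary_two_state(p=0.25, q=0.75)
long_run_avg = pi_a * time_a + pi_b * time_b  # weighted by scenario frequency
worst_case = max(time_a, time_b)              # independent of probabilities
print(long_run_avg, worst_case)
```

The worst-case metric depends only on which scenarios are reachable, while the long-run average weights each scenario by how often the chain visits it in steady state.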
Language and Compiler Support for Stream Programs
2009
"... Stream programs represent an important class of highperformance computations. Defined by their regular processing of sequences of data, stream programs appear most commonly in the context of audio, video, and digital signal processing, though also in networking, encryption, and other areas. Stream ..."
Abstract

Cited by 28 (2 self)
Stream programs represent an important class of high-performance computations. Defined by their regular processing of sequences of data, stream programs appear most commonly in the context of audio, video, and digital signal processing, though also in networking, encryption, and other areas. Stream programs can be naturally represented as a graph of independent actors that communicate explicitly over data channels. In this work we focus on programs where the input and output rates of actors are known at compile time, enabling aggressive transformations by the compiler; this model is known as synchronous dataflow. We develop a new programming language, StreamIt, that empowers both programmers and compiler writers to leverage the unique properties of the streaming domain. StreamIt offers several new abstractions, including hierarchical single-input single-output streams, composable primitives for data reordering, and a mechanism called teleport messaging that enables precise event handling.
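The compile-time rate property the abstract relies on can be sketched for a single synchronous-dataflow edge: fixed production and consumption rates determine a smallest repetitions vector that balances the edge. The two-actor chain and its rates are hypothetical, not from StreamIt itself:

```python
# Sketch: solving the SDF balance equation qA * prod == qB * cons
# for one edge A --prod/cons--> B. Rates are made-up examples.
from math import gcd

def repetitions_two_actor(prod, cons):
    """Smallest positive (qA, qB) satisfying qA * prod == qB * cons."""
    g = gcd(prod, cons)
    return cons // g, prod // g

qa, qb = repetitions_two_actor(prod=2, cons=3)
assert qa * 2 == qb * 3  # balance equation holds
print(qa, qb)            # (3, 2): fire A three times and B twice per iteration
```

Because such a vector exists and is computable statically, the compiler can build a fixed periodic schedule, which is what enables the aggressive transformations the abstract mentions.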
Models of Computation for Embedded System Design
in System-Level Synthesis, 1998
"... In the near future, most objects of common use will contain electronics to augment their functionality, performance, and safety. Hence, timetomarket, safety, lowcost, and reliability will have to be addressed by any system design methodology. A fundamental aspect of system design is the specificat ..."
Abstract

Cited by 26 (0 self)
In the near future, most objects of common use will contain electronics to augment their functionality, performance, and safety. Hence, time-to-market, safety, low cost, and reliability will have to be addressed by any system design methodology. A fundamental aspect of system design is the specification process. We advocate using an unambiguous formalism to represent design specifications and design choices. This greatly facilitates efficient specification, formal verification, and correct design refinement, optimization, and implementation. Such a formalism is often called a model of computation. Several models of computation have been used, but there is a lack of consensus among researchers and practitioners on the "right" models to use. To the best of our knowledge, there has also been little effort to compare these models of computation rigorously. In this paper, we review current models of computation and compare them within a framework that has been r...
Symbolic Model Checking of Process Networks Using Interval Diagram Techniques
1998
"... In this paper, an approach to symbolic model checking of process networks is introduced. It is based on interval decision diagrams (IDDs), a representation of multivalued functions. Compared to other model checking strategies, IDDs show some important properties that enable the verification of pro ..."
Abstract

Cited by 24 (9 self)
In this paper, an approach to symbolic model checking of process networks is introduced. It is based on interval decision diagrams (IDDs), a representation of multi-valued functions. Compared to other model checking strategies, IDDs show some important properties that enable the verification of process networks more adequately than with conventional approaches. Additionally, applications concerning scheduling will be shown. A new form of transition relation representation, called interval mapping diagrams (IMDs), and their less general version, predicate action diagrams (PADs), is explained together with the corresponding methods.
1. Introduction
Process network models (consisting in general of concurrent processes communicating through unidirectional FIFO queues), such as that of Kahn [7, 8], are commonly used, e.g., for specification and synthesis of distributed systems. They form the basis for applications such as real-time scheduling and allocation. Many other models of computation, ...
A Hierarchical Multiprocessor Scheduling Framework For Synchronous Dataflow Graphs
Laboratory, University of California at Berkeley, 1995
"... This paper discusses a hierarchical scheduling framework to reduce the complexity of scheduling synchronous dataflow (SDF) graphs onto multiple processors. The core of this framework is a clustering algorithm that reduces the number of nodes before expanding the SDF graph into a precedence DAG (dire ..."
Abstract

Cited by 24 (7 self)
This paper discusses a hierarchical scheduling framework to reduce the complexity of scheduling synchronous dataflow (SDF) graphs onto multiple processors. The core of this framework is a clustering algorithm that reduces the number of nodes before expanding the SDF graph into a precedence DAG (directed acyclic graph). The internals of the clusters are then scheduled with uniprocessor SDF schedulers, which can optimize for memory usage. The clustering is done in such a manner as to leave ample parallelism exposed for the multiprocessor scheduler. The advantages of this framework are demonstrated with several practical, real-time examples.
A formal definition of dataflow graph models
IEEE Trans. Comput., 1986
"... AbstractIn this paper, a new model for parallel computations and parallel computer systems that is based on data flow principles is presented. Uninterpreted data flow graphs can be used to model computer systems including data driven and parallel processors. A data flow graph is defined to be a bi ..."
Abstract

Cited by 23 (2 self)
In this paper, a new model for parallel computations and parallel computer systems that is based on data flow principles is presented. Uninterpreted data flow graphs can be used to model computer systems, including data-driven and parallel processors. A data flow graph is defined to be a bipartite graph with actors and links as the two vertex classes. Actors can be considered similar to transitions in Petri nets, and links similar to places. The nondeterministic nature of uninterpreted data flow graphs necessitates the derivation of liveness conditions.
Index Terms: Bipartite graphs, data flow graphs, deadlocks, liveness, parallel computations, Petri nets.
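The Petri-net-like firing rule implied by the abstract (actors as transitions, links as places) can be sketched in a few lines; the tiny three-link graph below is an invented example, not one from the paper:

```python
# Sketch of an actor/link firing rule: an actor is enabled when every input
# link holds a token; firing consumes one token per input link and produces
# one per output link. Graph and link names are hypothetical.

def fire(actor, inputs, outputs, tokens):
    """Fire `actor` if enabled; tokens maps link name -> token count.
    Returns True if the actor fired, False if it was not enabled."""
    if not all(tokens[link] > 0 for link in inputs[actor]):
        return False
    for link in inputs[actor]:
        tokens[link] -= 1
    for link in outputs[actor]:
        tokens[link] += 1
    return True

inputs = {"A": ["in"], "B": ["ab"]}
outputs = {"A": ["ab"], "B": ["out"]}
tokens = {"in": 1, "ab": 0, "out": 0}

assert fire("A", inputs, outputs, tokens)  # A enabled by the token on "in"
assert fire("B", inputs, outputs, tokens)  # A's output token enables B
print(tokens)  # {'in': 0, 'ab': 0, 'out': 1}
```

Liveness questions of the kind the paper studies amount to asking whether, from a given initial token marking, every actor can always eventually fire again.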
Real-Time Signal Processing: Dataflow, Visual, and Functional Programming
1995
"... This thesis presents and justifies a framework for programming realtime signal processing systems. The framework extends the existing "blockdiagram" programming model; it has three components: a very highlevel textual language, a visual language, and the dataflow process network model o ..."
Abstract

Cited by 20 (1 self)
This thesis presents and justifies a framework for programming real-time signal processing systems. The framework extends the existing "block-diagram" programming model; it has three components: a very high-level textual language, a visual language, and the dataflow process network model of computation. The dataflow process network model, although widely used, lacks a formal description, and I provide a semantics for it. The formal work leads into a new form of actor. Having established the semantics of dataflow processes, the functional language Haskell is layered above this model, providing powerful features, notably polymorphism, higher-order functions, and algebraic program transformation, absent in block-diagram systems. A visual equivalent notation for Haskell, Visual Haskell, ensures that this power does not exclude the "intuitive" appeal of visual interfaces; with some intelligent layout and suggestive icons, a Visual Haskell program can be made to look very like a block dia...
Heterogeneous Simulation: Mixing Discrete-Event Models with Dataflow
1996
"... This paper relates to systemlevel design of signal processing systems, which are often heterogeneous in implementation technologies and design styles. The heterogeneous approach, by combining small, specialized models of computation, achieves generality and also lends itself to automatic synthesis ..."
Abstract

Cited by 19 (4 self)
This paper relates to system-level design of signal processing systems, which are often heterogeneous in implementation technologies and design styles. The heterogeneous approach, by combining small, specialized models of computation, achieves generality and also lends itself to automatic synthesis and formal verification. Key to the heterogeneous approach is to define interaction semantics that resolve the ambiguities when different models of computation are brought together. For this purpose, we introduce a tagged signal model as a formal framework within which the models of computation can be precisely described and unambiguously differentiated, and their interactions can be understood. In this paper, we will focus on the interaction between dataflow models, which have partially ordered events, and discrete-event models, with their notion of time that usually defines a total order of events. A variety of interaction semantics, mainly in handling the different notions of time in the two models, are explored to illustrate the subtleties involved. An implementation based on the Ptolemy system from U.C. Berkeley is described and critiqued.
EXPLOITING STATICALLY SCHEDULABLE REGIONS IN DATAFLOW PROGRAMS
"... Dataflow descriptions have been used in a wide range of Digital Signal Processing (DSP) applications, such as multimedia processing, and wireless communications. Among various forms of dataflow modeling, Synchronous Dataflow (SDF) is geared towards static scheduling of computational modules, which ..."
Abstract

Cited by 18 (9 self)
Dataflow descriptions have been used in a wide range of Digital Signal Processing (DSP) applications, such as multimedia processing and wireless communications. Among various forms of dataflow modeling, Synchronous Dataflow (SDF) is geared towards static scheduling of computational modules, which improves system performance and predictability. However, many DSP applications do not fully conform to the restrictions of SDF modeling. More general dataflow models, such as CAL [1], have been developed to describe dynamically structured DSP applications. Such generalized models can express dynamically changing functionality, but lose the powerful static scheduling capabilities provided by SDF. This paper focuses on detection of SDF-like regions in dynamic dataflow descriptions, in particular in the generalized specification framework of CAL. This is an important step for applying static scheduling techniques within a dynamic dataflow framework. Our techniques combine the advantages of different dataflow languages and tools, including CAL [1], DIF [2] and CAL2C [3]. The techniques are demonstrated on the IDCT module of MPEG Reconfigurable Video
On Retiming of Multirate DSP Algorithms
In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 1996
"... In the paper retiming of DSP algorithms exhibiting multirate behavior is treated. Using the nonordinary marked graph model and the reachability theory, we provide a new condition for valid retiming of multirate graphs. We show that for a graph with n nodes the reachability condition can be split in ..."
Abstract

Cited by 17 (4 self)
This paper treats retiming of DSP algorithms exhibiting multirate behavior. Using the non-ordinary marked graph model and reachability theory, we provide a new condition for valid retiming of multirate graphs. We show that for a graph with n nodes, the reachability condition can be split into the reachability condition for the topologically equivalent unit-rate graph (all rates set to one) and (n^2 - n)/2 rate-dependent conditions. Using this property, a class of equivalent graphs of reduced complexity is introduced which are equivalent in terms of retiming. Additionally, the circuit-based necessary condition for valid retiming of multirate graphs is extended with the sufficient part.
1. Introduction
Retiming was introduced as a technique to optimize hardware circuits by redistributing registers without affecting functionality [1]. Retiming is also useful for DSP software design. It changes precedence constraints among instructions or tasks, and can improve single-pro...
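The unit-rate condition the abstract reduces to can be sketched directly: a retiming r is valid only if every retimed edge delay w_r(u -> v) = w(u -> v) + r(v) - r(u) stays non-negative. The example graph and retiming values below are made up for illustration:

```python
# Sketch of the unit-rate (single-rate) retiming legality check.
# Edges carry register/delay counts; a retiming shifts delays across nodes
# without changing the total delay on any cycle. Graph is hypothetical.

def retimed_weights(edges, r):
    """edges: list of (u, v, w) with w the delay count on edge u -> v.
    r: dict mapping node -> integer retiming value.
    Returns the retimed delay of every edge."""
    return {(u, v): w + r[v] - r[u] for u, v, w in edges}

edges = [("A", "B", 1), ("B", "C", 0), ("C", "A", 2)]
r = {"A": 0, "B": 1, "C": 1}

w_r = retimed_weights(edges, r)
valid = all(w >= 0 for w in w_r.values())  # legality: no negative delays
print(w_r, valid)
```

Note that the cycle A -> B -> C -> A keeps its total of three delays before and after retiming; the paper's contribution is the extra rate-dependent conditions needed once edges carry differing (multirate) token rates.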