Results 1 – 6 of 6
Maximally and Arbitrarily Fast Implementation of Linear and Feedback Linear Computations
, 2000
"... By establishing a relationship between the basic properties of linear computations and eight optimizing transformations (distributivity, associativity, commutativity, inverse and zero element law, common subexpression replication and elimination, constant propagation), a computeraided design platfo ..."
Abstract

Cited by 6 (2 self)
 Add to MetaCart
By establishing a relationship between the basic properties of linear computations and eight optimizing transformations (distributivity, associativity, commutativity, the inverse and zero-element laws, common-subexpression replication and elimination, and constant propagation), a computer-aided design platform is developed to optimally speed up an arbitrary instance from this large class of computations with respect to those transformations. Furthermore, an arbitrarily fast implementation of an arbitrary linear computation is obtained by adding loop unrolling to the transformation set. During this process, a novel Horner pipelining scheme is used so that the area-time (AT) product remains constant regardless of the achieved speedup. We also present a generalization of the new approach so that an important subclass of nonlinear computations, named feedback linear computations, is efficiently, maximally, and arbitrarily sped up.
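The associativity-based speed-up described in the abstract above can be sketched in a few lines (a toy illustration under simplifying assumptions, not the paper's CAD platform; the function names are invented): re-associating a serial sum into a balanced tree leaves the result unchanged but cuts the critical path from n − 1 adder stages to ceil(log2 n).

```python
import math

def serial_sum_depth(n):
    # Critical path, in adder stages, of the left-to-right chain
    # ((x1 + x2) + x3) + ... + xn: each stage waits for the previous one.
    return max(n - 1, 0)

def balanced_sum_depth(n):
    # Critical path after using associativity to re-group the same sum
    # into a balanced binary tree; the computed value is identical,
    # only the schedule depth changes.
    return math.ceil(math.log2(n)) if n > 1 else 0

# 16 operands: 15 serial adder stages collapse to 4 tree levels.
print(serial_sum_depth(16), balanced_sum_depth(16))  # 15 4
```

The AT-product remark in the abstract corresponds to the fact that both forms use the same n − 1 adders; only their arrangement in time differs.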
Rephasing: A Transformation Technique for the Manipulation of Timing Constraints
 Design Automation Conference
, 1995
"... We introduce a transformation, named rephasing, that manipulates the timing parameters in control dataflow graphs. Traditionally highlevel synthesis systems for DSP have either assumed that all the relative times, called phases, when corresponding samples are available at input and delay nodes are ..."
Abstract

Cited by 6 (0 self)
 Add to MetaCart
(Show Context)
We introduce a transformation, named rephasing, that manipulates the timing parameters in control-dataflow graphs. Traditionally, high-level synthesis systems for DSP have either assumed that all the relative times, called phases, at which corresponding samples become available at input and delay nodes are zero, or have automatically assigned values to these phases as part of the scheduling step when software pipelining is simultaneously applied.
Behavioral optimization using the manipulation of timing constraints
, 1995
"... Abstract — We introduce a transformation, named rephasing, that manipulates the timing parameters in controldataflow graphs (CDFG’s) during the highlevel synthesis of datapathintensive applications. Timing parameters in such CDFG’s include the sample period, the latencies between input–output pa ..."
Abstract

Cited by 6 (0 self)
 Add to MetaCart
(Show Context)
Abstract: We introduce a transformation, named rephasing, that manipulates the timing parameters in control-dataflow graphs (CDFGs) during the high-level synthesis of datapath-intensive applications. Timing parameters in such CDFGs include the sample period, the latencies between input–output pairs, the relative times at which corresponding samples become available on different inputs, and the relative times at which corresponding samples become available at the delay nodes. While some of the timing parameters may be constrained by performance requirements or by the interface to the external world, others remain free to be chosen during the process of high-level synthesis. Traditionally, high-level synthesis systems for datapath-intensive applications have either assumed that all the relative times, called phases, at which corresponding samples are available at input and delay nodes are zero (i.e., all input and delay-node samples enter at the initial cycle of the schedule), or have automatically assigned values to these phases as part of the datapath allocation/scheduling step, in the case of newer schedulers that use techniques like overlapped scheduling to generate complex time shapes. Rephasing, however, manipulates the values of these phases as an algorithm transformation before the scheduling/allocation stage. The advantage of this approach is that phase values can be chosen to transform and optimize the algorithm for explicit metrics such as area, throughput, latency, and power. Moreover, the rephasing transformation can be combined with other transformations, such as algebraic transformations. We have developed techniques for using rephasing to optimize a variety of design metrics, and our results show significant improvements in several of them. We have also investigated the relationship and interaction of rephasing with other high-level synthesis tasks. Index Terms: Behavioral synthesis, transformations.
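The phase notion can be made concrete with a toy ASAP-scheduling sketch (the graph, names, and unit delays are invented for illustration; this is not the paper's rephasing algorithm): each primary input carries a phase, the cycle at which its sample arrives, and an operation starts once all its operands are ready. An input with slack can be rephased to a later cycle without hurting latency, which is the freedom rephasing exploits.

```python
def asap_latency(ops, phases, dur=1):
    """ASAP latency of a small dataflow graph with unit-delay operations.
    ops: operation -> list of predecessors (operations or primary inputs);
    phases: primary input -> cycle at which its sample becomes available."""
    finish = {}
    def f(n):
        if n in finish:
            return finish[n]
        if n not in ops:                      # primary input: ready at its phase
            finish[n] = phases.get(n, 0)
        else:                                  # op starts when all operands ready
            finish[n] = max(f(p) for p in ops[n]) + dur
        return finish[n]
    return max(f(o) for o in ops)

# y = (a + b) + c: input c is not consumed until the second adder,
# so rephasing c by one cycle leaves the latency unchanged.
ops = {"add1": ["a", "b"], "y": ["add1", "c"]}
print(asap_latency(ops, {}))          # 2  (all phases zero)
print(asap_latency(ops, {"c": 1}))    # 2  (c has one cycle of slack)
print(asap_latency(ops, {"c": 2}))    # 3  (phase 2 stretches the latency)
```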
Divide-and-Conquer Techniques for Global Throughput Optimization
 Proc. IEEE VLSI Signal Processing Workshop
, 1996
"... ..."
(Show Context)
Detecting local events using global sensing
 IEEE Sensors
, 2011
"... Abstract—In order to create low power, low latency and reliable sensing systems, we propose a sensing strategy that identifies local events by the means of global measurements. We claim that capturing events globally, although seems against intuition, can save energy by enabling the organization of ..."
Abstract

Cited by 2 (1 self)
 Add to MetaCart
Abstract: In order to create low-power, low-latency, and reliable sensing systems, we propose a sensing strategy that identifies local events by means of global measurements. We claim that capturing events globally, although it seems counterintuitive, can save energy by enabling the organization of effective search queries. To enable this capability, sensor readings can be combined using electronic switches, allowing events to be detected in groups of sensors with single measurements. We demonstrate this sensing mechanism on a prototype keyboard system made from e-textile pressure sensors.
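Detecting a local event through switch-combined group measurements resembles classic group testing. A minimal sketch (assuming a single active sensor and an idealized OR-style combined reading; all names are invented, and this is not the paper's prototype): reading one combined measurement per bit of the sensor index locates the event in ceil(log2 n) measurements instead of n individual ones.

```python
import math

def combined_reading(active, group):
    # One "global" measurement: the OR of all sensors wired together
    # by the switches for this query.
    return any(s in active for s in group)

def locate_single_event(n, active):
    """Identify one active sensor among n using ceil(log2 n) group
    measurements: query k combines all sensors whose index has bit k set."""
    bits = max(1, math.ceil(math.log2(n)))
    idx = 0
    for k in range(bits):
        group = [s for s in range(n) if (s >> k) & 1]
        if combined_reading(active, group):
            idx |= 1 << k
    return idx

print(locate_single_event(64, {37}))  # 37, found with 6 measurements
```

The energy saving in the abstract corresponds to replacing n per-sensor reads with a logarithmic number of group queries.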
Critical Path Minimization Using Retiming and Algebraic Speed-Up
"... ABSTRACT The power of retiming is often limited by the underlying topology of a computational structure. We combine the power of retiming with a complete set of algebraic transformations in an iterative improvement framework, where retiming and algebraic speedup algorithms are successively applied, ..."
Abstract
 Add to MetaCart
(Show Context)
Abstract: The power of retiming is often limited by the underlying topology of a computational structure. We combine the power of retiming with a complete set of algebraic transformations in an iterative improvement framework, in which retiming and algebraic speed-up algorithms are applied in alternation so that the latter enables the former. The key part of the approach is a new algebraic speed-up algorithm, used for the first time in high-level synthesis, that transforms algebraic expressions so that an arbitrary set of input arrival times and output required times is satisfied. Since the new method moves delays forward only, and retiming is done locally and very infrequently, it also always calculates the new initial state efficiently. The proposed approach has yielded results better than or equal to the best previously published on all benchmark examples and on several novel real-life examples.
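How retiming shortens a critical path can be sketched on a tiny register-weighted graph (a generic illustration under simplifying assumptions, not the paper's algorithm: unit node delays, edge weights counting registers, and the registered edges cutting all cycles):

```python
def critical_path(edges, delay):
    """Longest register-free combinational path in a retimed circuit graph.
    edges: list of (u, v, w) with w = number of registers on edge u -> v;
    delay: combinational delay of each node. The subgraph of zero-weight
    edges is assumed acyclic, so the relaxation below converges."""
    nodes = {u for u, _, _ in edges} | {v for _, v, _ in edges}
    best = {n: delay[n] for n in nodes}   # longest path ending at each node
    changed = True
    while changed:
        changed = False
        for u, v, w in edges:
            if w == 0 and best[u] + delay[v] > best[v]:
                best[v] = best[u] + delay[v]
                changed = True
    return max(best.values())

delay = {"a": 1, "b": 1, "c": 1}
# Three unit-delay nodes in a loop with both registers piled on one edge:
before = [("a", "b", 0), ("b", "c", 0), ("c", "a", 2)]
# A legal retiming (lag 1 on node c) moves one register onto b -> c:
after = [("a", "b", 0), ("b", "c", 1), ("c", "a", 1)]
print(critical_path(before, delay), critical_path(after, delay))  # 3 2
```

When the topology pins all registers to one region, as in the `before` graph, no retiming alone helps further; that is the point where the abstract's algebraic transformations restructure the expressions to expose new register positions.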