Results 1–10 of 23
Synchronous data flow
, 1987
"... Data flow is a natural paradigm for describing DSP applications for concurrent implementation on parallel hardware. Data flow programs for signal processing are directed graphs where each node represents a function and each arc represents a signal path. Synchronous data flow (SDF) is a special case ..."
Abstract

Cited by 483 (44 self)
Data flow is a natural paradigm for describing DSP applications for concurrent implementation on parallel hardware. Data flow programs for signal processing are directed graphs where each node represents a function and each arc represents a signal path. Synchronous data flow (SDF) is a special case of data flow (either atomic or large-grain) in which the number of data samples produced or consumed by each node on each invocation is specified a priori. Nodes can be scheduled statically (at compile time) onto single or parallel programmable processors, so the runtime overhead usually associated with data flow evaporates. Multiple sample rates within the same system are easily and naturally handled. Conditions for the correctness of SDF graphs are explained, and scheduling algorithms are described for homogeneous parallel processors sharing memory. A preliminary SDF software system for automatically generating assembly language code for DSP microcomputers is described. Two new efficiency techniques are introduced: static buffering and an extension to SDF to efficiently implement conditionals.
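The fixed, a-priori token rates are what make compile-time scheduling possible. A minimal sketch (the two-actor graph and its rates are hypothetical, not from the paper) of solving the SDF balance equations for the smallest integer repetition vector:

```python
from fractions import Fraction
from math import lcm

# Hypothetical two-actor SDF graph: actor A produces 3 tokens per firing
# on its output arc; actor B consumes 2 tokens per firing.  The balance
# equation r_A * 3 = r_B * 2 must hold for a periodic schedule to run
# with bounded buffers.
edges = [("A", "B", 3, 2)]  # (producer, consumer, produced, consumed)

def repetitions(edges):
    """Solve the SDF balance equations by propagating rational rates;
    return the smallest integer repetition vector, or None if the graph
    is sample-rate inconsistent.  Assumes edges are listed so that each
    newly seen actor connects to one already assigned a rate."""
    rate = {}
    for src, dst, p, c in edges:
        rate.setdefault(src, Fraction(1))
        if dst in rate:
            if rate[src] * p != rate[dst] * c:
                return None  # inconsistent rates: buffers grow without bound
        else:
            rate[dst] = rate[src] * p / c
    scale = lcm(*(r.denominator for r in rate.values()))
    return {a: int(r * scale) for a, r in rate.items()}

print(repetitions(edges))  # {'A': 2, 'B': 3}: fire A twice, B three times
```

A consistent graph has a unique minimal repetition vector; an inconsistent one has none, which is the kind of correctness condition the abstract alludes to.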
A Fast Fourier Transform Compiler
, 1999
"... FFTW library for computing the discrete Fourier transform (DFT) has gained a wide acceptance in both academia and industry, because it provides excellent performance on a variety of machines (even competitive with or faster than equivalent libraries supplied by vendors). In FFTW, most of the perform ..."
Abstract

Cited by 155 (6 self)
The FFTW library for computing the discrete Fourier transform (DFT) has gained wide acceptance in both academia and industry, because it provides excellent performance on a variety of machines (even competitive with or faster than equivalent libraries supplied by vendors). In FFTW, most of the performance-critical code was generated automatically by a special-purpose compiler, called genfft, that outputs C code. Written in Objective Caml, genfft can produce DFT programs for any input length, and it can specialize the DFT program for the common case where the input data are real instead of complex. Unexpectedly, genfft “discovered” algorithms that were previously unknown, and it was able to reduce the arithmetic complexity of some other existing algorithms. This paper describes the internals of this special-purpose compiler in some detail, and it argues that a specialized compiler is a valuable tool.
A Modified Split-Radix FFT With Fewer Arithmetic Operations
, 2007
"... Recent Results by Van Buskirk et al. have broken the record set by Yavne in 1968 for the lowest exact count of real additions and multiplications to compute a poweroftwo discrete Fourier transform (DFT). Here, we present a simple recursive modification of the splitradix algorithm that computes th ..."
Abstract

Cited by 24 (5 self)
Recent results by Van Buskirk et al. have broken the record set by Yavne in 1968 for the lowest exact count of real additions and multiplications to compute a power-of-two discrete Fourier transform (DFT). Here, we present a simple recursive modification of the split-radix algorithm that computes the DFT with asymptotically about 6% fewer operations than Yavne, matching the count achieved by Van Buskirk’s program-generation framework. We also discuss the application of our algorithm to real-data and real-symmetric (discrete cosine) transforms, where we are again able to achieve lower arithmetic counts than previously published algorithms.
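The "about 6%" figure follows from the leading terms of the flop counts alone, assuming Yavne's classic split-radix leading term of 4N log₂N real operations and the modified algorithm's reported leading term of (34/9)N log₂N:

```python
from fractions import Fraction

# Leading terms of the real-operation counts for a size-N power-of-two DFT:
# Yavne's split radix needs ~4 N log2 N; the modified split radix reported
# in this paper needs ~(34/9) N log2 N.
yavne = Fraction(4)
modified = Fraction(34, 9)

saving = 1 - modified / yavne  # asymptotic fractional saving
print(saving, float(saving))   # 1/18 ~= 0.056, i.e. "about 6% fewer"
```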
Portable High-Performance Programs
, 1999
"... right notice and this permission notice are preserved on all copies. ..."
Abstract

Cited by 17 (0 self)
 Add to MetaCart
right notice and this permission notice are preserved on all copies.
Behavioral Synthesis Techniques for Intellectual Property Protection
 unpublished manuscript
, 1997
"... The economic viability of the reusable corebased design paradigm depends on the development of techniques for intellectual property protection. We introduce the first dynamic watermarking technique for protecting the value of intellectual property of CAD and compilation tools and reusable core comp ..."
Abstract

Cited by 14 (8 self)
The economic viability of the reusable core-based design paradigm depends on the development of techniques for intellectual property protection. We introduce the first dynamic watermarking technique for protecting the value of intellectual property of CAD and compilation tools and reusable core components. The essence of the new approach is the addition of a set of design and timing constraints which encodes the author’s signature. The constraints are selected in such a way that they result in minimal hardware overhead while embedding a signature which is unique and difficult to detect, remove, and forge. We establish the first set of relevant metrics which forms the basis for the quantitative analysis, evaluation, and comparison of watermarking techniques. We develop a generic approach for signature data hiding in designs, which is applicable in conjunction with an arbitrary behavioral synthesis task, such as scheduling, assignment, allocation, and transformations. Error-correcting codes are used to augment the protection of the signature data against tampering attempts. On a large set of design examples, studies indicate the effectiveness of the new approach in the sense that the signature data, which are highly resilient, difficult to detect and remove, and yet easy to verify, can be embedded in designs with very low hardware overhead.
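The abstract does not say which error-correcting code protects the signature; purely as an illustration of the idea, a Hamming(7,4) code lets 4 signature bits survive any single-bit tampering of the 7-bit codeword:

```python
# Illustrative only: the paper does not specify its ECC.  Hamming(7,4)
# corrects any single flipped bit, so light tampering with embedded
# signature bits can be undone before verification.

def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]  # bit positions 1..7

def hamming74_decode(c):
    """Correct at most one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity over positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # parity over positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # parity over positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based index of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

sig = [1, 0, 1, 1]                  # 4 bits of a hypothetical signature
cw = hamming74_encode(sig)
cw[5] ^= 1                          # an attacker flips one codeword bit
assert hamming74_decode(cw) == sig  # the signature survives the tampering
```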
Intellectual property metering
 Inform. Hiding
, 2001
"... Abstract. We have developed the first hardware and software (intellectual property) metering scheme that enables reliable low overhead proofs for the number of manufactured parts and copied programs. The key idea is to make each design slightly different during postprocessing phase. Therefore, if tw ..."
Abstract

Cited by 11 (9 self)
We have developed the first hardware and software (intellectual property) metering scheme that enables reliable, low-overhead proofs for the number of manufactured parts and copied programs. The key idea is to make each design slightly different during a post-processing phase. Therefore, if two identical hardware/software designs, or a design that is not reported by the foundry, are detected, the design house has proof of misconduct. We start by establishing implementation requirements for hardware metering. We also establish the connection between the requirements for hardware and software metering and the synthesis process. Furthermore, we present a mathematical analysis of the statistical accuracy of the proposed hardware and software metering schemes. The effectiveness of the metering scheme is demonstrated on a number
Active Hardware Metering for Intellectual Property Protection and Security
 USENIX Security
, 2007
"... We introduce the first hardware metering scheme that enables reliable low overhead proofs for the number of manufactured parts. The key idea is to make each design slightly different. Therefore, if two identical hardware designs or a design that is not reported by the foundry are detected, the desig ..."
Abstract

Cited by 10 (6 self)
We introduce the first hardware metering scheme that enables reliable, low-overhead proofs for the number of manufactured parts. The key idea is to make each design slightly different. Therefore, if two identical hardware designs, or a design that is not reported by the foundry, are detected, the design house has proof of misconduct. We start by establishing the connection between the requirements for hardware metering and the synthesis process. Furthermore, we present a mathematical analysis of the statistical accuracy of the proposed hardware metering scheme. The effectiveness of the metering
Relationships between digital signal processing and control and estimation theory
 Proceedings of the IEEE
, 1978
"... The purpose of this paper is to explore several current research directions in the fields of digital signal processing and modern control and estimation theory. We examine topics such as stability theory, linear prediction, and parameter identification, system synthesis and implementation, twodimens ..."
Abstract

Cited by 8 (4 self)
The purpose of this paper is to explore several current research directions in the fields of digital signal processing and modern control and estimation theory. We examine topics such as stability theory, linear prediction and parameter identification, system synthesis and implementation, two-dimensional filtering, decentralized control and estimation, and image processing, in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines.
On the use of multiple constant multiplication in polyphase FIR filters and filter banks
 under review for Nordic Signal Processing Symposium
, 2004
"... Multiple constant multiplication (MCM) has been shown to be an efficient way to reduce the number of additions and subtractions in FIR filter implementations. However, for polyphase decomposed FIR filters and filter banks, the problem can be formulated in three different ways. Either as one MCM bloc ..."
Abstract

Cited by 3 (1 self)
Multiple constant multiplication (MCM) has been shown to be an efficient way to reduce the number of additions and subtractions in FIR filter implementations. However, for polyphase-decomposed FIR filters and filter banks, the problem can be formulated in three different ways: as one MCM block with all coefficients, as one MCM block for each subfilter, or as a matrix MCM block. In this work we compare the approaches in terms of complexity, both for the MCM blocks and for the remaining hardware, such as structural additions and delay elements.
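To illustrate what an MCM block shares (the coefficients 5 and 13 are hypothetical, not from the paper): computed independently, 5x = (x << 2) + x and 13x = (x << 3) + (x << 2) + x cost three adders in total, while reusing 5x as a subexpression of 13x = 5x + 8x costs only two:

```python
# Multiplying one input by several constant coefficients via shifts and
# adds, sharing the common subexpression 5x between the two outputs.
def mcm_5_13(x):
    t = (x << 2) + x           # shared term: t = 5x      (1 addition)
    return t, t + (x << 3)     # (5x, 13x = 5x + 8x)      (1 more addition)

print(mcm_5_13(3))  # (15, 39)
```

In a polyphase filter bank, the choice the paper studies is at which granularity (all coefficients, per subfilter, or as a matrix) such sharing is searched for.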
Type-II/III DCT/DST algorithms with reduced number of arithmetic operations
, 2007
"... We present algorithms for the discrete cosine transform (DCT) and discrete sine transform (DST), of types II and III, that achieve a lower count of real multiplications and additions than previously published algorithms, without sacrificing numerical accuracy. Asymptotically, the operation count is ..."
Abstract

Cited by 2 (2 self)
We present algorithms for the discrete cosine transform (DCT) and discrete sine transform (DST), of types II and III, that achieve a lower count of real multiplications and additions than previously published algorithms, without sacrificing numerical accuracy. Asymptotically, the operation count is reduced from ∼ 2N log2 N to ∼ (17/9)N log2 N for a power-of-two transform size N. Furthermore, we show that a further N multiplications may be saved by a certain rescaling of the inputs or outputs, generalizing a well-known technique for N = 8 by Arai et al. These results are derived by considering the DCT to be a special case of a DFT of length 4N, with certain symmetries, and then pruning redundant operations from a recent improved fast Fourier transform algorithm (based on a recursive rescaling of the conjugate-pair split-radix algorithm). The improved algorithms for DCT-III, DST-II, and DST-III follow immediately from the improved count for the DCT-II.
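The length-4N embedding can be checked numerically with a naive O(N²) sketch: placing x at the odd indices of a length-4N sequence, mirrored about the midpoint, makes the first N DFT outputs exactly twice the (unnormalized) DCT-II coefficients.

```python
import cmath
import math

def dct2(x):
    """Naive unnormalized DCT-II: X_k = sum_n x_n cos(pi*(n+1/2)*k/N)."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N)
                for n in range(N)) for k in range(N)]

def dct2_via_dft(x):
    """DCT-II as a DFT of length 4N with even symmetry:
    y[2n+1] = y[4N-2n-1] = x[n], zeros elsewhere.  The imaginary parts
    of the DFT cancel and Y_k = 2 * X_k for k = 0..N-1."""
    N = len(x)
    y = [0.0] * (4 * N)
    for n in range(N):
        y[2 * n + 1] = x[n]
        y[4 * N - 2 * n - 1] = x[n]
    Y = [sum(y[m] * cmath.exp(-2j * cmath.pi * m * k / (4 * N))
             for m in range(4 * N)) for k in range(N)]
    return [Yk.real / 2 for Yk in Y]

x = [1.0, 2.0, 3.0, 4.0]
a, b = dct2(x), dct2_via_dft(x)
assert all(abs(u - v) < 1e-9 for u, v in zip(a, b))
```

The paper's fast algorithms come from pruning the operations this direct DFT wastes on the zero and mirrored entries; the sketch only verifies the underlying identity.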