Synchronous data flow, 1987
Cited by 520 (44 self)

Abstract
Data flow is a natural paradigm for describing DSP applications for concurrent implementation on parallel hardware. Data flow programs for signal processing are directed graphs where each node represents a function and each arc represents a signal path. Synchronous data flow (SDF) is a special case of data flow (either atomic or large grain) in which the number of data samples produced or consumed by each node on each invocation is specified a priori. Nodes can be scheduled statically (at compile time) onto single or parallel programmable processors, so the runtime overhead usually associated with data flow evaporates. Multiple sample rates within the same system are easily and naturally handled. Conditions for the correctness of an SDF graph are explained, and scheduling algorithms are described for homogeneous parallel processors sharing memory. A preliminary SDF software system for automatically generating assembly language code for DSP microcomputers is described. Two new efficiency techniques are introduced: static buffering and an extension of SDF for efficiently implementing conditionals.
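The a priori production/consumption rates described above give rise to the SDF balance equations: for each arc from node u to node v, where u produces p samples per firing and v consumes c, a valid static schedule needs a repetition vector r with r[u]·p = r[v]·c. A minimal sketch of solving them (the graph encoding and the propagation-based solver are illustrative, not the paper's algorithm):

```python
from fractions import Fraction
from math import gcd

def repetition_vector(num_nodes, edges):
    """Solve the SDF balance equations r[u]*p == r[v]*c for every edge
    (u, v, p, c); return the smallest positive integer solution, or
    None if the rates are inconsistent (graph not schedulable).
    Assumes the graph is connected."""
    rate = [None] * num_nodes
    rate[0] = Fraction(1)
    changed = True
    while changed:  # propagate rates along edges until stable
        changed = False
        for u, v, p, c in edges:
            if rate[u] is not None and rate[v] is None:
                rate[v] = rate[u] * p / c
                changed = True
            elif rate[v] is not None and rate[u] is None:
                rate[u] = rate[v] * c / p
                changed = True
            elif rate[u] is not None and rate[v] is not None:
                if rate[u] * p != rate[v] * c:
                    return None  # inconsistent sample rates
    # Scale the fractional rates to the smallest integer vector.
    lcm = 1
    for r in rate:
        lcm = lcm * r.denominator // gcd(lcm, r.denominator)
    ints = [int(r * lcm) for r in rate]
    g = 0
    for x in ints:
        g = gcd(g, x)
    return [x // g for x in ints]

# A three-node multirate chain: A -(2:3)-> B -(1:2)-> C
print(repetition_vector(3, [(0, 1, 2, 3), (1, 2, 1, 2)]))  # [3, 2, 1]
```

Firing A three times produces 6 samples, exactly matching B's two firings consuming 3 each; this is the "multiple sample rates handled naturally" property the abstract mentions.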
A Fast Fourier Transform Compiler, 1999
Cited by 170 (5 self)

Abstract
The FFTW library for computing the discrete Fourier transform (DFT) has gained wide acceptance in both academia and industry, because it provides excellent performance on a variety of machines (even competitive with or faster than equivalent libraries supplied by vendors). In FFTW, most of the performance-critical code was generated automatically by a special-purpose compiler, called genfft, that outputs C code. Written in Objective Caml, genfft can produce DFT programs for any input length, and it can specialize the DFT program for the common case where the input data are real instead of complex. Unexpectedly, genfft “discovered” algorithms that were previously unknown, and it was able to reduce the arithmetic complexity of some other existing algorithms. This paper describes the internals of this special-purpose compiler in some detail, and it argues that a specialized compiler is a valuable tool.
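As a baseline for what such a generator specializes, here is the textbook O(n²) DFT definition in Python (an illustrative sketch only; genfft starts from this definition but emits optimized straight-line C, not code like this):

```python
import cmath

def dft(x):
    """Direct O(n^2) DFT by definition:
    X[k] = sum over m of x[m] * exp(-2*pi*i*k*m / n)."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n)
                for m in range(n))
            for k in range(n)]

# The DFT of a unit impulse is flat: every bin equals 1.
X = dft([1, 0, 0, 0])
print([round(v.real) for v in X])  # [1, 1, 1, 1]
```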
A modified split-radix FFT with fewer arithmetic operations, IEEE Trans. Signal Processing, 2007
Portable High-Performance Programs, 1999
Cited by 17 (0 self)
Behavioral Synthesis Techniques for Intellectual Property Protection, unpublished manuscript, 1997
Cited by 16 (8 self)

Abstract
The economic viability of the reusable core-based design paradigm depends on the development of techniques for intellectual property protection. We introduce the first dynamic watermarking technique for protecting the value of intellectual property of CAD and compilation tools and reusable core components. The essence of the new approach is the addition of a set of design and timing constraints which encodes the author’s signature. The constraints are selected in such a way that they result in minimal hardware overhead while embedding a signature which is unique and difficult to detect, remove, and forge. We establish the first set of relevant metrics which forms the basis for the quantitative analysis, evaluation, and comparison of watermarking techniques. We develop a generic approach for signature data hiding in designs, which is applicable in conjunction with an arbitrary behavioral synthesis task, such as scheduling, assignment, allocation, and transformations. Error correcting codes are used to augment the protection of the signature data from tampering attempts. On a large set of design examples, studies indicate the effectiveness of the new approach in the sense that the signature data, which are highly resilient, difficult to detect and remove, and yet easy to verify, can be embedded in designs with very low hardware overhead.
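The constraint-addition idea can be illustrated with a toy sketch: derive a deterministic set of extra precedence constraints from a hash of the author's signature, to be added to the scheduler's input before synthesis. Everything below (the hash choice, the pair encoding, the function name) is a hypothetical illustration, not the paper's construction:

```python
import hashlib
from itertools import combinations

def signature_constraints(signature, operations, n_constraints):
    """Map an author signature to a deterministic set of extra
    precedence constraints (a must be scheduled before b). Adding
    these to the synthesis problem embeds the signature; re-deriving
    the same set later from the signature supports an ownership claim
    against the scheduled design."""
    pairs = sorted(combinations(sorted(operations), 2))
    seed = int.from_bytes(hashlib.sha256(signature.encode()).digest(), "big")
    chosen = []
    for _ in range(n_constraints):
        chosen.append(pairs[seed % len(pairs)])
        seed //= len(pairs)
    return chosen

# The same signature always selects the same constraints.
ops = ["add1", "add2", "mul1", "mul2"]
print(signature_constraints("Alice <alice@example.com>", ops, 2))
```

Because the selection is pseudo-random in the signature, the extra constraints look like ordinary design constraints to an attacker, which is the "difficult to detect and remove" property the abstract claims.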
Active Hardware Metering for Intellectual Property Protection and Security, USENIX Security, 2007
Cited by 13 (7 self)

Abstract
We introduce the first hardware metering scheme that enables reliable low overhead proofs for the number of manufactured parts. The key idea is to make each design slightly different. Therefore, if two identical hardware designs or a design that is not reported by the foundry are detected, the design house has proof of misconduct. We start by establishing the connection between the requirements for hardware metering and the synthesis process. Furthermore, we present a mathematical analysis of the statistical accuracy of the proposed hardware metering scheme. The effectiveness of the metering
Intellectual property metering, Inform. Hiding, 2001
Cited by 13 (10 self)

Abstract
We have developed the first hardware and software (intellectual property) metering scheme that enables reliable low overhead proofs for the number of manufactured parts and copied programs. The key idea is to make each design slightly different during the postprocessing phase. Therefore, if two identical hardware/software designs or a design that is not reported by the foundry are detected, the design house has proof of misconduct. We start by establishing implementation requirements for hardware metering. We also establish the connection between the requirements for hardware and software metering and the synthesis process. Furthermore, we present a mathematical analysis of the statistical accuracy of the proposed hardware and software metering schemes. The effectiveness of the metering scheme is demonstrated on a number
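The statistical-accuracy claim can be illustrated with a standard birthday-bound calculation (an illustrative sketch, not the paper's analysis): if each manufactured part carries one of k distinguishable design variants, then finding two fielded parts with the same variant is evidence of unreported overbuilding, and the probability of such a collision grows quickly with the number of parts.

```python
def collision_probability(k, n):
    """Probability that at least two of n parts, each randomly assigned
    one of k equally likely design variants, share a variant (the
    birthday bound). In a metering scheme, a collision among legitimately
    fielded parts signals unauthorized overproduction."""
    p_distinct = 1.0
    for i in range(n):
        p_distinct *= (k - i) / k
    return 1.0 - p_distinct

# Classic birthday problem as a sanity check: 23 people, 365 days.
print(round(collision_probability(365, 23), 3))  # 0.507
```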
Relationships between digital signal processing and control and estimation theory, Proceedings of the IEEE, 1978
Cited by 8 (4 self)

Abstract
The purpose of this paper is to explore several current research directions in the fields of digital signal processing and modern control and estimation theory. We examine topics such as stability theory, linear prediction and parameter identification, system synthesis and implementation, two-dimensional filtering, decentralized control and estimation, and image processing, in order to uncover some of the basic similarities and differences in the goals, techniques, and philosophy of the two disciplines.
Behavioral synthesis techniques for intellectual property protection, 2003
Cited by 5 (4 self)

Abstract
We introduce dynamic watermarking techniques for protecting the value of intellectual property of CAD and compilation tools and reusable design components. The essence of the new approach is the addition of a set of design and timing constraints which encodes the author’s signature. The constraints are selected in such a way that they result in a minimal hardware overhead while embedding a unique signature that is difficult to remove and forge. The techniques are applicable in conjunction with an arbitrary behavioral synthesis task such as scheduling, assignment, allocation, transformation, and template matching. On a large set of design examples, studies indicate the effectiveness of the new approach: the signature data is highly resilient, difficult to detect and remove, yet easy to verify, and can be embedded in designs with very low hardware overhead. For example, the probability that the same design with the embedded signature is independently obtained by other designers is less than 1 in 10^102, and no register overhead was incurred. The probability of tampering, the probability that part of the embedded signature can be removed by random attempts, is shown to be extremely low, and the watermark is additionally protected from such tampering with error-correcting codes.
On the use of multiple constant multiplication in polyphase FIR filters and filter banks, under review for the Nordic Signal Processing Symposium, 2004
Cited by 4 (2 self)

Abstract
Multiple constant multiplication (MCM) has been shown to be an efficient way to reduce the number of additions and subtractions in FIR filter implementations. However, for polyphase decomposed FIR filters and filter banks, the problem can be formulated in three different ways: as one MCM block containing all coefficients, as one MCM block for each subfilter, or as a matrix MCM block. In this work we compare these approaches in terms of complexity, both for the MCM blocks themselves and for the remaining hardware, such as structural additions and delay elements.
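The kind of sharing MCM exploits can be shown on just two coefficients (a toy illustration; the paper's comparison concerns full polyphase filter banks):

```python
def multiply_by_constants(x):
    """Multiplierless evaluation of {5x, 13x} using only shifts and
    adds, sharing the intermediate 5x, since 13x = (x << 3) + 5x.
    One shared MCM block over both coefficients needs 2 adders here;
    separate blocks per coefficient would need 3 (1 for 5x, plus 2
    for 13x = 8x + 4x + x)."""
    t5 = (x << 2) + x    # 5x = 4x + x
    t13 = (x << 3) + t5  # 13x = 8x + 5x (reuses t5)
    return t5, t13

print(multiply_by_constants(3))  # (15, 39)
```

Which formulation finds the most sharing, over all coefficients of all subfilters, one subfilter at a time, or across the matrix structure, is exactly the trade-off the abstract sets up.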