Results 1 - 8 of 8
Graph-based algorithms for Boolean function manipulation
IEEE Transactions on Computers, 1986
Abstract

Cited by 2927 (46 self)
In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.
Index Terms: Boolean functions, symbolic manipulation, binary decision diagrams, logic design verification
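As a rough illustration of the data structure this abstract describes (a sketch, not Bryant's implementation), the two reduction rules, eliminating redundant tests and sharing identical subgraphs, can be captured in a few lines of Python by hash-consing (variable, low, high) triples under a fixed variable order:

```python
# Minimal sketch of a reduced ordered BDD (OBDD). Terminals are the ids
# 0 and 1; internal nodes are hash-consed (variable, low, high) triples,
# which makes the representation canonical for a fixed variable order.

class BDD:
    def __init__(self, num_vars):
        self.num_vars = num_vars
        self.unique = {}  # (var, low, high) -> node id; ids 0, 1 reserved

    def node(self, var, low, high):
        if low == high:                 # redundant test: skip this node
            return low
        key = (var, low, high)
        if key not in self.unique:      # hash-consing shares equal subgraphs
            self.unique[key] = len(self.unique) + 2
        return self.unique[key]

    def build(self, f, env=()):
        """Build the OBDD of a Boolean function f, given as a callable on a
        bit tuple, by Shannon expansion in variable order. Exponential time,
        for illustration only; Bryant's algorithms work on graphs directly."""
        var = len(env)
        if var == self.num_vars:
            return int(bool(f(env)))    # terminal 0 or 1
        low = self.build(f, env + (0,))
        high = self.build(f, env + (1,))
        return self.node(var, low, high)
```

Because the reduced form is canonical for a given variable order, two equivalent functions build to the same node id, which is what makes the equivalence checks used in logic design verification cheap once the graphs exist.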
On the Complexity of VLSI Implementations and Graph Representations of Boolean Functions with Application to Integer Multiplication
IEEE Transactions on Computers, 1998
Abstract

Cited by 233 (10 self)
This paper presents lower bound results on Boolean function complexity under two different models. The first is an abstraction of tradeoffs between chip area and speed in very large scale integrated (VLSI) circuits. The second is the ordered binary decision diagram (OBDD) representation used as a data structure for symbolically representing and manipulating Boolean functions. These lower bounds demonstrate the fundamental limitations of VLSI as an implementation medium, and OBDDs as a data structure. They also lend insight into what properties of a Boolean function lead to high complexity under these models. Related techniques can be...
Special Purpose Parallel Computing
Lectures on Parallel Computation, 1993
Abstract

Cited by 77 (5 self)
A vast amount of work has been done in recent years on the design, analysis, implementation and verification of special purpose parallel computing systems. This paper presents a survey of various aspects of this work. A long, but by no means complete, bibliography is given.
1. Introduction
Turing [365] demonstrated that, in principle, a single general purpose sequential machine could be designed which would be capable of efficiently performing any computation which could be performed by a special purpose sequential machine. The importance of this universality result for subsequent practical developments in computing cannot be overstated. It showed that, for a given computational problem, the additional efficiency advantages which could be gained by designing a special purpose sequential machine for that problem would not be great. Around 1944, von Neumann produced a proposal [66, 389] for a general purpose stored-program sequential computer which captured the fundamental principles of...
Models of Computation: Exploring the Power of Computing
Abstract

Cited by 57 (7 self)
Theoretical computer science treats any computational subject for which a good model can be created. Research on formal models of computation was initiated in the 1930s and 1940s by Turing, Post, Kleene, Church, and others. In the 1950s and 1960s programming languages, language translators, and operating systems were under development and therefore became both the subject and basis for a great deal of theoretical work. The power of computers of this period was limited by slow processors and small amounts of memory, and thus theories (models, algorithms, and analysis) were developed to explore the efficient use of computers as well as the inherent complexity of problems. The former subject is known today as algorithms and data structures, the latter computational complexity. The focus of theoretical computer scientists in the 1960s on languages is reflected in the first textbook on the subject, Formal Languages and Their Relation to Automata by John Hopcroft and Jeffrey Ullman. This influential book led to the creation of many language-centered theoretical computer science courses; many introductory theory courses today continue to reflect the content of this book and the interests of theoreticians of the 1960s and early 1970s. Although...
The Area-Time Complexity of Binary Multiplication
Journal of the ACM, 1981
Abstract

Cited by 28 (1 self)
The problem of performing multiplication of n-bit binary numbers on a chip is considered. Let A denote the chip area and T the time required to perform multiplication. By using a model of computation which is a realistic approximation to current and anticipated LSI or VLSI technology, it is shown that (A/A₀)(T/T₀)^{2α} ≥ n^{1+α} for all α ∈ [0, 1], where A₀ and T₀ are positive constants which depend on the technology but are independent of n. The exponent 1 + α is the best possible. A consequence of this result is that binary multiplication is "harder" than binary addition. More precisely, if (AT^{2α})_M(n) and (AT^{2α})_A(n) denote the minimum area-time complexity for n-bit binary multiplication and addition, respectively, then the ratio (AT^{2α})_M(n) / (AT^{2α})_A(n) is Ω(n^{1-α}) for 0 ≤ α ≤ 1/2 and Ω(n^{α}) for 1/2 ≤ α ≤ 1 (and is Ω(n^{1/2}) for all α ≥ 0).
A time domain approach for avoiding crosstalk in optical blocking multistage interconnection networks
J. Lightwave Technology, 1994
Abstract

Cited by 13 (4 self)
Crosstalk can be avoided by ensuring that a switch is not used by two connections simultaneously. In order to support crosstalk-free communications among N inputs and N outputs, a space domain approach dilates an N×N network into one that is essentially equivalent to a 2N×2N network. Path conflicts, however, may still exist in dilated networks. This paper proposes a time domain approach for avoiding crosstalk. Such an approach can be regarded as "dilating" a network in time, instead of space. More specifically, the connections that need to use the same switch are established during different time slots. This way, path conflicts are automatically avoided. The time domain dilation is useful for overcoming the limits on the network size while utilizing the high bandwidth of optical interconnects. We study the set of permutations whose crosstalk-free connections can be established in just two time slots using the time domain approach. While the space domain approach trades hardware complexity for crosstalk-free communications, the time domain approach trades time complexity. We compare the proposed time domain approach to the space domain approach by analyzing the tradeoffs involved in these two approaches.
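One way to picture the time domain approach described in this abstract is as 2-coloring a conflict graph whose edges join connections that pass through a common switch; a 2-coloring, when it exists, is exactly an assignment of the connections to two crosstalk-free time slots. The sketch below is illustrative only (the helper switches_used and the brute-force conflict graph are assumptions, not the paper's construction, which characterizes the realizable permutations directly):

```python
from collections import deque

def two_slot_schedule(connections, switches_used):
    """Assign each connection time slot 0 or 1 so that no two connections
    sharing a switch are active in the same slot. Returns the slot list, or
    None when two slots do not suffice (the conflict graph has an odd cycle).
    switches_used(c) maps a connection to the set of switches on its path;
    it is an assumed helper standing in for the routing of a real network."""
    n = len(connections)
    adj = [[] for _ in range(n)]
    for i in range(n):                      # conflict edge: shared switch
        for j in range(i + 1, n):
            if switches_used(connections[i]) & switches_used(connections[j]):
                adj[i].append(j)
                adj[j].append(i)
    slot = [None] * n
    for start in range(n):                  # BFS 2-coloring, per component
        if slot[start] is not None:
            continue
        slot[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if slot[v] is None:
                    slot[v] = 1 - slot[u]
                    queue.append(v)
                elif slot[v] == slot[u]:    # odd conflict cycle found
                    return None
    return slot
```

A None result corresponds to a request pattern outside the two-slot set studied above: an odd cycle of switch conflicts forces at least three time slots.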
Fourier Transforms in VLSI
Abstract
This paper surveys nine designs for VLSI circuits that compute N-element Fourier transforms. The largest of the designs requires O(N² log N) units of silicon area; it can start a new Fourier transform every O(log N) time units. The smallest designs have about 1/Nth of this throughput, but they require only 1/Nth as much area. The designs exhibit an area-time tradeoff: the smaller ones are slower, for two reasons. First, they may have fewer arithmetic units and thus less parallelism. Second, their arithmetic units may be interconnected in a pattern that is less efficient but more compact. The optimality of several of the designs is immediate, since they achieve the limiting area · time² performance of Ω(N² log² N).
Index Terms: Algorithms implemented in hardware, area-time complexity, computational complexity, FFT, Fourier transform, mesh-connected computers, parallel algorithms, shuffle-exchange