Results 1–10 of 16
Graph-based algorithms for Boolean function manipulation
 IEEE Transactions on Computers
, 1986
Abstract

Cited by 3502 (46 self)
In this paper we present a new data structure for representing Boolean functions and an associated set of manipulation algorithms. Functions are represented by directed, acyclic graphs in a manner similar to the representations introduced by Lee [1] and Akers [2], but with further restrictions on the ordering of decision variables in the graph. Although a function requires, in the worst case, a graph of size exponential in the number of arguments, many of the functions encountered in typical applications have a more reasonable representation. Our algorithms have time complexity proportional to the sizes of the graphs being operated on, and hence are quite efficient as long as the graphs do not grow too large. We present experimental results from applying these algorithms to problems in logic design verification that demonstrate the practicality of our approach.
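The core of the structure described above is a canonical, hash-consed decision graph with a fixed variable order, combined by Shannon expansion. A minimal sketch in Python (the names `BDD`, `mk`, `apply` are illustrative, not taken from the paper):

```python
class BDD:
    """Toy reduced ordered BDD over variables 0 < 1 < ... (illustrative only)."""

    def __init__(self):
        self.nodes = {}       # id -> (var, lo, hi); ids 0 and 1 are the terminals
        self.unique = {}      # (var, lo, hi) -> id, enforcing canonicity
        self.next_id = 2

    def mk(self, var, lo, hi):
        if lo == hi:          # reduction rule: drop a redundant test
            return lo
        key = (var, lo, hi)
        if key not in self.unique:
            self.unique[key] = self.next_id
            self.nodes[self.next_id] = key
            self.next_id += 1
        return self.unique[key]

    def var(self, i):
        return self.mk(i, 0, 1)

    def apply(self, op, u, v, memo=None):
        """Combine two BDDs with a Boolean operator via Shannon expansion;
        memoization bounds the work by the product of the two graph sizes."""
        if memo is None:
            memo = {}
        if u < 2 and v < 2:   # both terminal: evaluate the operator directly
            return op(u, v)
        if (u, v) in memo:
            return memo[(u, v)]
        uvar = self.nodes[u][0] if u > 1 else float("inf")
        vvar = self.nodes[v][0] if v > 1 else float("inf")
        top = min(uvar, vvar)                  # expand on the earliest variable
        u0, u1 = self.nodes[u][1:] if uvar == top else (u, u)
        v0, v1 = self.nodes[v][1:] if vvar == top else (v, v)
        r = self.mk(top,
                    self.apply(op, u0, v0, memo),
                    self.apply(op, u1, v1, memo))
        memo[(u, v)] = r
        return r

b = BDD()
x0, x1 = b.var(0), b.var(1)
f = b.apply(lambda p, q: p & q, x0, x1)   # x0 AND x1
g = b.apply(lambda p, q: p & q, x1, x0)   # same function, built the other way
assert f == g    # canonicity: equal functions share one node id
```

Because equal functions always hash-cons to the same node, equivalence checking reduces to an integer comparison, which is what makes the representation attractive for logic design verification.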
On the complexity of VLSI implementations and graph representations of Boolean functions with application to integer multiplication
 IEEE Transactions on Computers
, 1991
Models of Computation: Exploring the Power of Computing
Abstract

Cited by 86 (6 self)
Theoretical computer science treats any computational subject for which a good model can be created. Research on formal models of computation was initiated in the 1930s and 1940s by Turing, Post, Kleene, Church, and others. In the 1950s and 1960s programming languages, language translators, and operating systems were under development and therefore became both the subject and basis for a great deal of theoretical work. The power of computers of this period was limited by slow processors and small amounts of memory, and thus theories (models, algorithms, and analysis) were developed to explore the efficient use of computers as well as the inherent complexity of problems. The former subject is known today as algorithms and data structures, the latter as computational complexity. The focus of theoretical computer scientists in the 1960s on languages is reflected in the first textbook on the subject, Formal Languages and Their Relation to Automata by John Hopcroft and Jeffrey Ullman. This influential book led to the creation of many language-centered theoretical computer science courses; many introductory theory courses today continue to reflect the content of this book and the interests of theoreticians of the 1960s and early 1970s. Although ...
Special Purpose Parallel Computing
 Lectures on Parallel Computation
, 1993
Abstract

Cited by 81 (6 self)
A vast amount of work has been done in recent years on the design, analysis, implementation and verification of special purpose parallel computing systems. This paper presents a survey of various aspects of this work. A long, but by no means complete, bibliography is given.
1. Introduction
Turing [365] demonstrated that, in principle, a single general purpose sequential machine could be designed which would be capable of efficiently performing any computation which could be performed by a special purpose sequential machine. The importance of this universality result for subsequent practical developments in computing cannot be overstated. It showed that, for a given computational problem, the additional efficiency advantages which could be gained by designing a special purpose sequential machine for that problem would not be great. Around 1944, von Neumann produced a proposal [66, 389] for a general purpose stored-program sequential computer which captured the fundamental principles of ...
The Area-Time Complexity of Binary Multiplication
 Journal of the ACM
, 1981
Abstract

Cited by 42 (1 self)
The problem of performing multiplication of n-bit binary numbers on a chip is considered. Let A denote the chip area and T the time required to perform multiplication. By using a model of computation which is a realistic approximation to current and anticipated LSI or VLSI technology, it is shown that (A/A₀)(T/T₀)^(2α) ≥ n^(1+α) for all α ∈ [0, 1], where A₀ and T₀ are positive constants which depend on the technology but are independent of n. The exponent 1 + α is the best possible. A consequence of this result is that binary multiplication is "harder" than binary addition. More precisely, if (AT^(2α))_M(n) and (AT^(2α))_A(n) denote the minimum area-time complexity for n-bit binary multiplication and addition, respectively, then their ratio (AT^(2α))_M(n) / (AT^(2α))_A(n) is Ω(n^α) for 0 ≤ α ≤ 1/2 and Ω(n^(1/2)) for 1/2 ≤ α ≤ 1.
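The shape of the bound is easiest to see at its two endpoints (a sketch, with the technology constants A₀, T₀ normalized to 1):

```latex
% Endpoints of A\,T^{2\alpha} = \Omega(n^{1+\alpha}), \alpha \in [0,1]:
\alpha = 0:\quad A = \Omega(n)
  \qquad\text{(area alone must be at least linear in the input width)}
\alpha = 1:\quad A\,T^{2} = \Omega(n^{2})
  \qquad\text{(the classical } AT^2 \text{ area-time tradeoff)}
```

Intermediate values of α interpolate between the pure-area and the AT² forms of the lower bound.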
A time domain approach for avoiding crosstalk in optical blocking multistage interconnection networks
 J. Lightwave Technology
, 1994
Abstract

Cited by 19 (4 self)
Crosstalk can be avoided by ensuring that a switch is not used by two connections simultaneously. In order to support crosstalk-free communications among N inputs and N outputs, a space domain approach dilates an N×N network into one that is essentially equivalent to a 2N×2N network. Path conflicts, however, may still exist in dilated networks. This paper proposes a time domain approach for avoiding crosstalk. Such an approach can be regarded as "dilating" a network in time instead of space. More specifically, the connections that need to use the same switch are established during different time slots; this way, path conflicts are automatically avoided. The time domain dilation is useful for overcoming the limits on the network size while utilizing the high bandwidth of optical interconnects. We study the set of permutations whose crosstalk-free connections can be established in just two time slots using the time domain approach. While the space domain approach trades hardware complexity for crosstalk-free communications, the time domain approach trades time complexity. We compare the proposed time domain approach to the space domain approach by analyzing the tradeoffs involved in the two.
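The idea of "dilating in time" can be sketched with a greedy scheduler: connections that share a switch are pushed into different time slots. The switch sets below are hypothetical and stand in for the switches a connection would traverse in a concrete multistage network; this is an illustration of the scheduling idea, not the paper's algorithm:

```python
def schedule_slots(connections):
    """connections: dict name -> set of switch ids the connection passes through.
    Returns dict name -> time slot, chosen greedily so that no switch is
    used by two connections in the same slot (hence no crosstalk)."""
    slots = []        # slots[t] = set of switches already claimed in slot t
    assignment = {}
    for name, switches in connections.items():
        for t, used in enumerate(slots):
            if used.isdisjoint(switches):   # no shared switch in this slot
                used |= switches
                assignment[name] = t
                break
        else:                               # every slot conflicts: open a new one
            slots.append(set(switches))
            assignment[name] = len(slots) - 1
    return assignment

# Hypothetical 3-connection example: "a" and "b" collide on switch s2,
# "b" and "c" collide on s1, so "b" must be moved to a second time slot.
conns = {
    "a": {"s0", "s2"},
    "b": {"s1", "s2"},
    "c": {"s1", "s3"},
}
print(schedule_slots(conns))   # {'a': 0, 'b': 1, 'c': 0}
```

The space-domain alternative would instead duplicate switches so that "a", "b", and "c" never meet; here the same conflict is resolved by spending an extra time slot rather than extra hardware.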
Why area might reduce power in nanoscale CMOS
 in Proceedings of the 2005 IEEE International Symposium on Circuits and Systems (ISCAS'05)
, 2005
Abstract

Cited by 5 (1 self)
In this paper we explore the relationship between power and area. By exploiting parallelism (and thus using more area) one can reduce the switching frequency, allowing a reduction in VDD, which results in a reduction in power. Under a scaling regime which allows threshold voltage to increase as VDD decreases, we find that dynamic and subthreshold power loss in CMOS exhibit a dependence on area proportional to A^((σ−3)/σ), while gate leakage power ∝ A^((σ−6)/σ) and short-circuit power ∝ A^((σ−8)/σ). Thus, with the large number of devices at our disposal, we can exploit techniques such as spatial computing (tailoring the program directly to the hardware) to overcome the negative effects of scaling. The value of σ describes the effectiveness of the technique for a particular circuit and/or algorithm: for circuits that exhibit a value of σ ≤ 3, power will be a constant or decreasing function of area. We briefly speculate on how σ might be influenced by a move to nanoscale technology.
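The claimed exponents can be tabulated directly. A small sketch (the σ values are illustrative inputs, not measurements from the paper):

```python
def power_exponents(sigma):
    """Exponent e in P ∝ A**e for each loss mechanism, per the abstract's
    A^((σ-3)/σ), A^((σ-6)/σ), A^((σ-8)/σ) dependences."""
    return {
        "dynamic+subthreshold": (sigma - 3) / sigma,
        "gate_leakage":         (sigma - 6) / sigma,
        "short_circuit":        (sigma - 8) / sigma,
    }

# For sigma <= 3 every exponent is <= 0, so adding area does not increase
# (and generally reduces) each power component:
print(power_exponents(3))   # dynamic exponent is exactly 0.0
print(power_exponents(4))   # dynamic exponent 0.25: power now grows with area
```

This makes the paper's threshold concrete: σ = 3 is exactly where the dynamic/subthreshold exponent changes sign.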
A VLSI layout for a pipelined Dadda multiplier
 ACM Transactions on Computer Systems
, 1983
Abstract

Cited by 4 (0 self)
Parallel counters (unary-to-binary converters) are the principal component of a Dadda multiplier. We specify a design first for a pipelined parallel counter, and then for a complete multiplier. As a result of its structural regularity, the layout is suitable for use in a VLSI implementation. We analyze the complexity of the resulting design using a VLSI model of computation, showing that it is optimal with respect to both its period and latency. In this sense the design compares favorably with other recent VLSI multiplier designs.
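Functionally, a parallel counter built from full adders can be sketched as below: columns of equal-weight bits are compressed Dadda-style until at most two bits remain per weight, then a final carry-propagate addition finishes the count. This is a behavioral sketch of the counting scheme, not the paper's pipelined layout:

```python
def full_adder(a, b, c):
    """One-bit full adder: three inputs -> (sum, carry)."""
    s = a ^ b ^ c
    carry = (a & b) | (a & c) | (b & c)
    return s, carry

def parallel_counter(bits):
    """Binary count of ones in `bits`: compress each weight column with
    full adders (3 bits -> sum at w, carry at w+1) until every column
    holds at most two bits, then do the final carry-propagate addition
    (modeled here with ordinary integer addition)."""
    columns = {0: list(bits)}           # weight -> bits of that weight
    while any(len(c) > 2 for c in columns.values()):
        nxt = {}
        for w in sorted(columns):
            col = list(columns[w])
            while len(col) >= 3:        # one full adder per 3 bits
                s, cy = full_adder(col.pop(), col.pop(), col.pop())
                nxt.setdefault(w, []).append(s)
                nxt.setdefault(w + 1, []).append(cy)
            nxt.setdefault(w, []).extend(col)   # leftovers pass through
        columns = nxt
    row_a = sum(col[0] << w for w, col in columns.items() if len(col) >= 1)
    row_b = sum(col[1] << w for w, col in columns.items() if len(col) >= 2)
    return row_a + row_b                # final two-operand adder stage

print(parallel_counter([1, 1, 1, 0, 1, 1]))   # counts five ones -> 5
```

Each full adder removes a net one bit, so the reduction terminates in O(log n) stages; in a Dadda multiplier the same compression runs over the partial-product columns rather than a single unary column.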
Collision Finding with Many Classical or Quantum Processors
Abstract

Cited by 2 (0 self)
In this thesis, we investigate the cost of finding collisions in a black-box function, a problem that is of fundamental importance in cryptanalysis. Inspired by the excellent performance of the heuristic rho method of collision finding, we define several new models of complexity that take into account the cost of moving information across a large space, and lay the groundwork for studying the performance of classical and quantum algorithms in these models.