Results 1–10 of 19
A coding framework for low-power address and data busses
 IEEE Transactions on VLSI Systems
, 1999
Abstract

Cited by 46 (1 self)
Abstract—This paper presents a source-coding framework for the design of coding schemes to reduce transition activity. These schemes are suited for high-capacitance busses, where the extra power dissipation due to the encoder and decoder circuitry is offset by the power savings at the bus. In this framework, a data source (characterized in a probabilistic manner) is first passed through a decorrelating function; next, a variant of an entropy coding function is employed, which reduces the transition activity. The framework is then employed to derive novel encoding schemes whereby practical forms for the decorrelating and entropy coding functions are proposed. Simulation results with an encoding scheme for data busses indicate an average reduction in transition activity of 36%. This translates into a reduction in total power dissipation for bus capacitances greater than 14 pF/b in 1.2 μm CMOS technology. For a typical bus capacitance of 50 pF/b, there is a 36% reduction in power dissipation and eight times more power savings compared to existing schemes. Simulation results with an encoding scheme for instruction address busses indicate an average reduction in transition activity by a factor of 1.5 over known coding schemes. Index Terms—CMOS VLSI, coding, high-capacitance busses, low-power design, switching activity.
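The framework above targets exactly the quantity a bus designer can count: the number of lines that toggle between consecutive words. As a hedged sketch, the following compares raw transition activity against Bus-Invert coding, one of the "existing schemes" such frameworks are compared to (it is not this paper's scheme), on a made-up 8-bit trace:

```python
def transitions(prev, cur, width=8):
    """Number of bus lines that toggle between consecutive words."""
    return bin((prev ^ cur) & ((1 << width) - 1)).count("1")

def bus_invert(words, width=8):
    """Bus-Invert: send the complement (and raise an invert line) whenever
    the plain word would toggle more than half of the bus lines."""
    bus, inv, prev = [], [], 0
    for w in words:
        if transitions(prev, w, width) > width // 2:
            w ^= (1 << width) - 1      # send the complement
            inv.append(1)
        else:
            inv.append(0)
        bus.append(w)
        prev = w
    return bus, inv

trace = [0x00, 0xFF, 0x0F, 0xF0, 0xAA, 0x55]   # illustrative trace, not from the paper
raw = sum(transitions(a, b) for a, b in zip([0] + trace, trace))
bus, inv = bus_invert(trace)
coded = sum(transitions(a, b) for a, b in zip([0] + bus, bus))
coded += sum(a ^ b for a, b in zip([0] + inv, inv))   # invert-line toggles
print(raw, coded)   # the coded bus toggles fewer lines
```

For this adversarial trace the savings are large; real traffic sees smaller but still useful reductions, which is why the paper's probabilistic source model matters.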
Toward Achieving Energy Efficiency in Presence of Deep Submicron Noise
 IEEE Transactions on VLSI Systems
, 2000
Abstract

Cited by 27 (1 self)
Presented in this paper are 1) information-theoretic lower bounds on the energy consumption of noisy digital gates and 2) the concept of noise tolerance via coding for achieving energy efficiency in the presence of noise. In particular, lower bounds on a) circuit speed and supply voltage; b) transition activity in the presence of noise; c) dynamic energy dissipation; and d) total (dynamic and static) energy dissipation are derived. A surprising result is that, in a scenario where the dynamic component of power dissipation dominates, the supply voltage for minimum energy operation is greater than the minimum supply voltage for reliable operation. We then propose noise tolerance via coding to approach the lower bounds on energy dissipation. We show that the lower bounds on energy for an off-chip I/O signaling example are a factor of 24 below present-day systems. A very simple Hamming code can reduce the energy consumption by a factor of 3, while Reed-Muller (RM) codes give a factor-of-4 reduction in energy dissipation.
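The "very simple Hamming code" mentioned above can be made concrete with the textbook Hamming(7,4) code; this sketch is a generic single-error-correction illustration, not the paper's I/O signaling setup:

```python
def hamming74_encode(d):
    """d: 4 data bits -> 7-bit codeword laid out [p1, p2, d1, p4, d2, d3, d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p4 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s4   # 1-based position of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
cw = hamming74_encode(data)
cw[4] ^= 1                        # inject a single-bit channel error
recovered = hamming74_decode(cw)
```

The energy argument in the abstract is that this redundancy lets the gates or drivers run at lower voltage (hence lower energy) while the code cleans up the resulting occasional errors.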
Energy-Efficient Signal Processing via Algorithmic Noise-Tolerance
, 1999
Abstract

Cited by 22 (2 self)
In this paper, we propose a framework for low-energy digital signal processing (DSP) where the supply voltage is scaled beyond the critical voltage required to match the critical path delay to the throughput. This deliberate introduction of input-dependent errors leads to degradation in the algorithmic performance, which is compensated for via algorithmic noise-tolerance (ANT) schemes. The resulting setup, which comprises the DSP architecture operating at subcritical voltage and the error-control scheme, is referred to as soft DSP. It is shown that technology scaling renders the proposed scheme more effective, as the delay penalty suffered due to voltage scaling reduces due to short-channel effects. The effectiveness of the proposed scheme is also enhanced when arithmetic units with a higher "delay imbalance" are employed. A prediction-based error-control scheme is proposed to enhance the performance of the filtering algorithm in the presence of errors due to soft computations. For a frequ...
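A minimal sketch of the prediction-based error-control idea: a cheap predictor flags outputs that differ wildly from what the signal statistics allow and substitutes its own estimate. The 2-tap FIR "main" block, the last-value predictor, and the detection threshold below are illustrative assumptions, not the paper's design:

```python
def ant_filter(samples, soft_error_at=None):
    """2-tap FIR 'main' block with an optional injected soft error,
    guarded by a last-value predictor (all names/values illustrative)."""
    out = []
    for n, x in enumerate(samples):
        y = 0.5 * x + 0.5 * (samples[n - 1] if n else 0.0)
        if n == soft_error_at:
            y += 100.0                    # large voltage-overscaling error
        y_pred = out[-1] if out else y    # cheap predictor: previous output
        if abs(y - y_pred) > 10.0:        # assumed detection threshold
            y = y_pred                    # substitute the prediction
        out.append(y)
    return out

clean = ant_filter([1.0, 1.2, 1.1, 1.3, 1.2])
protected = ant_filter([1.0, 1.2, 1.1, 1.3, 1.2], soft_error_at=2)
```

The point of the scheme is that the predictor is far simpler (hence cheaper in energy) than the main block, so the correction logic does not eat the voltage-scaling savings.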
The price of certainty: “waterslide curves” and the gap to capacity
 IEEE Trans. Inform. Theory, Submitted 2007. [Online]. Available: http://arxiv.org/abs/0801.0352
Abstract

Cited by 13 (8 self)
The classical problem of reliable point-to-point digital communication is to achieve a low probability of error while keeping the rate high and the total power consumption small. Traditional information-theoretic analysis uses explicit models for the communication channel to study the power spent in transmission. The resulting bounds are expressed using ‘waterfall’ curves that convey the revolutionary idea that unboundedly low probabilities of bit error are attainable using only finite transmit power. However, practitioners have long observed that the decoder complexity, and hence the total power consumption, goes up when attempting to use sophisticated codes that operate close to the waterfall curve. This paper gives an explicit model for power consumption at an idealized decoder that allows for extreme parallelism in implementation. The decoder architecture is in the spirit of message passing and iterative decoding for sparse-graph codes, but is further idealized in that it allows for more computational power than is currently known to be implementable. Generalized sphere-packing arguments are used to derive lower bounds on the decoding power needed for any possible code given only the gap from the Shannon limit and the desired probability of error. As the gap goes to zero, the energy per bit spent in decoding is shown to go to infinity. This suggests that to optimize total power, the transmitter should operate at a power that is strictly above the minimum demanded by the Shannon capacity. The lower bound is plotted to show an unavoidable tradeoff between the average bit-error probability and the total power used in transmission and decoding. In the spirit of conventional waterfall curves, we call these ‘waterslide’ curves. The bound is shown to be order-optimal by showing the existence of codes that can achieve similarly shaped waterslide curves under the proposed idealized model of decoding.
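The transmit-vs-decode tradeoff can be caricatured with a toy cost model: transmit power plus a decoder term that diverges as the transmit power approaches the Shannon minimum. The functional form k/(p - p_min) and all constants are our own illustrative assumptions, not the paper's generalized sphere-packing bound; the sketch only shows why the optimum sits strictly above the capacity limit:

```python
def total_power(p_tx, p_min=1.0, k=0.05):
    """Toy model: transmit power plus a decoding term that blows up
    as the transmit power approaches the Shannon minimum p_min."""
    return p_tx + k / (p_tx - p_min)

# Sweep transmit powers strictly above p_min and locate the optimum.
candidates = [1.0 + 0.01 * i for i in range(1, 200)]
best = min(candidates, key=total_power)
# The total-power optimum lands strictly above the capacity limit p_min.
```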
Noise-Tolerant Dynamic Circuit Design
, 1999
Abstract

Cited by 11 (4 self)
Noise in deep submicron technology, combined with the move toward dynamic circuit techniques for higher performance, has raised concerns about the reliability and energy efficiency of VLSI systems in the deep submicron era. To address this problem, a new noise-tolerant dynamic circuit technique is presented. In addition, the average noise threshold energy (ANTE) and the energy-normalized ANTE metrics are proposed for quantifying the noise immunity and energy efficiency, respectively, of circuit techniques. Simulation results in 0.35 μm CMOS for NAND gate designs indicate that the proposed technique improves the ANTE and energy-normalized ANTE by 2.54X and 2.25X over the conventional domino circuit. The improvement in energy-normalized ANTE is 1.22X higher than that of existing noise-tolerance techniques. A full adder design based on the proposed technique improves the ANTE and energy-normalized ANTE by 3.7X and 1.95X over the conventional dynamic circuit. In comparison, the static circuit im...
Information-Theoretic Bounds on Average Signal Transition Activity
, 1999
Abstract

Cited by 10 (2 self)
In this paper, we derive lower and upper bounds on the average signal transition activity via an information-theoretic approach in which symbols generated by a process (possibly correlated) with entropy rate H are coded with an average of R bits per symbol. The bounds are asymptotically achievable if the process is stationary and ergodic. We also present a coding algorithm, based on the Lempel-Ziv data compression algorithm, to achieve the bounds. Bounds are also obtained on the expected number of 1's (or 0's). These results are applied to 1) determine the activity-reducing efficiency of different coding algorithms such as entropy coding, transition signaling, and Bus-Invert coding, and 2) determine the lower bound on the power-delay product given H and R. Two examples are provided where transition activity within 4% and 9% of the lower bound is achieved when blocks of 8 symbols and 13 symbols, respectively, are coded at a time.
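One bound of this flavor can be evaluated numerically: for R-bit words carrying H bits of entropy per symbol, a product-Bernoulli argument floors the expected number of 1's at R times the inverse binary entropy of H/R. The bisection sketch below is our own illustration; only the symbols H and R follow the abstract:

```python
import math

def h2(p):
    """Binary entropy function in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def h2_inv(y, tol=1e-12):
    """Smallest p in [0, 1/2] with h2(p) = y, by bisection
    (h2 is increasing on that interval)."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if h2(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

H, R = 4.0, 8.0               # e.g. 4 bits of entropy carried in 8-bit words
bound = R * h2_inv(H / R)     # floor on the expected number of 1's per word
```

Intuitively, once the entropy per line drops below 1 bit, each line can afford to be biased toward 0, and the bound quantifies the best achievable bias.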
Achievable Bounds on Signal Transition Activity
 ICCAD-97
Abstract

Cited by 9 (4 self)
Transitions on high-capacitance busses in VLSI systems result in considerable system power dissipation. Therefore, various coding schemes have been proposed in the literature to encode the input signal in order to reduce the number of transitions. In this paper we derive achievable lower and upper bounds on the expected signal transition activity. These bounds are derived via an information-theoretic approach in which symbols generated by a source (possibly correlated) with entropy rate H are coded with an average of R bits/symbol. These results are applied to 1) determine the activity-reducing efficiency of different coding algorithms such as entropy coding, transition coding, and Bus-Invert coding, 2) bound the error in entropy-based power estimation schemes, and 3) determine the lower bound on the power-delay product. Two examples are provided where transition activity within 4% and 8% of the lower bound is achieved when blocks of 8 and 13 symbols, respectively, are coded at a tim...
Signal Coding for Low Power: Fundamental Limits and Practical Realizations
 International Symposium on Circuits and Systems
, 1999
Abstract

Cited by 8 (1 self)
Transitions on high-capacitance busses result in considerable system power dissipation. Therefore, various coding schemes have been proposed in the literature to encode the input signal in order to reduce the number of transitions. In this paper, we present 1) fundamental bounds on the activity-reduction capability of any encoding scheme for a given source, and 2) practical novel encoding schemes that approach these bounds. The fundamental bounds in 1) are obtained via an information-theoretic approach where a signal x(n) with entropy rate H is coded with R bits per sample on average. The encoding schemes in 2) are developed via a communication-theoretic approach, whereby a data source is passed through a decorrelating function followed by a variant of an entropy coding function which reduces the transition activity. Simulation results with an encoding scheme for data busses indicate an average reduction in transition activity of 36%. Keywords: low power, switching activity, achie...
Capacity and energy cost of information in biological and silicon photoreceptors
 Proceedings of the IEEE
, 2001
Abstract

Cited by 6 (2 self)
We outline a theoretical framework to analyze information processing in biological sensory organs and in engineered microsystems. We employ the mathematical tools of communication theory and model natural or synthetic physical structures as microscale communication networks, studying them under physical constraints at two different levels of abstraction. At the functional level, we examine the operational and task specification, while at the physical level, we examine the material specification and realization. Both levels of abstraction are characterized by Shannon’s channel capacity, as determined by the channel bandwidth, the signal power, and the noise power. The link between the functional level and the physical level of abstraction is established through models for transformations on the signal, physical constraints on the system, and noise that degrades the signal. As a specific example, we present a comparative study of information capacity (in bits per second) versus energy cost of information (in joules per bit) in a biological and in a silicon adaptive photoreceptor. The communication channel model for each of the two systems is a cascade of linear bandlimiting sections followed by additive noise. We model the filters and the noise from first principles whenever possible and phenomenologically otherwise. The parameters for the blowfly model are determined from biophysical data available in the literature, and the parameters of the silicon model are determined from our experimental data. This comparative study is a first step toward a fundamental and quantitative understanding of the tradeoffs between system performance and associated costs such as size, reliability, and energy requirements for natural and engineered sensory microsystems.
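The two figures of merit in this study, capacity in bits per second and energy cost in joules per bit, reduce to short formulas for a flat band-limited Gaussian channel. The bandwidth, SNR, and power numbers below are hypothetical, not the blowfly or silicon measurements:

```python
import math

def capacity_bits_per_s(bandwidth_hz, snr):
    """Shannon capacity of a band-limited channel with flat SNR."""
    return bandwidth_hz * math.log2(1.0 + snr)

def energy_per_bit_j(power_w, bandwidth_hz, snr):
    """Energy cost of information: power spent per bit conveyed."""
    return power_w / capacity_bits_per_s(bandwidth_hz, snr)

C = capacity_bits_per_s(100.0, 99.0)      # hypothetical: 100 Hz, SNR 99
E = energy_per_bit_j(1e-6, 100.0, 99.0)   # hypothetical: 1 microwatt budget
```

The paper's actual channels are cascades of filters with frequency-dependent noise, so capacity there is an integral of log2(1 + S(f)/N(f)) over the band rather than this single-term formula.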
Energy-efficiency in presence of deep submicron noise
 in Int. Conf. Computer-Aided Design
, 1998
Abstract

Cited by 5 (3 self)
Presented are 1) lower bounds on the energy consumption of noisy digital gates and 2) the concept of noise tolerance via coding for achieving energy efficiency in the presence of noise. A discrete channel model for noisy digital logic in deep submicron technology that captures the manifestation of circuit noise is presented. The lower bounds are derived via an information-theoretic approach whereby a VLSI architecture implemented in a certain technology is viewed as a channel with information transfer capacity C (in bits/s). A computing application is shown to require a minimum information transfer rate R (also in bits/s). Lower bounds are obtained by employing the information-theoretic constraint C > R. This constraint ensures reliability of computation, though in an asymptotic sense. Lower bounds on transition activity at the output of noisy logic gates are also obtained using this constraint. Past work (for noiseless bus coding) is shown to fall out as a special case. In addition, lower bounds on energy dissipation are computed by solving an optimization problem where the objective function is the energy, subject to the constraint C > R. A surprising result is that, in a scenario where the capacitive component of power dissipation dominates, the voltage for minimum energy is greater than the minimum voltage for reliable operation. For an off-chip I/O signaling example, we show that the lower bounds are a factor of 24X below present-day systems and that a very simple Hamming code can reduce the energy consumption by a factor of 3X. This indicates the potential of noise tolerance (via error-control coding) in achieving low-energy operation in the presence of noise.