Results 1–10 of 34
Power Efficient Technology Decomposition and Mapping under an Extended Power Consumption Model
, 1994
"... We propose a new power consumption model which accounts for the power consumption at the internal nodes of a cmos gate. Next, we address the problem of minimizing the average power consumption during the technology dependent phase of logic synthesis. Our approach consists of two steps. In the first ..."
Abstract

Cited by 23 (6 self)
We propose a new power consumption model which accounts for the power consumption at the internal nodes of a CMOS gate. Next, we address the problem of minimizing the average power consumption during the technology-dependent phase of logic synthesis. Our approach consists of two steps. In the first step, we generate a NAND decomposition of an optimized Boolean network such that the sum of average switching rates for all nodes in the network is minimum. In the second step, we perform a power-efficient technology mapping that finds a minimal-power mapping for given timing constraints (subject to the unknown load problem). 1 Introduction With recent advances in microelectronic technology, smaller devices are now possible, allowing more functionality on an integrated circuit (IC). Portable applications have shifted from conventional low-performance products such as wristwatches and calculators to high-throughput and computationally intensive products such as notebook computers and cellul...
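The quantity minimised in the first step can be made concrete with a toy calculation. The sketch below uses a hypothetical two-gate NAND chain (not an example from the paper): each node's signal probability is derived assuming independent random inputs, and its average switching activity is the standard zero-delay estimate 2·p·(1−p).

```python
def nand_prob(pa, pb):
    """P(output = 1) of a NAND gate, assuming independent random inputs."""
    return 1.0 - pa * pb

def switching(p):
    """Average switching activity of a node with signal probability p
    under the zero-delay random-input model: 2 * p * (1 - p)."""
    return 2.0 * p * (1.0 - p)

# hypothetical two-gate NAND chain: n1 = NAND(a, b), n2 = NAND(n1, c)
pa = pb = pc = 0.5
p1 = nand_prob(pa, pb)                  # signal probability of n1
p2 = nand_prob(p1, pc)                  # signal probability of n2
total = switching(p1) + switching(p2)   # objective the decomposition minimises
```

A decomposition step in the spirit of the paper would compare candidate NAND networks for the same function and keep the one with the smallest such total.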
Practical Length-Limited Coding for Large Alphabets
 The Computer Journal
, 1995
"... The use of Huffman coding for economical representation of a stream of symbols drawn from a defined source alphabet is widely known. In this paper we consider the problems encountered when Huffman coding is applied to an alphabet containing millions of symbols. Conventional treebased methods for ge ..."
Abstract

Cited by 18 (0 self)
The use of Huffman coding for economical representation of a stream of symbols drawn from a defined source alphabet is widely known. In this paper we consider the problems encountered when Huffman coding is applied to an alphabet containing millions of symbols. Conventional tree-based methods for generating the set of codewords require large amounts of main memory; and worse, the codewords generated may be longer than 32 bits, which can severely limit the usefulness of both software and hardware implementations. The solution to the second problem is to generate "length-limited" codes, but previous algorithms for this restricted problem have required even more memory space than Huffman's unrestricted method. Here we reexamine the "package-merge" algorithm for generating optimal length-limited prefix-free codes and show that with a considered reorganisation of the key steps and careful attention to detail it is possible to implement it to run quickly in modest amounts of memory. As evid...
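One standard way to avoid storing an explicit code tree for a huge alphabet is canonical coding: the decoder can be rebuilt from the list of codeword lengths alone. The sketch below is a generic illustration of that idea, not the implementation described in the paper.

```python
def canonical_codes(lengths):
    """Assign canonical codewords given only a list of code lengths.

    Symbols are processed in (length, index) order; each codeword is the
    previous one plus 1, left-shifted when the length grows. No tree is
    stored, which is what makes million-symbol alphabets manageable.
    """
    order = sorted(range(len(lengths)), key=lambda i: (lengths[i], i))
    codes, code, prev_len = {}, 0, lengths[order[0]]
    for k, i in enumerate(order):
        if k:
            code = (code + 1) << (lengths[i] - prev_len)
        codes[i] = format(code, f"0{lengths[i]}b")
        prev_len = lengths[i]
    return codes
```

For lengths [3, 3, 2, 1] this yields the prefix-free set {"110", "111", "10", "0"}; any decoder given only the length list reconstructs exactly the same codewords.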
Is Huffman Coding Dead?
 Computing
, 1993
"... : In recent publications about data compression, arithmetic codes are often suggested as the state of the art, rather than the more popular Huffman codes. While it is true that Huffman codes are not optimal in all situations, we show that the advantage of arithmetic codes in compression performance ..."
Abstract

Cited by 17 (3 self)
: In recent publications about data compression, arithmetic codes are often suggested as the state of the art, rather than the more popular Huffman codes. While it is true that Huffman codes are not optimal in all situations, we show that the advantage of arithmetic codes in compression performance is often negligible. Referring also to other criteria, we conclude that for many applications, Huffman codes should still remain a competitive choice. 1. Introduction It is paradoxical that, as the technology for storing and transmitting information has gotten cheaper and more effective, interest in data compression has increased. There are many explanations, but most conspicuous is that improvements in media have expanded our sense of what we wish to store. For example, CD-ROM technology allows us to store whole libraries instead of records describing individual items; but the requirements of storing full text easily exceed the capabilities even of the optical format. Similarly, there is ...
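The claimed smallness of the arithmetic-coding advantage is easy to check numerically: the source entropy is the per-symbol floor any coder can approach, so the gap between a Huffman code's average length and the entropy bounds what arithmetic coding could gain. A sketch with a made-up five-symbol distribution:

```python
import heapq
import math

def huffman_lengths(probs):
    """Code lengths from the classical heap-based Huffman construction."""
    heap = [(p, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    while len(heap) > 1:
        p1, s1 = heapq.heappop(heap)
        p2, s2 = heapq.heappop(heap)
        for i in s1 + s2:          # every symbol in the merged pair sinks one level
            lengths[i] += 1
        heapq.heappush(heap, (p1 + p2, s1 + s2))
    return lengths

probs = [0.4, 0.2, 0.15, 0.15, 0.1]               # hypothetical distribution
lens = huffman_lengths(probs)
avg = sum(p * l for p, l in zip(probs, lens))      # Huffman bits per symbol
H = -sum(p * math.log2(p) for p in probs)          # entropy: arithmetic-coding limit
gap = avg - H                                      # the most arithmetic coding can save
```

For this distribution the gap works out to a few hundredths of a bit per symbol, which is the kind of margin the abstract calls negligible; highly skewed distributions are where the gap grows.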
A fast and space-economical algorithm for length-limited coding
 Proc. Int. Symp. Algorithms and Computation, pp. 12–21
, 1995
"... Abstract. The minimumredundancy prefix code problem is to determine a list of integer codeword lengths I = [li l i E {1... n}], given a list of n symbol weightsp = [pili C {1.n}], such that ~' ~ 2l ' < 1, 9 " i = ln and ~i=1 lipi is minimised. An extension is the minimumredundancy lengthl ..."
Abstract

Cited by 15 (1 self)
Abstract. The minimum-redundancy prefix code problem is to determine a list of integer codeword lengths l = [l_i | i ∈ {1…n}], given a list of n symbol weights p = [p_i | i ∈ {1…n}], such that ∑_{i=1}^{n} 2^{-l_i} ≤ 1 and ∑_{i=1}^{n} l_i p_i is minimised. An extension is the minimum-redundancy length-limited prefix code problem, in which the further constraint l_i ≤ L is imposed, for all i ∈ {1…n} and some integer L ≥ ⌈log₂ n⌉. The package-merge algorithm of Larmore and Hirschberg generates length-limited codes in O(nL) time using O(n) words of auxiliary space. Here we show how the size of the work space can be reduced to O(L²). This represents a useful improvement, since for practical purposes L is O(log n).
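The package-merge idea itself is compact: L − 1 rounds of pairing adjacent items and re-merging the resulting packages with the leaf list, after which a symbol's code length is the number of cheapest surviving items that contain it. The illustrative version below keeps explicit symbol lists per item, which is precisely the kind of work space the paper's O(L²) refinement avoids.

```python
def package_merge(weights, L):
    """Optimal length-limited code lengths (package-merge, Larmore-Hirschberg).

    Returns lengths l_i with l_i <= L and Kraft sum <= 1, minimising
    sum(w_i * l_i). Unoptimised teaching version, not the paper's
    space-reduced implementation.
    """
    n = len(weights)
    assert (1 << L) >= n, "no prefix code with max length L exists"
    if n == 1:
        return [1]
    leaves = sorted((w, [i]) for i, w in enumerate(weights))
    lst = leaves[:]
    for _ in range(L - 1):
        # package adjacent pairs of the current list (an unpaired last item is dropped)
        packages = [(lst[k][0] + lst[k + 1][0], lst[k][1] + lst[k + 1][1])
                    for k in range(0, len(lst) - 1, 2)]
        lst = sorted(leaves + packages)       # merge packages back with the leaves
    lengths = [0] * n
    for _, symbols in lst[:2 * n - 2]:        # cheapest 2n-2 items define the code
        for i in symbols:
            lengths[i] += 1
    return lengths
```

With weights [1, 1, 2, 4] and L = 3 this reproduces the unrestricted Huffman lengths [3, 3, 2, 1]; tightening to L = 2 forces the complete tree [2, 2, 2, 2].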
Optimal Prefix-Free Codes for Unequal Letter Costs: Dynamic Programming with the Monge Property
 J. Algorithms
, 2000
"... In this paper we discuss the problem of finding optimal prefixfree codes for unequal letter costs, a variation of the classical Huffman coding problem. Our problem consists of finding a minimal cost prefixfree code in which the encoding alphabet consists of unequal cost (length) letters, with leng ..."
Abstract

Cited by 14 (7 self)
In this paper we discuss the problem of finding optimal prefix-free codes for unequal letter costs, a variation of the classical Huffman coding problem. Our problem consists of finding a minimal-cost prefix-free code in which the encoding alphabet consists of unequal-cost (length) letters, with lengths α and β. The most efficient algorithm known previously requires O(n^{2+max(α,β)}) time to construct such a minimal-cost set of n codewords, provided α and β are integers. In this paper we provide an O(n^{max(α,β)}) time algorithm. Our improvement comes from the use of a more sophisticated modeling of the problem, combined with the observation that the problem possesses a "Monge property" and that the SMAWK algorithm on monotone matrices can therefore be applied. Keywords: Dynamic Programming, Huffman Codes, Lopsided Trees, Monge Matrix, Monotone Matrix, Prefix-Free Codes. 1 Introduction Finding optimal prefix-free codes for unequal letter costs (and the associated problem of...
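The objective being optimised is easy to state in code. The sketch below, with illustrative numbers not taken from the paper, evaluates the expected cost of a prefix-free code when the two channel letters cost α and β, and shows why the problem differs from ordinary Huffman coding: steering the cheap letter toward likely symbols lowers the cost even with the same tree shape.

```python
def codeword_cost(word, alpha, beta):
    """Cost of one codeword when letter '0' costs alpha and '1' costs beta."""
    return sum(alpha if ch == "0" else beta for ch in word)

def code_cost(codewords, probs, alpha, beta):
    """Expected transmission cost -- the quantity the optimisation minimises."""
    return sum(p * codeword_cost(w, alpha, beta) for w, p in zip(codewords, probs))

probs = [0.6, 0.3, 0.1]        # hypothetical symbol probabilities
alpha, beta = 1, 3             # unequal letter costs (alpha, beta in the paper)
good = ["0", "10", "11"]       # cheap letter on the path to the likely symbol
bad  = ["0", "11", "10"]       # same tree shape, expensive letter misassigned
```

Here `code_cost(good, …)` is 2.4 against 2.8 for `bad`; the paper's algorithm searches the full space of such "lopsided" trees efficiently via the Monge property.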
Resource-aware conference key establishment for heterogeneous networks
, 2005
"... Abstract—The Diffie–Hellman problem is often the basis for establishing conference keys. In heterogeneous networks, many conferences have participants of varying resources, yet most conference keying schemes do not address this concern and place the same burden upon less powerful clients as more pow ..."
Abstract

Cited by 8 (5 self)
Abstract—The Diffie–Hellman problem is often the basis for establishing conference keys. In heterogeneous networks, many conferences have participants of varying resources, yet most conference keying schemes do not address this concern and place the same burden upon less powerful clients as more powerful ones. The establishment of conference keys should minimize the burden placed on resource-limited users while ensuring that the entire group can establish the key. In this paper, we present a hierarchical conference keying scheme that forms subgroup keys for successively larger subgroups en route to establishing the group key. A tree, called the conference tree, governs the order in which subgroup keys are formed. Key establishment schemes that consider users with varying costs or budgets are built by appropriately designing the conference tree. We then examine the scenario where users have both varying costs and budget constraints. A greedy algorithm is presented that achieves near-optimal performance, and requires significantly less computational effort than finding the optimal solution. We provide a comparison of the total cost of tree-based conference keying schemes against several existing schemes, and introduce a new performance criterion, the probability of establishing the session key (PESKY), to study the likelihood that a conference key can be established in the presence of budget constraints. Simulations show that the likelihood of forming a group key using a tree-based conference keying scheme is higher than the GDH schemes of Steiner et al. Finally, we study the effect that greedy users have upon the Huffman-based conference keying scheme, and present a method to mitigate the detrimental effects of the greedy users upon the total cost. Index Terms—Conference key agreement, Diffie-Hellman, Huffman algorithm.
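The subgroup-then-group pattern can be illustrated with a toy three-user conference tree: users 1 and 2 first run ordinary two-party Diffie-Hellman, and their subgroup key then plays the role of a secret exponent at the next level of the tree. The parameters below are tiny and completely insecure, chosen only so the arithmetic is visible; this is a sketch of the general tree idea, not the paper's protocol.

```python
# toy hierarchical Diffie-Hellman over a tiny group -- NOT secure parameters
p, g = 23, 5                 # public modulus and generator (illustrative)
a, b, c = 6, 15, 13          # the three users' secret exponents (hypothetical)

# level 1, subgroup {1, 2}: ordinary two-party DH
k12_via_1 = pow(pow(g, b, p), a, p)    # user 1: (g^b)^a
k12_via_2 = pow(pow(g, a, p), b, p)    # user 2: (g^a)^b
assert k12_via_1 == k12_via_2
k12 = k12_via_1                        # the subgroup key

# level 2, group {1, 2, 3}: k12 acts as the subgroup's secret exponent
group_via_12 = pow(pow(g, c, p), k12, p)    # users 1 and 2 use broadcast g^c
group_via_3  = pow(pow(g, k12, p), c, p)    # user 3 uses broadcast g^k12
assert group_via_12 == group_via_3          # all three hold the same group key
```

Shaping the tree then becomes the design lever the paper exploits: placing resource-limited users near the root of the conference tree reduces how many such exponentiations they must perform.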
Lossless Compression for Text and Images
 International Journal of High Speed Electronics and Systems
, 1995
"... Most data that is inherently discrete needs to be compressed in such a way that it can be recovered exactly, without any loss. Examples include text of all kinds, experimental results, and statistical databases. Other forms of data may need to be stored exactly, such as imagesparticularly bilevel ..."
Abstract

Cited by 6 (0 self)
Most data that is inherently discrete needs to be compressed in such a way that it can be recovered exactly, without any loss. Examples include text of all kinds, experimental results, and statistical databases. Other forms of data may need to be stored exactly, such as images, particularly bilevel ones, or ones arising in medical and remote-sensing applications, or ones that may be required to be certified true for legal reasons. Moreover, during the process of lossy compression, many occasions for lossless compression of coefficients or other information arise. This paper surveys techniques for lossless compression. The process of compression can be broken down into modeling and coding. We provide an extensive discussion of coding techniques, and then introduce methods of modeling that are appropriate for text and images. Standard methods used in popular utilities (in the case of text) and international standards (in the case of images) are described. Keywords Text compression, ima...
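The modeling/coding split is concrete even in the simplest case: an order-0 model is just the symbol frequency table, and its entropy is the bits-per-symbol floor that any coder driven by that model can approach. A minimal sketch of the modeling half:

```python
from collections import Counter
import math

def order0_model(text):
    """Modeling step: an order-0 (memoryless) probability model of the source."""
    counts = Counter(text)
    total = len(text)
    return {ch: c / total for ch, c in counts.items()}

def entropy_bits(model):
    """Bits/symbol floor for any coder driven by this model."""
    return -sum(p * math.log2(p) for p in model.values())

model = order0_model("abracadabra")
H = entropy_bits(model)   # the coding step (Huffman, arithmetic, ...) targets this
```

Better models (higher-order contexts for text, neighbourhood predictors for images) lower this floor; the coding step then converts model probabilities into actual bits.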
Efficient Implementation of the WARM-UP Algorithm for the Construction of Length-Restricted Prefix Codes
 in Proceedings of the ALENEX
, 1999
"... . Given an alphabet \Sigma = fa1 ; : : : ; ang with a corresponding list of positive weights fw1 ; : : : ; wng and a length restriction L, the lengthrestricted prefix code problem is to find, a prefix code that minimizes P n i=1 w i l i , where l i , the length of the codeword assigned to a i , ..."
Abstract

Cited by 5 (0 self)
. Given an alphabet Σ = {a_1, …, a_n} with a corresponding list of positive weights {w_1, …, w_n} and a length restriction L, the length-restricted prefix code problem is to find a prefix code that minimizes ∑_{i=1}^{n} w_i l_i, where l_i, the length of the codeword assigned to a_i, cannot be greater than L, for i = 1, …, n. In this paper, we present an efficient implementation of the WARM-UP algorithm, an approximate method for this problem. The worst-case time complexity of WARM-UP is O(n log n + n log w_n), where w_n is the greatest weight. However, some experiments with a previous implementation of WARM-UP show that it runs in linear time for several practical cases, if the input weights are already sorted. In addition, it often produces optimal codes. The proposed implementation combines two new enhancements to reduce the space usage of WARM-UP and to improve its execution time. As a result, it is about ten times faster than the previous implementat...
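As a baseline for experimenting with length-restricted codes (and emphatically not the WARM-UP algorithm itself), one can clamp unrestricted Huffman lengths to L and then lengthen short codewords until the Kraft inequality ∑ 2^{-l_i} ≤ 1 holds again. Everything below is an illustrative assumption, not the paper's method:

```python
def clamp_and_repair(code_lengths, L):
    """Naive heuristic (NOT WARM-UP): clamp code lengths to L, then lengthen
    the shortest codewords until the Kraft inequality holds again.

    Terminates because pushing every length to L gives Kraft sum n * 2^-L <= 1
    whenever 2^L >= n.
    """
    assert (1 << L) >= len(code_lengths), "no length-L prefix code exists"
    lens = [min(l, L) for l in code_lengths]
    def kraft(ls):
        return sum(2.0 ** -l for l in ls)
    while kraft(lens) > 1.0 + 1e-12:
        # lengthen a currently-shortest codeword that still has room to grow
        i = min((j for j in range(len(lens)) if lens[j] < L),
                key=lambda j: lens[j])
        lens[i] += 1
    return lens
```

For instance, clamping the Huffman lengths [1, 2, 3, 4, 4] to L = 3 over-fills the code space, and one repair step yields the feasible assignment [2, 2, 3, 3, 3]. Unlike WARM-UP, this heuristic ignores the weights when choosing what to lengthen, so its output can be noticeably suboptimal.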
Restructuring Ordered Binary Trees
"... We consider the problem of restructuring an ordered binary tree T, preserving the inorder sequence of its nodes, so as to reduce its height to some target value h. Such a restructuring necessarily involves the downward displacement of some of the nodes of T. Our results, focusing both on the maximu ..."
Abstract

Cited by 5 (0 self)
We consider the problem of restructuring an ordered binary tree T, preserving the inorder sequence of its nodes, so as to reduce its height to some target value h. Such a restructuring necessarily involves the downward displacement of some of the nodes of T. Our results, focusing both on the maximum displacement over all nodes and on the maximum displacement
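A small experiment makes the displacement notion concrete: rebuild the inorder sequence as a midpoint-balanced tree and compare node depths with an assumed worst-case original, here a right spine with node i at depth i (an illustrative choice, not a case from the paper). Nodes whose depth increases have been displaced downward.

```python
def balanced_depths(n):
    """Depths of nodes 0..n-1 (in inorder) rebuilt as a midpoint-balanced tree;
    the inorder sequence is preserved by construction."""
    depths = [0] * n
    def build(lo, hi, d):
        if lo > hi:
            return
        mid = (lo + hi) // 2      # root of this inorder range
        depths[mid] = d
        build(lo, mid - 1, d + 1)
        build(mid + 1, hi, d + 1)
    build(0, n - 1, 0)
    return depths

n = 7
old = list(range(n))      # hypothetical original: a right spine, node i at depth i
new = balanced_depths(n)  # height drops from n - 1 = 6 to 2
down = max(max(0, nw - od) for nw, od in zip(new, old))  # max downward displacement
```

Reducing height to roughly log n necessarily pushes some formerly shallow nodes deeper; bounding that maximum downward displacement as a function of the target height h is the question the paper studies.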