Results 1 - 9 of 9
Fast and Compact Prefix Codes
"... Abstract. It is wellknown that, given a probability distribution over n characters, in the worst case it takes Θ(n log n) bits to store a prefix code with minimum expected codeword length. However, in this paper we first show that, for any ɛ with 0 < ɛ < 1/2 and 1/ɛ = O(polylog(n)), it takes O(n lo ..."
Abstract

Cited by 3 (3 self)
It is well-known that, given a probability distribution over n characters, in the worst case it takes Θ(n log n) bits to store a prefix code with minimum expected codeword length. However, in this paper we first show that, for any ɛ with 0 < ɛ < 1/2 and 1/ɛ = O(polylog(n)), it takes O(n log log(1/ɛ)) bits to store a prefix code with expected codeword length within an additive ɛ of the minimum. We then show that, for any constant c > 1, it takes O(n^(1/c) log n) bits to store a prefix code with expected codeword length at most c times the minimum. In both cases, our data structures allow us to encode and decode any character in O(1) time.
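For reference, the "minimum expected codeword length" the abstract compares against is the one achieved by Huffman's algorithm. The following is a minimal sketch (a plain dictionary-based construction for illustration, not the paper's succinct O(1)-time data structure; function names are ours):

```python
import heapq

def huffman_code(probs):
    """Build a minimum-expected-length binary prefix code for a
    probability distribution given as {symbol: probability}."""
    # Each heap entry: (probability, tiebreak, {symbol: partial codeword}).
    heap = [(p, i, {s: ''}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        # Merge the two least probable subtrees, prefixing their
        # codewords with '0' and '1' respectively.
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + w for s, w in c1.items()}
        merged.update({s: '1' + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

def expected_length(probs, code):
    """Expected codeword length, the quantity the paper approximates."""
    return sum(p * len(code[s]) for s, p in probs.items())
```

For the dyadic distribution {0.5, 0.25, 0.125, 0.125} the expected length equals the entropy, 1.75 bits per symbol.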
Using Fibonacci Compression Codes as Alternatives to Dense Codes
"... Abstract Recent publications advocate the use of various variable length codes forwhich each codeword consists of an integral number of bytes in compression applications using large alphabets. This paper shows that another tradeoffwith similar properties can be obtained by Fibonacci codes. These are ..."
Abstract

Cited by 2 (1 self)
Recent publications advocate the use of various variable length codes for which each codeword consists of an integral number of bytes in compression applications using large alphabets. This paper shows that another tradeoff with similar properties can be obtained by Fibonacci codes. These are fixed codeword sets, using binary representations of integers based on Fibonacci numbers of order m ≥ 2. Fibonacci codes have been used before, and this paper extends previous work presenting several novel features. In particular, the compression efficiency is analyzed and compared to that of dense codes, and various table-driven decoding routines are suggested.
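The order-2 case of such a code is the classical Fibonacci (Zeckendorf) code, in which every codeword ends with the unique bit pattern "11". A minimal sketch (our own illustrative implementation, not the paper's table-driven routines):

```python
def fibonacci_encode(n):
    """Fibonacci code (order m = 2) of a positive integer n: the
    Zeckendorf representation written least-significant Fibonacci
    number first, terminated by an extra '1' bit, so every codeword
    ends in the pattern '11' and the code is prefix-free."""
    assert n >= 1
    fibs = [1, 2]                     # F(1) = 1, F(2) = 2, F(3) = 3, ...
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    bits, remainder = [], n
    # Greedy Zeckendorf: repeatedly take the largest Fibonacci number
    # that fits; this never selects two consecutive Fibonacci numbers.
    for f in reversed(fibs):
        if f <= remainder:
            bits.append('1')
            remainder -= f
        elif bits:                    # emit zeros only after the first '1'
            bits.append('0')
    bits.reverse()                    # least-significant position first
    return ''.join(bits) + '1'

def fibonacci_decode(code):
    """Inverse of fibonacci_encode for a single codeword."""
    fibs = [1, 2]
    n, prev = 0, '0'
    for i, b in enumerate(code):
        if b == '1' and prev == '1':
            return n                  # the terminating '11' has been read
        while len(fibs) <= i:
            fibs.append(fibs[-1] + fibs[-2])
        if b == '1':
            n += fibs[i]
        prev = b
    raise ValueError('codeword missing its terminating "11"')
```

For example, 1 encodes as "11", 4 (= 1 + 3) as "1011", and 11 (= 3 + 8) as "001011".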
Adapting the Knuth-Morris-Pratt Algorithm for Pattern Matching in Huffman Encoded Texts
"... We perform compressed pattern matching in Huffman encoded texts. A modified KnuthMorrisPratt (KMP) algorithm is used in order to overcome the problem of false matches, i.e., an occurrence of the encoded pattern in the encoded text that does not correspond to an occurrence of the pattern itself in ..."
Abstract

Cited by 1 (0 self)
We perform compressed pattern matching in Huffman encoded texts. A modified Knuth-Morris-Pratt (KMP) algorithm is used in order to overcome the problem of false matches, i.e., an occurrence of the encoded pattern in the encoded text that does not correspond to an occurrence of the pattern itself in the original text. We propose a bitwise KMP algorithm that can move one extra bit in the case of a mismatch, since the alphabet is binary. To avoid processing any encoded text bit more than once, a preprocessed table is used to determine how far to back up when a mismatch is detected, and is defined so that the encoded pattern is always aligned with the start of a codeword in the encoded text. We combine our KMP algorithm with two Huffman decoding algorithms which handle more than a single bit per machine operation: skeleton trees defined by Klein [1], and numerical comparisons between special canonical values and portions of a sliding window presented in Moffat and Turpin [3]. We call the combined algorithms skkmp and winkmp respectively. The following table compares our algorithms with cgrep of Moura et al. [2] and agrep, which searches the uncompressed text. Columns three and four compare the compression performance (size of the compressed text as a percentage of the uncompressed text) of the Huffman code (huff) with cgrep. The next columns compare the processing time of pattern matching of these algorithms. The "decompress and search" methods, which decode using skeleton trees or Moffat and Turpin's sliding window and search in parallel using agrep, are called skd and wind respectively. The search times are average values for patterns ranging from infrequent to frequent ones.
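The bitwise matching underlying this approach can be illustrated with textbook KMP over binary strings (a plain sketch under our own naming; the paper's variant additionally backs up so the pattern stays aligned with codeword boundaries, which this sketch does not do):

```python
def kmp_failure(pattern):
    """fail[i] = length of the longest proper border (prefix that is
    also a suffix) of pattern[:i+1]; this is the backup table KMP uses
    so no text position is ever re-read."""
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    return fail

def kmp_search(text, pattern):
    """Return all start positions of pattern in text, scanning each
    text symbol exactly once."""
    fail, k, hits = kmp_failure(pattern), 0, []
    for i, c in enumerate(text):
        while k > 0 and c != pattern[k]:
            k = fail[k - 1]           # back up inside the pattern only
        if c == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]           # allow overlapping occurrences
    return hits
```

Over a binary alphabet a mismatch is especially informative: the text bit must equal the complement of the expected pattern bit, which is what lets the paper's variant move one extra bit ahead.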
FAST CODES FOR LARGE ALPHABETS
"... Abstract. We address the problem of constructing a fast lossless code in the case when the source alphabet is large. The main idea of the new scheme may be described as follows. We group letters with small probabilities in subsets (acting as super letters) and use time consuming coding for these sub ..."
Abstract

Cited by 1 (1 self)
We address the problem of constructing a fast lossless code in the case when the source alphabet is large. The main idea of the new scheme may be described as follows. We group letters with small probabilities in subsets (acting as super letters) and use time-consuming coding for these subsets only, whereas letters in the subsets have the same code length and therefore can be coded fast. The described scheme can be applied to sources with known and unknown statistics.
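A minimal sketch of the grouping idea, using Shannon code lengths ceil(-log2 p) in place of the paper's actual construction (the threshold, function name, and use of a single rare-letter group are our illustrative assumptions):

```python
import math

def grouped_code_lengths(probs, threshold=0.01):
    """Sketch of the grouping scheme: letters with probability >=
    threshold are coded individually (Shannon lengths here, which
    satisfy the Kraft inequality, so a prefix code with these lengths
    exists); all rarer letters share one 'super letter' whose codeword
    is followed by a fixed-length index into the group, so every rare
    letter gets the same total length and can be coded fast."""
    frequent = {s: p for s, p in probs.items() if p >= threshold}
    rare = {s: p for s, p in probs.items() if p < threshold}
    lengths = {s: math.ceil(-math.log2(p)) for s, p in frequent.items()}
    if rare:
        group_p = sum(rare.values())          # probability of the super letter
        super_len = math.ceil(-math.log2(group_p))
        index_bits = math.ceil(math.log2(len(rare))) if len(rare) > 1 else 0
        for s in rare:
            lengths[s] = super_len + index_bits   # equal length within group
    return lengths
```

Only the super letter participates in the slow, probability-dependent coding step; within the group, a rare letter is identified by a plain fixed-width index.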
Towards Using Neural Networks to Perform Object-Oriented Function Approximation
"... Abstract — Many computational methods are based on the manipulation of entities with internal structure, such as objects, records, or data structures. Most conventional approaches based on neural networks have problems dealing with such structured entities. The algorithms presented in this paper rep ..."
Abstract
Many computational methods are based on the manipulation of entities with internal structure, such as objects, records, or data structures. Most conventional approaches based on neural networks have problems dealing with such structured entities. The algorithms presented in this paper represent a novel approach to neural-symbolic integration that allows for symbolic data in the form of objects to be translated to a scalar representation that can then be used by connectionist systems. We present the implementation of two translation algorithms that aid in performing object-oriented function approximation. We argue that objects provide an abstract representation of data that is well suited for the input and output of neural networks, as well as other statistical learning techniques. By examining the results of a simple sorting example, we illustrate the efficacy of these techniques.
On the Usefulness of Fibonacci Compression Codes
, 2004
"... Recent publications advocate the use of various variable length codes for which each codeword consists of an integral number of bytes in compression applications using large alphabets. This paper shows that another tradeoff with similar properties can be obtained by Fibonacci codes. These are fixed ..."
Abstract
Recent publications advocate the use of various variable length codes for which each codeword consists of an integral number of bytes in compression applications using large alphabets. This paper shows that another tradeoff with similar properties can be obtained by Fibonacci codes. These are fixed codeword sets, using binary representations of integers based on Fibonacci numbers of order m ≥ 2. Fibonacci codes have been used before, and this paper extends previous work presenting several novel features. In particular, the compression efficiency is analyzed and compared to that of dense codes, and various tabledriven decoding routines are suggested.
Comparative Study of Arithmetic and Huffman Compression Techniques for Enhancing Security and Effective Bandwidth Utilization in the Context of ECC for Text
"... In this paper, we proposed a model for text encryption using elliptic curve cryptography (ECC) for secure transmission of text and by incorporating the Arithmetic/Huffman data compression technique for effective utilization of channel bandwidth and enhancing the security. In this model, every charac ..."
Abstract
In this paper, we propose a model for text encryption using elliptic curve cryptography (ECC) for secure transmission of text, incorporating the Arithmetic/Huffman data compression technique for effective utilization of channel bandwidth and enhanced security. In this model, every character of the text message is transformed into elliptic curve points (Xm, Ym), and these elliptic curve points are converted into ciphertext. The resulting size of the ciphertext becomes four times that of the original text. To minimize the channel bandwidth requirements, the encrypted text is compressed using the Arithmetic and Huffman compression techniques in two ways, by considering i) the x-y coordinates of the encrypted text and ii) the x-coordinates of the encrypted text. The results of the above two cases are compared in terms of overall bandwidth required and saved for Arithmetic and Huffman compression.