Results 11 - 20 of 32
Robust Universal Complete Codes for Transmission and Compression
Discrete Applied Mathematics, 1996
Abstract

Cited by 10 (4 self)
Several measures are defined and investigated, which allow the comparison of codes as to their robustness against errors. Then new universal and complete sequences of variable-length codewords are proposed, based on representing the integers in a binary Fibonacci numeration system. Each sequence is constant and need not be generated for every probability distribution. These codes can be used as alternatives to Huffman codes when the optimal compression of the latter is not required, and simplicity, faster processing and robustness are preferred. The codes are compared on several "real-life" examples.
1. Motivation and Introduction
Let A = {A_1, A_2, ..., A_n} be a finite set of elements, called cleartext elements, to be encoded by a static uniquely decipherable (UD) code. For notational ease, we use the term 'code' as an abbreviation for 'set of codewords'; the corresponding encoding and decoding algorithms are always either given or clear from the context. A code i...
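The binary Fibonacci numeration system mentioned in this abstract is concrete enough to sketch. The paper proposes a family of such codes; the snippet below shows only the best-known member of that family (the standard Fibonacci code built from the Zeckendorf representation), as an illustration rather than the paper's exact construction:

```python
def fib_encode(n: int) -> str:
    """Standard Fibonacci code for a positive integer n.

    Write n in the Zeckendorf (binary Fibonacci) numeration system,
    least-significant digit first, then append a final '1'.  Zeckendorf
    representations never contain two adjacent 1s, so the trailing '11'
    pair marks the end of every codeword: the code is prefix-free,
    complete, and independent of the symbol probabilities.
    """
    assert n >= 1
    fibs = [1, 2]                       # Fibonacci numbers used as digits
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    fibs.pop()                          # drop the first Fibonacci number > n
    digits = []
    for f in reversed(fibs):            # greedy Zeckendorf decomposition
        if f <= n:
            digits.append('1')
            n -= f
        else:
            digits.append('0')
    return ''.join(reversed(digits)) + '1'
```

For example, `fib_encode(4)` yields `'1011'` (4 = 3 + 1, digits least-significant first, plus the delimiter bit). Because the delimiter is a fixed bit pattern rather than a length computed from a probability table, a decoder that loses synchronization can resume at the next '11' it sees, which is the robustness property the abstract emphasizes.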
On the Construction of Statistically Synchronizable Codes
1992
Abstract

Cited by 10 (0 self)
We consider the problem of constructing statistically synchronizable codes over arbitrary alphabets and for any finite source. We show how to efficiently construct a statistically synchronizable code whose average codeword length is within the least likely codeword probability from that of the Huffman code for the same source. Moreover, we give a method for constructing codes having a synchronizing codeword. The codes we construct present high synchronizing capability and low redundancy. Part of this work was done while visiting IBM T. J. Watson Research Center, P.O. Box 218, Yorktown Heights, New York, 10598. This work was partially supported by the Italian Ministry of the University and Scientific Research, within the framework of the Project: Progetto ed Analisi di Algoritmi. Part of this work has been presented at the 1990 IEEE International Symposium on Information Theory, San Diego, CA, Jan. 1990.
1. Introduction
A basic problem in information transmission is to mai...
Bidirectional Huffman Coding
1989
Abstract

Cited by 10 (2 self)
Under what conditions can Huffman codes be efficiently decoded in both directions? The usual decoding procedure works also for backward decoding only if the code has the affix property, i.e., both the prefix and suffix properties. Some affix Huffman codes are exhibited, and necessary conditions for the existence of such codes are given. An algorithm is presented which, for a given set of codeword lengths, constructs an affix code, if one exists. Since for many distributions there is no affix code giving the same compression as the Huffman code, a new algorithm for backward decoding of non-affix Huffman codes is presented, and its worst-case complexity is proved to be linear in the length of the encoded text.
1. Introduction
For a given sequence of n weights w_1, ..., w_n, with w_i > 0, Huffman's well-known algorithm [9] constructs an optimum prefix code. We use throughout the term 'code' as an abbreviation for 'set of codewords'. In a prefix code no codeword is the prefix of any o...
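The affix property this abstract relies on is easy to test mechanically. A minimal sketch (function names are mine, not the paper's): a code is affix exactly when it is prefix-free and its bit-reversed codewords are also prefix-free.

```python
def is_prefix_free(code):
    """True iff no codeword is a proper prefix of another codeword."""
    return not any(a != b and b.startswith(a) for a in code for b in code)

def is_affix(code):
    """An affix code is both prefix-free and suffix-free, and hence can
    be parsed into codewords from either end of the bitstream.
    Suffix-freeness of `code` is prefix-freeness of the reversed words."""
    reversed_words = [w[::-1] for w in code]
    return is_prefix_free(code) and is_prefix_free(reversed_words)
```

For instance, every fixed-length code such as {00, 01, 10, 11} is affix, while the Huffman code {0, 10, 11} is not, since 0 is a suffix of 10; it is precisely for such non-affix Huffman codes that the paper develops its separate backward-decoding algorithm.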
Bidirectionally Decodable Streams of Prefix Code-Words
1999
Abstract

Cited by 9 (1 self)
A new general scheme is introduced that allows bidirectional decoding of variable length coded bitstreams from either end. Except for a small fixed number of extra bits appended to a sequence of code words, the scheme is as efficient as Huffman coding. The extra operations required at coder and decoder are code word reversal and one EXOR for each bit.
Synchronization recovery and state model reduction for soft decoding of variable length codes
IEEE Transactions on Information Theory, 2006
Abstract

Cited by 5 (3 self)
Variable length codes (VLCs) exhibit desynchronization problems when transmitted over noisy channels. Trellis decoding techniques based on Maximum A Posteriori (MAP) estimators are often used to minimize the error rate on the estimated sequence. If the number of symbols and/or bits transmitted is known by the decoder, termination constraints can be incorporated in the decoding process. All the paths in the trellis which do not lead to a valid sequence length are suppressed. This paper presents an analytic method to assess the expected error resilience of a VLC when trellis decoding with a sequence length constraint is used. The approach is based on the computation, for a given code, of the amount of information brought by the constraint. It is then shown that this quantity, as well as the probability that the VLC decoder does not resynchronize in a strict sense, is not significantly altered by appropriate trellis state aggregation. This proves that the performance obtained by running a length-constrained Viterbi decoder on aggregated state models approaches that obtained with the bit/symbol trellis, at a significantly reduced complexity. It is then shown that the complexity can be further decreased by projecting the state model onto two state models of reduced size.
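The termination constraint described above can be illustrated with a hard-decision toy version (the paper itself works with soft MAP/trellis decoding; the function and variable names here are mine). Dynamic programming over bit positions, one stage per decoded symbol, keeps only paths that end with exactly the known number of symbols spanning exactly the received bits:

```python
def decode_length_constrained(codewords, received, num_symbols):
    """Hard-decision sketch of length-constrained trellis decoding of a
    VLC.  `codewords` maps symbols to bit strings, `received` is the
    (possibly corrupted) bit string.  States are bit positions; paths
    that cannot consume all of `received` in `num_symbols` codewords are
    suppressed, mimicking the termination constraint in the trellis.
    Returns (Hamming cost, symbol sequence), or None if no path survives.
    """
    states = {0: (0, [])}               # bit position -> best (cost, path)
    for _ in range(num_symbols):
        nxt = {}
        for pos, (cost, path) in states.items():
            for sym, cw in codewords.items():
                end = pos + len(cw)
                if end > len(received):
                    continue            # path overruns the bit budget
                d = cost + sum(a != b for a, b in zip(cw, received[pos:end]))
                if end not in nxt or nxt[end][0] > d:
                    nxt[end] = (d, path + [sym])
        states = nxt
    # only survivors consuming every received bit satisfy the constraint
    return states.get(len(received))
```

With the code {a: 0, b: 10, c: 11}, the sequence "abc" encodes to 01011, and the decoder recovers it at cost 0; a corrupted stream is mapped to the nearest bit sequence that still parses into exactly three codewords of total length five.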
Almost All Complete Binary Prefix Codes Have a Self-Synchronizing String
2002
Abstract

Cited by 4 (0 self)
The probability that a complete binary prefix code has a self-synchronizing string approaches one, as the number of codewords tends to infinity.
Lattice/Trellis-Based Fixed-Rate Entropy-Coded Vector Quantization
2001
Abstract

Cited by 3 (3 self)
The fixed-rate entropy-constrained vector quantizer draws its motivation from the large gap in performance between the optimal entropy-constrained scalar quantizer (ECSQ) and the fixed-rate LMQ for most non-uniform sources, and tries to bridge this gap while maintaining a fixed-rate output. Having a fixed-rate output avoids ...
Low Complexity Iterative Decoding of Variable-Length Codes
Abstract

Cited by 3 (0 self)
Widely used in data compression schemes, where they reduce the length of the transmitted bitstream, variable-length codes (VLCs) are very sensitive to channel noise. In fact, when some bits are altered by the channel, synchronization losses can occur at the receiver, possibly leading to dramatic symbol error rates. The key point is to appropriately use the residual source redundancy at the decoding side, for instance by considering it as an implicit channel protection that can be exploited to provide error correction capability. We propose in this paper a soft-input soft-output low-complexity VLC decoding algorithm. Combined with a convolutional code SISO decoder, this new Soft Output Stack Algorithm (SOSA) is used in an iterative joint decoder, and simulation results over an Additive White Gaussian Noise (AWGN) channel are shown.
Self-Synchronization of Huffman Codes
2003
Abstract

Cited by 2 (0 self)
Variable-length binary codes have been frequently used for communications since Huffman's important paper on constructing minimum average length codes. One drawback of variable-length codes is the potential loss of synchronization in the presence of channel errors. However, many variable-length codes seem to possess a "self-synchronization" property that lets them recover from bit errors. In particular, for some variable-length codes there exists a certain binary string (not necessarily a codeword) which automatically resynchronizes the code. That is, if a transmitted sequence of bits is corrupted by one or more bit errors, then as soon as the receiver by random chance correctly detects a self-synchronizing string, the receiver can continue properly parsing the bit sequence into codewords. Most commonly used binary prefix codes, including Huffman codes, are "complete", in the sense that the vertices in their decoding trees are either leaves or have two children. An open question has been ...
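The definition of a self-synchronizing string given in this abstract can be checked directly: a string resynchronizes the code if, from every possible decoder state (every proper prefix of a codeword, plus the codeword boundary itself), reading the string leaves the decoder exactly on a codeword boundary. A minimal sketch of that check (function name is mine):

```python
def is_self_synchronizing(codewords, s):
    """Check whether bit string `s` is a self-synchronizing string for a
    binary prefix code: whatever partial codeword the decoder is in the
    middle of, after reading `s` it sits exactly on a codeword boundary.
    Assumes every bit read extends some codeword, as in a complete code;
    otherwise a dead end simply makes the check fail."""
    codeset = set(codewords)
    prefixes = {w[:i] for w in codewords for i in range(1, len(w))}
    for start in {''} | prefixes:       # every possible decoder state
        state = start
        for bit in s:
            state += bit
            if state in codeset:        # finished a codeword
                state = ''
            elif state not in prefixes:
                return False            # s is not even decodable here
        if state != '':                 # not back on a boundary
            return False
    return True
```

For the complete code {0, 10, 11}, the single bit 0 is self-synchronizing: whether the decoder is at a boundary or has a pending 1, reading 0 completes a codeword. By contrast, a fixed-length code such as {00, 01, 10, 11} has no self-synchronizing string at all, since a one-bit misalignment can never be corrected.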
Suffix-Constrained Codes for Progressive and Robust Data Compression: Self-Multiplexed Codes
In Proc. EUSIPCO, 2004
Abstract

Cited by 1 (1 self)
This paper addresses the issue of robust transmission of sources encoded with Variable Length Codes (VLCs) over error-prone channels. A new class of codes, called self-multiplexed codes, is introduced. Their compression performance is the same as that of classical VLCs (e.g. Huffman codes). Their energy distribution on the respective transitions of the code tree allows error propagation to be confined to transitions bearing low reconstruction energy. Simulation results reveal high performance in terms of signal-to-noise ratio.