Results 1–10 of 30
On the construction of some capacity-approaching coding schemes
, 2000
Abstract

Cited by 56 (2 self)
This thesis proposes two constructive methods of approaching the Shannon limit very closely. Interestingly, these two methods operate in opposite regions: one has a block length of one and the other has a block length approaching infinity. The first approach is based on novel memoryless joint source-channel coding schemes. We first show some examples of sources and channels where no coding is optimal for all values of the signal-to-noise ratio (SNR). When the source bandwidth is greater than the channel bandwidth, joint coding schemes based on space-filling curves and other families of curves are proposed. For uniform sources and modulo channels, our coding scheme based on space-filling curves operates within 1.1 dB of Shannon’s rate-distortion bound. For Gaussian sources and additive white Gaussian noise (AWGN) channels, we can achieve within 0.9 dB of the rate-distortion bound. The second scheme is based on low-density parity-check (LDPC) codes. We first demonstrate that we can translate threshold values of an LDPC code between channels accurately using a simple mapping. We develop some models for density evolution ...
Computational mechanics: Pattern and prediction, structure and simplicity
 Journal of Statistical Physics
, 1999
Abstract

Cited by 43 (8 self)
Computational mechanics, an approach to structural complexity, defines a process’s causal states and gives a procedure for finding them. We show that the causal-state representation—an ε-machine—is the minimal one consistent with ...
Universal Rendering Sequences for Transparent Vertex Caching of Progressive Meshes
 Computer Graphics Forum
, 2001
Abstract

Cited by 40 (3 self)
We present methods to generate rendering sequences for triangle meshes which preserve mesh locality as much as possible. This is useful for maximizing vertex reuse when rendering the mesh using a FIFO vertex buffer, such as those available in modern 3D graphics hardware. The sequences are universal in the sense that they perform well for all sizes of vertex buffers, and generalize to progressive meshes. This has been verified experimentally.
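The FIFO vertex-buffer model described in this abstract is easy to simulate. The following sketch (my own illustration, not the paper's code; the function name is hypothetical) estimates the miss rate of a rendering sequence for a given cache size, which is the quantity a good rendering order minimizes:

```python
from collections import deque

def fifo_cache_miss_rate(index_buffer, cache_size):
    """Fraction of vertex references that miss a FIFO vertex cache when a
    triangle index buffer is rendered in order (hypothetical sketch)."""
    cache = deque(maxlen=cache_size)  # appending evicts the oldest entry
    misses = 0
    for v in index_buffer:
        if v not in cache:
            misses += 1
            cache.append(v)
    return misses / len(index_buffer)

# Two triangles reusing vertices 1 and 2 of a shared edge: 4 misses / 6 refs.
print(fifo_cache_miss_rate([0, 1, 2, 1, 2, 3], cache_size=4))
# Reuse that falls outside a small cache misses again, but hits a larger one.
print(fifo_cache_miss_rate([0, 1, 2, 3, 4, 5, 0, 1, 2], cache_size=4))  # 1.0
print(fifo_cache_miss_rate([0, 1, 2, 3, 4, 5, 0, 1, 2], cache_size=8))  # 6/9
```

A "universal" sequence in the paper's sense is one whose miss rate stays close to optimal across all values of `cache_size` at once.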
On the Metric Properties of Discrete Space-Filling Curves
, 1996
Abstract

Cited by 35 (1 self)
A space-filling curve is a linear traversal of a discrete finite multidimensional space. In order that this traversal be useful in many applications, the curve should preserve "locality". We quantify "locality" and bound the locality of multidimensional space-filling curves. Classic Hilbert space-filling curves come close to achieving optimal locality. A preliminary version of this work was presented at the IEEE International Conference on Pattern Recognition, Jerusalem, 1994.
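A locality measure of the kind this abstract studies can be illustrated on a small grid. The sketch below (my own, hypothetical; the paper's exact metric may differ) compares the worst-case ratio of squared Euclidean distance to index gap for a hard-coded order-2 Hilbert traversal versus row-major order:

```python
# Order-2 Hilbert traversal of a 4x4 grid, listed as (x, y) cells.
HILBERT_4x4 = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 2), (0, 3), (1, 3), (1, 2),
               (2, 2), (2, 3), (3, 3), (3, 2), (3, 1), (2, 1), (2, 0), (3, 0)]
ROW_MAJOR_4x4 = [(d % 4, d // 4) for d in range(16)]

def worst_locality(points):
    """max over i < j of squared Euclidean distance / index gap:
    small values mean nearby indices stay nearby in space."""
    worst = 0.0
    for i, (x1, y1) in enumerate(points):
        for j in range(i + 1, len(points)):
            x2, y2 = points[j]
            worst = max(worst, ((x1 - x2) ** 2 + (y1 - y2) ** 2) / (j - i))
    return worst

# Row-major pays heavily for the wrap-around step (3,0) -> (0,1);
# the Hilbert traversal scores markedly better on this measure.
print(worst_locality(HILBERT_4x4))
print(worst_locality(ROW_MAJOR_4x4))
```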
Information rates of two-dimensional finite-state ISI channels
 IEEE Int. Symp. Inform. Theory (submitted)
, 2003
Abstract

Cited by 10 (3 self)
We derive upper and lower bounds on the symmetric information rate of a two-dimensional finite-state intersymbol-interference (ISI) channel model.
On the Wyner–Ziv problem for individual sequences
 IEEE Trans. Inform. Theory
, 2006
Abstract

Cited by 10 (5 self)
We consider a variation of the Wyner–Ziv problem pertaining to lossy compression of individual sequences using finite-state encoders and decoders. There are two main results in this paper. The first characterizes the relationship between the performance of the best M-state encoder–decoder pair and that of the best block code of size ℓ for every input sequence, and shows that the loss of the latter relative to the former (in terms of both rate and distortion) never exceeds the order of (log M)/ℓ, independently of the input sequence. Thus, in the limit of large M, the best rate–distortion performance of every infinite source sequence can be approached universally by a sequence of block codes (which are also implementable by finite-state machines). While this result assumes an asymptotic regime where the number of states is fixed and only the length n of the input sequence grows without bound, we then consider the case where the number of states M = Mn is allowed to grow concurrently with n. Our second result is then about the critical growth rate of Mn such that the rate–distortion performance of Mn-state encoder–decoder pairs can still be matched by a universal code. We show that this critical growth rate of Mn is linear in n. Index Terms: Finite-state machines, individual sequences, side information, block codes, universal coding, Wyner–Ziv problem.
An Image Compression Method for Spatial Search
, 2000
Abstract

Cited by 7 (0 self)
The maintenance of large raster images under spatial operations is still a major performance bottleneck. For reasons of storage space, images in a collection, such as satellite pictures in geographic information systems, are maintained in compressed form. Instead of performing a spatially selective operation on an image by first decompressing the compressed version, we propose in this paper to perform queries directly on the compressed version of the image. We suggest a compression technique that allows for the subsequent use of a spatial index structure to guide a spatial search. In response to a window query, our algorithm delivers a compressed partial image, or the exact uncompressed requested image region. In addition to the support of spatial queries on compressed continuous-tone images, the new compression algorithm is even competitive in terms of the compression ratio that it achieves, compared to other standard lossless compression techniques. Index Terms—Lossless image comp...
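The core idea of answering a window query without decompressing the whole image can be sketched with independently compressed tiles. This is a minimal illustration of the principle (my own sketch using zlib per tile, not the paper's codec; it assumes image dimensions are multiples of the tile size):

```python
import zlib

def compress_tiles(img, tile):
    """Compress a grayscale image (list of equal-length rows, values 0-255)
    as independent tile x tile blocks, keyed by top-left corner."""
    tiles = {}
    for ty in range(0, len(img), tile):
        for tx in range(0, len(img[0]), tile):
            block = bytes(img[ty + dy][tx + dx]
                          for dy in range(tile) for dx in range(tile))
            tiles[(ty, tx)] = zlib.compress(block)
    return tiles

def window_query(tiles, tile, x0, y0, x1, y1):
    """Return {(y, x): value} for the window [x0, x1) x [y0, y1),
    decompressing only the tiles the window intersects."""
    out = {}
    for (ty, tx), blob in tiles.items():
        if tx >= x1 or ty >= y1 or tx + tile <= x0 or ty + tile <= y0:
            continue  # tile lies entirely outside the query window
        data = zlib.decompress(blob)
        for i, v in enumerate(data):
            y, x = ty + i // tile, tx + i % tile
            if x0 <= x < x1 and y0 <= y < y1:
                out[(y, x)] = v
    return out

img = [[8 * y + x for x in range(8)] for y in range(8)]
tiles = compress_tiles(img, tile=4)
# A 2x2 window touches a single tile; only that tile is decompressed.
print(window_query(tiles, 4, 2, 2, 4, 4))
```

In the paper's setting a spatial index (rather than a dictionary scan) locates the intersecting compressed units.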
Tensor product formulation for Hilbert space-filling curves
 In Proceedings of the 2003 International Conference on Parallel Processing
, 2003
Abstract

Cited by 6 (6 self)
We present a tensor product formulation for Hilbert space-filling curves. Both recursive and iterative formulas are expressed in the paper. We view a Hilbert space-filling curve as a permutation which maps two-dimensional 2^n × 2^n data elements stored in the row-major or column-major order to the order of traversing a Hilbert space-filling curve. The tensor product formula of Hilbert space-filling curves uses several permutation operations: stride permutation, radix-2 Gray permutation, transposition, and anti-diagonal transposition. The iterative tensor product formula can be manipulated to obtain the inverse Hilbert permutation. Also, the formulas can be directly translated into computer programs which can be used in various applications including R-tree indexing, image processing, and process allocation. Key words: tensor product, block recursive algorithm, Hilbert space-filling curve, stride
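The forward and inverse Hilbert permutations mentioned in this abstract can also be computed with the classic iterative bit-manipulation algorithm (shown here as a sketch; this is the standard method, not the paper's tensor-product formulation):

```python
def d2xy(order, d):
    """Hilbert index d -> (x, y) on a 2^order x 2^order grid."""
    x = y = 0
    s = 1
    while s < (1 << order):
        rx = 1 & (d // 2)
        ry = 1 & (d ^ rx)
        if ry == 0:          # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

def xy2d(order, x, y):
    """(x, y) -> Hilbert index: the inverse permutation."""
    d = 0
    s = (1 << order) // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

# Round trip over the whole 8x8 grid: xy2d inverts d2xy.
assert all(xy2d(3, *d2xy(3, d)) == d for d in range(64))
```

Applying `d2xy` to indices 0, 1, 2, ... yields the Hilbert traversal order of the grid, i.e., the permutation the paper expresses as a tensor product.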
Interpretation of the Lempel–Ziv Complexity Measure in the Context of Biomedical Signal Analysis
 IEEE Transactions on Biomedical Engineering
Abstract

Cited by 6 (2 self)
Abstract—Lempel–Ziv complexity (LZ) and derived LZ algorithms have been extensively used to solve information-theoretic problems such as coding and lossless data compression. In recent years, LZ has been widely used in biomedical applications to estimate the complexity of discrete-time signals. Despite its popularity as a complexity measure for biosignal analysis, the question of LZ interpretability and its relationship to other signal parameters and to other metrics has not been previously addressed. We have carried out an investigation aimed at gaining a better understanding of the LZ complexity itself, especially regarding its interpretability as a biomedical signal analysis technique. Our results indicate that LZ is particularly useful as a scalar metric to estimate the bandwidth of random processes and the harmonic variability in quasi-periodic signals. Index Terms—Complex analysis, Lempel–Ziv complexity (LZ), nonlinear analysis.
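As a concrete illustration of the measure being interpreted, here is a minimal sketch of the Lempel–Ziv (1976) exhaustive-history complexity of a symbol sequence (my own implementation, not the authors' code; biosignal applications would first binarize the signal, e.g., by thresholding at the median):

```python
def lz76_complexity(s):
    """Number of components in the LZ76 exhaustive-history parsing of s."""
    n, phrases, i = len(s), 0, 0
    while i < n:
        length = 1
        # extend the current component while it can be copied from the
        # history preceding its last symbol (self-overlap is allowed)
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases

# Classic example from Lempel and Ziv (1976): parsed as 0|001|10|100|1000|101
print(lz76_complexity("0001101001000101"))  # 6
print(lz76_complexity("0000"))              # 2: a constant signal is simple
```

The count grows slowly for regular (e.g., periodic) signals and fast for broadband random ones, which is the basis of its interpretation as a bandwidth estimate.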
Scanning and sequential decision making for multidimensional data – Part I: the noiseless case
 IEEE Trans. on Inform. Theory
Abstract

Cited by 6 (2 self)
We consider the problem of sequential decision making on random fields corrupted by noise. In this scenario, the decision maker observes a noisy version of the data, yet is judged with respect to the clean data. In particular, we first consider the problem of sequentially scanning and filtering noisy random fields. In this case, the sequential filter is given the freedom to choose the path over which it traverses the random field (e.g., a noisy image or video sequence), so it is natural to ask what the best achievable performance is and how sensitive this performance is to the choice of the scan. We formally define the problem of scanning and filtering, derive a bound on the best achievable performance, and quantify the excess loss incurred when non-optimal scanners are used, compared to optimal scanning and filtering. We then discuss the problem of sequential scanning and prediction of noisy random fields. This setting is a natural model for applications such as restoration and coding of noisy images. We formally define the problem of scanning and prediction of a noisy multidimensional array and relate the optimal performance to the clean scandictability defined by Merhav and Weissman. Moreover, bounds on the excess loss due to suboptimal scans are derived, and a universal prediction algorithm is suggested.