Results 1 - 9 of 9
On the construction of some capacity-approaching coding schemes
, 2000
"... This thesis proposes two constructive methods of approaching the Shannon limit very closely. Interestingly, these two methods operate in opposite regions, one has a block length of one and the other has a block length approaching infinity. The first approach is based on novel memoryless joint source ..."
Abstract

Cited by 63 (2 self)
This thesis proposes two constructive methods of approaching the Shannon limit very closely. Interestingly, these two methods operate in opposite regions: one has a block length of one and the other has a block length approaching infinity. The first approach is based on novel memoryless joint source-channel coding schemes. We first show some examples of sources and channels where no coding is optimal for all values of the signal-to-noise ratio (SNR). When the source bandwidth is greater than the channel bandwidth, joint coding schemes based on space-filling curves and other families of curves are proposed. For uniform sources and modulo channels, our coding scheme based on space-filling curves operates within 1.1 dB of Shannon's rate-distortion bound. For Gaussian sources and additive white Gaussian noise (AWGN) channels, we can achieve within 0.9 dB of the rate-distortion bound. The second scheme is based on low-density parity-check (LDPC) codes. We first demonstrate that we can translate threshold values of an LDPC code between channels accurately using a simple mapping. We develop some models for density evolution
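The "no coding is optimal" phenomenon mentioned in the abstract has a classical instance (a standard textbook example, not necessarily the thesis's own construction): a Gaussian source sent uncoded over an AWGN channel, one channel use per source sample, meets the rate-distortion bound at every SNR.

```latex
% Scale the source S ~ N(0, sigma_S^2) to the channel power constraint P:
\[
  X = \sqrt{P/\sigma_S^2}\, S, \qquad Y = X + N, \qquad N \sim \mathcal{N}(0,\sigma_N^2).
\]
% The receiver forms the MMSE estimate \hat S = E[S|Y], whose distortion is
\[
  D = \mathbb{E}\big[(S-\hat S)^2\big]
    = \frac{\sigma_S^2\,\sigma_N^2}{P+\sigma_N^2}
    = \frac{\sigma_S^2}{1+\mathrm{SNR}}, \qquad \mathrm{SNR} = P/\sigma_N^2 .
\]
% This equals the distortion-rate function evaluated at the channel capacity
% C = \tfrac12 \log_2(1+\mathrm{SNR}):
\[
  D(C) = \sigma_S^2\, 2^{-2C} = \frac{\sigma_S^2}{1+\mathrm{SNR}} = D ,
\]
% so the single-letter (block length one) scheme is optimal at every SNR.
```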
On the Metric Properties of Discrete Space-Filling Curves
, 1996
"... A spacefilling curve is a linear traversal of a discrete finite multidimensional space. In order that this traversal be useful in many applications, the curve should preserve "locality". We quantify "locality" and bound the locality of multidimensional spacefilling curves. Cl ..."
Abstract

Cited by 40 (1 self)
A space-filling curve is a linear traversal of a discrete finite multidimensional space. In order that this traversal be useful in many applications, the curve should preserve "locality". We quantify "locality" and bound the locality of multidimensional space-filling curves. Classic Hilbert space-filling curves come close to achieving optimal locality. EDICS: IP 3.1. Corresponding author: Craig Gotsman, Dept. of Computer Science, Technion, Haifa 32000, Israel. Email: gotsman@cs.technion.ac.il. A preliminary version of this work was presented at the IEEE International Conference on Pattern Recognition, Jerusalem, 1994.
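The discrete Hilbert curve the abstract studies can be generated with the standard bit-manipulation recurrence. The sketch below (Python is our choice of language, not the paper's) maps a curve index d to grid coordinates on an n x n grid and checks the adjacency property that underlies the curve's locality:

```python
def d2xy(n, d):
    """Map Hilbert-curve index d to (x, y) on an n x n grid (n a power of two).

    Standard iterative construction: consume two bits of d per level,
    rotating/reflecting the quadrant as the classic Hilbert recursion does.
    """
    x = y = 0
    s, t = 1, d
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate/reflect this quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

n = 8
curve = [d2xy(n, d) for d in range(n * n)]
# The traversal visits every cell exactly once...
assert len(set(curve)) == n * n
# ...and consecutive indices are always grid neighbours (Manhattan distance 1),
# which is why nearby indices stay nearby in the grid.
assert all(abs(x1 - x2) + abs(y1 - y2) == 1
           for (x1, y1), (x2, y2) in zip(curve, curve[1:]))
```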
Towards Optimal Locality in Mesh-Indexings
, 1997
"... The efficiency of many data structures and algorithms relies on "localitypreserving" indexing schemes for meshes. We concentrate on the case in which the maximal distance between two mesh nodes indexed i and j shall be a slowgrowing function of ji jj. We present a new 2D indexing scheme ..."
Abstract

Cited by 31 (4 self)
The efficiency of many data structures and algorithms relies on "locality-preserving" indexing schemes for meshes. We concentrate on the case in which the maximal distance between two mesh nodes indexed i and j shall be a slow-growing function of |i - j|. We present a new 2D indexing scheme we call H-indexing, which has superior (possibly optimal) locality in comparison with the well-known Hilbert indexings. H-indexings form a Hamiltonian cycle and we prove that they are optimally locality-preserving among all cyclic indexings. We provide fairly tight lower bounds for indexings without any restriction. Finally, illustrated by investigations concerning 2D and 3D Hilbert indexings, we present a framework for mechanizing upper bound proofs for locality.
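The locality measure in question, how far apart cells indexed i and j can be relative to |i - j|, can be probed by brute force on a small grid. A hypothetical sketch comparing two naive indexings (row-major and boustrophedon "snake" order, not the paper's H-indexing):

```python
from itertools import combinations

def locality(order):
    """Max over index pairs of squared Euclidean distance / index distance.

    Small values mean that indices close in 1-D stay close in the grid;
    this is the style of measure the locality literature bounds.
    """
    return max(((px - qx) ** 2 + (py - qy) ** 2) / (j - i)
               for (i, (px, py)), (j, (qx, qy))
               in combinations(enumerate(order), 2))

n = 8
row_major = [(d // n, d % n) for d in range(n * n)]
snake = [(r, c if r % 2 == 0 else n - 1 - c)
         for r in range(n) for c in range(n)]

# Row-major pays a big penalty at row wraps: indices d and d + 1 can be
# almost a full row apart. The snake order fixes the wrap, but same-row
# pairs still give a ratio of about n - 1; curves like Hilbert's keep
# the measure bounded by a constant independent of n.
assert locality(row_major) > locality(snake)
```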
Algorithms for Rendering Realistic Terrain Image Sequences and Their Parallel Implementation
 The Visual Computer
, 1995
"... We present algorithms for rendering realistic images of large terrains and their implementation on a parallel computer for rapid production of terrain animation sequences. By "large" terrains, we mean the use of datasets too large to be contained in RAM, a fact which significantly influenc ..."
Abstract

Cited by 6 (0 self)
We present algorithms for rendering realistic images of large terrains and their implementation on a parallel computer for rapid production of terrain animation sequences. By "large" terrains, we mean the use of datasets too large to be contained in RAM, a fact which significantly influences the design of rendering algorithms, and has not been addressed in most other works on this subject. To achieve good speed without quality degradation, we use a hybrid ray-casting and projection technique, incorporating quadtree subdivision techniques and filtering using precomputed bit masks. Hilbert space-filling curves determine the image pixel rendering order, fully exploiting spatial coherence. A parallel version of the algorithm is presented, based on an architecture implemented on a Meiko parallel computer. The architecture is designed to relieve dataflow bottlenecks caused by slow interprocessor communications, and to exploit temporal image coherence. Using our parallel system, incorporating 26 processors, we are able to generate a full-color terrain image at video resolution, without noticeable aliasing artifacts, every 2 seconds, including I/O and communication overheads. The speedup of the system is linear with the number of processors.
Texture Mixing via Universal Simulation
"... A framework for studying texture in general, and for texture mixing in particular, is presented in this paper. The work follows concepts from universal type classes and universal simulation. Based on the wellknown Lempel and Ziv (LZ) universal compression scheme, the universal type class of a one d ..."
Abstract

Cited by 1 (0 self)
A framework for studying texture in general, and for texture mixing in particular, is presented in this paper. The work follows concepts from universal type classes and universal simulation. Based on the well-known Lempel and Ziv (LZ) universal compression scheme, the universal type class of a one-dimensional sequence is defined as the set of possible sequences of the same length which produce the same dictionary (or parsing tree) under the classical LZ incremental parsing algorithm. Universal simulation is realized by sampling uniformly from the universal type class, which can be efficiently implemented. Starting with a source texture image, we use universal simulation to synthesize new textures that have, asymptotically, the same statistics as the source texture, yet have as much uncertainty as possible, in the sense that they are sampled from the broadest pool of possible sequences that comply with the statistical constraint. When considering two or more textures, a parsing tree is constructed for each one, and samples from the trees are randomly interleaved according to predefined proportions, thus obtaining a mixed texture. As with single-texture synthesis, the kth-order statistics of this mixture, for any k, approach, asymptotically, the weighted mixture of the kth-order statistics of each individual texture used in the mixing. We present the underlying principles of universal types, universal simulation, and their extensions and application to mixing two or more textures with predefined proportions.
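The LZ incremental parsing and the "same parsing tree" equivalence class are easy to make concrete. A minimal Python sketch (our illustration, not the authors' code): two sequences of the same length that produce the same phrase set belong to the same universal type class, even though they discover the phrases in different orders.

```python
def lz78_parse(s):
    """LZ (LZ78) incremental parsing: each phrase is a previously seen
    phrase extended by one new symbol; the set of phrases, ordered by
    prefix, forms the parsing tree / dictionary."""
    phrases, seen, cur = [], set(), ""
    for ch in s:
        cur += ch
        if cur not in seen:        # first occurrence of this string as a phrase
            seen.add(cur)
            phrases.append(cur)
            cur = ""
    if cur:                         # a possibly incomplete final phrase
        phrases.append(cur)
    return phrases

# "abab" parses as a | b | ab, and "baab" as b | a | ab: different phrase
# order, identical phrase set {a, b, ab}, i.e. the same parsing tree,
# so the two sequences lie in the same universal type class.
assert lz78_parse("abab") == ["a", "b", "ab"]
assert set(lz78_parse("baab")) == set(lz78_parse("abab"))
```

Uniform sampling from the type class (the actual universal simulation step) is more involved; this only shows the equivalence the class is built on.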
Texture Mixing via Universal Simulation
… and Synthesis, pp. 65–70, 2005
"... A framework for studying texture in general, and for texture mixing in particular, is presented in this paper. The work follows concepts from universal type classes and universal simulation. Based on the wellknown Lempel and Ziv (LZ) universal compression scheme, the universal type class of a one d ..."
Abstract
A framework for studying texture in general, and for texture mixing in particular, is presented in this paper. The work follows concepts from universal type classes and universal simulation. Based on the well-known Lempel and Ziv (LZ) universal compression scheme, the universal type class of a one-dimensional sequence is defined as the set of possible sequences of the same length which produce the same dictionary (or parsing tree) under the classical LZ incremental parsing algorithm. Universal simulation is realized by sampling uniformly from the universal type class, which can be efficiently implemented. Starting with a source texture image, we use universal simulation to synthesize new textures that have, asymptotically, the same statistics of any order as the source texture, yet have as much uncertainty as possible, in the sense that they are sampled from the broadest pool of possible sequences that comply with the statistical constraint. When considering two or more textures, a parsing tree is constructed for each one, and samples from the trees are randomly interleaved according to predefined proportions, thus obtaining a mixed texture. As with single-texture synthesis, the kth-order statistics of this mixture, for any k, asymptotically approach the weighted mixture of the kth-order statistics of each individual texture used in the mixing. We present the underlying principles of universal types, universal simulation, and their extensions and application to mixing two or more textures with predefined proportions.
Dynamic load-balancing in a lightweight adaptive parallel multigrid PDE solver.
"... A parallel version of an adaptive multigrid solver for partial differential equations is considered. The main emphasis is put on the load balancing algorithm to distribute the adaptive grids at runtime. The background and some applications of spacefilling curves are discussed, which are later on us ..."
Abstract
A parallel version of an adaptive multigrid solver for partial differential equations is considered. The main emphasis is put on the load-balancing algorithm used to distribute the adaptive grids at runtime. The background and some applications of space-filling curves are discussed, which are later used as the basic principle of the load-balancing heuristic. A tight integration of space-filling curves as a memory addressing scheme into the numerical algorithm is proposed. Some experiments on a cluster of PCs demonstrate the parallel efficiency and scalability of the approach.

1 An adaptive multigrid solver

Our goal is to solve a partial differential equation as fast as possible. We consider a multigrid solver, adaptive grid refinement and their efficient parallelization. We have to develop a parallel multigrid code that is almost identical to the sequential implementation. The computational workload has to be distributed into similar-sized partitions and, at the same time, the communic...
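The load-balancing heuristic the abstract describes, ordering the adaptive grid cells along a space-filling curve and then cutting the resulting 1-D sequence into contiguous chunks of roughly equal work, can be sketched as follows. This is a simplified greedy cut of our own, assuming per-cell work estimates are already listed in curve order:

```python
def partition_along_curve(weights, p):
    """Cut a weight sequence (cells in space-filling-curve order) into
    p contiguous parts of roughly equal total weight.

    Because the curve preserves locality, each contiguous index range
    maps to a compact region of the grid, which keeps partition
    boundaries, and hence interprocessor communication, small.
    """
    target = sum(weights) / p
    parts, cur, acc = [], [], 0.0
    for w in weights:
        cur.append(w)
        acc += w
        if acc >= target and len(parts) < p - 1:
            parts.append(cur)          # close this processor's chunk
            cur, acc = [], 0.0
    parts.append(cur)                   # remainder goes to the last processor
    return parts

# 16 equally weighted cells over 4 processors: four chunks of four cells.
parts = partition_along_curve([1.0] * 16, 4)
assert [sum(c) for c in parts] == [4.0, 4.0, 4.0, 4.0]
```

Production codes refine this with better cut placement and incremental rebalancing as cells are refined or coarsened, but the curve-order-then-cut structure is the core idea.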
A Parallel System for Rendering Realistic Terrain Image Sequences
"... We present algorithms for rendering realistic images of large terrains and their implementation on a parallel computer for rapid production of terrain animation sequences. By "large terrains", we mean the use of datasets too large to be contained in RAM, a fact which significantly influ ..."
Abstract
We present algorithms for rendering realistic images of large terrains and their implementation on a parallel computer for rapid production of terrain animation sequences. By "large terrains", we mean the use of datasets too large to be contained in RAM, a fact which significantly influences the design of rendering algorithms, and has not been addressed in other works on this subject. To achieve a good speed/quality tradeoff, we use a hybrid ray-casting and projection technique, and Hilbert space-filling curves to determine the image pixel rendering order, fully exploiting spatial coherence. A parallel version of the algorithm is presented, based on an architecture implemented on a Meiko parallel computer. The architecture is designed to relieve dataflow bottlenecks caused by slow interprocessor communications, and to exploit temporal image coherence. Using our parallel system, we are able to generate a full-color terrain image at video resolution, without noticeable aliasing art...
Texture Mixing via Universal Simulation
"... A framework for studying texture in general, and for texture mixing in particular, is presented in this paper. The work follows concepts from universal type classes and universal simulation. Based on the wellknown Lempel and Ziv (LZ) universal compression scheme, the universal type class of a one d ..."
Abstract
A framework for studying texture in general, and for texture mixing in particular, is presented in this paper. The work follows concepts from universal type classes and universal simulation. Based on the well-known Lempel and Ziv (LZ) universal compression scheme, the universal type class of a one-dimensional sequence is defined as the set of possible sequences of the same length which span the same tree or dictionary under the classical LZ incremental parsing algorithm. Using universal simulation, beginning with a source texture image, we can synthesize new textures that have the same universal type and statistics as the source texture, yet are sampled from the broadest pool of possible sequences that comply with the universal type constraint, thereby obtaining new textures with the same statistics and as much uncertainty as possible. When considering two or more textures, a universal type class is obtained for each one, and following the universal sampling approach while combining the corresponding trees, a mixed texture is obtained. As with single-texture synthesis, the kth-order statistics of this mixture, for all k, are the weighted mixture of the kth-order statistics of each individual texture used in the mixing. We present the underlying principles of universal types, universal simulation, and their extensions and application to mixing two or more textures with predefined proportions.