Results 1 - 10 of 1,522
Light Field Rendering
, 1996
"... A number of techniques have been proposed for flying through scenes by redisplaying previously rendered or digitized views. Techniques have also been proposed for interpolating between views by warping input images, using depth information or correspondences between multiple images. In this paper, w ..."
Cited by 1337 (22 self)
Abstract
A number of techniques have been proposed for flying through scenes by redisplaying previously rendered or digitized views. Techniques have also been proposed for interpolating between views by warping input images, using depth information or correspondences between multiple images. In this paper, we describe a simple and robust method for generating new views from arbitrary camera positions without depth information or feature matching, simply by combining and resampling the available images. The key to this technique lies in interpreting the input images as 2D slices of a 4D function, the light field. This function completely characterizes the flow of light through unobstructed space in a static scene with fixed illumination. We describe a ...
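The 4D light field reduces view synthesis to ray lookup: every pixel of a new view maps to one sample of L(u, v, s, t) in the paper's two-plane parameterization. Below is a minimal Python sketch of that idea, assuming a densely sampled light field array and nearest-neighbor resampling; the paper itself uses quadrilinear interpolation and compressed storage, and all names here are illustrative.

```python
import numpy as np

def sample_light_field(lf, u, v, s, t):
    """Nearest-neighbor lookup into a discretized 4D light field.

    lf has shape (U, V, S, T, 3): radiance along the ray through point
    (u, v) on one plane and (s, t) on the other (two-plane
    parameterization). All coordinates are in [0, 1].
    """
    U, V, S, T, _ = lf.shape
    iu, iv = round(u * (U - 1)), round(v * (V - 1))
    js, jt = round(s * (S - 1)), round(t * (T - 1))
    return lf[iu, iv, js, jt]

def render_view(lf, eye_uv, res=64):
    """Synthesize a new view from a fixed eye point on the (u, v) plane
    purely by resampling: each output pixel maps to one (u, v, s, t)
    ray, with no depth information or feature matching."""
    img = np.zeros((res, res, 3))
    for y in range(res):
        for x in range(res):
            s, t = x / (res - 1), y / (res - 1)
            img[y, x] = sample_light_field(lf, eye_uv[0], eye_uv[1], s, t)
    return img

# Toy usage: a random 8x8x16x16 light field viewed from u = v = 0.5.
lf = np.random.rand(8, 8, 16, 16, 3)
view = render_view(lf, eye_uv=(0.5, 0.5))
```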
Compression of Individual Sequences via Variable-Rate Coding
 IEEE TRANSACTIONS ON INFORMATION THEORY
, 1978
"... ..."
A block-sorting lossless data compression algorithm
, 1994
"... We describe a blocksorting, lossless data compression algorithm, and our implementation of that algorithm. We compare the performance of our implementation with widely available data compressors running on the same hardware. The algorithm works by applying a reversible transformation to a block o ..."
Cited by 809 (5 self)
Abstract
We describe a block-sorting, lossless data compression algorithm, and our implementation of that algorithm. We compare the performance of our implementation with widely available data compressors running on the same hardware. The algorithm works by applying a reversible transformation to a block of input text. The transformation does not itself compress the data, but reorders it to make it easy to compress with simple algorithms such as move-to-front coding. Our algorithm achieves speed comparable to algorithms based on the techniques of Lempel and Ziv, but obtains compression close to the best statistical modelling techniques. The size of the input block must be large (a few kilobytes) to achieve good compression.
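As an illustration of the pipeline the abstract describes, here is a minimal, unoptimized Python sketch of the block-sorting (Burrows-Wheeler) transform followed by move-to-front coding. A real implementation sorts rotations in far better than the O(n^2 log n) time used here, and the final entropy coder (e.g., Huffman) is omitted.

```python
def bwt(text):
    """Block-sorting transform: sort all rotations of the block and
    take the last column. '\x00' marks the end so the transform is
    reversible."""
    block = text + "\x00"
    rotations = sorted(block[i:] + block[:i] for i in range(len(block)))
    return "".join(rot[-1] for rot in rotations)

def inverse_bwt(last):
    """Invert the transform by repeatedly prepending the last column
    to the sorted table of partial rotations."""
    table = [""] * len(last)
    for _ in range(len(last)):
        table = sorted(last[i] + table[i] for i in range(len(last)))
    return next(row for row in table if row.endswith("\x00"))[:-1]

def move_to_front(s):
    """Move-to-front coding: the runs of equal symbols that the BWT
    creates become runs of small integers, which simple coders
    compress well."""
    symbols = sorted(set(s))
    out = []
    for ch in s:
        i = symbols.index(ch)
        out.append(i)
        symbols.insert(0, symbols.pop(i))
    return out

text = "banana_bandana"
transformed = bwt(text)
assert inverse_bwt(transformed) == text
print(transformed, move_to_front(transformed))
```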
Venti: A New Approach to Archival Storage
, 2002
"... This paper describes a network storage system, called Venti, intended for archival data. In this system, a unique hash of a block's contents acts as the block identifier for read and write operations. This approach enforces a writeonce policy, preventing accidental or malicious destruction of ..."
Cited by 342 (0 self)
Abstract
This paper describes a network storage system, called Venti, intended for archival data. In this system, a unique hash of a block's contents acts as the block identifier for read and write operations. This approach enforces a write-once policy, preventing accidental or malicious destruction of data. In addition, duplicate copies of a block can be coalesced, reducing the consumption of storage and simplifying the implementation of clients. Venti is a building block for constructing a variety of storage applications such as logical backup, physical backup, and snapshot file systems.
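A toy Python sketch of the core idea: a write-once, content-addressed block store in which the SHA-1 hash of a block (its "score," in Venti's terms) is its address, so identical blocks coalesce and stored data cannot be overwritten. The class and method names are illustrative, not Venti's actual network protocol.

```python
import hashlib

class BlockStore:
    """Toy content-addressed, write-once block store in the spirit
    of Venti."""

    def __init__(self):
        self._blocks = {}

    def write(self, data: bytes) -> str:
        score = hashlib.sha1(data).hexdigest()
        # Duplicate writes coalesce: if the block is already present,
        # nothing is stored again, and nothing can be overwritten.
        self._blocks.setdefault(score, data)
        return score

    def read(self, score: str) -> bytes:
        data = self._blocks[score]
        # Verify on read: a corrupted block cannot masquerade as another.
        assert hashlib.sha1(data).hexdigest() == score
        return data

store = BlockStore()
s1 = store.write(b"archival data")
s2 = store.write(b"archival data")   # same score, stored once
assert s1 == s2 and store.read(s1) == b"archival data"
```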
Edgebreaker: Connectivity compression for triangle meshes
 IEEE Transactions on Visualization and Computer Graphics
, 1999
"... Edgebreaker is a simple scheme for compressing the triangle/vertex incidence graphs (sometimes called connectivity or topology) of threedimensional triangle meshes. Edgebreaker improves upon the worst case storage required by previously reported schemes, most of which require O(nlogn) bits to store ..."
Cited by 298 (24 self)
Abstract
Edgebreaker is a simple scheme for compressing the triangle/vertex incidence graphs (sometimes called connectivity or topology) of three-dimensional triangle meshes. Edgebreaker improves upon the worst-case storage required by previously reported schemes, most of which require O(n log n) bits to store the incidence graph of a mesh of n triangles. Edgebreaker requires only 2n bits or less for simple meshes and can also support fully general meshes by using additional storage per handle and hole. Edgebreaker's compression and decompression processes perform the same traversal of the mesh from one triangle to an adjacent one. At each stage, compression produces an opcode describing the topological relation between the current triangle and the boundary of the remaining part of the mesh. Decompression uses these opcodes to reconstruct the entire incidence graph. Because Edgebreaker's compression and decompression are independent of the vertex locations, they may be combined with a variety of vertex-compressing techniques that exploit topological information about the mesh to better estimate vertex locations. Edgebreaker may be used to compress the connectivity of an entire mesh bounding a 3D polyhedron or the connectivity of a triangulated surface patch whose boundary need not be encoded. Its superior compression capabilities, the simplicity of its implementation, and its versatility make Edgebreaker particularly suitable for the emerging 3D data exchange standards for interactive graphics applications. The paper also offers a comparative survey of the rapidly growing field of geometric compression.
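The 2n-bit guarantee follows from coding the traversal's opcode stream (the CLERS symbols) with a prefix-free code that spends 1 bit on C and 3 bits on each of L, E, R, and S: for a simple mesh without boundary, C opcodes account for roughly half the triangles, since each C introduces exactly one new vertex. The Python sketch below shows that packing; the exact bit patterns are illustrative (any assignment with this length profile gives the same bound), and the opcode string is hypothetical rather than derived from a real mesh traversal.

```python
# Prefix-free code over the five Edgebreaker opcodes: C costs 1 bit,
# the rest cost 3, so n/2 Cs plus n/2 others take at most 2n bits.
CODE = {"C": "0", "L": "110", "E": "111", "R": "101", "S": "100"}

def pack(opcodes):
    """Encode a CLERS opcode string produced by the compression traversal."""
    return "".join(CODE[op] for op in opcodes)

def unpack(bits):
    """Decode by prefix matching; the code is prefix-free, so no
    symbol boundary markers are needed."""
    inverse = {v: k for k, v in CODE.items()}
    out, cur = [], ""
    for b in bits:
        cur += b
        if cur in inverse:
            out.append(inverse[cur])
            cur = ""
    return "".join(out)

clers = "CCRCCRCRSCLCRE"          # hypothetical traversal output, half Cs
bits = pack(clers)
assert unpack(bits) == clers
print(len(bits), "bits for", len(clers), "triangles")   # 28 bits, 14 triangles
```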
An Information-Theoretic Model for Steganography
, 1998
"... An informationtheoretic model for steganography with passive adversaries is proposed. The adversary's task of distinguishing between an innocentcover message C and a modified message S containing a secret part is interpreted as a hypothesis testing problem. The security of a steganographic sys ..."
Cited by 275 (3 self)
Abstract
An information-theoretic model for steganography with passive adversaries is proposed. The adversary's task of distinguishing between an innocent cover message C and a modified message S containing a secret part is interpreted as a hypothesis testing problem. The security of a steganographic system is quantified in terms of the relative entropy (or discrimination) between P_C and P_S. Several secure steganographic schemes are presented in this model; one of them is a universal information hiding scheme based on universal data compression techniques that requires no knowledge of the covertext statistics.
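A small Python sketch of the model's security measure: computing the relative entropy D(P_C || P_S) between a cover distribution and a stego distribution. In the paper, a stegosystem is epsilon-secure when this quantity is at most epsilon, and hypothesis-testing bounds then limit any adversary's detection performance. The distributions below are hypothetical.

```python
import math

def relative_entropy(pc, ps):
    """D(P_C || P_S) in bits between the cover distribution P_C and
    the stego distribution P_S, both over the same finite alphabet."""
    return sum(p * math.log2(p / ps[x]) for x, p in pc.items() if p > 0)

# Hypothetical distributions over a 4-symbol covertext alphabet.
pc = {"a": 0.40, "b": 0.30, "c": 0.20, "d": 0.10}   # innocent covers
ps = {"a": 0.38, "b": 0.30, "c": 0.22, "d": 0.10}   # covers with embedding

div = relative_entropy(pc, ps)
# A small divergence means no hypothesis test can reliably tell the
# modified messages S from the innocent covers C.
print(f"D(P_C || P_S) = {div:.5f} bits")
```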
Compressed full-text indexes
 ACM COMPUTING SURVEYS
, 2007
"... Fulltext indexes provide fast substring search over large text collections. A serious problem of these indexes has traditionally been their space consumption. A recent trend is to develop indexes that exploit the compressibility of the text, so that their size is a function of the compressed text l ..."
Cited by 263 (94 self)
Abstract
Full-text indexes provide fast substring search over large text collections. A serious problem of these indexes has traditionally been their space consumption. A recent trend is to develop indexes that exploit the compressibility of the text, so that their size is a function of the compressed text length. This concept has evolved into self-indexes, which in addition contain enough information to reproduce any text portion, so they can replace the text. The exciting possibility of an index that takes space close to that of the compressed text, replaces it, and in addition provides fast search over it has triggered a wealth of activity, produced surprising results in a very short time, and radically changed the status of this area in less than five years. The most successful indexes nowadays are able to obtain almost optimal space and search time simultaneously. In this paper we present the main concepts underlying self-indexes. We explain the relationship between text entropy and the regularities that show up in index structures and permit compressing them. Then we cover the most relevant self-indexes to date, focusing on the essential aspects of how they exploit text compressibility and how they solve various search problems efficiently. We aim to give the theoretical background needed to understand and follow the developments in this area.
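As a concrete instance of a self-index, the Python sketch below performs FM-index-style backward search, counting pattern occurrences using only the Burrows-Wheeler transform of the text. The naive rank computations here stand in for the compressed rank structures (e.g., wavelet trees) that give real self-indexes their space and time bounds.

```python
def suffix_array(text):
    return sorted(range(len(text)), key=lambda i: text[i:])

def bwt_from_sa(text, sa):
    return "".join(text[i - 1] for i in sa)  # char preceding each suffix

def count_occurrences(text, pattern):
    """Backward search: maintain the suffix-array interval [lo, hi) of
    suffixes prefixed by the processed part of the pattern, consuming
    the pattern right to left."""
    text = text + "\x00"                     # sentinel, smallest char
    sa = suffix_array(text)
    L = bwt_from_sa(text, sa)
    sorted_L = sorted(L)
    # C[c]: number of characters in the text smaller than c.
    C = {ch: sorted_L.index(ch) for ch in set(L)}
    lo, hi = 0, len(L)
    for ch in reversed(pattern):
        if ch not in C:
            return 0
        # L[:i].count(ch) is rank(ch, i); a real self-index answers
        # this from a compressed structure instead of the plain text.
        lo = C[ch] + L[:lo].count(ch)
        hi = C[ch] + L[:hi].count(ch)
        if lo >= hi:
            return 0
    return hi - lo

print(count_occurrences("abracadabra", "abra"))  # -> 2
```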
The LOCO-I Lossless Image Compression Algorithm: Principles and Standardization into JPEG-LS
 IEEE TRANSACTIONS ON IMAGE PROCESSING
, 2000
"... LOCOI (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of the new ISO/ITU standard for lossless and nearlossless compression of continuoustone images, JPEGLS. It is conceived as a "low complexity projection" of the universal context modeling paradigm, match ..."
Cited by 253 (11 self)
Abstract
LOCO-I (LOw COmplexity LOssless COmpression for Images) is the algorithm at the core of the new ISO/ITU standard for lossless and near-lossless compression of continuous-tone images, JPEG-LS. It is conceived as a "low complexity projection" of the universal context modeling paradigm, matching its modeling unit to a simple coding unit. By combining simplicity with the compression potential of context models, the algorithm "enjoys the best of both worlds." It is based on a simple fixed context model, which approaches the capability of the more complex universal techniques for capturing high-order dependencies. The model is tuned for efficient performance in conjunction with an extended family of Golomb-type codes, which are adaptively chosen, and an embedded alphabet extension for coding of low-entropy image regions. LOCO-I attains compression ratios similar or superior to those obtained with state-of-the-art schemes based on arithmetic coding. Moreover, it is within a few percentage points of the best available compression ratios, at a much lower complexity level. We discuss the principles underlying the design of LOCO-I, and its standardization into JPEG-LS.
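A minimal Python sketch of two LOCO-I ingredients named above: the fixed median edge detector (MED) predictor and a Golomb-Rice code for the mapped prediction residuals. Context modeling, bias correction, and the low-entropy run mode are omitted, and the sample values are hypothetical.

```python
def med_predict(a, b, c):
    """LOCO-I's median edge detector: a = left, b = above, c = upper-left.
    Picks b near a vertical edge, a near a horizontal edge, and the
    planar estimate a + b - c in smooth regions."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c

def zigzag(e):
    """Map a signed residual to a nonnegative integer for Rice coding."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_encode(n, k):
    """Golomb code with parameter 2**k (a Golomb-Rice code): unary
    quotient, then k remainder bits. Efficient for the geometrically
    distributed prediction residuals LOCO-I produces."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, f"0{k}b") if k else "")

# Hypothetical 8-bit neighborhood: left=100, above=104, upper-left=98.
a, b, c, actual = 100, 104, 98, 103
residual = actual - med_predict(a, b, c)   # MED picks max(a, b) = 104
print(residual, rice_encode(zigzag(residual), k=2))   # -1 -> "001"
```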