Results 1 - 10 of 27,727
TABLE II MUTUAL INFORMATION ACHIEVED BY QUANTIZER OPTIMIZATION ALGORITHMS WHEN THE SOURCE DISTRIBUTION IS GAUSSIAN.
2007
Cited by 3
TABLE I MUTUAL INFORMATION, ENTROPY AND MEAN SQUARE ERROR FOR DIFFERENT BIT-MAPPINGS OF 3-BIT QUANTIZED PARAMETERS.
2000
Cited by 8
TABLE I: Bit redundancy, mutual information and mean square error for different bit-mappings of 2-bit quantized parameters.
TABLE II: Bit redundancy, mutual information and mean square error for different bit-mappings of 3-bit quantized parameters.
Table 1. Coder efficiency comparison between an embedded (bit-plane) coder and a coder conditioned on the quantization step-sizes (M|L,Q). Step sizes Q̂_k are generated for each image near threshold and supra-threshold. The rates in the first column result from coding the quantization indices independently of the side information. The rates in the second column result from coding the quantized data conditioned on the step-size map.
2005
"... In PAGE 7: ... Most of the distortion is pushed into the grass, where it is more di cult to detect. Table1 compares the e ciency of the proposed coder with a more traditional coder. The rates in the rst column are generated by coding the quantized data separately from the side information with an embedded Tarp- lter-based arithmetic bit-plane coder.... ..."
Cited by 2
Table 1. First-order entropies for perceptually lossless quantization
"... In PAGE 4: ... All ij were set equal to 4. Table 1 contains the first-order entropies and coding gains for perceptually lossless compression (D_T = 1.0). It compares DCTune, optimal locally-adaptive quantization as well as locally-adaptive quantization without side information.... ..."
Table 11 Impact of the quantization step-size on the lossless compression performance of the prediction-based scalable MVC (the average number of bytes per frame needed to losslessly code the motion information is reported)
2004
"... In PAGE 22: ...Table11 . We report the average number of bytes needed per H-frame to losslessly code the motion information.... ..."
Cited by 2
Table 2: An algorithm for iteratively quantizing a source with erasures.
"... In PAGE 7: ...4 Iterative Decoding/Quantization and Duality In the following we rst review the intuition behind iterative erasure decoding algorithms and describe the particular decoding algorithm we consider in Table 1. Next we outline the intuition behind a similar approach for iterative quantization and precisely describe our quantization algorithm in Table2 . Finally, we show that these algorithms are duals.... In PAGE 9: ... Essentially, the requirement of consistent tie-breaking can be interpreted as a constraint on the message-passing schedule: tie-breaking information for a given tie should be propagated through the graph before other ties are broken. In order to provide a precise algorithm for the purpose of proving theorems, we consider the ERASURE-QUANTIZE in Table2 based on applying the rules in (6) with a sequential schedule and all tie-breaking collected into step 8. Table 2: An algorithm for iteratively quantizing a source with erasures.... ..."
TABLE 1. Progressive compression results for a non-overtessellated 10-bit quantized Bunny model. Connectivity information amounts overall to 3.6 bits per triangle; coordinate data requires 7.7 bits per triangle, or 15.4 bits per vertex. See also Figures 3 and 14 for images of the bunny model.
2000
Cited by 92