Results 1 - 10 of 27,727

TABLE II MUTUAL INFORMATION ACHIEVED BY QUANTIZER OPTIMIZATION ALGORITHMS WHEN THE SOURCE DISTRIBUTION IS GAUSSIAN.

in Channel identification: Secret sharing using reciprocity in UWB channels
by Robert Wilson, David Tse, Robert A. Scholtz 2007
Cited by 3

TABLE I MUTUAL INFORMATION, ENTROPY AND MEAN SQUARE ERROR FOR DIFFERENT BIT-MAPPINGS OF 3 BIT QUANTIZED PARAMETERS.

in Combined Source/Channel (De-)Coding: Can A Priori Information Be Used Twice?
by T. Hindelang, T. Fingscheidt, N. Seshadri, R. V. Cox 2000
Cited by 8
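The entries above report mutual information and mean square error for different bit-mappings of 3-bit quantized parameters. As a rough illustration of what "different bit-mappings" means, here is a minimal sketch contrasting natural binary with a Gray-code mapping (function names are mine, not from the papers):

```python
def natural_bits(i, width):
    """Natural binary mapping of index i, MSB first."""
    return [(i >> k) & 1 for k in reversed(range(width))]

def gray_bits(i, width):
    """Gray-code mapping: adjacent indices differ in exactly one bit."""
    return natural_bits(i ^ (i >> 1), width)

# All 3-bit quantization indices under both mappings.
for i in range(8):
    print(i, natural_bits(i, 3), gray_bits(i, 3))
```

Under a Gray mapping, a single bit error moves the reconstruction to an adjacent quantization cell, which is one reason the papers compare mean square error across mappings.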

TABLE I: Bit redundancy, mutual information and mean square error for different bit-mappings of 2 bit quantized parameters.

in Source-Controlled Channel Decoding: Estimation of Correlated Parameters
by Thomas Hindelang, Joachim Hagenauer, Stefan Heinen

TABLE II: Bit redundancy, mutual information and mean square error for different bit-mappings of 3 bit quantized parameters.

in Source-Controlled Channel Decoding: Estimation of Correlated Parameters
by Thomas Hindelang, Joachim Hagenauer, Stefan Heinen

Table 1. Coder efficiency comparison between an embedded (bit-plane) coder and a coder conditioned on the quantization step-sizes (MjL,Q). Step sizes Q̂k are generated for each image near threshold and supra-threshold. The rates in the first column result from coding the quantization indices independently of the side information. The rates in the second column result from coding the quantized data conditioned on the step-size map.

in Spatial quantization via local texture masking
by Matthew D. Gaubatz, Damon M. Ch, Sheila S. Hemami 2005
"... In PAGE 7: ... Most of the distortion is pushed into the grass, where it is more difficult to detect. Table 1 compares the efficiency of the proposed coder with a more traditional coder. The rates in the first column are generated by coding the quantized data separately from the side information with an embedded Tarp-filter-based arithmetic bit-plane coder.... ..."
Cited by 2
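The snippet above contrasts rates obtained with and without conditioning on the step-size side information. A toy sketch of why conditioning can only lower the first-order rate, with made-up indices and step-size labels (all names and data here are illustrative, not from the paper):

```python
import numpy as np
from collections import Counter

def entropy(xs):
    """First-order entropy in bits per symbol."""
    n = len(xs)
    return -sum((c / n) * np.log2(c / n) for c in Counter(xs).values())

def conditional_entropy(xs, side):
    """H(X | S): entropy of X averaged over side-information classes."""
    n = len(xs)
    h = 0.0
    for s in set(side):
        sub = [x for x, t in zip(xs, side) if t == s]
        h += (len(sub) / n) * entropy(sub)
    return h

# Toy quantization indices correlated with a step-size map (both made up).
indices = [0, 0, 1, 1, 2, 2, 3, 3]
steps   = [0, 0, 0, 0, 1, 1, 1, 1]
print(entropy(indices))                      # rate ignoring side information
print(conditional_entropy(indices, steps))   # rate given the step-size map
```

Here the indices split cleanly by step-size class, so the conditional rate is half the unconditional one; in general H(X|S) ≤ H(X), matching the two rate columns the caption describes.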

Table 1. First order entropies for perceptually lossless quantization

in unknown title
by unknown authors
"... In PAGE 4: ... All ij were set equal to 4. Table 1 contains the first order entropies and coding gains for perceptually lossless compression (DT = 1.0). It compares DCTune, optimal locally-adaptive quantization as well as locally-adaptive quantization without side information.... ..."

Table 11 Impact of the quantization step-size on the lossless compression performance of the prediction-based scalable MVC (the average number of bytes per frame needed to losslessly code the motion information is reported)

in unknown title
by unknown authors 2004
"... In PAGE 22: ... Table 11. We report the average number of bytes needed per H-frame to losslessly code the motion information.... ..."
Cited by 2

Table 2: An algorithm for iteratively quantizing a source with erasures.

in Iterative Quantization using Codes . . .
by Emin Martinian, Jonathan S. Yedidia
"... In PAGE 7: ...4 Iterative Decoding/Quantization and Duality In the following we first review the intuition behind iterative erasure decoding algorithms and describe the particular decoding algorithm we consider in Table 1. Next we outline the intuition behind a similar approach for iterative quantization and precisely describe our quantization algorithm in Table 2. Finally, we show that these algorithms are duals.... In PAGE 9: ... Essentially, the requirement of consistent tie-breaking can be interpreted as a constraint on the message-passing schedule: tie-breaking information for a given tie should be propagated through the graph before other ties are broken. In order to provide a precise algorithm for the purpose of proving theorems, we consider the ERASURE-QUANTIZE algorithm in Table 2, based on applying the rules in (6) with a sequential schedule and all tie-breaking collected into step 8. Table 2: An algorithm for iteratively quantizing a source with erasures.... ..."

Table 2: An algorithm for iteratively quantizing a source with erasures.

in Iterative Quantization Using Codes On Graphs
by Emin Martinian, Jonathan S. Yedidia
"... In PAGE 5: ...4 Iterative Decoding/Quantization and Duality In the following we first review the intuition behind iterative erasure decoding algorithms and describe the particular decoding algorithm we consider in Table 1. Next we outline the intuition behind a similar approach for iterative quantization and precisely describe our quantization algorithm in Table 2. Finally, we show that these algorithms are duals.... In PAGE 7: ... Essentially, the requirement of consistent tie-breaking can be interpreted as a constraint on the message-passing schedule: tie-breaking information for a given tie should be propagated through the graph before other ties are broken. In order to provide a precise algorithm for the purpose of proving theorems, we consider the ERASURE-QUANTIZE algorithm in Table 2, based on applying the rules in (6) with a sequential schedule and all tie-breaking collected into step 8. Table 2: An algorithm for iteratively quantizing a source with erasures.... ..."
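The snippet above describes an erasure-quantization algorithm that is the dual of iterative erasure decoding. As a minimal sketch of the decoding side of that duality only (not the authors' quantizer), here is a peeling decoder over GF(2) on a toy parity-check matrix; the matrix and all names are my assumptions, not from the paper:

```python
import numpy as np

def peel_erasures(H, y):
    """Iterative (peeling) erasure decoding over GF(2).

    y holds known bits as 0/1 and erasures as -1. Each pass looks for
    a parity check involving exactly one erased bit and solves it;
    the loop stops when no check makes progress.
    """
    y = y.copy()
    progress = True
    while progress:
        progress = False
        for row in H:
            idx = np.flatnonzero(row)
            erased = [i for i in idx if y[i] == -1]
            if len(erased) == 1:
                known = [i for i in idx if y[i] != -1]
                y[erased[0]] = sum(int(y[i]) for i in known) % 2
                progress = True
    return y

# Toy parity checks: bit0+bit1 = 0 and bit1+bit2 = 0 (mod 2).
H = np.array([[1, 1, 0],
              [0, 1, 1]])
y = np.array([1, -1, -1])   # two erased positions
print(peel_erasures(H, y))  # → [1 1 1]
```

The quantization dual described in the snippet runs analogous message-passing rules with a sequential schedule, which is where the tie-breaking constraint the authors mention comes in.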

TABLE 1. Progressive compression results for a non-over-tessellated 10-bit quantized Bunny model. Connectivity information amounts overall to 3.6 bits per triangle; coordinate data requires 7.7 bits per triangle, or 15.4 per vertex. See also Figures 3 and 14 for images of the Bunny model.

in unknown title
by unknown authors 2000
Cited by 92