
## Quantization with an Information-Theoretic Distortion Measure (2002)

Citations: 1 (0 self)

### Citations

650 | Divergence measures based on the Shannon entropy
- Lin
- 1991
Citation Context: ... (14) but since f_k = E_k[P] we obtain D_k = H(E_k[P]) - E_k[H(P)] (15). When g(·) is actually a probability mass function, this expression is known as the generalized Jensen-Shannon divergence [7]. We conclude that minimization of the average K-L divergence or the average Jensen-Shannon divergence within a cluster are similar problems. 5 Rate-Constrained Quantization: Just as in standard vect...
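The quantity in this excerpt, entropy of the averaged distribution minus the averaged entropy, is the generalized Jensen-Shannon divergence and is easy to check numerically. A minimal pure-Python sketch (function names are illustrative, not from the paper):

```python
from math import log2

def entropy(p):
    """Shannon entropy in bits, skipping zero-probability entries."""
    return -sum(x * log2(x) for x in p if x > 0)

def js_divergence(dists, weights=None):
    """Generalized Jensen-Shannon divergence:
    H(sum_i w_i P_i) - sum_i w_i H(P_i),
    i.e. entropy of the mixture minus the mean entropy."""
    if weights is None:
        weights = [1.0 / len(dists)] * len(dists)
    mixture = [sum(w * p[j] for w, p in zip(weights, dists))
               for j in range(len(dists[0]))]
    return entropy(mixture) - sum(w * entropy(p)
                                  for w, p in zip(weights, dists))

# Identical distributions give 0; disjoint binary ones give 1 bit.
print(js_divergence([[0.5, 0.5], [0.5, 0.5]]))  # 0.0
print(js_divergence([[1.0, 0.0], [0.0, 1.0]]))  # 1.0
```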

536 | The information bottleneck method
- Tishby, Pereira, et al.
- 1999
Citation Context: ...B given X_A, which is not available, otherwise we could use it directly to encode X_B. 2.2 Information bottleneck method: In the so-called information bottleneck method from Tishby, Pereira and Bialek [12], a stochastic map of the form p(k | x_A) plays the role of the quantizer, and maximizes I(K; X_B) subject to a constraint on the value of I(K; X_A). Tishby et al. describe an algorithm for computin...

300 | Elements of Information Theory. Wiley Series in Telecommunications
- Cover, Thomas
- 1991
Citation Context: ...constraint on the value of I(K; X_A). Tishby et al. describe an algorithm for computing these maps based on classical developments in rate-distortion theory and similar to the Blahut-Arimoto algorithm [3]. In a subsequent development of the method [11], they describe a greedy heuristic algorithm for designing "hard clusters", i.e. a deterministic quantizer, minimizing the same criteria. The algorithm ...
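The Blahut-Arimoto-style algorithm mentioned here iterates self-consistent update equations for the soft map. A minimal sketch of one iteration of the standard information-bottleneck updates, not necessarily the exact algorithm of [12]; the names and toy layout are illustrative:

```python
import math

def ib_update(p_x, p_b_given_x, q_k_given_x, beta):
    """One self-consistent information-bottleneck iteration:
      q(k)     = sum_x p(x) q(k|x)
      q(b|k)   = sum_x p(x) q(k|x) p(b|x) / q(k)
      q(k|x)  propto  q(k) * exp(-beta * KL(p(b|x) || q(b|k)))
    Distributions are plain lists; returns the updated q(k|x) rows."""
    X, K, B = len(p_x), len(q_k_given_x[0]), len(p_b_given_x[0])
    q_k = [sum(p_x[x] * q_k_given_x[x][k] for x in range(X))
           for k in range(K)]
    q_b_k = [[sum(p_x[x] * q_k_given_x[x][k] * p_b_given_x[x][b]
                  for x in range(X)) / q_k[k] if q_k[k] > 0 else 1.0 / B
              for b in range(B)] for k in range(K)]

    def kl(p, q):
        return sum(pi * math.log(pi / qi)
                   for pi, qi in zip(p, q) if pi > 0)

    new = []
    for x in range(X):
        w = [q_k[k] * math.exp(-beta * kl(p_b_given_x[x], q_b_k[k]))
             for k in range(K)]
        z = sum(w)
        new.append([wi / z for wi in w])  # renormalize each row
    return new
```

Iterating `ib_update` to a fixed point (for a given beta) traces out the rate-relevance trade-off; hardening the map recovers the deterministic-quantizer variant the excerpt goes on to describe.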

186 | Agglomerative information bottleneck
- Slonim, Friedman, et al.
- 1999
Citation Context: ...that maximizes the mutual information with a constraint on the output entropy of the quantizer. This happens to be the goal of a previously published method called agglomerative information bottleneck [11]. We propose a variation in this agglomerative algorithm that takes into account the change in output entropy and compare the various methods on small and medium scale data sets. As a conclusion, we m...

170 | Entropy-constrained vector quantization
- Chou, Lookabaugh, et al.
- 1989
Citation Context: ...that minimization of the average K-L divergence or the average Jensen-Shannon divergence within a cluster are similar problems. 5 Rate-Constrained Quantization: Just as in standard vector quantization [2], we can define a rate-constrained algorithm that does not restrict the range of K but rather puts a constraint on the rate. The rate is defined differently according to the target application. In th...
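Entropy-constrained designs of the kind cited here ([2], ECVQ) replace the plain nearest-neighbor rule with a Lagrangian cost: distortion plus lambda times code length. A hedged sketch of that assignment step (the names and the per-cell distortion interface are illustrative assumptions):

```python
from math import log2

def ecvq_assign(distortions, p_k, lam):
    """ECVQ-style assignment: choose the cell k minimizing
        d(x, k) + lam * (-log2 p(k)),
    where -log2 p(k) is the ideal code length of index k.
    distortions[k] is the distortion of the input to cell k."""
    costs = [d + lam * -log2(p) for d, p in zip(distortions, p_k)]
    return min(range(len(costs)), key=costs.__getitem__)
```

With `lam = 0` this reduces to the ordinary nearest-neighbor rule; larger `lam` biases assignments toward high-probability (cheap-to-code) cells, sweeping out the rate-distortion trade-off.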

109 | A new vector quantization clustering algorithm, IEEE Trans. on Acoustics, Speech, and Signal Processing
- Equitz
- 1989
Citation Context: ...is an adaptation of the well-known generalized Lloyd algorithm, we can consider that the agglomerative information bottleneck technique [11] is an adaptation of the Pairwise Nearest Neighbor algorithm [5] for vector quantizer design. The local optimization algorithm can be implemented in practice using a training set T of outcomes of X_A and applying the nearest neighbor rule (8) for each element of th...
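The Pairwise Nearest Neighbor idea referenced here greedily merges the pair of clusters whose merge is cheapest; in the information-theoretic setting the natural merge cost is a weighted Jensen-Shannon divergence, as in agglomerative IB. A sketch under that assumption (the `(weight, distribution)` layout is illustrative):

```python
from math import log2

def _entropy(p):
    return -sum(x * log2(x) for x in p if x > 0)

def merge_cost(w1, p1, w2, p2):
    """Increase in entropy-based distortion when merging two clusters:
    (w1 + w2) * JS_{w1,w2}(p1, p2)."""
    w = w1 + w2
    mix = [(w1 * a + w2 * b) / w for a, b in zip(p1, p2)]
    return w * (_entropy(mix)
                - (w1 * _entropy(p1) + w2 * _entropy(p2)) / w)

def pnn_step(clusters):
    """One greedy step: merge the cheapest pair.
    clusters is a list of (weight, distribution) pairs."""
    i, j = min(((a, b) for a in range(len(clusters))
                for b in range(a + 1, len(clusters))),
               key=lambda ab: merge_cost(*clusters[ab[0]],
                                         *clusters[ab[1]]))
    (w1, p1), (w2, p2) = clusters[i], clusters[j]
    w = w1 + w2
    merged = (w, [(w1 * a + w2 * b) / w for a, b in zip(p1, p2)])
    return [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
```

Repeating `pnn_step` until the desired number of clusters remains gives the bottom-up counterpart of Lloyd-style local optimization.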

82 | Clustering based on conditional distribution in an auxiliary space
- Sinkkonen, Kaski
- 2001
Citation Context: ...X_A) reduces to a constraint on H(K), since H(K | X_A) = 0. 2.3 Neural computation: Let us also notice several recent contributions in the neural computation community based on the same ideas, such as [10, 4]. In [10], a soft clustering method is presented that minimizes the Kullback-Leibler divergence between estimated distributions of auxiliary data, what we call X_B, conditioned on primary data, here c...

14 | Quantum distribution of Gaussian keys using squeezed states
- Cerf, Lévy, et al.
- 2001
Citation Context: ...the lower bound on the leaked information. This amounts to maximizing H(K) - H(K | X_B) = I(K; X_B). This idea has recently been applied to a key distillation procedure using continuous quantum states [1]. 3 Algorithm: We propose a method that follows the developments provided in [13] and inspired by the Lloyd optimality conditions for vector quantizers [6]. We assume that K belongs to the set {1, 2, ...
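The identity H(K) - H(K | X_B) = I(K; X_B) used in this excerpt can be verified directly from a joint distribution. A small sketch (the toy pmf is purely illustrative):

```python
from math import log2

def mutual_information(joint):
    """I(K; X_B) = H(K) - H(K | X_B), from a joint pmf joint[k][b]."""
    p_k = [sum(row) for row in joint]
    p_b = [sum(col) for col in zip(*joint)]
    h_k = -sum(p * log2(p) for p in p_k if p > 0)
    # H(K | X_B) = -sum_{k,b} p(k,b) log2 p(k|b), with p(k|b) = p(k,b)/p(b)
    h_k_given_b = -sum(p * log2(p / p_b[b])
                       for row in joint
                       for b, p in enumerate(row) if p > 0)
    return h_k - h_k_given_b

print(mutual_information([[0.25, 0.25], [0.25, 0.25]]))  # 0.0 (independent)
print(mutual_information([[0.5, 0.0], [0.0, 0.5]]))      # 1.0 (deterministic)
```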

13 | Minimum conditional entropy context quantization, in Proc.
- Wu, Chou
Citation Context: ...e on quantization for maximal mutual information. This idea has actually emerged recently in rather different contexts we now detail. 2.1 Context quantization: In a recent contribution from Wu and Chou [13], a maximal mutual information quantizer is utilized to classify context vectors in data compression applications. We wish to predict the distribution of a variable X_B given a context X_A. Since the r...

7 | Voronoi diagram in statistical parametric space by Kullback-Leibler divergence
- Onishi, Imai
- 1997
Citation Context: ...using higher order statistics. All these ideas straightforwardly apply to our method. Properties of the Voronoi diagram induced by the K-L divergence in a Gaussian parametric space are described in [9]. 9 Conclusion: This optimization problem is ubiquitous in pattern classification and compression. It has already been studied before, but we proposed to unify different existing views through a vector q...

4 | Analysis of neural coding using quantization with an information-based distortion measure
- Dimitrov, Miller, et al.
- 2003
Citation Context: ...X_A) reduces to a constraint on H(K), since H(K | X_A) = 0. 2.3 Neural computation: Let us also notice several recent contributions in the neural computation community based on the same ideas, such as [10, 4]. In [10], a soft clustering method is presented that minimizes the Kullback-Leibler divergence between estimated distributions of auxiliary data, what we call X_B, conditioned on primary data, here c...

3 | Edgeworth approximation of the Kullback-Leibler distance towards problems in image analysis
- Lin, Saito, et al.
- 1999
Citation Context: ...multivariate Gaussian with estimated covariance matrices. Simple formulas for the K-L divergence are still applicable. Even finer approximations have been recently proposed by Lin, Saito and Levine in [8], using higher order statistics. All these ideas straightforwardly apply to our method. Properties of the Voronoi diagram induced by the K-L divergence in a Gaussian parametric space are described in ...
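The "simple formulas" for the K-L divergence between Gaussians referred to here are closed-form. A sketch of the diagonal-covariance special case (the interface is an illustrative assumption):

```python
from math import log

def kl_gaussian_diag(mu0, var0, mu1, var1):
    """Closed-form KL(N0 || N1) for Gaussians with diagonal covariances:
        0.5 * sum_i [ v0_i/v1_i + (m1_i - m0_i)^2 / v1_i - 1 + ln(v1_i/v0_i) ]
    (a per-dimension specialization of the general matrix formula)."""
    return 0.5 * sum(v0 / v1 + (m1 - m0) ** 2 / v1 - 1 + log(v1 / v0)
                     for m0, v0, m1, v1 in zip(mu0, var0, mu1, var1))

print(kl_gaussian_diag([0.0], [1.0], [0.0], [1.0]))  # 0.0
print(kl_gaussian_diag([0.0], [1.0], [1.0], [1.0]))  # 0.5
```

Note the asymmetry: KL(N0 || N1) differs from KL(N1 || N0) in general, which is what makes the K-L-induced Voronoi diagram of [9] different from the Euclidean one.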