Composite Quantization for Approximate Nearest Neighbor Search (2014)

by T. Zhang, C. Du, J. Wang
Venue: ICML (2)

Results 1 - 5 of 5

Hashing for Similarity Search: A Survey

by Jingdong Wang, Heng Tao Shen, Jingkuan Song, Jianqiu Ji, 2014
"... Similarity search (nearest neighbor search) is a problem of pursuing the data items whose distances to a query item are the smallest from a large database. Various methods have been developed to address this problem, and recently a lot of efforts have been devoted to approximate search. In this pap ..."
Abstract - Cited by 2 (2 self) - Add to MetaCart
Similarity search (nearest neighbor search) is the problem of finding the data items in a large database whose distances to a query item are smallest. Various methods have been developed to address this problem, and recently a lot of effort has been devoted to approximate search. In this paper, we present a survey of one of the main solutions, hashing, which has been widely studied since the pioneering work on locality-sensitive hashing. We divide hashing algorithms into two main categories: locality-sensitive hashing, which designs hash functions without exploring the data distribution, and learning to hash, which learns hash functions according to the data distribution. We review them from various aspects, including hash function design, distance measure, and the search scheme in the hash coding space.
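
To make the first category concrete, here is a minimal sketch of random-hyperplane LSH (the classic data-independent scheme for cosine similarity) in Python; the dimension and code length below are illustrative choices, not values from the survey:

    import numpy as np

    def make_lsh_hash(dim, n_bits, seed=0):
        """Data-independent hash function: one random hyperplane per bit."""
        rng = np.random.default_rng(seed)
        planes = rng.standard_normal((n_bits, dim))
        def h(x):
            # Bit b is 1 iff x falls on the positive side of hyperplane b.
            return (planes @ x > 0).astype(np.uint8)
        return h

    h = make_lsh_hash(dim=128, n_bits=32)
    x = np.random.randn(128)
    print(h(x))  # 32-bit binary code; Hamming distance tracks angular distance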

Hashing for similarity search: A survey

by Jingdong Wang, Heng Tao Shen, Jingkuan Song, Jianqiu Ji - CoRR
"... ar ..."
Abstract - Cited by 1 (1 self) - Add to MetaCart
Abstract not found
(Show Context)

Citation Context

...g [60] E LI HD EV; harmonious hashing [138] E LI HD QE + EV; Angular quantization [29] CS LI NHA MCS; product quantization [49] E QU (A)ED QE; Cartesian k-means [102] E QU (A)ED QE; composite quantization [147] E QU (A)ED QE

Algorithm 1 Distribute M bits into the principal directions
1. Initialization: ei ← log2 σi, mi ← 0.
2. for j = 1 to b do
3.   i ← argmax ei.
4.   mi ← mi + 1.
5.   ei ← ei − 1.
6. end for
cen...
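
The Algorithm 1 fragment above is a greedy bit-allocation rule: each of the b bits goes to the principal direction with the largest remaining score ei (initialized to log2 σi), and assigning a bit decrements that score, i.e. halves the direction's effective standard deviation. A minimal runnable sketch in Python, with variable names following the snippet (the example σ values are made up):

    import numpy as np

    def distribute_bits(sigma, b):
        """Greedy bit allocation over principal directions (Algorithm 1)."""
        e = np.log2(sigma)                  # e_i <- log2(sigma_i)
        m = np.zeros(len(sigma), dtype=int)
        for _ in range(b):                  # for j = 1 to b
            i = int(np.argmax(e))           # direction with largest score
            m[i] += 1                       # m_i <- m_i + 1
            e[i] -= 1                       # e_i <- e_i - 1
        return m

    # Example: standard deviations 8, 4, 2, 1 along four principal directions.
    print(distribute_bits(np.array([8.0, 4.0, 2.0, 1.0]), b=6))  # -> [3 2 1 0]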

Supervised Quantization for Similarity Search

by Xiaojuan Wang, Ting Zhang, Guo-Jun Qi, Jinhui Tang, Jingdong Wang
Abstract
In this paper, we address the problem of searching for semantically similar images in a large database. We present a compact coding approach, supervised quantization. Our approach simultaneously learns a feature selection that linearly transforms the database points into a low-dimensional discriminative subspace, and quantizes the data points in the transformed space. The optimization criterion is that the quantized points not only approximate the transformed points accurately, but are also semantically separable: the points belonging to a class lie in a cluster that does not overlap with the clusters corresponding to other classes, which is formulated as a classification problem. Experiments on several standard datasets show the superiority of our approach over state-of-the-art supervised hashing and unsupervised quantization algorithms.

Citation Context

...ich the distance is efficiently computed. The objective is that the similarity computed in the coding space is well aligned with the similarity computed from the Euclidean distance in the input space, or with a given semantic similarity (e.g., data items from the same class should be similar). The solution to the former kind of similarity search is unsupervised compact coding, such as hashing [1,5–7,9–12,16,20,22,28,29,34,36–38] and quantization [8, 26, 39]. The solution to the latter problem is supervised compact coding, which is our interest in this paper. Almost all research efforts in supervised compact coding focus on developing hashing algorithms that preserve semantic similarities, such as LDA hashing [30], minimal loss hashing [24], supervised hashing with kernels [21], FastHash [18], triplet loss hashing [25], and supervised discrete hashing [27]. In contrast, quantization has received less study, even though it already shows superior performance for Euclidean-distance and cosine-based similarity search. This paper makes a study on the ... (Footnote: This work was partly done when Xiaojuan Wang and Ting Zhang were interns at MSR. They contributed equally to this work.)

Sparse Composite Quantization

by Ting Zhang, Guo-Jun Qi, Jinhui Tang, Jingdong Wang
Abstract
Abstract not found

Citation Context

...each partition, the extensions with optimized space partitions, Cartesian k-means [22] and optimized product quantization [6], have been proposed. The recently proposed composite quantization approach [33] introduces a new framework generalizing those algorithms. The acceleration obtained by these ANN algorithms stems from the ability of efficiently computing the distance between a query and a database...
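
The efficient distance computation mentioned here works through per-dictionary lookup tables: because composite quantization constrains the summed cross-dictionary inner products to a constant, ranking needs only the per-codeword terms. A minimal sketch under that assumption, with random (untrained) codebooks purely for illustration; it does not reproduce the constrained codebook training described in [33]:

    import numpy as np

    M, K, D = 4, 256, 128            # dictionaries, codewords each, dimension
    rng = np.random.default_rng(0)
    codebooks = rng.standard_normal((M, K, D))   # untrained, illustrative only
    codes = rng.integers(0, K, size=(10000, M))  # one index per dictionary

    def rank_by_distance(q, codebooks, codes):
        """Rank items by the table-based part of ||q - x_hat||^2."""
        # Per-dictionary tables: -2<q, c> + ||c||^2 for every codeword.
        tables = -2.0 * (codebooks @ q) + (codebooks ** 2).sum(axis=2)  # (M, K)
        # Sum the M looked-up entries per item; ||q||^2 and the constant
        # cross-dictionary term do not affect the ordering, so are omitted.
        scores = tables[np.arange(M)[:, None], codes.T].sum(axis=0)
        return np.argsort(scores)

    q = rng.standard_normal(D)
    print(rank_by_distance(q, codebooks, codes)[:5])  # five nearest (approx.)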

Tree Quantization for Large-Scale Similarity Search and Classification

by Artem Babenko, Victor Lempitsky
"... We propose a new vector encoding scheme (tree quan-tization) that obtains lossy compact codes for high-dimensional vectors via tree-based dynamic programming. Similarly to several previous schemes such as product quantization, these codes correspond to codeword num-bers within multiple codebooks. We ..."
Abstract - Add to MetaCart
We propose a new vector encoding scheme (tree quantization) that obtains lossy compact codes for high-dimensional vectors via tree-based dynamic programming. Similarly to several previous schemes such as product quantization, these codes correspond to codeword numbers within multiple codebooks. We propose an integer programming-based optimization that jointly recovers the coding tree structure and the codebooks by minimizing the compression error on a training dataset. In the experiments with diverse visual descriptors (SIFT, neural codes, Fisher vectors), tree quantization is shown to combine fast encoding and state-of-the-art accuracy in terms of the compression error, the retrieval performance, and the image classification error.
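
For contrast with the product-quantization baseline mentioned in the abstract, a minimal PQ encode/decode sketch: each vector is split into M sub-vectors, and each sub-vector is replaced by the index of its nearest codeword in a per-subspace codebook. The random codebooks here are stand-ins for trained ones (normally learned by k-means per subspace):

    import numpy as np

    def pq_encode(X, codebooks):
        """codebooks: (M, K, D//M). Returns one codeword index per subspace."""
        M, K, sub = codebooks.shape
        parts = X.reshape(len(X), M, sub)             # split into M sub-vectors
        # Nearest codeword per subspace (squared Euclidean distance).
        dists = ((parts[:, :, None, :] - codebooks[None]) ** 2).sum(-1)
        return dists.argmin(-1)                       # (N, M) integer codes

    def pq_decode(codes, codebooks):
        """Reconstruct by concatenating the selected codewords."""
        M = codebooks.shape[0]
        return np.concatenate(
            [codebooks[m, codes[:, m]] for m in range(M)], axis=1)

    rng = np.random.default_rng(0)
    codebooks = rng.standard_normal((8, 256, 16))     # untrained, illustrative
    X = rng.standard_normal((100, 128))
    codes = pq_encode(X, codebooks)                   # 8 bytes per vector
    X_hat = pq_decode(codes, codebooks)
    print(codes.shape, float(((X - X_hat) ** 2).mean()))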

Citation Context

... AQ, the “optimized” versions of PQ and TQ, which additionally estimate a global rotation of the data that optimizes the coding accuracy [15, 8], and with the recent Composite Quantization (CQ) method [19], which approximates a vector as a sum of several codewords with fixed pairwise scalar products. Overall, the global optimality of the TQ encoding (given the coding tree) as well as the global optimali...
