
## Complementary projection hashing (2013)

Venue: Proceedings of the IEEE International Conference on Computer Vision

Citations: 3 (0 self)

### Citations

1855 | Introduction to Information Retrieval
- Manning, Raghavan, et al.
- 2008
Citation Context ... of the query. Following [25, 16, 9], we used three criteria to evaluate different aspects of hashing algorithms as follows: • Mean Average Precision (MAP): This is a classical metric in IR community [6]. MAP approximates the area under precision-recall curve [3] and evaluates the overall performance of a hashing algorithm. This metric has been widely used to evaluate the performance of various hashi...
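The MAP criterion quoted in this context can be sketched as follows. This is a generic illustration of mean average precision over binary relevance lists (the example lists are hypothetical), not the evaluation code used in the paper:

```python
def average_precision(ranked_relevance):
    """AP for one query: mean of precision@k over the ranks k of relevant items."""
    hits, precisions = 0, []
    for k, rel in enumerate(ranked_relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / hits if hits else 0.0

def mean_average_precision(relevance_per_query):
    """MAP: mean of the per-query average precisions."""
    return sum(average_precision(r) for r in relevance_per_query) / len(relevance_per_query)

# Relevant items at ranks 1 and 3 -> AP = (1/1 + 2/3) / 2 = 5/6
ap = average_precision([1, 0, 1])
```

Because MAP integrates precision over every recall level, it summarizes the whole precision-recall curve in one number, which is why the paper uses it for overall comparison.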

1524 | Multidimensional binary search trees used for associative searching
- Bentley
- 1975
Citation Context ...ighbors (NN) search is a fundamental problem and has found applications in many computer vision tasks [23, 10, 29]. A number of efficient algorithms, based on pre-built index structures (e.g. KD-tree [4] and R-tree [2]), have been proposed for nearest neighbors search. Unfortunately, these approaches perform worse than a linear scan when the dimensionality of the space is high [5], which is often the ...
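The KD-tree index mentioned in this context can be sketched minimally as below. This is a textbook median-split construction with branch-and-bound search, not the cited implementation; the sample points are arbitrary:

```python
import math

def dist(a, b):
    """Euclidean distance between two points."""
    return math.dist(a, b)

def build_kdtree(points, depth=0):
    """Recursively build a KD-tree, splitting at the median along cycling axes."""
    if not points:
        return None
    axis = depth % len(points[0])
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return {"point": points[mid], "axis": axis,
            "left": build_kdtree(points[:mid], depth + 1),
            "right": build_kdtree(points[mid + 1:], depth + 1)}

def nearest(node, query, best=None):
    """Branch-and-bound nearest-neighbor search in the KD-tree."""
    if node is None:
        return best
    if best is None or dist(query, node["point"]) < dist(query, best):
        best = node["point"]
    diff = query[node["axis"]] - node["point"][node["axis"]]
    near, far = (node["left"], node["right"]) if diff < 0 else (node["right"], node["left"])
    best = nearest(near, query, best)
    if abs(diff) < dist(query, best):  # the far side may still hold a closer point
        best = nearest(far, query, best)
    return best

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
```

The pruning test `abs(diff) < dist(query, best)` is exactly what degrades in high dimensions: almost every splitting plane ends up closer than the current best, so both subtrees get visited and the search approaches a linear scan, as the quoted passage notes.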

539 | Introductory lectures on convex optimization: a basic course
- Nesterov
- 2003
Citation Context ... (1ε − q2 ◦ q2), q4 = ϕ(1ε − q1 ◦ q2). The symbol ◦ represents the Hadamard product (i.e., element-wise product). In the gradient descent procedure, we enforce ‖pk‖2 = 1 and apply Nesterov's gradient method [19] for fast convergence. The algorithm procedure of CPH is summarized in Algorithm 1. 3.6. Computational Complexity Analysis. Given n data points with dimensionality d, we select m (m ≪ n) samples...
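The constrained update quoted here (Nesterov-accelerated gradient descent with the projection vector renormalized to ‖pk‖2 = 1) can be sketched roughly as follows. The quadratic toy objective, learning rate, and momentum value are illustrative assumptions, not the paper's choices:

```python
import numpy as np

def nesterov_unit_sphere(grad, p0, lr=0.1, momentum=0.9, steps=100):
    """Nesterov accelerated gradient descent, renormalizing so ||p||_2 = 1."""
    p = p0 / np.linalg.norm(p0)
    v = np.zeros_like(p)
    for _ in range(steps):
        lookahead = p + momentum * v          # Nesterov: gradient at the lookahead point
        v = momentum * v - lr * grad(lookahead)
        p = p + v
        p = p / np.linalg.norm(p)             # enforce the unit-norm constraint
    return p

# Toy objective: minimize p^T A p on the unit sphere (a Rayleigh quotient).
A = np.diag([3.0, 1.0, 2.0])
grad = lambda p: 2 * A @ p
p = nesterov_unit_sphere(grad, np.array([1.0, 1.0, 1.0]))
```

Renormalizing after each step is one simple way to keep the iterate feasible; the paper's actual per-iteration details are in its Algorithm 1.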

457 | Near-optimal hashing algorithms for approximate nearest neighbor in high dimensions.
- Andoni, Indyk
- 2008
Citation Context ...s often the case of computer vision applications. Given the intrinsic difficulty of exact nearest neighbors search, many hashing algorithms are proposed for Approximate Nearest Neighbors (ANN) search [1, 25, 27, 16, 7, 9]. Figure 1. Illustration for the first motivation. (a) The hyperplane a crosses the sparse region and the neighbors are quantized into the same bucket; (b) The hyperplane b crosses the den...

408 | When is ‘nearest neighbor’ meaningful?
- Beyer, Goldstein, et al.
- 1999
Citation Context ...ctures (e.g. KD-tree [4] and R-tree [2]), have been proposed for nearest neighbors search. Unfortunately, these approaches perform worse than a linear scan when the dimensionality of the space is high [5], which is often the case of computer vision applications. Given the intrinsic difficulty of exact nearest neighbors search, many hashing algorithms are proposed for Approximate Nearest Neighbors (ANN...

376 | Pattern Classification, 2nd edition
- Duda, Hart, et al.
- 2001
Citation Context ...ane f(x) = wTx − b crossing the sparse region, the number of data points in the boundary region of this hyperplane should be small. It is easy to check that the distance of a point xi to the hyperplane [21] is di = |wTxi − b| / ‖w‖. Without loss of generality, we assume ‖w‖ = 1. Then di = |wTxi − b|. Given the boundary parameter ε > 0, we can find the hyperplane which crosses the sparse region by solving t...
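The point-to-hyperplane distance used in this context is straightforward to sketch; the particular `w` and `b` below are hypothetical values chosen so the arithmetic is easy to check:

```python
import numpy as np

def hyperplane_distance(x, w, b):
    """Distance from point x to the hyperplane f(x) = w^T x - b = 0."""
    return abs(w @ x - b) / np.linalg.norm(w)

w = np.array([3.0, 4.0])   # ||w|| = 5
b = 5.0
d = hyperplane_distance(np.array([2.0, 1.0]), w, b)  # |3*2 + 4*1 - 5| / 5 = 1.0
```

With ‖w‖ normalized to 1, as the excerpt assumes, the division drops out and the distance is just |wTxi − b|.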

311 | Shape indexing using approximate nearest-neighbour search in high-dimensional spaces.
- Beis, Lowe
- 1997
Citation Context ...t hashing methods demonstrate the effectiveness of the proposed method. 1. Introduction. Nearest Neighbors (NN) search is a fundamental problem and has found applications in many computer vision tasks [23, 10, 29]. A number of efficient algorithms, based on pre-built index structures (e.g. KD-tree [4] and R-tree [2]), have been proposed for nearest neighbors search. Unfortunately, these approaches perform worse...

284 | Spectral hashing
- Weiss, Torralba, et al.
- 2009
Citation Context ...s often the case of computer vision applications. Given the intrinsic difficulty of exact nearest neighbors search, many hashing algorithms are proposed for Approximate Nearest Neighbors (ANN) search [1, 25, 27, 16, 7, 9]. Figure 1. Illustration for the first motivation. (a) The hyperplane a crosses the sparse region and the neighbors are quantized into the same bucket; (b) The hyperplane b crosses the den...

157 | Iterative quantization: A procrustean approach to learning binary codes for large-scale image retrieval.
- Gong, Lazebnik, et al.
- 2012
Citation Context ...s often the case of computer vision applications. Given the intrinsic difficulty of exact nearest neighbors search, many hashing algorithms are proposed for Approximate Nearest Neighbors (ANN) search [1, 25, 27, 16, 7, 9]. Figure 1. Illustration for the first motivation. (a) The hyperplane a crosses the sparse region and the neighbors are quantized into the same bucket; (b) The hyperplane b crosses the den...

117 | Multiprobe lsh: Efficient indexing for high-dimensional similarity search.
- Lv, Josephson, et al.
- 2007
Citation Context ...ets. Apparently, the hyperplane a is more suitable as a hashing function. However, many popular hashing algorithms (e.g., Locality Sensitive Hashing (LSH) [1], Entropy based LSH [22], Multi-Probe LSH [11, 17], Kernelized Locality Sensitive Hashing (KLSH) [13]) are based on the random projection. These methods generate the hash functions randomly and fail to consider this requirement. In order to satisfy t...
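The random-projection hashing this context attributes to the LSH family can be sketched as follows. The hyperplanes are drawn at random with no regard to the data distribution, which is exactly the limitation the paper criticizes; the data and bit count below are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_projection_hash(X, num_bits):
    """Each bit is the sign of a random hyperplane projection, chosen data-independently."""
    d = X.shape[1]
    W = rng.standard_normal((d, num_bits))   # random hyperplane normals
    return (X @ W >= 0).astype(np.uint8)     # one binary code per point

X = rng.standard_normal((100, 16))
codes = random_projection_hash(X, num_bits=8)
```

Nothing in this construction prevents a random hyperplane from cutting through a dense cluster and scattering true neighbors into different buckets, which motivates the data-aware hash functions discussed in the paper.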

108 | Hashing with graphs.
- Liu, Wang, et al.
- 2011
Citation Context ...se. Following [9, 12], a returned point is considered to be a true neighbor if it lies in the 1000 closest neighbors (measured by the Euclidean distance in the original space) of the query. Following [25, 16, 9], we used three criteria to evaluate different aspects of hashing algorithms as follows: • Mean Average Precision (MAP): This is a classical metric in IR community [6]. MAP approximates the area under...

100 | User performance versus precision measures for simple search tasks
- Turpin, Scholer
- 2006
Citation Context ... to evaluate different aspects of hashing algorithms as follows: • Mean Average Precision (MAP): This is a classical metric in IR community [6]. MAP approximates the area under precision-recall curve [3] and evaluates the overall performance of a hashing algorithm. This metric has been widely used to evaluate the performance of various hashing algorithms [25, 24, 7, 16, 9, 15]. 3http://www.cs.utoront...

94 | Annosearch: Image auto-annotation by search.
- Wang, Zhang, et al.
- 2006
Citation Context ...irical studies [1] showed that the LSH is significantly more efficient than the methods based on hierarchical tree decomposition. It has been successfully used in various computer vision applications [26, 25]. There are many extensions for LSH [11, 22, 17, 15]. Entropy based LSH [22] and Multi-Probe LSH [11, 17] are proposed to reduce the space requirement in LSH but need much longer time to deal with the...

84 | Supervised hashing with kernels.
- Liu, Wang, et al.
- 2012
Citation Context ...gnificantly more efficient than the methods based on hierarchical tree decomposition. It has been successfully used in various computer vision applications [26, 25]. There are many extensions for LSH [11, 22, 17, 15]. Entropy based LSH [22] and Multi-Probe LSH [11, 17] are proposed to reduce the space requirement in LSH but need much longer time to deal with the query. Kernelized Locality Sensitive Hashing (KLSH)...

81 | Minimal loss hashing for compact binary codes.
- Norouzi, Fleet
- 2011
Citation Context ...herical Hashing (SPH) [9] which learns hypersphere-based hash functions. There are also many efforts on leveraging the label information into hash function learning, which leads to supervised hashing [20, 15] and semi-supervised hashing [25, 18]. There are some key points that indicate the differences between our method and the previous methods. In [7, 25, 24], the orthogonality constraint of projections has b...

80 | The priority r-tree: a practically efficient and worst-case optimal r-tree
- Arge
- 2004

65 | Sequential projection learning for hashing with compact codes.
- Wang, Kumar, et al.
- 2010
Citation Context ...on the random projection. These methods generate the hash functions randomly and fail to consider this requirement. In order to satisfy the second requirement, many existing hashing algorithms (e.g., [7, 25, 24]) require that the data points are evenly separated by each hash function (hyperplane). However, this does not guarantee that the data points are evenly distributed among all the hypercubes generated ...

60 | Fast similarity search for learned metrics
- Kulis, Jain, et al.
- 2009

51 | Entropy based nearest neighbor search in high dimensions. In SODA,
- Panigrahy
- 2006
Citation Context ...nto the different buckets. Apparently, the hyperplane a is more suitable as a hashing function. However, many popular hashing algorithms (e.g., Locality Sensitive Hashing (LSH) [1], Entropy based LSH [22], Multi-Probe LSH [11, 17], Kernelized Locality Sensitive Hashing (KLSH) [13]) are based on the random projection. These methods generate the hash functions randomly and fail to consider this requirem...

50 | Weakly-supervised hashing in kernel space.
- Mu, Shen, et al.
- 2010
Citation Context ...s hypersphere-based hash functions. There are also many efforts on leveraging the label information into hash function learning, which leads to supervised hashing [20, 15] and semi-supervised hashing [25, 18]. There are some key points that indicate the differences between our method and the previous methods. In [7, 25, 24], the orthogonality constraint of projections has been proposed. For obtaining more bala...

45 | Kernelized locality-sensitive hashing
- Kulis, Grauman
- 2012
Citation Context ... hashing function. However, many popular hashing algorithms (e.g., Locality Sensitive Hashing (LSH) [1], Entropy based LSH [22], Multi-Probe LSH [11, 17], Kernelized Locality Sensitive Hashing (KLSH) [13]) are based on the random projection. These methods generate the hash functions randomly and fail to consider this requirement. In order to satisfy the second requirement, many existing hashing algori...

40 | A posteriori multi-probe locality sensitive hashing
- Joly, Buisson
- 2008
Citation Context ...ets. Apparently, the hyperplane a is more suitable as a hashing function. However, many popular hashing algorithms (e.g., Locality Sensitive Hashing (LSH) [1], Entropy based LSH [22], Multi-Probe LSH [11, 17], Kernelized Locality Sensitive Hashing (KLSH) [13]) are based on the random projection. These methods generate the hash functions randomly and fail to consider this requirement. In order to satisfy t...

34 | Spherical hashing.
- Heo, Lee, et al.
- 2012

31 | Complementary hashing for approximate nearest neighbor search.
- Xu, Wang, et al.
- 2011
Citation Context ...t hashing methods demonstrate the effectiveness of the proposed method. 1. Introduction. Nearest Neighbors (NN) search is a fundamental problem and has found applications in many computer vision tasks [23, 10, 29]. A number of efficient algorithms, based on pre-built index structures (e.g. KD-tree [4] and R-tree [2]), have been proposed for nearest neighbors search. Unfortunately, these approaches perform worse...

29 | Random maximum margin hashing.
- Joly, Buisson
- 2011
Citation Context ...et is publicly available 4 and has been used in [25, 9, 15]. • SIFT-1M: It contains one million SIFT descriptors and each descriptor is represented by a 128-dim vector. This data set has been used in [25, 24, 12] and is provided by those authors. As suggested in [7], all the data is centralized to produce a better result. For each data set, we randomly select 2k data points as the queries and use the remainin...

27 | Semi-supervised hashing for large scale search.
- Wang, Kumar, et al.
- 2012
Citation Context ...se. Following [9, 12], a returned point is considered to be a true neighbor if it lies in the 1000 closest neighbors (measured by the Euclidean distance in the original space) of the query. Following [25, 16, 9], we used three criteria to evaluate different aspects of hashing algorithms as follows: • Mean Average Precision (MAP): This is a classical metric in IR community [6]. MAP approximates the area under...

26 | Compact hashing with joint optimization of search accuracy and time
- He, Chang, et al.
- 2011
Citation Context ...kets). Thus requiring a one-bit hashing function to evenly separate the data is not enough. However, learning c hyperplanes which distribute all the data points into 2^c hypercubes is generally NP-hard [8]. We use a pair-wise hash buckets balance condition [9] to get a reasonable approximation. The pair-wise hash buckets balance requires that every two hyperplanes split the whole space into four region...
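The pair-wise balance condition quoted here (every two hyperplanes should split the data into four roughly equal regions) is easy to check empirically. The sketch below uses random data and random hyperplanes purely for illustration; it is the balance check, not the paper's learning procedure:

```python
import numpy as np

def pairwise_bucket_counts(X, w1, b1, w2, b2):
    """Count points in the four regions induced by the sign patterns of two hyperplanes."""
    s1 = (X @ w1 - b1 >= 0).astype(int)
    s2 = (X @ w2 - b2 >= 0).astype(int)
    counts = np.zeros((2, 2), dtype=int)
    for a, b in zip(s1, s2):
        counts[a, b] += 1
    return counts

rng = np.random.default_rng(1)
X = rng.standard_normal((4000, 8))
counts = pairwise_bucket_counts(X, rng.standard_normal(8), 0.0,
                                rng.standard_normal(8), 0.0)
# Pair-wise balance would ask each of the four counts to be near 4000 / 4 = 1000.
```

Enforcing this over every pair of the c hyperplanes is a tractable surrogate for the NP-hard goal of balancing all 2^c hypercubes at once.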

6 | Semi-supervised nonlinear hashing using bootstrap sequential projection learning.
- Wu, Zhu, et al.
- 2012
Citation Context ...eature embedding for the kernel is unknown. All these methods are fundamentally based on the random projection and are not aware of the data distribution. Recently, many learning-based hashing methods [27, 16, 7, 9, 28, 14] are proposed to make use of the data distribution. Many of them [27, 24, 16] exploit the spectral properties of the data affinity (i.e., item-item similarity) matrix for binary coding. The spectral a...
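The spectral approach this context describes (binary codes from the spectral properties of the item-item affinity matrix) can be sketched roughly as below. This is a bare-bones illustration of the idea — Gaussian affinity, graph Laplacian, thresholded eigenvectors — not any specific cited method, and the data and bandwidth are placeholders:

```python
import numpy as np

def spectral_binary_codes(X, num_bits, sigma=1.0):
    """Binary codes from the bottom nontrivial eigenvectors of the graph Laplacian
    of a Gaussian affinity (item-item similarity) matrix."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))            # affinity matrix
    D = np.diag(W.sum(1))
    L = D - W                                     # graph Laplacian
    vals, vecs = np.linalg.eigh(L)                # eigenvalues in ascending order
    Y = vecs[:, 1:num_bits + 1]                   # skip the trivial constant eigenvector
    return (Y >= 0).astype(np.uint8)              # threshold at zero to binarize

rng = np.random.default_rng(2)
X = rng.standard_normal((50, 4))
codes = spectral_binary_codes(X, num_bits=4)
```

Note this sketch only produces codes for the points it was given; extending the spectral embedding to unseen queries is a separate problem that the learning-based methods cited here address in different ways.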

5 | Compressed hashing
- Lin, Jin, et al.
- 2013
Citation Context ...eature embedding for the kernel is unknown. All these methods are fundamentally based on the random projection and are not aware of the data distribution. Recently, many learning-based hashing methods [27, 16, 7, 9, 28, 14] are proposed to make use of the data distribution. Many of them [27, 24, 16] exploit the spectral properties of the data affinity (i.e., item-item similarity) matrix for binary coding. The spectral a...