Results 1–10 of 10
High Dimensional Similarity Search With Space Filling Curves
In Proceedings of the 17th International Conference on Data Engineering, 2000
Cited by 15 (1 self)
Abstract: We present a new approach for approximate nearest neighbor queries for sets of high-dimensional points under any L_t metric, t = 1, 2, 3, ... The proposed algorithm is efficient and simple to implement. The algorithm uses multiple shifted copies of the data points and stores them in up to (d + 1) B-trees, where d is the dimensionality of the data, sorted according to their position along a space-filling curve. This is done in a way that allows us to guarantee that a neighbor within an O(d^(1+1/t)) factor of the exact nearest can be returned with at most (d + 1) log_p n page accesses, where p is the branching factor of the B-trees. In practice, for real data sets, our approximate technique finds the exact nearest neighbor between 87% and 99% of the time, and a point no farther than the third nearest neighbor between 98% and 100% of the time. Our solution is dynamic, allowing insertion or deletion of points in O(d log_p n) page accesses, and generalizes easily to find approximate k-nearest neighbors.
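The shifted-copies idea in this abstract can be sketched in a few lines. The code below is an illustrative construction, not the authors' implementation: the names `z_order_key`, `build_shifted_lists`, and `approx_nn` are hypothetical, plain sorted lists stand in for the B-trees, and only the two immediate curve neighbors of the query are inspected per shifted copy.

```python
import bisect

def z_order_key(point, bits=16):
    """Interleave the bits of the coordinates to get the point's
    position along the Lebesgue (Z-order) space-filling curve."""
    key = 0
    d = len(point)
    for b in range(bits):
        for i, coord in enumerate(point):
            key |= ((coord >> b) & 1) << (b * d + i)
    return key

def build_shifted_lists(points, shifts):
    """One sorted (key, point) list per shift vector, standing in
    for the (d + 1) B-trees of the paper."""
    lists = []
    for shift in shifts:
        keyed = sorted(
            (z_order_key(tuple(c + s for c, s in zip(p, shift))), p)
            for p in points
        )
        lists.append(keyed)
    return lists

def approx_nn(query, lists, shifts):
    """Inspect the curve neighbors of the query in every shifted list
    and return the closest candidate (squared L2 used to compare)."""
    best, best_d = None, float('inf')
    for keyed, shift in zip(lists, shifts):
        keys = [k for k, _ in keyed]
        qk = z_order_key(tuple(c + s for c, s in zip(query, shift)))
        i = bisect.bisect_left(keys, qk)
        for j in (i - 1, i):
            if 0 <= j < len(keyed):
                p = keyed[j][1]
                d = sum((a - b) ** 2 for a, b in zip(query, p))
                if d < best_d:
                    best, best_d = p, d
    return best
```

For instance, with points (1, 2), (3, 3), (8, 9) and shifts (0, 0), (5, 5), (11, 11), querying (2, 2) returns (1, 2), which is also the exact nearest neighbor here; the shifts make it less likely that a true neighbor is split away from the query across a curve discontinuity.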
Copyright Protection Protocols Based On Asymmetric Watermarking: The Ticket Concept
In Communications and Multimedia Security Issues of the New Century, 2001
Cited by 11 (3 self)
Abstract: Traditional watermarking systems require the complete disclosure of a watermarking key in the watermark verification process. In most systems, an attacker is able to remove the watermark completely once the key is known. We propose the use of public-key watermarking systems, systems in which a watermark is inserted using a private key but checked with a public key. A construction of such a scheme is given, which uses scrambled documents and so-called ownership tickets in the watermark verification process.
Security Analysis of Public-Key Watermarking Schemes
In Proceedings of the SPIE, Mathematics of Data/Image Coding, Compression, and Encryption IV, 2001
Cited by 8 (0 self)
Abstract: Traditional watermarking systems require the complete disclosure of the watermarking key in the watermark verification process. In most systems an attacker is able to remove the watermark completely once the key is known, thus subverting the intention of copyright protection. To cope with this problem, public-key watermarking schemes were proposed that allow asymmetric watermark detection: while a private key is used to insert watermarks in digital objects, a separate, public key is used to verify the marks' presence. We describe two public-key watermarking schemes which are similar in spirit to zero-knowledge proofs. The key idea of one system is to verify a watermark in a blinded version of the document, where the scrambling is determined by the private key. A probabilistic protocol is constructed that allows public watermark detection with probability 1/2; by iteration, the verifier can reach any desired degree of certainty that the watermark is present. The second system is based on watermark attacks, using controlled counterfeiting to conceal real watermark data safely amid data useless to an attacker.
On the Quality of Partitions Based on Space-Filling Curves
2002
Cited by 6 (1 self)
Abstract: This paper presents bounds on the quality of partitions induced by space-filling curves. We compare the surface that surrounds an arbitrary index range with the optimal partition in the grid, i.e. the square. It is shown that partitions induced by Lebesgue and Hilbert curves behave about 1.85 times worse with respect to the length of the surface. The Lebesgue indexing gives better results than the Hilbert indexing in the worst-case analysis. Furthermore, the surface of partitions based on the Lebesgue indexing is at most 3 times larger than the optimal in the average case.
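The boundary ("surface") measure this abstract refers to is easy to experiment with on a small grid. The following is our own illustrative script, not the paper's code, and it assumes the usual bit-interleaving convention for the Lebesgue (Z-order) curve: `z_to_xy` decodes a curve index into grid coordinates, and `surface` counts the unit edges on the boundary of the cell set covered by a contiguous index range.

```python
def z_to_xy(z, bits=8):
    """Decode a Lebesgue (Z-order) index into (x, y) grid coordinates
    by de-interleaving its bits (even bits -> x, odd bits -> y)."""
    x = y = 0
    for b in range(bits):
        x |= ((z >> (2 * b)) & 1) << b
        y |= ((z >> (2 * b + 1)) & 1) << b
    return x, y

def surface(index_range, bits=8):
    """Count the unit edges on the boundary of the set of grid cells
    whose Z-order indices lie in the given contiguous range."""
    cells = {z_to_xy(z, bits) for z in index_range}
    s = 0
    for x, y in cells:
        for nbr in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nbr not in cells:
                s += 1
    return s
```

For example, the index range 0–3 covers a 2×2 square with surface 8, the optimum for 4 cells, while the range 0–5 covers an L-shaped region with surface 12, worse than the 10 achieved by an optimal 2×3 rectangle; averaging such ratios over many ranges is the kind of experiment behind the average-case bounds quoted above.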
Average Case Quality of Partitions Induced by the Lebesgue Indexing
2001
Cited by 2 (2 self)
Abstract: This paper presents the quality of partitions induced by the Lebesgue curve in the average case. The surface that surrounds an arbitrary index range is compared with the optimal partition in the grid, i.e. the square. The upper bound on the surface is asymptotically 3 times the optimal size.
Logarithmic Path-Length in Space-Filling Curves
In 14th Canadian Conference on Computational Geometry
Cited by 2 (0 self)
Abstract: Data structures based on space-filling curves have been shown to be a good approach in several applications. For the monitoring of moving objects, e.g. as necessary for contact detection in finite-element simulations, we need a special metric to compare the quality of different curves. This paper ...
Multiresolution organization and browsing of images using multimedia knowledge networks
2003
Cited by 1 (0 self)
Abstract: This paper presents novel methods for organizing and browsing annotated images using multiresolution networks that represent knowledge about the images (e.g., objects, events and interactions). At the highest resolution, images are organized by perceptual knowledge (e.g., image clusters and visual relations), semantic knowledge (e.g., word senses and semantic relations), and statistical interrelations discovered from the collection. This process relies on the integrated processing of both images and annotations and the use of the electronic dictionary WordNet. Knowledge networks at lower resolutions are constructed by clustering similar concepts together. Users can then browse the annotated images by navigating the resulting knowledge network pyramid. Ideas from fisheye views and spring modeling are exploited for displaying concepts using text and image examples, and for drawing networks, respectively. Although the network pyramid is hierarchical, the navigation is not restricted to the hierarchy. Experiments are being conducted with users to evaluate the effectiveness, efficiency, and subjective satisfaction of the users in performing common browsing tasks such as image search. In these experiments, the proposed techniques are being compared to the sequential navigation of concepts in the initial knowledge network.
Definition of a New Circular Space-Filling Curve: βΩ-Indexing
Cited by 1 (0 self)
Abstract: This technical report presents the definition of a circular Hilbert-like space-filling curve. Preliminary evaluations in a simulation environment have shown good locality-preserving properties. The results are compared with known bounds for other indexing schemes: Hilbert, Lebesgue, and H-Indexing. We evaluated partitions induced by the indexing schemes, using the diameter and the surface as measures. For both we present worst-case and average-case results.
Efficient Shared Memory Parallelisation and Resource Management of Explicit Finite Element Codes
2003
Abstract: In this paper we show the efficient parallelisation of an industrial finite element simulation code and the dynamic and fair management of jobs running on a multiprocessor machine. We have parallelised ...
Effect of Image Linearization on Normalized Compression Distance
Abstract: Normalized Information Distance, based on Kolmogorov complexity, is an emerging metric for image similarity. It is approximated by the Normalized Compression Distance (NCD), which generates the relative distance between two strings by using standard compression algorithms to compare linear strings of information. This relative distance quantifies the degree of similarity between the two objects. NCD has been shown to measure similarity effectively on information which is already a string: genomic string comparisons have created accurate phylogeny trees, and NCD has also been used to classify music. Currently, to find a similarity measure using NCD for images, the images must first be linearized into a string and then compared. To understand how linearization of a 2D image affects the similarity measure, we perform four types of linearization on a subset of the Corel image database and compare each for a variety of image transformations. Our experiment shows that different linearization techniques produce statistically significant differences in NCD for identical spatial transformations.
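The NCD computation described in this abstract is straightforward to reproduce with an off-the-shelf compressor. The sketch below is our own illustration, with zlib standing in for whatever compressor a given study uses: it computes NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)) for two byte strings, plus a trivial row-major linearization of a 2D pixel array of the kind the paper compares.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized Compression Distance, approximated with zlib:
    (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
    where C(s) is the compressed length of s."""
    cx = len(zlib.compress(x, 9))
    cy = len(zlib.compress(y, 9))
    cxy = len(zlib.compress(x + y, 9))
    return (cxy - min(cx, cy)) / max(cx, cy)

def linearize_row_major(img) -> bytes:
    """Row-major linearization: concatenate the rows of a 2D array
    of 0-255 pixel values into one byte string."""
    return bytes(v for row in img for v in row)
```

A string compared with itself yields an NCD near 0, while two unrelated strings yield a value near 1; swapping `linearize_row_major` for another scan order (column-major, a space-filling curve, etc.) is exactly the variable the paper's experiment manipulates.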