Results 1–10 of 70
Properties of embedding methods for similarity searching in metric spaces
 PAMI
, 2003
Abstract
Cited by 80 (4 self)
Complex data types—such as images, documents, DNA sequences, etc.—are becoming increasingly important in modern database applications. A typical query in many of these applications seeks to find objects that are similar to some target object, where (dis)similarity is defined by some distance function. Often, the cost of evaluating the distance between two objects is very high. Thus, the number of distance evaluations should be kept at a minimum, while (ideally) maintaining the quality of the result. One way to approach this goal is to embed the data objects in a vector space so that the distances of the embedded objects approximate the actual distances. Thus, queries can be performed (for the most part) on the embedded objects. In this paper, we are especially interested in examining the issue of whether or not the embedding methods will ensure that no relevant objects are left out (i.e., there are no false dismissals and, hence, the correct result is reported). Particular attention is paid to the SparseMap, FastMap, and MetricMap embedding methods. SparseMap is a variant of Lipschitz embeddings, while FastMap and MetricMap are inspired by dimension reduction methods for Euclidean spaces (using KLT or the related PCA and SVD). We show that, in general, none of these embedding methods guarantees that queries on the embedded objects have no false dismissals, while also demonstrating the limited cases in which the guarantee does hold. Moreover, we describe a variant of SparseMap that allows queries with no false dismissals. In addition, we show that with FastMap and MetricMap, the distances of the embedded objects can be much greater than the actual distances. This makes it impossible (or at least impractical) to modify FastMap and MetricMap to guarantee no false dismissals.
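As a toy illustration of the contractive-embedding idea in this abstract (not code from the paper), consider a one-pivot Lipschitz embedding f(o) = d(o, pivot). By the triangle inequality it is contractive, so filter-and-refine range search over the embedded values can yield false positives but never false dismissals; the function and variable names here are hypothetical.

```python
# Sketch of filter-and-refine search with a contractive 1-D embedding.
# By the triangle inequality, |d(o, p) - d(q, p)| <= d(o, q), so filtering
# on embedded values may admit false positives but never dismisses a true
# answer; the refine step removes the false positives.

def range_query(objects, target, radius, dist, pivot):
    embed = lambda o: dist(o, pivot)       # 1-D Lipschitz embedding
    et = embed(target)
    # Filter: cheap comparisons in the embedded (scalar) space.
    candidates = [o for o in objects if abs(embed(o) - et) <= radius]
    # Refine: the expensive exact distance, only on the survivors.
    return [o for o in candidates if dist(o, target) <= radius]

# Toy metric space: integers under absolute difference.
objs = [1, 4, 7, 10, 15]
dist = lambda a, b: abs(a - b)
print(range_query(objs, 6, 2, dist, pivot=0))   # → [4, 7]
```

In practice the distance function is expensive (e.g., edit distance on sequences), which is what makes the cheap filter step worthwhile.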
On One-Dimensional Quantum Cellular Automata
 In 36th Annual Symposium on Foundations of Computer Science
, 1995
Abstract
Cited by 35 (2 self)
Since Richard Feynman introduced the notion of quantum computation in 1982, various models of "quantum computers" have been proposed. These models include quantum Turing machines and quantum circuits. In this paper we define another quantum computational model, one-dimensional quantum cellular automata, and demonstrate that any quantum Turing machine can be efficiently simulated by a one-dimensional quantum cellular automaton with constant slowdown. This can be accomplished by consideration of a restricted class of one-dimensional quantum cellular automata called one-dimensional partitioned quantum cellular automata. We also show that any one-dimensional partitioned quantum cellular automaton can be simulated by a quantum Turing machine with linear slowdown, but the problem of efficiently simulating an arbitrary one-dimensional quantum cellular automaton with a quantum Turing machine is left open. From this discussion, some interesting facts concerning these models are easily deduced.
Worst-case Quadratic Loss Bounds for Online Prediction of Linear Functions by Gradient Descent
 IEEE Transactions on Neural Networks
, 1993
Abstract
Cited by 31 (12 self)
In this paper we study the performance of gradient descent when applied to the problem of online linear prediction in arbitrary inner product spaces. We show worst-case bounds on the sum of the squared prediction errors under various assumptions concerning the amount of a priori information about the sequence to predict. The algorithms we use are variants and extensions of online gradient descent. Whereas our algorithms always predict using linear functions as hypotheses, none of our results requires the data to be linearly related. In fact, the bounds proved on the total prediction loss are typically expressed as a function of the total loss of the best fixed linear predictor with bounded norm. All the upper bounds are tight to within constants. Matching lower bounds are provided in some cases. Finally, we apply our results to the problem of online prediction for classes of smooth functions.
Keywords: prediction, Widrow-Hoff algorithm, gradient descent, smoothing, inner product spaces, computational learning theory, online learning, linear systems.
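A minimal sketch of the online gradient descent (Widrow-Hoff) update this abstract analyzes; the learning rate and synthetic data below are illustrative assumptions, not from the paper.

```python
import numpy as np

def widrow_hoff(xs, ys, eta=0.05):
    """Online gradient descent (Widrow-Hoff) for linear prediction."""
    w = np.zeros(xs.shape[1])
    total_loss = 0.0
    for x, y in zip(xs, ys):
        y_hat = w @ x                    # predict with the current linear hypothesis
        total_loss += (y_hat - y) ** 2   # suffer the squared prediction error
        w -= eta * (y_hat - y) * x       # gradient step (factor 2 absorbed into eta)
    return w, total_loss

# Illustrative sequence whose targets are exactly linear in the inputs.
rng = np.random.default_rng(0)
xs = rng.normal(size=(500, 3))
w_true = np.array([1.0, -2.0, 0.5])
w, loss = widrow_hoff(xs, xs @ w_true)
```

When the data really are linear, the learned weights approach the target predictor and the cumulative loss stays bounded; the paper's point is that the loss bounds hold even when no such linear relation exists.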
Contractive Embedding Methods for Similarity Searching in Metric Spaces
, 2000
Abstract
Cited by 22 (2 self)
Complex data types (e.g., images, documents, DNA sequences) are becoming increasingly important in database applications. The term multimedia database is often used to characterize such databases. A typical query for such data seeks to find objects that are similar to some target object, where (dis)similarity is defined by some distance function. Often, the cost of evaluating the distance of two objects is very high. Thus, the number of distance evaluations should be kept at a minimum, while (ideally) maintaining the quality of the result. One way to approach this goal is to embed the data objects in a vector space, such that the distances of the embedded objects approximate the actual distances. Thus, queries can be performed (for the most part) on the embedded objects. In this paper, our focus is on embedding methods that allow returning the same query result as if the actual distances of the objects were consulted, thus ensuring that no relevant objects are left out (i.e., there are no false dismissals). Particular attention was paid to SparseMap, a variant of Lipschitz embeddings, and FastMap, which is designed to be a heuristic alternative to the KLT (and the equivalent PCA and SVD) method for dimensionality reduction. We show that neither SparseMap nor FastMap guarantees that queries on the embedded objects have no false dismissals. However, we describe a variant of SparseMap that allows queries with no false dismissals. Moreover, we show that with FastMap, the distances of the embedded objects can be much greater than the actual distances. This makes it impossible (or at least impractical) to modify FastMap to guarantee no false dismissals.
Estimation of Model Quality
 Automatica
, 1994
Abstract
Cited by 22 (7 self)
This paper gives an introduction to recent work on the problem of quantifying errors in the estimation of models for dynamic systems. This is a very large field. We therefore concentrate on approaches that have been motivated by the need for reliable models for control system design. This will involve a discussion of efforts which go under the titles of 'Estimation in H∞', 'Worst Case Estimation', 'Estimation in ℓ1', 'Information Based Complexity', and 'Stochastic Embedding of Undermodelling'. A central theme of this survey is to examine these new methods with reference to the classic bias/variance tradeoff in model structure selection.
Technical Report EE9437, Centre for Industrial Control Science and Department of Electrical and Computer Engineering, University of Newcastle, Callaghan 2308, AUSTRALIA
Quantum associative memory
 Information Sciences
, 2000
Abstract
Cited by 14 (5 self)
This paper combines quantum computation with classical neural network theory to produce a quantum computational learning algorithm. Quantum computation uses microscopic quantum level effects to perform computational tasks and has produced results that in some cases are exponentially faster than their classical counterparts. The unique characteristics of quantum theory may also be used to create a quantum associative memory with a capacity exponential in the number of neurons. This paper combines two quantum computational algorithms to produce such a quantum associative memory. The result is an exponential increase in the capacity of the memory when compared to traditional associative memories such as the Hopfield network. The paper covers necessary high-level quantum mechanical and quantum computational ideas and introduces a quantum associative memory. Theoretical analysis proves the utility of the memory, and it is noted that a small version should be physically realizable in the near future.
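For context only, here is a minimal classical Hopfield associative memory, the "traditional" baseline the abstract compares against; none of the quantum machinery is sketched here, and the pattern sizes are arbitrary.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights for bipolar (+/-1) patterns, no self-connections."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / float(n)
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, probe, steps=10):
    s = probe.astype(float).copy()
    for _ in range(steps):                 # synchronous sign updates
        s = np.where(W @ s >= 0, 1.0, -1.0)
    return s.astype(int)

# Store one pattern, corrupt one bit, and recover the stored pattern.
p = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(p[np.newaxis, :])
noisy = p.copy()
noisy[0] = -noisy[0]
print(recall(W, noisy))                    # recovers p
```

The Hopfield capacity scales only linearly in the number of neurons (roughly 0.14n patterns), which is the baseline against which the abstract's exponential-capacity claim is made.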
Model uncertainty, robust policies, and the value of commitment
 Macroeconomic Dynamics (2001, this volume)
, 1996
Abstract
Cited by 11 (0 self)
Using results from the literature on H∞ control, this paper incorporates model uncertainty into Whiteman’s (1986) frequency domain approach to stabilization policy. The derived policies guarantee a minimum performance level even in the worst of (a bounded set of) circumstances. For a given level of model uncertainty, robust H∞ policies are shown to be more ‘activist’ than Whiteman’s H2 policies in the sense that their impulse responses are larger. Robust policies also tend to be more autocorrelated. Consequently, the premium associated with being able to commit is greater under model uncertainty. Without commitment, the policymaker isn’t able to (credibly) smooth his response to the degree that he would like. From a technical standpoint, a contribution of this paper is its analysis of robust control in a model featuring a forward-looking state transition equation, which arises from the fact that the private sector bases its decisions on expectations of future government policy. Existing applications of H∞ control in economics follow the engineering literature, and only consider backward-looking state transition equations. It is the forward-looking nature of the state transition equation that makes a frequency domain approach attractive.
Positive extensions, Fejér-Riesz factorization and autoregressive filters in two variables
 Ann. of Math
, 2004
Abstract
Cited by 10 (5 self)
In this paper we treat the two-variable positive extension problem for trigonometric polynomials where the extension is required to be the reciprocal of the absolute value squared of a stable polynomial. This problem may also be interpreted as an autoregressive filter design problem for bivariate stochastic processes. We show that the existence of a solution is equivalent to solving a finite positive definite matrix completion problem where the completion is required to satisfy an additional low rank condition. As a corollary of the main result a necessary and sufficient condition for the existence of a spectral Fejér-Riesz factorization of a strictly positive two-variable trigonometric polynomial is given in terms of the Fourier coefficients of its reciprocal. Tools in the proofs include a specific two-variable Kronecker theorem based on certain elements from algebraic geometry, as well as a two-variable Christoffel-Darboux like formula. The key ingredient is a matrix valued polynomial that appears in a parameterized version of the Schur-Cohn test for stability. The results also have consequences in the theory of two-variable orthogonal polynomials where a spectral matching result is obtained, as well as in the study of inverse formulas for doubly-indexed Toeplitz matrices. Finally, numerical results are presented for both the autoregressive filter problem and the factorization problem.
Key Words: autoregressive filter, bivariate stochastic processes, two-variable positive extension, structured matrix completions, doubly-indexed Toeplitz matrix, two-variable
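For background, here is a sketch of the classical one-variable Fejér-Riesz factorization computed via polynomial roots; the two-variable problem treated in the paper is substantially harder and is not implemented here. The function name and the normalize-at-z=1 step are my own choices.

```python
import numpy as np

def fejer_riesz_1d(c):
    """Given coefficients c = [c_{-m}, ..., c_0, ..., c_m] of a trigonometric
    polynomial t(theta) = sum_k c_k e^{i k theta} that is strictly positive on
    the unit circle, return coefficients (highest degree first) of a polynomial
    p with all roots outside the closed unit disk and |p(e^{i theta})|^2 = t."""
    roots = np.roots(c[::-1])             # roots of q(z) = sum_k c_{k-m} z^k
    outside = roots[np.abs(roots) > 1.0]  # roots pair up as (r, 1/conj(r))
    p = np.poly(outside)                  # monic polynomial with the outer roots
    t_at_1 = float(np.real(np.sum(c)))    # t at angle 0 = sum of coefficients
    scale = np.sqrt(t_at_1) / abs(np.polyval(p, 1.0))
    return np.real(scale * p)

# Check on |1 + 0.5 z|^2 = 1.25 + 0.5 e^{i t} + 0.5 e^{-i t}.
print(fejer_riesz_1d(np.array([0.5, 1.25, 0.5])))   # p(z) = 0.5 z + 1
```

A key point of the paper is precisely that this root-based one-variable recipe has no direct two-variable analogue, which is why the factorization there is characterized through a structured matrix completion instead.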
Spectral convergence of the discrete Laplacian on models of a metrized graph, preprint
Abstract
Cited by 10 (1 self)
A metrized graph is a compact singular 1-manifold endowed with a metric. A given metrized graph can be modelled by a family of weighted combinatorial graphs. If one chooses a sequence of models from this family such that the vertices become uniformly distributed on the metrized graph, then the ith largest eigenvalue of the Laplacian matrices of these combinatorial graphs converges to the ith largest eigenvalue of the continuous Laplacian operator on the metrized graph upon suitable scaling. The eigenvectors of these matrices can be viewed as functions on the metrized graph by linear interpolation. These interpolated functions form a normal family, any convergent subsequence of which limits to an eigenfunction of the continuous Laplacian operator on the metrized graph.
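A small numerical illustration of the convergence claim, using my own toy model rather than the paper's construction: model the circle of circumference 1 by cycle graphs on n equally spaced vertices. The combinatorial Laplacian eigenvalues, scaled by n², approach the continuous Laplacian eigenvalues (2πk)² on the circle.

```python
import numpy as np

def scaled_cycle_spectrum(n, count=3):
    """Smallest eigenvalues of the n-vertex cycle-graph Laplacian, scaled by
    n^2 (edge length h = 1/n), approximating the Laplacian on a unit circle."""
    I = np.eye(n)
    # Circulant Laplacian: degree 2 minus the two cyclic neighbors.
    L = 2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)
    return np.sort(np.linalg.eigvalsh(L))[:count] * n**2

for n in (50, 200, 800):
    print(n, scaled_cycle_spectrum(n))
# The second and third scaled eigenvalues approach 4*pi^2 ≈ 39.478 as n grows.
```

The smallest eigenvalue is 0 at every scale (constant functions), while each nonzero continuous eigenvalue appears with multiplicity two, matching the sine/cosine eigenfunction pairs on the circle.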
How can the meromorphic approximation help to solve some 2D inverse problems for the Laplacian?
, 1999
Abstract
Cited by 10 (4 self)
We exhibit new links between approximation theory in the complex domain and a family of inverse problems for the 2D Laplacian related to nondestructive testing.
1. Introduction
Our aim is to describe a method related to the approximation by analytic and meromorphic functions that allows us to detect, from boundary data, the presence of cracks in a planar domain and to provide information about their location. Existing procedures for solving nondestructive control problems from either thermal, electric, acoustic, or elastic measurements classically rely on multiple iterative integrations of the involved partial differential equation (PDE); hence, they are highly time consuming and very sensitive to initial guesses. Existing identifiability results and reconstruction algorithms are effective only when complete overdetermined data are available, and under strong a priori information, for instance when the crack is known to lie on some line, see [2, 8, 13] and the bibliographies therein...