Results 1–10 of 543
Stochastic Perturbation Theory
, 1988
"... . In this paper classical matrix perturbation theory is approached from a probabilistic point of view. The perturbed quantity is approximated by a firstorder perturbation expansion, in which the perturbation is assumed to be random. This permits the computation of statistics estimating the variatio ..."
Abstract

Cited by 617 (31 self)
In this paper classical matrix perturbation theory is approached from a probabilistic point of view. The perturbed quantity is approximated by a first-order perturbation expansion, in which the perturbation is assumed to be random. This permits the computation of statistics estimating the variation in the perturbed quantity. Up to the higher-order terms that are ignored in the expansion, these statistics tend to be more realistic than perturbation bounds obtained in terms of norms. The technique is applied to a number of problems in matrix perturbation theory, including least squares and the eigenvalue problem.
Key words. perturbation theory, random matrix, linear system, least squares, eigenvalue, eigenvector, invariant subspace, singular value
AMS(MOS) subject classifications. 15A06, 15A12, 15A18, 15A52, 15A60
1. Introduction. Let A be a matrix and let F be a matrix-valued function of A. Two principal problems of matrix perturbation theory are the following. Given a matrix E, pr...
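The first-order expansion underlying the paper is easy to check numerically. Below is a minimal sketch (my illustration, not the paper's code, assuming NumPy) for a linear system: the solution of (A + E)x = b is expanded to first order in a random perturbation E with i.i.d. Gaussian entries, a perturbation model chosen here purely for illustration, and the predicted componentwise standard deviations are compared against Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned test matrix
b = rng.standard_normal(n)
x = np.linalg.solve(A, b)

sigma = 1e-3  # entrywise standard deviation of the random perturbation E

# First order: (A + E)^{-1} b ≈ x - A^{-1} E x, so delta_x ≈ -A^{-1} (E x).
# For E with i.i.d. N(0, sigma^2) entries, Cov(E x) = sigma^2 ||x||^2 I, hence
# Cov(delta_x) ≈ sigma^2 ||x||^2 A^{-1} A^{-T}.
Ainv = np.linalg.inv(A)
cov_pred = sigma**2 * (x @ x) * (Ainv @ Ainv.T)
std_pred = np.sqrt(np.diag(cov_pred))

# Monte Carlo estimate of the same componentwise standard deviations.
devs = np.array([np.linalg.solve(A + sigma * rng.standard_normal((n, n)), b) - x
                 for _ in range(20000)])
std_mc = devs.std(axis=0)

print("first-order:", std_pred)
print("Monte Carlo:", std_mc)  # agree to leading order in sigma
```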
The Jackknife and the Bootstrap for General Stationary Observations
, 1989
"... this paper we will always consider statistics TN of the form TN (X 1 ; :::; XN ) = T (ae ..."
Abstract

Cited by 225 (2 self)
In this paper we will always consider statistics T_N of the form T_N(X_1, ..., X_N) = T(ρ ...
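The snippet is terse, but the paper's best-known contribution is the moving-block bootstrap for stationary sequences. A minimal sketch of that resampling scheme (my reconstruction, assuming NumPy; the AR(1) test series and block length are illustrative choices):

```python
import numpy as np

def block_bootstrap(x, block_len, rng):
    """One moving-block bootstrap replicate of a 1-D stationary series.

    Resamples overlapping blocks of length block_len with replacement and
    concatenates them, preserving the dependence within each block.
    """
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    starts = rng.integers(0, n - block_len + 1, size=n_blocks)
    return np.concatenate([x[s:s + block_len] for s in starts])[:n]

rng = np.random.default_rng(0)
# AR(1) series: stationary but serially dependent.
x = np.zeros(500)
for t in range(1, len(x)):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()

# Bootstrap distribution of the sample mean under dependence.
reps = np.array([block_bootstrap(x, block_len=20, rng=rng).mean()
                 for _ in range(2000)])
print("bootstrap std of the mean:", reps.std())
```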
Testing Continuous-Time Models of the Spot Interest Rate
 Review of Financial Studies
, 1996
"... Different continuoustime models for interest rates coexist in the literature. We test parametric models by comparing their implied parametric density to the same density estimated nonparametrically. We do not replace the continuoustime model by discrete approximations, even though the data are rec ..."
Abstract

Cited by 194 (7 self)
Different continuous-time models for interest rates coexist in the literature. We test parametric models by comparing their implied parametric density to the same density estimated nonparametrically. We do not replace the continuous-time model by discrete approximations, even though the data are recorded at discrete intervals. The principal source of rejection of existing models is the strong nonlinearity of the drift. Around its mean, where the drift is essentially zero, the spot rate behaves like a random walk. The drift then mean-reverts strongly when far away from the mean. The volatility is higher when away from the mean. Continuous-time financial theory has developed extensive tools to price derivative securities when the underlying traded asset(s) or non-traded factor(s) follow stochastic differential equations [see Merton (1990) for examples]. However, as a practical matter, how to specify an appropriate stochastic differential equation is for the most part an unanswered question. For example, many different continuous-time ...
(Acknowledgments: The comments and suggestions of Kerry Back (the editor) and an anonymous referee were very helpful. I am also grateful to George Constantinides,
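The testing idea can be illustrated with a toy version that is not the paper's actual statistic or estimator: simulate a short-rate series, compute the stationary density implied by a parametric model (Vasicek here), estimate the same marginal density with a kernel smoother, and compare. NumPy and SciPy are assumed; the integrated squared difference below is just a stand-in for a formal test statistic.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(0)

# Simulate daily short-rate data from a Vasicek model (stand-in for real data):
# dr = kappa (theta - r) dt + sigma dW.
kappa, theta, sigma, dt = 0.5, 0.05, 0.02, 1 / 250
r = np.empty(5000)
r[0] = theta
for t in range(1, len(r)):
    r[t] = r[t-1] + kappa * (theta - r[t-1]) * dt \
           + sigma * np.sqrt(dt) * rng.standard_normal()

# Parametric stationary density implied by Vasicek: N(theta, sigma^2 / (2 kappa)).
grid = np.linspace(r.min(), r.max(), 200)
pi_param = norm.pdf(grid, loc=theta, scale=sigma / np.sqrt(2 * kappa))

# Nonparametric (kernel) estimate of the same marginal density.
pi_nonpar = gaussian_kde(r)(grid)

# A crude distance between the densities; large values suggest misspecification.
print("integrated squared difference:",
      np.trapz((pi_param - pi_nonpar) ** 2, grid))
```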
Continuous Record Asymptotics for Rolling Sample Variance Estimators
 Econometrica
, 1996
"... It is widely known that conditional covariances of asset returns change over time. ..."
Abstract

Cited by 89 (0 self)
It is widely known that conditional covariances of asset returns change over time.
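For concreteness, here is a flat-weight rolling variance estimator of the kind whose continuous-record asymptotics the paper studies (a minimal sketch assuming NumPy; the window length and flat weights are illustrative choices, not the paper's recommendation):

```python
import numpy as np

def rolling_variance(returns, window):
    """Rolling sample variance over a trailing window with flat weights.

    The flat-weight rolling estimator is the simplest member of the family;
    its behavior depends jointly on window length and sampling frequency.
    """
    r = np.asarray(returns, dtype=float)
    out = np.full(r.shape, np.nan)
    for t in range(window - 1, len(r)):
        w = r[t - window + 1 : t + 1]
        out[t] = np.mean((w - w.mean()) ** 2)
    return out

rng = np.random.default_rng(0)
returns = 0.01 * rng.standard_normal(1000)
print(rolling_variance(returns, window=60)[-5:])
```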
An Efficient Semiparametric Estimator for Binary Response Models
 Econometrica
, 1993
"... Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at ..."
Abstract

Cited by 87 (2 self)
Mean and Variance of Implicitly Defined Biased Estimators (such as Penalized Maximum Likelihood): Applications to Tomography
 IEEE Transactions on Image Processing
, 1996
"... Many estimators in signal processing problems are defined implicitly as the maximum of some objective function. Examples of implicitly defined estimators include maximum likelihood, penalized likelihood, maximum a posteriori, and nonlinear leastsquares estimation. For such estimators, exact analyti ..."
Abstract

Cited by 84 (30 self)
Many estimators in signal processing problems are defined implicitly as the maximum of some objective function. Examples of implicitly defined estimators include maximum likelihood, penalized likelihood, maximum a posteriori, and nonlinear least-squares estimation. For such estimators, exact analytical expressions for the mean and variance are usually unavailable. Therefore investigators usually resort to numerical simulations to examine properties of the mean and variance of such estimators. This paper describes approximate expressions for the mean and variance of implicitly defined estimators of unconstrained continuous parameters. We derive the approximations using the implicit function theorem, the Taylor expansion, and the chain rule. The expressions are defined solely in terms of the partial derivatives of whatever objective function one uses for estimation. As illustrations, we demonstrate that the approximations work well in two tomographic imaging applications with Poisson sta...
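The construction is easy to reproduce in a case where it can be verified exactly. The sketch below (my example, assuming NumPy, not code from the paper) applies the implicit-function-theorem recipe to a penalized least-squares objective, where the derivative of the estimator with respect to the data has a closed form, and checks the resulting covariance approximation by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, beta, sig = 30, 5, 2.0, 0.1
A = rng.standard_normal((m, n))
ybar = A @ rng.standard_normal(n)          # noiseless mean of the data y

# Estimator: theta_hat(y) = argmax Phi(theta, y),
# Phi = -||y - A theta||^2 / 2 - beta ||theta||^2 / 2.
H = A.T @ A + beta * np.eye(n)             # = -d^2 Phi / dtheta^2

# Implicit function theorem:
# dtheta_hat/dy = [-d^2 Phi/dtheta^2]^{-1} (d^2 Phi/dtheta dy) = H^{-1} A^T.
J = np.linalg.solve(H, A.T)

# First-order approximations: mean ≈ theta_hat(ybar), Cov ≈ J Cov(y) J^T.
mean_approx = np.linalg.solve(H, A.T @ ybar)
cov_approx = sig**2 * J @ J.T

# Monte Carlo check with y = ybar + N(0, sig^2 I) noise.
ests = np.array([np.linalg.solve(H, A.T @ (ybar + sig * rng.standard_normal(m)))
                 for _ in range(20000)])
print("max |cov_approx - cov_mc|:",
      np.abs(cov_approx - np.cov(ests.T)).max())
```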
Unsupervised Learning of Distributions on Binary Vectors Using Two Layer Networks
, 1994
"... this paper is related to both of these lines of work and has some advantages over each of them. If we find a good model of the distribution, we can tackle other interesting learning problems, such as the problem of estimating the conditional distribution on certain components of the vector ~x when p ..."
Abstract

Cited by 59 (1 self)
This paper is related to both of these lines of work and has some advantages over each of them. If we find a good model of the distribution, we can tackle other interesting learning problems, such as the problem of estimating the conditional distribution on certain components of the vector x when provided with the values for the other components (a kind of regression problem), or predicting the actual values for certain components of x based on the values of the other components (a kind of pattern completion task). In the example of the binary images presented above, this would amount to the task of recovering the value of a pixel whose value has been corrupted. We can often also use the distribution model to help us in a supervised learning task. This is because it is often easier to express the mapping of an instance to the correct label by using "features" that are correlation patterns among the bits of the instance. For example, it is easier to describe each of the ten digits in terms of patterns such as lines and circles, rather than in terms of the values of individual pixels, which are more likely to change between different instances of the same digit. The process of learning an unknown distribution from examples is usually called density estimation or
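As a concrete version of the pattern-completion task described above, the sketch below computes p(x_i = 1 | the other bits) under a two-layer model of the restricted-Boltzmann-machine type by summing out the hidden units. This is my illustration, assuming NumPy; the random weights stand in for a trained model.

```python
import numpy as np

def free_energy(x, a, b, W):
    """Unnormalized negative log-probability: p(x) ∝ exp(-F(x)),
    with binary hidden units summed out analytically."""
    return -(a @ x) - np.sum(np.logaddexp(0.0, b + W @ x))

def complete_bit(x, i, a, b, W):
    """p(x_i = 1 | all other bits) for visible biases a, hidden biases b,
    and weight matrix W (hidden x visible)."""
    x0, x1 = x.copy(), x.copy()
    x0[i], x1[i] = 0.0, 1.0
    f0, f1 = free_energy(x0, a, b, W), free_energy(x1, a, b, W)
    return 1.0 / (1.0 + np.exp(f1 - f0))  # sigmoid(f0 - f1)

rng = np.random.default_rng(0)
nv, nh = 8, 4
a, b = rng.standard_normal(nv), rng.standard_normal(nh)
W = rng.standard_normal((nh, nv))          # placeholder for learned weights
x = (rng.random(nv) < 0.5).astype(float)   # observed vector with bit 3 suspect
print("p(x_3 = 1 | rest) =", complete_bit(x, 3, a, b, W))
```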
Integrating structured biological data by kernel maximum mean discrepancy
 In ISMB
, 2006
"... Motivation: Many problems in data integration in bioinformatics can be posed as one common question: Are two sets of observations generated by the same distribution? We propose a kernelbased statistical test for this problem, based on the fact that two distributions are different if and only if the ..."
Abstract

Cited by 54 (15 self)
Motivation: Many problems in data integration in bioinformatics can be posed as one common question: Are two sets of observations generated by the same distribution? We propose a kernel-based statistical test for this problem, based on the fact that two distributions are different if and only if there exists at least one function having different expectation on the two distributions. Consequently, we use the maximum discrepancy between function means as the basis of a test statistic. The Maximum Mean Discrepancy (MMD) can take advantage of the kernel trick, which allows us to apply it not only to vectors, but to strings, sequences, graphs, and other common structured data types arising in molecular biology. Results: We study the practical feasibility of an MMD-based test on three central data integration tasks: testing cross-platform comparability of microarray data, cancer diagnosis, and data-content-based schema matching for two different protein function classification schemas. In all of these experiments, including high-dimensional ones, MMD is very accurate in finding samples that were generated from the same distribution, and outperforms its best competitors. Conclusions: We have defined a novel statistical test of whether two samples are from the same distribution, compatible with both multivariate and structured data, that is fast, easy to implement, and works well, as confirmed by our experiments.
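The statistic itself is only a few lines. A minimal sketch of the unbiased squared-MMD estimate with an RBF kernel (my implementation, assuming NumPy; in practice a test threshold would come from the paper's bounds or a permutation procedure):

```python
import numpy as np

def mmd2_unbiased(X, Y, gamma):
    """Unbiased squared MMD with RBF kernel k(x, y) = exp(-gamma ||x - y||^2)."""
    def k(A, B):
        sq = (np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :]
              - 2 * A @ B.T)
        return np.exp(-gamma * sq)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    m, n = len(X), len(Y)
    np.fill_diagonal(Kxx, 0.0)  # drop i = j terms for unbiasedness
    np.fill_diagonal(Kyy, 0.0)
    return (Kxx.sum() / (m * (m - 1))
            + Kyy.sum() / (n * (n - 1))
            - 2 * Kxy.mean())

rng = np.random.default_rng(0)
X1 = rng.standard_normal((200, 2))
X2 = rng.standard_normal((200, 2))        # same distribution as X1
Y = rng.standard_normal((200, 2)) + 0.5   # shifted: different distribution
print("same distribution:", mmd2_unbiased(X1, X2, gamma=0.5))  # near 0
print("diff distribution:", mmd2_unbiased(X1, Y, gamma=0.5))   # clearly > 0
```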
A Hilbert space embedding for distributions
 In Algorithmic Learning Theory: 18th International Conference
, 2007
"... Abstract. We describe a technique for comparing distributions without the need for density estimation as an intermediate step. Our approach relies on mapping the distributions into a reproducing kernel Hilbert space. Applications of this technique can be found in twosample tests, which are used for ..."
Abstract

Cited by 53 (26 self)
We describe a technique for comparing distributions without the need for density estimation as an intermediate step. Our approach relies on mapping the distributions into a reproducing kernel Hilbert space. Applications of this technique can be found in two-sample tests, which are used for determining whether two sets of observations arise from the same distribution, covariate shift correction, local learning, measures of independence, and density estimation. Kernel methods are widely used in supervised learning [1, 2, 3, 4]; however, they are much less established in the areas of testing, estimation, and analysis of probability distributions, where information-theoretic approaches [5, 6] have long been dominant. Recent examples include [7] in the context of construction of graphical models, [8] in the context of feature extraction, and [9] in the context of independent component analysis. These methods have by and large a common issue: to compute quantities such as the mutual information, entropy, or Kullback-Leibler divergence, we require sophisticated space partitioning and/or
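A small sketch of the embedding itself (my illustration, assuming NumPy): each sample is mapped to the empirical mean-embedding function mu_hat(t) = (1/m) sum_i k(x_i, t), and the difference of two embeddings, the witness function, shows where the two distributions disagree; its RKHS norm is the MMD.

```python
import numpy as np

def mean_embedding(X, T, gamma):
    """Empirical kernel mean embedding of a 1-D sample X, evaluated at
    points T, using the RBF kernel k(x, t) = exp(-gamma (x - t)^2)."""
    return np.exp(-gamma * (X[:, None] - T[None, :])**2).mean(axis=0)

rng = np.random.default_rng(0)
P = rng.standard_normal(500)          # sample from P
Q = rng.standard_normal(500) + 1.0    # sample from Q (shifted mean)

t = np.linspace(-4.0, 5.0, 200)
witness = mean_embedding(P, t, 0.5) - mean_embedding(Q, t, 0.5)

# Largest where P puts more mass than Q, most negative where Q dominates.
print("peak at t ≈", t[np.argmax(witness)],
      "; trough at t ≈", t[np.argmin(witness)])
```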