Results 1–10 of 305
A unified statistical approach for determining significant signals in images of cerebral activation
, 1996
Cited by 210 (37 self)
Abstract: We present a unified statistical theory for assessing the significance of apparent signal observed in noisy difference images. The results are usable in a wide range of applications, including astrophysics, but are discussed with particular reference to images which represent changes in cerebral blood flow elicited by a specific cognitive or sensorimotor task. Our main result is an estimate of the p-value for local maxima of Gaussian, t, χ² and F fields over search regions of any shape or size in any number of dimensions. This unifies the p-values for large search areas in 2-D (Friston et al. 1991), large search regions in 3-D (Worsley et al. 1992), and the usual uncorrected p-value at a single pixel or voxel.
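The corrected p-value this abstract describes is commonly approximated by the expected Euler characteristic of the excursion set. A minimal Python sketch for a Gaussian field, using the standard EC densities in FWHM-normalised (resel) units; the resel counts in the example are hypothetical:

```python
import math

def ec_density_gaussian(d, t):
    """EC density rho_d(t) for a unit-variance Gaussian field,
    in FWHM-normalised (resel) units."""
    if d == 0:
        # rho_0 = P(Z > t), the uncorrected tail probability
        return 0.5 * math.erfc(t / math.sqrt(2))
    h = {1: 1.0, 2: t, 3: t * t - 1.0}[d]   # Hermite polynomial H_{d-1}(t)
    return ((4 * math.log(2)) ** (d / 2)
            / (2 * math.pi) ** ((d + 1) / 2)
            * h * math.exp(-t * t / 2))

def corrected_p(t, resels):
    """Approximate P(max Z > t) by the expected Euler characteristic:
    sum over dimensions of resel count times EC density."""
    return sum(r * ec_density_gaussian(d, t) for d, r in enumerate(resels))

# Hypothetical 3-D search region with resel counts [R0, R1, R2, R3]
p = corrected_p(4.0, [1.0, 10.0, 50.0, 100.0])
```

With only the zero-dimensional resel count nonzero, the expression collapses to the uncorrected single-voxel p-value, matching the unification the abstract describes.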
A multifractal wavelet model with application to TCP network traffic
 IEEE TRANS. INFORM. THEORY
, 1999
Cited by 171 (30 self)
In this paper, we develop a new multiscale modeling framework for characterizing positive-valued data with long-range-dependent correlations (1/f noise). Using the Haar wavelet transform and a special multiplicative structure on the wavelet and scaling coefficients to ensure positive results, the model provides a rapid O(N) cascade algorithm for synthesizing N-point data sets. We study both the second-order and multifractal properties of the model, the latter after a tutorial overview of multifractal analysis. We derive a scheme for matching the model to real data observations and, to demonstrate its effectiveness, apply the model to network traffic synthesis. The flexibility and accuracy of the model and fitting procedure result in a close fit to the real data statistics (variance-time plots and moment scaling) and queuing behavior. Although for illustrative purposes we focus on applications in network traffic modeling, the multifractal wavelet model could be useful in a number of other areas involving positive data, including image processing, finance, and geophysics.
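The positivity-preserving cascade can be sketched as follows: each coarse scaling coefficient splits into two children through a random multiplier A with |A| &lt; 1, so every synthesized sample stays positive. This is a simplified Haar cascade in the spirit of the model, with a uniform multiplier chosen purely for illustration (the paper matches the multiplier distributions to data):

```python
import random

def mwm_synthesize(J, seed=0):
    """Simplified multiplicative Haar cascade: each scaling coefficient
    U splits into two children via a wavelet coefficient W = A * U with
    |A| < 1, which keeps every output sample positive.
    Returns N = 2**J finest-scale scaling coefficients, in O(N) work."""
    rng = random.Random(seed)
    u = [1.0]                            # coarsest scaling coefficient
    for _ in range(J):
        nxt = []
        for parent in u:
            a = rng.uniform(-0.9, 0.9)   # symmetric multiplier, |a| < 1
            w = a * parent               # Haar wavelet coefficient
            nxt.append((parent + w) / 2 ** 0.5)   # left child
            nxt.append((parent - w) / 2 ** 0.5)   # right child
        u = nxt
    return u

data = mwm_synthesize(10)    # 1024 positive samples
```

Because |a| &lt; 1, both (parent + w) and (parent - w) inherit the parent's sign, which is the mechanism the abstract credits for guaranteeing positive output.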
Local maxima and the expected Euler characteristic of excursion sets of χ², F and t fields
, 1994
Cited by 108 (23 self)
The maximum of a Gaussian random field was used by Worsley et al. (...
Gaussian processes for machine learning
 International Journal of Neural Systems
, 2004
Cited by 66 (15 self)
Gaussian processes (GPs) are natural generalisations of multivariate Gaussian random variables to infinite (countably or continuous) index sets. GPs have been applied in a large number of fields to a diverse range of ends, and very many deep theoretical analyses of various properties are available. This paper gives an introduction to Gaussian processes on a fairly elementary level with special emphasis on characteristics relevant in machine learning. It draws explicit connections to branches such as spline smoothing models and support vector machines in which similar ideas have been investigated. Gaussian process models are routinely used to solve hard machine learning problems. They are attractive because of their flexible nonparametric nature and computational simplicity. Treated within a Bayesian framework, very powerful statistical methods can be implemented which offer valid estimates of uncertainties in our predictions and generic model selection procedures cast as nonlinear optimization problems. Their main drawback of heavy computational scaling has recently been alleviated by the introduction of generic sparse approximations [13, 78, 31]. The mathematical literature on GPs is large and often uses deep
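The core GP regression computation the abstract refers to is short enough to sketch. A toy pure-Python version with a squared-exponential kernel; the hyperparameters and data are illustrative only, and a serious implementation would use a Cholesky factorization rather than naive elimination:

```python
import math

def rbf(x1, x2, ell=1.0, sf=1.0):
    """Squared-exponential covariance k(x, x')."""
    return sf ** 2 * math.exp(-(x1 - x2) ** 2 / (2 * ell ** 2))

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_predict(X, y, xs, noise=1e-6):
    """Posterior mean at test input xs: k_*^T (K + noise*I)^{-1} y."""
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(X)] for i, a in enumerate(X)]
    alpha = solve(K, y)
    return sum(rbf(xs, xi) * ai for xi, ai in zip(X, alpha))

X = [0.0, 1.0, 2.0]
y = [0.0, 1.0, 0.0]
m = gp_predict(X, y, 1.0)   # near-interpolates the observed value
```

The O(n³) solve is exactly the "heavy computational scaling" the abstract mentions; the sparse approximations it cites replace this step.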
Detecting Activations in PET and fMRI: Levels of Inference and Power
, 1996
Cited by 65 (7 self)
This paper is about detecting activations in statistical parametric maps and considers the relative sensitivity of a nested hierarchy of tests that we have framed in terms of the level of inference (voxel level, cluster level, and set level). These tests are based on the probability of obtaining c, or more, clusters with k, or more, voxels, above a threshold u. This probability has a reasonably simple form and is derived using distributional approximations from the theory of Gaussian fields. The most important contribution of this work is the notion of set-level inference. Set-level inference refers to the statistical inference that the number of clusters comprising an observed activation profile is highly unlikely to have occurred by chance. This inference pertains to the set of activations reaching
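The probability of obtaining c or more suprathreshold clusters of at least k voxels is often computed by treating the cluster count as Poisson. A sketch under that assumption, with illustrative numbers (in practice the expected cluster count and the cluster-size tail probability come from the Gaussian-field approximations the paper derives):

```python
import math

def set_level_p(c, expected_clusters, p_big):
    """P(c or more suprathreshold clusters of size >= k voxels),
    treating the number of such clusters as Poisson with mean
    lam = E[number of clusters] * P(cluster size >= k)."""
    lam = expected_clusters * p_big
    # P(N >= c) = 1 - sum_{i < c} Poisson pmf
    return 1.0 - sum(math.exp(-lam) * lam ** i / math.factorial(i)
                     for i in range(c))

# Illustrative values: 5 expected clusters, 20% chance a cluster
# reaches k voxels; probability of seeing 3 or more such clusters
p = set_level_p(3, expected_clusters=5.0, p_big=0.2)
```

Setting c = 1 recovers the familiar cluster-level probability of at least one large cluster, which shows how the voxel, cluster, and set levels nest.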
Selective Sampling For Nearest Neighbor Classifiers
 MACHINE LEARNING
, 2004
Cited by 61 (3 self)
Most existing inductive learning algorithms work under the assumption that their training examples are already tagged. There are domains, however, where the tagging procedure requires significant computational resources or manual labor. In such cases, it may be beneficial for the learner to be active, intelligently selecting the examples for labeling with the goal of reducing the labeling cost. In this paper we present LSS, a lookahead algorithm for selective sampling of examples for nearest neighbor classifiers. The algorithm looks for the example with the highest utility, taking its effect on the resulting classifier into account. Computing the expected utility of an example requires estimating the probability of its possible labels. We propose to use the random field model for this estimation. The LSS algorithm was evaluated empirically on seven real and artificial data sets, and its performance was compared to other selective sampling algorithms. The experiments show that the proposed algorithm outperforms other methods in terms of average error rate and stability.
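The selection loop can be illustrated with a much simpler criterion than the paper's lookahead utility: query the pool example about which the current nearest-neighbor classifier is least certain. A 1-D toy sketch, where distance to the nearest labeled point stands in for the random-field label-probability estimate:

```python
def one_nn_label(labeled, x):
    """Predict with the single nearest labeled neighbour (1-D toy)."""
    return min(labeled, key=lambda p: abs(p[0] - x))[1]

def select_query(labeled, pool):
    """Pick the unlabeled example whose nearest labeled neighbour is
    farthest away -- a crude uncertainty proxy, not the paper's full
    lookahead utility over resulting classifiers."""
    def dist_to_labeled(x):
        return min(abs(x - p[0]) for p in labeled)
    return max(pool, key=dist_to_labeled)

labeled = [(0.0, 'a'), (10.0, 'b')]
pool = [1.0, 4.9, 9.0]
q = select_query(labeled, pool)   # the point nearest the class boundary
```

The selected example is then labeled by the oracle, appended to `labeled`, and the loop repeats until the labeling budget is exhausted.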
Multisubject fMRI studies and conjunction analyses
 NeuroImage
, 1999
Cited by 61 (6 self)
In this paper we present an approach to making inferences about generic activations in groups of subjects using fMRI. In particular we suggest that activations common to all subjects reflect aspects of functional anatomy that may be "typical" of the population from which that group was sampled. These commonalities can be identified by a conjunction analysis of the activation effects in which the contrasts, testing for an activation, are specified separately for each subject. A conjunction is the joint refutation of multiple null hypotheses, in this instance, of no activation in any subject. The motivation behind this use of conjunctions is that fixed-effect analyses are generally more "sensitive" than equivalent random-effect analyses. This is because fixed-effect analyses can harness the large degrees of freedom and small scan-to-scan variability (relative to the variability in responses from subject to subject) when assessing the significance of an estimated response. The price one pays for the apparent sensitivity of fixed-effect analyses is that the ensuing inferences pertain to, and only to, the subjects studied. However, a conjunction analysis, using a fixed-effect model, allows one to infer: (i) that every subject studied activated and (ii) that at least a certain proportion of the population would have shown this effect. The second inference depends upon a meta-analytic formulation in terms of a confidence region for this proportion. This approach retains the sensitivity of fixed-effect analyses when the inference that only a substantial proportion of the population activates is sufficient.
Testing for a signal with unknown location and scale in a stationary Gaussian random field
, 1995
Cited by 52 (18 self)
this paper are concerned with approximate evaluation of the significance level of the test defined by (1.5), i.e., the probability, when no signal is present, that X_max exceeds a constant threshold, say b. First-order approximations for this can easily be derived from results going back to Belyaev and Piterbarg (1972) (see Adler, 1981, Theorem 6.9.1, p. 160), who give the following. Suppose Y(r) is a zero mean, unit variance, stationary random field defined on an interval S ⊆ ℝ
Multifractional Brownian motion: definition and preliminary results

, 1995
Cited by 50 (3 self)
We generalize the definition of the fractional Brownian motion of exponent H to the case where H is no longer a constant, but a function of the time index of the process. This allows us to model nonstationary continuous processes, and we show that H(t) and 2 − H(t) are indeed, respectively, the local Hölder exponent and the local box and Hausdorff dimension at point t. Finally, we propose a simulation method and an estimation procedure for H(t) for our model.
Robust smoothness estimation in statistical parametric maps using standardized residuals from the general linear model
 NeuroImage
, 1999
Cited by 40 (4 self)
The assessment of significant activations in functional imaging using voxel-based methods often relies on results derived from the theory of Gaussian random fields. These results solve the multiple comparison problem and assume that the spatial correlation or smoothness of the data is known or can be estimated. End results (i.e., P values associated with local maxima, clusters, or sets of clusters) critically depend on this assessment, which should be as exact and as reliable as possible. In some earlier implementations of statistical parametric mapping (SPM) (SPM94, SPM95) the smoothness was assessed on Gaussianized t-fields (Gtf) that are not generally free of physiological signal. This technique has two limitations. First, the estimation is not stable (the variance of the estimator being far from negligible) and, second, physiological signal in the Gtf will bias the estimation. In this paper, we describe an estimation method that overcomes these drawbacks. The new approach involves estimating the smoothness of standardized residual fields, which approximates the smoothness of the component fields of the associated t-field. Knowing the smoothness of these component fields is important because it allows one to compute corrected P values for statistical fields other than the t-field or the Gtf (e.g., the F-map) and eschews bias due to deviation from the null hypothesis. We validate the method on simulated data and demonstrate it using data from a functional MRI study. © 1999 Academic Press
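In one dimension, the smoothness estimate the abstract describes reduces to relating the variance of the spatial derivative of a unit-variance residual field to an effective FWHM (assuming a Gaussian-shaped spatial autocorrelation, FWHM = sqrt(4 ln 2 / var(dr/dx))). A toy sketch of that relation; the finite-difference derivative and the simulated fields are illustrative only:

```python
import math, random

def estimate_fwhm_1d(residual_fields):
    """Estimate smoothness (FWHM, in voxels) from spatial derivatives
    of standardized residual fields, assuming a Gaussian-shaped
    autocorrelation: FWHM = sqrt(4 ln 2 / var(dr/dx))."""
    diffs = []
    for r in residual_fields:
        m = sum(r) / len(r)
        sd = (sum((v - m) ** 2 for v in r) / len(r)) ** 0.5
        s = [(v - m) / sd for v in r]          # standardize to unit variance
        diffs.extend(s[i + 1] - s[i] for i in range(len(s) - 1))
    var_d = sum(d * d for d in diffs) / len(diffs)
    return math.sqrt(4 * math.log(2) / var_d)

rng = random.Random(0)
white = [[rng.gauss(0, 1) for _ in range(2000)] for _ in range(5)]
# smooth each field with a 5-point moving average to add spatial correlation
smooth = [[sum(r[i:i + 5]) / 5 for i in range(len(r) - 4)] for r in white]

fwhm_white = estimate_fwhm_1d(white)    # ~1.2 voxels for independent voxels
fwhm_smooth = estimate_fwhm_1d(smooth)  # larger, reflecting the smoothing
```

Pooling derivatives across several residual fields is what stabilizes the estimator, which is the first of the two improvements the abstract claims over Gtf-based estimation.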