
## Thresholding of statistical maps in functional neuroimaging using the false discovery rate (2002)

Venue: NeuroImage

Citations: 521 (9 self)

### Citations

8747 | Controlling the False Discovery Rate: a practical and powerful approach to multiple testing.
- Benjamini, Hochberg
- 1995
Citation Context ...n this paper, we describe a recent development in statistics that can be adapted to automatic and implicit threshold selection in neuroimaging: procedures that control the false discovery rate (FDR) (Benjamini and Hochberg, 1995; Benjamini and Liu, 1999; Benjamini and Yekutieli, 2001). Whenever one performs multiple tests, the FDR is the proportion of false positives (incorrect rejections of the null hypothesis) among those ...

1094 | On the adaptive control of the false discovery rate in multiple testing with independent statistics.
- Benjamini, Hochberg
- 2000
Citation Context ...d waste data). There have been a number of efforts to find an objective and effective method for threshold determination (Genovese et al., 1997; Worsley et al., 1996; Holmes et al., 1996). While these methods are promising, they all involve either extra computational effort or extra data collection that may deter researchers from using them. In this paper, we describe a recent development in statistics that can be adapted to automatic and implicit threshold selection in neuroimaging: procedures that control the false discovery rate (FDR) (Benjamini and Hochberg, 1995; Benjamini and Liu, 1999; Benjamini and Yekutieli, 2001). Whenever one performs multiple tests, the FDR is the proportion of false positives (incorrect rejections of the null hypothesis) among those tests for which the null hypothesis is rejected. We believe that this quantity gets at the essence of what one wants to control, in contrast to the Bonferroni correction, for instance, which controls the rate of false positives among all tests whether or not the null is actually rejected. A procedure that controls the FDR bounds the expected rate of false positives among those tests that show a significant result. The procedures we describe operate simu...
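
The FDR-controlling procedure the excerpt refers to is the Benjamini-Hochberg (1995) step-up rule: sort the m p-values, find the largest k with p_(k) <= (k/m)q, and reject the nulls for the k smallest p-values. The sketch below is an illustrative implementation, not code from the paper; the function name and the example p-values are invented for the demonstration.

```python
import numpy as np

def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg (1995) step-up procedure.

    Finds the largest k such that p_(k) <= (k / m) * q and rejects
    the nulls for the k smallest p-values. Returns a boolean
    rejection mask aligned with the input order.
    """
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m   # (k/m) * q for k = 1..m
    below = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()         # largest index meeting the bound
        reject[order[:k + 1]] = True           # reject all smaller p-values too
    return reject

# Ten hypothetical tests: a few small p-values mixed with clear nulls.
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.3, 0.5, 0.8, 0.9]
print(benjamini_hochberg(pvals, q=0.05))  # rejects only the two smallest
```

Note the step-up character: once the largest qualifying k is found, every smaller p-value is rejected as well, even if it individually missed its own threshold.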

400 | A unified statistical approach for determining significant signals in images of cerebral activation.
- Worsley, Marrett, et al.
- 1996
Citation Context ...I error. This is a conservative condition, and in practice with neuroimaging data, the Bonferroni correction has a tendency to wipe out both false and true positives when applied to the entire data set. To get useful results, it is necessary to use a more complicated method or to reduce the number of tests considered simultaneously. For instance, one could identify regions of interest (ROI) and apply the correction separately within each region. More involved methods include random field approaches (such as Worsley et al., 1996) or permutation based methods (such as Holmes et al., 1996). The random field methods are suitable only for smoothed data and may require assumptions that are very difficult to check; the permutation method makes few assumptions, but has an additional computational burden and does not account for temporal autocorrelation easily. ROIs are labor intensive to create, and further, they must be created prior to data analysis and left unchanged throughout, a rigid condition of which researchers are understandably wary. Variation across subjects has a critical impact on threshold selection in practic...

359 | Simultaneous Statistical Inference.
- Miller
- 1966
Citation Context ... is less than or equal to α. As concisely stated by Holmes et al. (1996), “A test with strong control declares nonactivated voxels as activated with probability at most α.” A significant result from a test procedure with weak control only implies there is an activation somewhere; a procedure with strong control allows individual voxels to be declared active—it has localizing power. There is a variety of methods available for controlling the false-positive rate when performing multiple tests. Among these methods, perhaps the most commonly used is the Bonferroni correction (see, for example, Miller, 1981). If there are k tests being performed, the Bonferroni correction replaces the nominal significance level α (e.g., 0.05) with the level α/k for each test. It can be shown that the Bonferroni correction has strong control of Type I error. This is a conservative condition, and in practice with neuroimaging data, the Bonferroni correction has a tendency to wipe out both false and true positives when applied to the entire data set. To get useful results, it is necessary to use a more complicated method or to red...
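
The Bonferroni correction described in this excerpt is simple enough to state in a few lines: with k tests, each p-value is compared against α/k instead of α. The snippet below is an illustrative sketch (the function name and example p-values are invented), shown mainly to make the conservatism concrete.

```python
def bonferroni_reject(p_values, alpha=0.05):
    """Bonferroni correction: reject each null whose p-value is at
    most alpha / k, where k is the total number of tests performed.
    This gives strong control of the familywise Type I error rate."""
    k = len(p_values)
    return [p <= alpha / k for p in p_values]

# Four hypothetical tests: the corrected per-test level is 0.05 / 4 = 0.0125.
print(bonferroni_reject([0.001, 0.008, 0.039, 0.041]))
# [True, True, False, False]
```

With tens of thousands of voxelwise tests, α/k becomes tiny, which is exactly the conservatism the excerpt describes: true activations are wiped out along with false positives.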

137 | Non-parametric analysis of statistic images from functional mapping experiments.
- Holmes, Blair, et al.
- 1996
Citation Context ...ns of interest (ROI) and apply the correction separately within each region. More involved methods include random field approaches (such as Worsley et al., 1996) or permutation based methods (such as Holmes et al., 1996). The random field methods are suitable only for smoothed data and may require assumptions that are very difficult to check; the permutation method makes few assumptions, but has an additional comput...

113 | Non-linear Fourier time series analysis for human brain mapping by functional magnetic resonance imaging.
- Lange
- 1997
Citation Context ...any multiple testing situation. Many recent methods for the analysis of fMRI data rely on fitting sophisticated statistical models to the data (see, for example, Friston et al., 1994; Genovese, 2000; Lange and Zeger, 1997). Part of such analyses inevitably involves examining the values of fitted parameters at each voxel to test hypotheses about the underlying value of those parameters. FDR-based methods can also be us...

45 | A Bayesian time-course model for functional magnetic resonance imaging data.
- Genovese
- 2000
Citation Context ...edures apply to any multiple testing situation. Many recent methods for the analysis of fMRI data rely on fitting sophisticated statistical models to the data (see, for example, Friston et al., 1994; Genovese, 2000; Lange and Zeger, 1997). Part of such analyses inevitably involves examining the values of fitted parameters at each voxel to test hypotheses about the underlying value of those parameters. FDR-based...

13 | A distribution-free multiple test procedure that controls the false discovery rate.
- Benjamini, Liu
- 1999
Citation Context ...cent development in statistics that can be adapted to automatic and implicit threshold selection in neuroimaging: procedures that control the false discovery rate (FDR) (Benjamini and Hochberg, 1995; Benjamini and Liu, 1999; Benjamini and Yekutieli, 2001). Whenever one performs multiple tests, the FDR is the proportion of false positives (incorrect rejections of the null hypothesis) among those tests for which the null ...

9 | A direct comparison between whole-brain PET and BOLD fMRI measurements of single-subject activation response.
- Kinahan, Noll
- 1999
Citation Context ...In Section 3, we present simple simulations that illustrate the performance of the FDR-controlling procedures. In Section 4, we apply the methods to two data sets, one describing a simple motor task (Kinahan and Noll, 1999) and the other from a study of auditory stimulation. Finally, in Section 5, we discuss some of the practical issues in the use of FDR. THE FALSE DISCOVERY RATE In a typical functional magnetic resona...

1 | Estimating test–retest reliability in fMRI.
- Genovese, Noll, et al.
- 1997
Citation Context ...justments, but its forced consistency can significantly reduce sensitivity (and waste data). There have been a number of efforts to find an objective and effective method for threshold determination (Genovese et al., 1997; Worsley et al., 1996; Holmes et al., 1996). While these methods are promising, they all involve either extra computational effort or extra data collection that may deter researchers from using them....