Results 1–10 of 144
Using confidence intervals in within-subject designs
 Psychonomic Bulletin & Review
, 1994
Abstract

Cited by 178 (21 self)
Wolford, and two anonymous reviewers for very useful comments on earlier drafts of the manuscript. Correspondence may be addressed to
Testing that distributions are close
 In IEEE Symposium on Foundations of Computer Science
, 2000
Abstract

Cited by 77 (16 self)
Given two distributions over an n-element set, we wish to check whether these distributions are statistically close by only sampling. We give a sublinear algorithm which uses O(n^(2/3) ε^(-4) log n) independent samples from each distribution, runs in time linear in the sample size, makes no assumptions about the structure of the distributions, and distinguishes the cases when the distance between the distributions is small (less than max(ε^2/(32 n^(1/3)), ε/(4 n^(1/2)))) or large (more than ε) in L1 distance. We also give an Ω(n^(2/3) ε^(-2/3)) lower bound. Our algorithm has applications to the problem of checking whether a given Markov process is rapidly mixing. We develop sublinear algorithms for this problem as well.
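A naive plug-in estimator makes the sampling problem concrete: estimate each distribution's histogram from empirical frequencies and sum the absolute differences bin by bin. This sketch (our own illustration with made-up data, not the paper's collision-based tester) needs on the order of n samples, which is exactly what the sublinear O(n^(2/3) ε^(-4) log n) algorithm avoids:

```python
import random
from collections import Counter

def empirical_l1(samples_p, samples_q, domain):
    """Plug-in estimate of L1 distance from independent samples:
    compare empirical frequencies bin by bin. (Naive baseline, not
    the paper's sublinear collision-based tester.)"""
    cp, cq = Counter(samples_p), Counter(samples_q)
    np_, nq = len(samples_p), len(samples_q)
    return sum(abs(cp[x] / np_ - cq[x] / nq) for x in domain)

random.seed(0)
domain = range(10)
p = [random.randrange(10) for _ in range(20000)]   # uniform on {0,...,9}
q = [random.randrange(5) for _ in range(20000)]    # uniform on {0,...,4}
est = empirical_l1(p, q, domain)                   # true L1 distance is 1.0
```

With 20,000 samples per distribution the estimate is close to the true distance of 1.0, but the sample budget scales with the domain size rather than sublinearly in it.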
Severe Testing as a Basic Concept in a Neyman–Pearson Philosophy of Induction
 BRITISH JOURNAL FOR THE PHILOSOPHY OF SCIENCE
, 2006
Abstract

Cited by 35 (14 self)
Despite the widespread use of key concepts of the Neyman–Pearson (N–P) statistical paradigm—type I and II errors, significance levels, power, confidence levels—they have been the subject of philosophical controversy and debate for over 60 years. Both current and longstanding problems of N–P tests stem from unclarity and confusion, even among N–P adherents, as to how a test’s (pre-data) error probabilities are to be used for (post-data) inductive inference as opposed to inductive behavior. We argue that the relevance of error probabilities is to ensure that only statistical hypotheses that have passed severe or probative tests are inferred from the data. The severity criterion supplies a meta-statistical principle for evaluating proposed statistical inferences, avoiding classic fallacies from tests that are overly sensitive, as well as those not sensitive enough to particular errors and discrepancies.
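For a one-sided Normal test with known σ, the post-data severity assessment can be sketched numerically: the severity with which data x̄ warrant the claim μ > μ1 is P(X̄ ≤ x̄_obs; μ = μ1). The function names and the numbers below are our own illustration, not the authors' notation:

```python
from math import erf, sqrt

def normal_cdf(z):
    """Standard Normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def severity(xbar, mu1, sigma, n):
    """SEV(mu > mu1) = P(Xbar <= xbar_obs; mu = mu1) for a one-sided
    Normal test of H0: mu <= mu0 with known sigma (illustrative sketch)."""
    se = sigma / sqrt(n)
    return normal_cdf((xbar - mu1) / se)

# sigma = 2, n = 100, observed mean 0.4 (i.e. z = 2 against mu0 = 0):
weak_claim = severity(0.4, 0.0, 2.0, 100)    # "mu > 0" passes severely
strong_claim = severity(0.4, 0.3, 2.0, 100)  # "mu > 0.3" far less so
```

The same statistically significant result thus licenses the weaker inference with high severity while leaving the stronger one poorly probed, which is the meta-statistical discrimination the abstract describes.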
A NEW GENERATION OF HOMOLOGY SEARCH TOOLS BASED ON PROBABILISTIC INFERENCE
, 2009
Abstract

Cited by 33 (2 self)
Many theoretical advances have been made in applying probabilistic inference methods to improve the power of sequence homology searches, yet the BLAST suite of programs is still the workhorse for most of the field. The main reason for this is practical: BLAST’s programs are about 100-fold faster than the fastest competing implementations of probabilistic inference methods. I describe recent work on the HMMER software suite for protein sequence analysis, which implements probabilistic inference using profile hidden Markov models. Our aim in HMMER3 is to achieve BLAST’s speed while further improving the power of probabilistic inference-based methods. HMMER3 implements a new probabilistic model of local sequence alignment and a new heuristic acceleration algorithm. Combined with efficient vector-parallel implementations on modern processors, these improvements synergize. HMMER3 uses more powerful log-odds likelihood scores (scores summed over alignment uncertainty, rather than scoring a single optimal alignment); it calculates accurate expectation values (E-values) for those scores without simulation using a generalization of Karlin/Altschul theory; it computes posterior distributions over the ensemble of possible alignments and returns posterior probabilities (confidences) in each aligned residue; and it does all this at an overall speed comparable to BLAST. The HMMER project aims to usher in a new generation of more powerful homology search tools based on probabilistic inference methods.
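The distinction between scoring one optimal alignment and summing scores over alignment uncertainty is the classic Viterbi-versus-Forward distinction in HMM dynamic programming. A toy two-state HMM (our own made-up parameters, far simpler than HMMER's profile HMMs) makes it concrete:

```python
from math import exp, log

# Toy two-state HMM: Forward sums likelihood over *all* state paths
# (the ensemble scoring HMMER3 uses); Viterbi keeps only the best path.
states = (0, 1)
start = (0.5, 0.5)
trans = ((0.9, 0.1), (0.1, 0.9))
emit = ({'A': 0.7, 'B': 0.3}, {'A': 0.2, 'B': 0.8})

def forward(seq):
    """log P(seq): probability summed over every state path."""
    f = [start[s] * emit[s][seq[0]] for s in states]
    for x in seq[1:]:
        f = [sum(f[r] * trans[r][s] for r in states) * emit[s][x]
             for s in states]
    return log(sum(f))

def viterbi(seq):
    """log probability of the single best state path only."""
    v = [start[s] * emit[s][seq[0]] for s in states]
    for x in seq[1:]:
        v = [max(v[r] * trans[r][s] for r in states) * emit[s][x]
             for s in states]
    return log(max(v))

# Summing over path uncertainty never discards probability mass, so the
# Forward score always dominates the Viterbi score:
gap = forward("AABB") - viterbi("AABB")
```

In HMMER3 the same sum-over-ensembles idea is what yields the more powerful log-odds scores and the per-residue posterior confidences.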
Statistical Themes and Lessons for Data Mining
, 1997
Abstract

Cited by 32 (3 self)
Data mining is on the interface of Computer Science and Statistics, utilizing advances in both disciplines to make progress in extracting information from large databases. It is an emerging field that has attracted much attention in a very short period of time. This article highlights some statistical themes and lessons that are directly relevant to data mining and attempts to identify opportunities where close cooperation between the statistical and computational communities might reasonably provide synergy for further progress in data analysis.
Biometric Decision Landscapes
, 2000
Abstract

Cited by 31 (1 self)
This report investigates the "decision landscapes" that characterize several forms of biometric decision making. The issues discussed include: (i) Estimating the degrees-of-freedom associated with different biometrics, as a way of measuring the randomness and complexity (and therefore the uniqueness) of their templates. (ii) The consequences of combining more than one biometric test to arrive at a decision. (iii) The requirements for performing identification by large-scale exhaustive database search, as opposed to mere verification by comparison against a single template. (iv) Scenarios for Biometric Key Cryptography (the use of biometrics for encryption of messages). These issues are considered here in abstract form, but where appropriate, the particular example of iris recognition is used as an illustration. A unifying theme of all four sets of issues is the role of combinatorial complexity, and its measurement, in determining the potential decisiveness of biometric decision making.
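Point (i) can be sketched with the binomial-fit idea Daugman uses for iris codes: if normalized Hamming distances between unrelated templates have mean p and variance v, they behave like a binomial fraction with N = p(1−p)/v effectively independent bits. The simulation below uses our own toy random templates, not iris data, and recovers N for truly independent bits:

```python
import random

def degrees_of_freedom(hds):
    """Effective number of independent bits N = p(1-p)/v, where p and v
    are the mean and variance of the normalized Hamming distances
    between unrelated templates."""
    n = len(hds)
    p = sum(hds) / n
    v = sum((d - p) ** 2 for d in hds) / n
    return p * (1.0 - p) / v

random.seed(1)
N = 200          # true number of independent bits per template
hds = []
for _ in range(3000):
    a = [random.random() < 0.5 for _ in range(N)]
    b = [random.random() < 0.5 for _ in range(N)]
    hds.append(sum(x != y for x, y in zip(a, b)) / N)

df = degrees_of_freedom(hds)   # close to the true N = 200
```

For real biometric templates the bits are correlated, so the estimated N falls well below the raw template length; that shortfall is exactly the randomness/uniqueness measurement the report discusses.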
The dynamics of choice among multiple alternatives
 Journal of Mathematical Psychology
, 2006
Abstract

Cited by 29 (4 self)
We consider neurally-based models for decision-making in the presence of noisy incoming data. The two-alternative forced-choice task has been extensively studied, and in that case it is known that mutually-inhibited leaky integrators in which leakage and inhibition balance can closely approximate a drift-diffusion process that is the continuum limit of the optimal sequential probability ratio test (SPRT). Here we study the performance of neural integrators in n ≥ 2 alternative choice tasks and relate them to a multi-hypothesis sequential probability ratio test (MSPRT) that is asymptotically optimal in the limit of vanishing error rates. While a simple race model can implement this ‘max-vs-next’ MSPRT, it requires an additional computational layer, while absolute threshold crossing tests do not require such a layer. Race models with absolute thresholds perform relatively poorly, but we show that a balanced leaky accumulator model with an absolute crossing criterion can approximate a ‘max-vs-ave’ test that is intermediate in performance between the absolute and max-vs-next tests. We consider free and fixed time response protocols, and show that the resulting mean reaction times under the former and decision times for fixed accuracy under the latter obey versions of Hick’s law in the low error rate range, and we interpret this in terms of information gained. Specifically, we derive relationships of the forms log(n − 1), log(n), or log(n + 1) depending on error rates, signal-to-noise ratio, and the test itself. We focus on linearized models, but also consider nonlinear effects of neural activities (firing rates) that are bounded below and show how they modify Hick’s law.

KEYWORDS: leaky accumulator, drift-diffusion model, neural network, Hick’s law, multi-hypothesis sequential test, sequential ratio test.
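A minimal race model with absolute thresholds, the variant the abstract reports performs relatively poorly, can be simulated directly. This is our own Euler discretization with illustrative drift, noise, and threshold values; it omits the leak and mutual inhibition of the balanced accumulator model:

```python
import random

def race_trial(n, drift=0.2, noise=1.0, theta=3.0, dt=0.01, rng=random):
    """One trial of a race model with absolute thresholds: n independent
    accumulators integrate noisy evidence; unit 0 receives the true
    drift, the others drift at zero; the first accumulator to reach
    theta determines the choice and the reaction time."""
    x = [0.0] * n
    t = 0.0
    while True:
        for i in range(n):
            mu = drift if i == 0 else 0.0
            x[i] += mu * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
        for i in range(n):
            if x[i] >= theta:
                return i, t

# With the noise switched off the correct unit wins deterministically,
# crossing at roughly t = theta / drift:
choice, rt = race_trial(3, drift=0.5, noise=0.0)
```

With noise on, accuracy and mean reaction time can be estimated by Monte Carlo over many trials, and the growth of mean RT with n compared against the Hick's-law forms the abstract derives.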
Diagnosis And Communication In Distributed Systems
 In Proceedings of the International Workshop on Discrete Event Systems
, 1998
Abstract

Cited by 24 (0 self)
This paper discusses diagnosis problems in distributed systems within the context of a language-theoretic discrete-event formalism. A distributed system is seen as a system with multiple spatially separated sites, with each site having a diagnoser that observes some of the events generated by the system and diagnoses the faults associated with the site. We allow the diagnosers to share information by sending messages to each other. The existence and synthesis of diagnosers is investigated. The formulation and results are motivated by the diagnosis of failures in a wireless LAN.

1 Introduction

We are interested in understanding the design of diagnostics for distributed systems. This theoretical work is motivated by our experience with the design of distributed diagnostics for coordinating vehicle systems [5, 10] and wireless local area networks [3, 6]. These systems are comprised of spatially separated sites (e.g., vehicles or radios) of semi-autonomous activity. Since these systems op...
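A diagnoser of the kind described can be sketched as a belief-set construction: track every (state, fault-flag) pair consistent with the observations so far, closing under unobservable events. The automaton below is a made-up example of ours, not one from the paper; event 'f' is an unobservable fault:

```python
# Toy discrete-event system: events 'a', 'b' observable; 'f' is an
# unobservable fault event (hypothetical example for illustration).
trans = {
    0: {'a': 1, 'f': 2},
    1: {'b': 0},
    2: {'a': 3},
    3: {'a': 3},
}
OBSERVABLE = {'a', 'b'}

def unobservable_closure(beliefs):
    """Add every (state, faulty) pair reachable via unobservable events."""
    frontier = set(beliefs)
    while frontier:
        nxt = set()
        for state, faulty in frontier:
            for e, s2 in trans.get(state, {}).items():
                if e not in OBSERVABLE:
                    pair = (s2, faulty or e == 'f')
                    if pair not in beliefs:
                        beliefs.add(pair)
                        nxt.add(pair)
        frontier = nxt
    return beliefs

def observe(beliefs, event):
    """Advance the belief set on one observable event."""
    beliefs = unobservable_closure(set(beliefs))
    moved = {(trans[s][event], f) for s, f in beliefs
             if event in trans.get(s, {})}
    return unobservable_closure(moved)

# From state 0, observing 'a' is ambiguous: it may have occurred before
# or after the silent fault 'f', so both fault statuses survive:
b = observe({(0, False)}, 'a')
```

A second 'a' is only possible after the fault (state 3), so `observe(b, 'a')` collapses the belief set to fault-certain, which is precisely the diagnosability question the paper studies in its distributed, message-passing setting.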
Dynamical Modeling and Multi-Experiment Fitting with PottersWheel – Supplement
, 2008
Abstract

Cited by 19 (5 self)
This supplement provides detailed information about the functionalities of the PottersWheel toolbox as described in the main text. For further information please use the