Results 1–10 of 14
Adaptive psychophysical procedures
Perception (Suppl.), 1989
Abstract

Cited by 66 (0 self)
Improvements in measuring thresholds, or points on a psychometric function, have advanced the field of psychophysics in the last 30 years. The arrival of laboratory computers allowed the introduction of adaptive procedures, where the presentation of the next stimulus depends on previous responses of the subject. Unfortunately, these procedures present themselves in a bewildering variety, though some of them differ only slightly. Even someone familiar with several methods cannot easily name the differences, or decide which method would be best suited for a particular application. This review tries to illuminate the historical background of adaptive procedures, explain their differences and similarities, and provide criteria for choosing among the various techniques.
Keywords: psychometric functions; psychophysical threshold; binary responses; sequential estimation; efficiency; yes-no methods; forced-choice methods
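One of the simplest adaptive procedures of the kind this review surveys is the 1-up/1-down staircase, which converges on the 50% point of the psychometric function. The sketch below is illustrative only: the deterministic observer model (`level >= true_threshold`) and all parameter values are assumptions chosen for demonstration, not taken from the review.

```python
def staircase(true_threshold, start=1.0, step=0.1, n_reversals=8):
    """1-up/1-down staircase: lower the stimulus after a detection,
    raise it after a miss; average the reversal levels as the estimate."""
    level = start
    reversals, last_resp = [], None
    while len(reversals) < n_reversals:
        detected = level >= true_threshold      # toy deterministic observer
        if last_resp is not None and detected != last_resp:
            reversals.append(level)             # the response flipped: a reversal
        last_resp = detected
        level += -step if detected else step    # step down on "yes", up on "no"
    return sum(reversals) / len(reversals)
```

With a noisy observer one would average more reversals, or switch to a transformed rule (e.g. 2-down/1-up) to target a different point on the psychometric function.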
Can a two-stage procedure enjoy second-order properties? Statist., 1996
Abstract

Cited by 6 (1 self)
SUMMARY. We first consider the classical fixed-width confidence interval estimation problem for the mean µ of a normal population whose variance σ² is unknown, but a particular application scenario guides the experimenter to assume that σ > σL, where σL (> 0) is known. The seminal two-stage methodology of Stein (1945, 1949), originally proposed when σ (> 0) is completely unknown, obviously needs major revisions, since we wish to incorporate such added partial information regarding σ in the determination of the final sample size. In the case of completely unknown σ, Stein's (1945, 1949) two-stage procedure is known to enjoy the consistency property, but it is not even first-order efficient. In the case when σ > σL (> 0), the revised two-stage procedure is shown to enjoy all the usual second-order properties together with the consistency property. As a follow-up, we include a simulation exercise in the interval estimation scenario. The minimum risk point estimation problem for µ is also discussed briefly in the same light.
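The flavor of a Stein-style two-stage rule can be sketched as follows: take a pilot sample, estimate σ, and size the second stage so the confidence interval attains the target half-width. This is a simplification, not the paper's revised procedure: it uses the normal quantile where Stein's rule uses a t quantile, and it folds the bound σ > σL in only as a conservative `max`.

```python
import math
from statistics import NormalDist, stdev

def stein_two_stage(pilot, half_width, alpha=0.05, sigma_lower=None):
    """Total sample size for a fixed-width CI for a normal mean (sketch)."""
    z = NormalDist().inv_cdf(1 - alpha / 2)   # normal quantile stands in for the t quantile
    s = stdev(pilot)                          # first-stage estimate of sigma
    if sigma_lower is not None:
        s = max(s, sigma_lower)               # crude use of the bound sigma > sigma_L
    n = math.ceil((z * s / half_width) ** 2)  # n making the CI half-width <= target
    return max(len(pilot), n)                 # never fewer than the pilot observations
```

In the paper's setting, knowing σL mainly lets the experimenter choose a larger first-stage size up front, which is what drives the second-order properties.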
Sequential tests of multiple hypotheses controlling type I and II familywise error rates. Under review, 2013
Abstract

Cited by 4 (2 self)
We propose a general and flexible procedure for testing multiple hypotheses about sequential (or streaming) data that simultaneously controls both the false discovery rate (FDR) and false nondiscovery rate (FNR) under minimal assumptions about the data streams, which may differ in distribution and dimension and may be dependent. All that is needed is a test statistic for each data stream that controls the conventional type I and II error probabilities; no information or assumptions are required about the joint distribution of the statistics or data streams. The procedure can be used with sequential, group-sequential, truncated, or other sampling schemes. The procedure is a natural extension of Benjamini and Hochberg's (1995) widely used fixed-sample-size procedure to the domain of sequential data, with the added benefit of the simultaneous FDR and FNR control that sequential sampling affords. We prove the procedure's error control and give some tips for implementation in commonly encountered testing situations.
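The abstract does not spell out the sequential procedure itself, but the fixed-sample Benjamini–Hochberg step-up it extends is simple to state: sort the p-values and reject the k smallest, where k is the largest rank i with p_(i) ≤ i·q/m. A minimal sketch of that baseline:

```python
def benjamini_hochberg(pvals, q=0.05):
    """BH step-up at FDR level q: return indices of rejected hypotheses."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p-value
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank * q / m:                  # step-up condition p_(i) <= i*q/m
            k = rank                                  # keep the largest qualifying rank
    return sorted(order[:k])                          # reject the k smallest p-values
```

The sequential version replaces each p-value with a stream-specific sequential test statistic; the step-up logic above is only the fixed-sample reference point.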
Generalized Likelihood Ratio Statistics and Uncertainty Adjustments in Efficient Adaptive Design of Clinical Trials
Abstract

Cited by 4 (3 self)
A new approach to adaptive design of clinical trials is proposed in a general multiparameter exponential family setting, based on generalized likelihood ratio statistics and optimal sequential testing theory. These designs are easy to implement, maintain the prescribed Type I error probability, and are asymptotically efficient. Practical issues involved in clinical trials allowing mid-course adaptation and the large literature on this subject are discussed, and comparisons between the proposed and existing designs are presented in extensive simulation studies of their finite-sample performance, measured in terms of the expected sample size and power functions.
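A generalized likelihood ratio statistic in the simplest exponential family, Bernoulli(θ), takes the form n·KL(θ̂, θ₀), where θ̂ is the MLE after n observations. The toy sequential test below illustrates that shape only; the stopping boundary and rejection rule are illustrative assumptions, not the trial designs proposed in the paper.

```python
import math

def bernoulli_kl(p, q):
    """KL divergence KL(Bern(p) || Bern(q)), clamped away from 0 and 1."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def sequential_glr(observations, theta0, threshold):
    """Toy sequential GLR test of H0: theta = theta0 for Bernoulli data.
    The GLR statistic after n observations is n * KL(theta_hat, theta0);
    stop and reject the first time it crosses `threshold`."""
    successes = 0
    for n, x in enumerate(observations, start=1):
        successes += x
        theta_hat = successes / n
        if n * bernoulli_kl(theta_hat, theta0) >= threshold:
            return n, True           # stopped early: reject H0
    return len(observations), False  # data exhausted: do not reject
```

Choosing `threshold` ≈ log(1/α) gives a rough Type I error control of level α in this kind of test; the paper's designs calibrate such boundaries exactly.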
George B. Dantzig (1914–2005)
Abstract
The final test of a theory is its capacity to solve the problems which originated it. This work is concerned with the theory and solution of linear inequality systems.... The viewpoint of this work is constructive. It reflects the beginning of a theory sufficiently powerful to cope with some of the challenging decision problems upon which it was founded. So says George B. Dantzig in the preface to his book, Linear Programming and Extensions.
IN STATISTICAL DECISIONS
Abstract
problem we start by making assumptions concerning the class of distributions, the loss function, and other data of the problem. Usually these assumptions only approximate the actual conditions, either because the latter are unknown, or in
Batched Bandit Problems. Submitted to the Annals of Statistics
Abstract
Motivated by practical applications, chiefly clinical trials, we study the regret achievable for stochastic bandits under the constraint that the employed policy must split trials into a small number of batches. We propose a simple policy that operates under this constraint and show that a very small number of batches gives close to minimax optimal regret bounds. As a byproduct, we derive optimal policies with low switching cost for stochastic bandits.
1. Introduction. All clinical trials are run in batches: groups of patients are treated simultaneously, with the data from each batch influencing the design of the next. Despite the fact that this structure is codified into law in the case of drug approval, it has received scant attention from statisticians. What can be achieved given the small number of batches that is
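A batched policy can be illustrated with a two-armed explore-then-commit sketch: early batches split pulls evenly across the arms, an arm is dropped once its empirical mean trails by a confidence margin, and later batches commit to the survivor. Everything below is a toy, not the paper's minimax-optimal policy: rewards are deterministic (each pull of arm `a` yields `arm_means[a]` rather than a random draw) and the elimination margin is an illustrative choice.

```python
import math

def batched_etc(arm_means, batch_sizes):
    """Batched explore-then-commit for a two-armed bandit (toy sketch)."""
    counts = [0, 0]
    sums = [0.0, 0.0]
    active = [0, 1]
    horizon = sum(batch_sizes)
    for size in batch_sizes:
        if len(active) == 1:                     # committed: pull the survivor
            a = active[0]
            sums[a] += size * arm_means[a]
            counts[a] += size
            continue
        for a in active:                         # split the batch evenly
            sums[a] += (size // 2) * arm_means[a]
            counts[a] += size // 2
        means = [sums[a] / counts[a] for a in active]
        margin = math.sqrt(math.log(horizon) / counts[active[0]])
        if abs(means[0] - means[1]) > margin:    # drop the clearly worse arm
            active = [active[means.index(max(means))]]
    best = max(range(2), key=lambda a: sums[a] / counts[a] if counts[a] else 0.0)
    return best, counts
```

The key point of the batched setting is that elimination decisions can only happen at batch boundaries, so the number and sizes of the batches, not the total sample size alone, govern the achievable regret.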