Results 11 – 20 of 187
Trial-to-trial variability and its effect on time-varying dependence between two neurons
 J. Neurophysiology
, 2005
Abstract

Cited by 14 (7 self)
The joint peristimulus time histogram (JPSTH) and cross-correlogram provide a visual representation of correlated activity for a pair of neurons, and the way this activity may increase or decrease over time. In a companion paper (Cai et al. 2004a) we showed how a Bootstrap evaluation of the peaks in the smoothed diagonals of the JPSTH may be used to establish the likely validity of apparent time-varying correlation. As noted by Brody (1999a,b) and Ben-Shaul et al. (2001), trial-to-trial variation can confound correlation and synchrony effects. In this paper we elaborate on that observation, and present a method of estimating the time-dependent trial-to-trial variation in spike trains that may exceed the natural variation displayed by Poisson and non-Poisson point processes. The statistical problem is somewhat subtle because relatively few spikes per trial are available for estimating a firing-rate function that fluctuates over time. The method developed here uses principal components of the trial-to-trial variability in firing-rate functions to obtain a small number of parameters (typically two or three) that characterize the deviation of each trial's firing-rate function from the across-trial average firing rate, represented by the ...
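The principal-components idea in this abstract can be illustrated on synthetic data; the trial counts, the two latent factors (gain and offset), and all settings below are assumed toy choices, not the authors' method or code:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bins = 60, 100
t = np.linspace(0, 1, n_bins)

# Each trial's firing-rate curve = shared mean + low-dimensional deviation + noise.
mean_rate = 20 + 10 * np.sin(2 * np.pi * t)
gain = rng.normal(0, 0.2, n_trials)           # trial-specific gain (latent factor 1)
shift = rng.normal(0, 2.0, n_trials)          # trial-specific offset (latent factor 2)
rates = (mean_rate
         + np.outer(gain, mean_rate - mean_rate.mean())
         + shift[:, None]
         + rng.normal(0, 1.0, (n_trials, n_bins)))

# Principal components of the deviations from the across-trial average rate.
deviations = rates - rates.mean(axis=0)
U, s, Vt = np.linalg.svd(deviations, full_matrices=False)
var_explained = s**2 / np.sum(s**2)

# A small number of components captures most trial-to-trial variation;
# each trial is summarized by its scores on those components.
scores = deviations @ Vt[:2].T                # shape (n_trials, 2)
```

With only two structured sources of variation in the simulation, the first two components dominate, and each trial's pair of scores plays the role of the paper's two-to-three-parameter summary.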
Quadruped robot obstacle negotiation via reinforcement learning
 in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA)
, 2006
Abstract

Cited by 12 (2 self)
Abstract — Legged robots can, in principle, traverse a large variety of obstacles and terrains. In this paper, we describe a successful application of reinforcement learning to the problem of negotiating obstacles with a quadruped robot. Our algorithm is based on a two-level hierarchical decomposition of the task, in which the high-level controller selects the sequence of foot-placement positions, and the low-level controller generates the continuous motions to move each foot to the specified positions. The high-level controller uses an estimate of the value function to guide its search; this estimate is learned partially from supervised data. The low-level controller is obtained via policy search. We demonstrate that our robot can successfully climb over a variety of obstacles which were not seen at training time.
Non-Gaussian conditional linear AR(1) models
 Australian and New Zealand Journal of Statistics
, 2000
Abstract

Cited by 11 (3 self)
Abstract: We give a general formulation of a non-Gaussian conditional linear AR(1) model subsuming most of the non-Gaussian AR(1) models that have appeared in the literature. We derive some general results giving properties for the stationary process mean, variance and correlation structure, and conditions for stationarity. These results highlight similarities and differences with the Gaussian AR(1) model, and unify many separate results appearing in the literature. Examples illustrate the wide range of properties that can appear under the conditional linear autoregressive assumption. These results are used in analysing three real data sets, illustrating general methods of estimation, model diagnostics and model selection. In particular, we show that the theoretical results can be used to develop diagnostics for deciding if a time series can be modelled by some linear autoregressive model, and for selecting among several candidate models.
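As a hedged illustration of a conditional linear AR(1) model outside the Gaussian family, the sketch below simulates a chain with a Poisson conditional distribution; the parameter values are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Conditional linear AR(1) with a Poisson conditional distribution:
# X_t | X_{t-1} ~ Poisson(a * X_{t-1} + b), so E[X_t | X_{t-1}] = a * X_{t-1} + b.
a, b, n = 0.5, 2.0, 200_000
x = np.empty(n)
x[0] = b / (1 - a)                       # start near the stationary mean
for t in range(1, n):
    x[t] = rng.poisson(a * x[t - 1] + b)

# Stationary moments implied by the linear conditional mean:
# mean = b / (1 - a) and lag-one autocorrelation = a, as for the Gaussian AR(1).
stationary_mean = b / (1 - a)            # = 4.0 here
lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
```

The linear conditional mean gives the same stationary mean b/(1-a) and lag-one autocorrelation a as the Gaussian AR(1), even though the marginal distribution is integer-valued and far from normal.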
Randomization Inference with Natural Experiments: An Analysis of Ballot Effects
 in the 2003 California Recall Election.” Journal of the American Statistical Association 101:888–900
, 2006
Abstract

Cited by 11 (2 self)
Since the 2000 U.S. Presidential election, social scientists have rediscovered a long tradition of research that investigates the effects of ballot format on voting. Using a new dataset collected by the New York Times, we investigate the causal effect of being listed on the first ballot page in the 2003 California gubernatorial recall election. California law mandates a unique randomization procedure of ballot order that, when appropriately modeled, can be used to approximate a classical randomized experiment in a real-world setting. We apply (nonparametric) randomization inference based on Fisher's exact test, which directly incorporates the actual randomization procedure and yields accurate confidence intervals. Our results suggest that over forty percent of the minor candidates gained more votes when listed on the first page of the ballot, while there is no significant effect for the top two candidates. We also investigate how randomization inference differs from conventional estimators that do not fully incorporate California's complex treatment assignment mechanism. The results indicate appreciable differences between the two approaches.
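Fisher-style randomization inference can be sketched as a permutation test. The data below are fabricated for illustration, and the simple random assignment stands in for California's more complex rotation procedure:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical vote shares: 40 minor candidates, half randomly listed on page 1.
n = 40
first_page = rng.permutation(np.array([1] * 20 + [0] * 20))
effect = 0.5                                  # assumed true first-page bonus
votes = 2.0 + effect * first_page + rng.normal(0, 0.4, n)

observed = votes[first_page == 1].mean() - votes[first_page == 0].mean()

# Fisher randomization test: re-randomize the assignment under the sharp null
# (no effect for any candidate) and build the statistic's null distribution.
draws = 10_000
null = np.empty(draws)
for i in range(draws):
    z = rng.permutation(first_page)
    null[i] = votes[z == 1].mean() - votes[z == 0].mean()

p_value = np.mean(np.abs(null) >= abs(observed))
```

The key feature the abstract emphasizes is that the re-randomization step should replay the actual assignment mechanism; here it is a simple permutation only because the toy assignment was a simple permutation.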
Improved semiparametric time series models of air pollution and mortality
 J. Am. Statist. Ass
, 2004
Abstract

Cited by 9 (2 self)
In 2002, methodological issues around time series analyses of air pollution and health attracted the attention of the scientific community, policy makers, the press, and the diverse stakeholders concerned with air pollution. As the Environmental Protection Agency (EPA) was finalizing its most recent review of epidemiological evidence on particulate matter air pollution (PM), statisticians and epidemiologists found that the S-Plus implementation of Generalized Additive Models (GAM) can overestimate effects of air pollution and understate statistical uncertainty in time series studies of air pollution and health. This discovery delayed the completion of the PM Criteria Document prepared as part of the review of the U.S. National Ambient Air Quality Standard (NAAQS), as the time-series findings were a critical component of the evidence. In addition, it raised concerns about the adequacy of current model formulations and their software implementations. In this paper we provide improvements in semiparametric regression directly relevant to risk estimation in time series studies of air pollution. First, we introduce a closed-form estimate of the asymptotically exact covariance matrix of the linear component of a GAM. To ease the implementation of these calculations, we develop the S package gam.exact, an extended version of gam.
Nonlinear methods for multivariate statistical calibration and their use in palaeoecology: a comparison of inverse (k-nearest neighbours, partial least squares and weighted averaging partial least squares) and classical approaches
 Chemometrics and Intelligent Laboratory Systems
, 1995
Abstract

Cited by 9 (0 self)
and their use in palaeoecology: a comparison of inverse (k-nearest neighbours, partial least squares and weighted averaging partial least squares) and classical approaches.
Identifying Quantitative Trait Loci in Experimental Crosses
, 1997
Abstract

Cited by 9 (2 self)
Identifying quantitative trait loci in experimental crosses by Karl William Broman Doctor of Philosophy in Statistics University of California, Berkeley Professor Terence P. Speed, Chair Identifying the genetic loci responsible for variation in traits which are quantitative in nature (such as the yield from an agricultural crop or the number of abdominal bristles on a fruit fly) is a problem of great importance to biologists. The number and effects of such loci help us to understand the biochemical basis of these traits, and of their evolution in populations over time. Moreover, knowledge of these loci may aid in designing selection experiments to improve the traits. We focus on data from a large experimental cross. The usual methods for analyzing such data use multiple tests of hypotheses. We feel the problem is best viewed as one of model selection. After a brief review of the major methods in this area, we discuss the use of model selection to identify quantitative trait loci. Forwa...
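Viewing QTL mapping as model selection can be sketched with forward selection under BIC on a toy cross; the marker counts, effect sizes, and the BIC criterion below are illustrative assumptions, not Broman's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy cross: 200 individuals, 30 candidate marker genotypes (0/1), 2 true QTL
# (markers 3 and 17) affecting the phenotype.
n, m = 200, 30
G = rng.integers(0, 2, (n, m)).astype(float)
pheno = 1.0 * G[:, 3] + 0.8 * G[:, 17] + rng.normal(0, 1.0, n)

def bic(y, X):
    # Least-squares fit with intercept; BIC = n*log(RSS/n) + k*log(n).
    Xd = np.column_stack([np.ones(len(y)), X]) if X is not None else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    rss = np.sum((y - Xd @ beta) ** 2)
    return len(y) * np.log(rss / len(y)) + Xd.shape[1] * np.log(len(y))

# Forward selection: add the marker that most improves BIC; stop when none does.
selected = []
current = bic(pheno, None)
while True:
    scores = {j: bic(pheno, G[:, selected + [j]]) for j in range(m) if j not in selected}
    best = min(scores, key=scores.get)
    if scores[best] >= current:
        break
    selected.append(best)
    current = scores[best]
```

This replaces the multiple-hypothesis-testing view with a single criterion-based search, which is the reframing the abstract argues for (the thesis itself develops more refined selection criteria than plain BIC).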
Applications of hybrid Monte Carlo to Bayesian generalized linear models: quasi-complete separation and neural networks
 Journal of Computational and Graphical Statistics
, 1999
Abstract

Cited by 8 (0 self)
The "leapfrog" hybrid Monte Carlo algorithm is a simple and effective MCMC method for fitting Bayesian generalized linear models with canonical link. The algorithm leads to large trajectories over the posterior and a rapidly mixing Markov chain, having superior performance over conventional methods in difficult problems like logistic regression with quasi-complete separation. This method offers a very attractive solution to this common problem, providing a method for identifying datasets that are quasi-completely separated, and for identifying the covariates that are at the root of the problem. The method is also quite successful in fitting generalized linear models in which the link function is extended to include a feedforward neural network. With a large number of hidden units, however, or when the dataset becomes large, the computations required in calculating the gradient in each trajectory can become very demanding. In this case, it is best to mix the algorithm with multivariate random walk Metropolis-Hastings, which entails very little additional programming work.
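A minimal sketch of leapfrog hybrid Monte Carlo for a Bayesian logistic regression (canonical link); the step size, trajectory length, prior variance, and toy data are assumptions, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy data: y ~ Bernoulli(sigmoid(X @ beta)), with a N(0, s2 * I) prior on beta.
X = np.column_stack([np.ones(100), rng.normal(size=100)])
true_beta = np.array([-0.5, 1.0])
y = rng.random(100) < 1 / (1 + np.exp(-X @ true_beta))
s2 = 25.0

def neg_log_post(b):
    eta = X @ b
    return np.sum(np.logaddexp(0.0, eta) - y * eta) + b @ b / (2 * s2)

def grad(b):
    p = 1 / (1 + np.exp(-X @ b))
    return X.T @ (p - y) + b / s2

def hmc_step(b, eps=0.05, L=20):
    """One leapfrog trajectory followed by a Metropolis accept/reject."""
    r = rng.normal(size=b.size)
    b_new = b.copy()
    r_new = r - 0.5 * eps * grad(b_new)       # initial half step for momentum
    for _ in range(L):
        b_new = b_new + eps * r_new           # full position step
        r_new = r_new - eps * grad(b_new)     # full momentum step
    r_new = r_new + 0.5 * eps * grad(b_new)   # undo the extra half step
    log_accept = (neg_log_post(b) + 0.5 * r @ r) - (neg_log_post(b_new) + 0.5 * r_new @ r_new)
    return b_new if np.log(rng.random()) < log_accept else b

samples = []
b = np.zeros(2)
for i in range(600):
    b = hmc_step(b)
    if i >= 100:                              # discard burn-in
        samples.append(b)
samples = np.asarray(samples)
```

The long deterministic trajectories are what produce the large moves and rapid mixing the abstract describes, compared with a random-walk proposal of the same acceptance rate.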
A SYSTEMATIC RELATIONSHIP BETWEEN MINIMUM BIAS AND GENERALIZED LINEAR MODELS
Abstract

Cited by 7 (0 self)
The minimum bias method is a natural tool to use in parameterizing classification ratemaking plans. Such plans build rates for a large, heterogeneous group of insureds using arithmetic operations to combine a small set of parameters in many different ways. Since the arithmetic structure of a class plan is usually not wholly appropriate, rates for some individual classification cells may be biased. Classification ratemaking therefore requires measures of bias, and minimum bias is a natural objective to use when determining rates. This paper introduces a family of linear bias measures and shows how classification rates with minimum (zero) linear bias for each class are the same as those obtained by solving a related generalized linear model using maximum likelihood. The examples considered include the standard additive and multiplicative models used by the Insurance Services Office (ISO) for private passenger auto ratemaking and general liability ratemaking (see ISO [11] and Graves and Castillo [8], respectively). Knowing how to associate a generalized linear model with a linear bias function is useful for several reasons. It makes the underlying statistical assumptions explicit so the user can judge their appropriateness for a given application. It provides an alternative method to solve for the model parameters, which is computationally more efficient than using the minimum bias iterative method. In fact not all linear bias functions allow an iterative solution; in these cases, solving a generalized linear model using maximum likelihood provides an ef...
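The iterative minimum bias method the abstract contrasts with GLM fitting can be sketched for a two-factor multiplicative plan; the cell data below are invented for illustration:

```python
import numpy as np

# Hypothetical 2x3 class plan: observed pure premiums and exposure weights by cell.
obs = np.array([[100., 150., 210.],
                [130., 200., 270.]])
w = np.array([[50., 40., 10.],
              [30., 60., 25.]])

# Multiplicative model: rate(i, j) = x_i * y_j. Each update zeroes the
# exposure-weighted bias along one dimension; iterate until both are zero.
x = np.ones(2)
y = obs.mean(axis=0)
for _ in range(200):
    x = (w * obs).sum(axis=1) / (w * y[None, :]).sum(axis=1)
    y = (w * obs).sum(axis=0) / (w * x[:, None]).sum(axis=0)

fitted = np.outer(x, y)
# At convergence, linear bias vanishes in every row and column.
row_bias = (w * (obs - fitted)).sum(axis=1)
col_bias = (w * (obs - fitted)).sum(axis=0)
```

The paper's point is that this fixed point coincides with the maximum-likelihood solution of a corresponding GLM, which can be solved directly and more efficiently than by iterating.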
TapPrints: Your Finger Taps Have Fingerprints
Abstract

Cited by 7 (0 self)
This paper shows that the location of screen taps on modern smartphones and tablets can be identified from accelerometer and gyroscope readings. Our findings have serious implications, as we demonstrate that an attacker can launch a background process on commodity smartphones and tablets, and silently monitor the user’s inputs, such as keyboard presses and icon taps. While precise tap detection is nontrivial, requiring machine learning algorithms to identify fingerprints of closely spaced keys, sensitive sensors on modern devices aid the process. We present TapPrints, a framework for inferring the location of taps on mobile device touchscreens using motion sensor data combined with machine learning analysis. By running tests on two different off-the-shelf smartphones and a tablet computer we show that identifying tap locations on the screen and inferring English letters could be done with up to 90% and 80% accuracy, respectively. By optimizing the core tap detection capability with additional information, such as contextual priors, we are able to further magnify the core threat.
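The core inference step, mapping motion-sensor features to tap locations with a learned classifier, can be caricatured on synthetic features; the feature model and nearest-centroid classifier below are stand-in assumptions, far simpler than the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-in for motion data: each tap yields a feature vector of
# accelerometer/gyroscope statistics; taps in different screen regions shift it.
regions, taps, d = 4, 200, 6
centers = rng.normal(0, 1.0, (regions, d))        # per-region mean signature
labels = rng.integers(0, regions, taps)
features = centers[labels] + rng.normal(0, 0.4, (taps, d))

# Train/test split, then classify held-out taps by nearest class centroid.
train, test = np.arange(150), np.arange(150, taps)
cents = np.array([features[train][labels[train] == k].mean(axis=0)
                  for k in range(regions)])
dists = np.linalg.norm(features[test][:, None, :] - cents[None, :, :], axis=2)
pred = dists.argmin(axis=1)
accuracy = (pred == labels[test]).mean()
```

The point of the sketch is only that taps leave separable motion signatures; the paper's reported 90%/80% figures come from real sensor traces and stronger classifiers, not from this toy model.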