Results 1–10 of 86
Maximum likelihood estimation of a stochastic integrate-and-fire neural model
NIPS, 2003
Abstract

Cited by 59 (20 self)
We examine a cascade encoding model for neural response in which a linear filtering stage is followed by a noisy, leaky, integrate-and-fire spike generation mechanism. This model provides a biophysically more realistic alternative to models based on Poisson (memoryless) spike generation, and can effectively reproduce a variety of spiking behaviors seen in vivo. We describe the maximum likelihood estimator for the model parameters, given only extracellular spike train responses (not intracellular voltage data). Specifically, we prove that the log likelihood function is concave and thus has an essentially unique global maximum that can be found using gradient ascent techniques. We develop an efficient algorithm for computing the maximum likelihood solution, demonstrate the effectiveness of the resulting estimator with numerical simulations, and discuss a method of testing the model's validity using time-rescaling and density evolution techniques. Paninski et al., November 30, 2004
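The concavity result is what makes this estimator practical: a concave log-likelihood has no spurious local maxima, so plain gradient ascent is guaranteed to find the global MLE. A minimal sketch of that idea, using a homogeneous Poisson count model with invented data as a simple stand-in for the paper's integrate-and-fire likelihood:

```python
import math

counts = [2, 3, 1, 4, 2, 3, 2]   # invented spike counts per time bin

def log_lik(theta):
    # Poisson log-likelihood (up to a constant) with rate exp(theta);
    # it is concave in theta, so it has a single global maximum.
    lam = math.exp(theta)
    return sum(n * theta - lam for n in counts)

def grad(theta):
    lam = math.exp(theta)
    return sum(n - lam for n in counts)

theta = 0.0
for _ in range(2000):            # plain gradient ascent with a fixed step
    theta += 0.01 * grad(theta)

mle_rate = math.exp(theta)       # converges to the sample mean, the Poisson MLE
```

For this toy model the answer is known in closed form (the sample mean), which makes it easy to check that the ascent converged; the paper's contribution is proving the same concavity for the much richer integrate-and-fire likelihood.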
Unbiased recursive partitioning: A conditional inference framework
Journal of Computational and Graphical Statistics, 2006
Abstract

Cited by 44 (4 self)
Recursive binary partitioning is a popular tool for regression analysis. Two fundamental problems of the exhaustive search procedures usually applied to fit such models have been known for a long time: overfitting and a selection bias towards covariates with many possible splits or missing values. While pruning procedures are able to solve the overfitting problem, the variable selection bias still seriously affects the interpretability of tree-structured regression models. For some special cases unbiased procedures have been suggested, but these lack a common theoretical foundation. We propose a unified framework for recursive partitioning which embeds tree-structured regression models into a well-defined theory of conditional inference procedures. Stopping criteria based on multiple test procedures are implemented, and it is shown that the predictive performance of the resulting trees is as good as the performance of established exhaustive search procedures. It turns out that the partitions, and therefore the models, induced by the two approaches are structurally different, confirming the need for an unbiased variable selection. Moreover, it is shown that the prediction accuracy of trees with early stopping is equivalent to the prediction accuracy of pruned trees with unbiased variable selection. The methodology presented here is applicable to all kinds of regression problems, including nominal, ordinal, numeric, censored, and multivariate response variables, and arbitrary measurement scales of the covariates. Data from studies on glaucoma classification, node-positive breast cancer survival, and mammography experience are reanalyzed.
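The variable-selection step can be pictured as a battery of association tests with a multiplicity correction, with splitting stopped when nothing is significant. The sketch below is a deliberate simplification with invented data: it uses a plain correlation permutation test and a Bonferroni correction, whereas the paper's framework uses general linear statistics and conditional inference distributions.

```python
import math
import random

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def perm_pvalue(x, y, n_perm=500, seed=0):
    """Permutation p-value for the absolute correlation of x and y."""
    rng = random.Random(seed)
    observed = abs(pearson(x, y))
    hits = 0
    for _ in range(n_perm):
        yp = y[:]
        rng.shuffle(yp)
        if abs(pearson(x, yp)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

def select_split_variable(covariates, y, alpha=0.05):
    """Index of the most significant covariate, or None -> stop splitting."""
    pvals = [perm_pvalue(x, y, seed=i) for i, x in enumerate(covariates)]
    best = min(range(len(pvals)), key=pvals.__getitem__)
    if pvals[best] * len(pvals) > alpha:   # Bonferroni adjustment
        return None
    return best

y  = [1.0, 2.1, 2.9, 4.2, 5.1, 5.9, 7.2, 8.0]   # invented response
x1 = [1, 2, 3, 4, 5, 6, 7, 8]                    # strongly associated covariate
x2 = [5, 1, 4, 2, 8, 3, 7, 6]                    # noise covariate
chosen = select_split_variable([x1, x2], y)
```

Separating "which variable?" (a test, here) from "where to split?" is what removes the bias towards covariates with many candidate split points, and the stopping rule replaces pruning as the guard against overfitting.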
Methods for the Computation of Multivariate t-Probabilities
Computing Sciences and Statistics, 2000
Abstract

Cited by 39 (9 self)
This paper compares methods for the numerical computation of multivariate t-probabilities for hyper-rectangular integration regions. Methods based on acceptance-rejection, spherical-radial transformations, and separation-of-variables transformations are considered. Tests using randomly chosen problems show that the most efficient numerical methods use a transformation developed by Genz (1992) for multivariate normal probabilities. These methods allow moderately accurate multivariate t-probabilities to be quickly computed for problems with as many as twenty variables. Methods for the non-central multivariate t-distribution are also described. Key words: multivariate t-distribution, non-central distribution, numerical integration, statistical computation.
1 Introduction. A common problem in many statistics applications is the numerical computation of the multivariate t (MVT) distribution function (see Tong, 1990), defined by
T(a, b; Σ, ν) = Γ((ν+m)/2) / ( Γ(ν/2) √(|Σ| (νπ)^m) ) ∫_a^b (1 + xᵗΣ⁻¹x/ν)^(−(ν+m)/2) dx …
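For the multivariate normal case that the abstract says underlies the most efficient MVT methods, the Genz (1992) separation-of-variables idea conditions on one coordinate at a time and averages the resulting weights by Monte Carlo. A hedged sketch (normal case only, on an invented independent 2-d example where the answer factorizes exactly; the t case adds a radial variable that is omitted here):

```python
import random
import statistics

nd = statistics.NormalDist()

def mvn_prob(lower, upper, chol, n=5000, seed=1):
    """Monte Carlo estimate of P(lower <= X <= upper) for X ~ N(0, C C^T),
    with C the lower-triangular Cholesky factor of the covariance."""
    rng = random.Random(seed)
    m = len(lower)
    total = 0.0
    for _ in range(n):
        w, ys = 1.0, []
        for i in range(m):
            shift = sum(chol[i][j] * ys[j] for j in range(i))
            d = nd.cdf((lower[i] - shift) / chol[i][i])
            e = nd.cdf((upper[i] - shift) / chol[i][i])
            w *= e - d
            # sample the i-th coordinate from its truncated conditional
            # by mapping a uniform through the inverse normal CDF
            u = min(max(d + rng.random() * (e - d), 1e-12), 1.0 - 1e-12)
            ys.append(nd.inv_cdf(u))
        total += w
    return total / n

# Independent 2-d standard normal: the probability is (Phi(1) - Phi(-1))^2.
p = mvn_prob([-1.0, -1.0], [1.0, 1.0], [[1.0, 0.0], [0.0, 1.0]])
```

The transformation moves the integration region into the unit cube, where the integrand is smooth and bounded; that is also why quasi-Monte Carlo rules combine well with it.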
Methods for Approximating Integrals in Statistics with Special Emphasis on Bayesian Integration Problems
 Statistical Science
Abstract

Cited by 32 (4 self)
This paper is a survey of the major techniques and approaches available for the numerical approximation of integrals in statistics. We classify these into five broad categories: asymptotic methods, importance sampling, adaptive importance sampling, multiple quadrature, and Markov chain methods. Each method is discussed, giving an outline of the basic supporting theory and the particular features of the technique. Conclusions are drawn concerning the relative merits of the methods, based on the discussion and their application to three examples. The following broad recommendations are made. Asymptotic methods should only be considered in contexts where the integrand has a dominant peak with approximate ellipsoidal symmetry. Importance sampling, and preferably adaptive importance sampling, based on a multivariate Student distribution should be used instead of asymptotic methods in such a context. Multiple quadrature, and in particular subregion adaptive integration, is the algorithm of choice for...
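Importance sampling, one of the survey's five categories, fits in a few lines: sample from a heavier-tailed proposal and average the integrand-to-proposal ratio. The 1-d sketch below uses a Laplace proposal as a simple stand-in for the multivariate Student the survey recommends; the integrand, scale, and sample size are invented for illustration:

```python
import math
import random

rng = random.Random(7)

def integrand(x):
    return math.exp(-0.5 * x * x)     # integrates to sqrt(2*pi) over the real line

def laplace_pdf(x, scale=2.0):
    return math.exp(-abs(x) / scale) / (2.0 * scale)

def laplace_sample(scale=2.0):
    # Laplace draw: exponential magnitude with a random sign
    mag = rng.expovariate(1.0 / scale)
    return mag if rng.random() < 0.5 else -mag

n = 100000
est = sum(integrand(x) / laplace_pdf(x)
          for x in (laplace_sample() for _ in range(n))) / n
```

The proposal's tails are heavier than the integrand's, so the weights stay bounded; that is the same reason the survey prefers Student over normal proposals in the multivariate setting.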
Simultaneous Inference in General Parametric Models
, 2008
Abstract

Cited by 23 (3 self)
Simultaneous inference is a common problem in many areas of application. If multiple null hypotheses are tested simultaneously, the probability of erroneously rejecting at least one of them increases beyond the pre-specified significance level. Simultaneous inference procedures that adjust for multiplicity, and thus control the overall type I error rate, have to be used. In this paper we describe simultaneous inference procedures in general parametric models, where the experimental questions are specified through a linear combination of elemental model parameters. The framework described here is quite general and extends the canonical theory of multiple comparison procedures in ANOVA models to linear regression problems, generalized linear models, linear mixed effects models, the Cox model, robust linear models, etc. Several examples using a variety of different statistical models illustrate the breadth of the results. For the analyses we use the R add-on package multcomp, which provides a convenient interface to the general approach adopted here. Key words: multiple tests, multiple comparisons, simultaneous confidence intervals, adjusted p-values, multivariate normal distribution, robust statistics.
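The core of a single-step procedure of this kind can be sketched as follows (an illustration of the idea, not the multcomp API): each adjusted p-value is the probability that the maximum absolute statistic under the joint null exceeds the observed one. For simplicity the contrasts below are treated as independent and the statistics are invented; the real procedure uses the full multivariate normal correlation structure of the contrasts.

```python
import random
import statistics

nd = statistics.NormalDist()
rng = random.Random(3)

z_obs = [2.8, 1.1, 0.4]          # invented standardized test statistics
n_sim = 50000

def single_step_adjusted(z_obs):
    """Adjusted p-values P(max_j |Z_j| >= |z_k|), here under independence."""
    m = len(z_obs)
    exceed = [0] * m
    for _ in range(n_sim):
        zmax = max(abs(rng.gauss(0.0, 1.0)) for _ in range(m))
        for k, z in enumerate(z_obs):
            if zmax >= abs(z):
                exceed[k] += 1
    return [e / n_sim for e in exceed]

adj = single_step_adjusted(z_obs)
raw = [2.0 * (1.0 - nd.cdf(abs(z))) for z in z_obs]   # unadjusted p-values
```

Because the same simulated null distribution serves all contrasts, the family-wise error rate is controlled at the chosen level, and exploiting the actual correlation (as multcomp does) makes the adjustment less conservative than Bonferroni.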
On approximate graph colouring and MAX-k-CUT algorithms based on the θ-function
, 2002
Abstract

Cited by 17 (1 self)
The problem of colouring a k-colourable graph is well known to be NP-complete for k ≥ 3. The MAX-k-CUT approach to approximate k-colouring is to assign k colours to all of the vertices in polynomial time such that the fraction of 'defect edges' (with endpoints of the same colour) is provably small. The best known approximation was obtained by Frieze and Jerrum [9], using a semidefinite programming (SDP) relaxation which is related to the Lovász θ-function. In a related work, Karger et al. [18] devised approximation algorithms for colouring k-colourable graphs exactly in polynomial time with as few colours as possible. They also used an SDP relaxation related to the θ-function.
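The baseline these guarantees are measured against is easy to verify empirically: colouring every vertex uniformly at random makes each edge a defect edge with probability exactly 1/k, and the SDP-based roundings must beat this. A sketch on an invented random test graph (graph size and density are illustrative, not from the paper):

```python
import random

rng = random.Random(11)
n, k = 60, 3
# invented Erdos-Renyi-style test graph with ~20% edge density
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < 0.2]

def defect_fraction(colours):
    # fraction of edges whose endpoints received the same colour
    bad = sum(1 for i, j in edges if colours[i] == colours[j])
    return bad / len(edges)

trials = 1000
avg = sum(defect_fraction([rng.randrange(k) for _ in range(n)])
          for _ in range(trials)) / trials   # should hover around 1/k
```

The SDP relaxation replaces colours with unit vectors and rounds them geometrically, which is how Frieze and Jerrum push the defect fraction provably below this 1/k baseline on k-colourable inputs.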
What Affects the Accuracy of Quasi-Monte Carlo Quadrature?
Abstract

Cited by 13 (0 self)
Quasi-Monte Carlo quadrature methods have been used for several decades. Their accuracy ranges from excellent to poor, depending on the problem. This article discusses how quasi-Monte Carlo quadrature error can be assessed and the factors that influence it.
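One factor the question turns on can be seen already in one dimension: low-discrepancy points spread more evenly than pseudo-random ones, so smooth integrands are integrated far more accurately. A small sketch with the base-2 van der Corput sequence (the 1-d building block of Halton- and Sobol-type point sets); the integrand is an invented smooth test case:

```python
import math

def van_der_corput(i, base=2):
    """i-th point of the base-b van der Corput low-discrepancy sequence,
    obtained by mirroring the base-b digits of i about the radix point."""
    q, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, r = divmod(i, base)
        q += r / denom
    return q

n = 4096
f = lambda x: math.exp(x)   # smooth test integrand; its integral over [0, 1] is e - 1
qmc_estimate = sum(f(van_der_corput(i)) for i in range(1, n + 1)) / n
```

Plain Monte Carlo with the same n would typically be off by O(n^(-1/2)), roughly 1e-2 here, while the low-discrepancy estimate lands within about 1/n; the article's point is that this advantage depends on the smoothness and dimension of the problem.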
Numerical Computation Of Multivariate t-Probabilities With Application To Power Calculation Of Multiple Contrasts
, 1993
Bayesian Gaussian Process Classification with the EM-EP Algorithm
Abstract

Cited by 11 (1 self)
Gaussian process classifiers (GPCs) are Bayesian probabilistic kernel classifiers. In GPCs, the probability of belonging to a certain class at an input location is monotonically related to the value of some latent function at that location. Starting from a Gaussian process prior over this latent function, data are used to infer both the posterior over the latent function and the values of the hyperparameters that determine various aspects of the function. Recently, the expectation propagation (EP) approach has been proposed to infer the posterior over the latent function. Based on this work, we present an approximate EM algorithm, the EM-EP algorithm, to learn both the latent function and the hyperparameters. This algorithm is found to converge in practice and provides an efficient Bayesian framework for learning the hyperparameters of the kernel. A multi-class extension of the EM-EP algorithm for GPCs is also derived. In the experimental results, the EM-EP algorithms are as good as or better than other methods for GPCs or Support Vector Machines (SVMs) with cross-validation. Index terms: Gaussian process classification, Bayesian methods, kernel methods, expectation propagation, EM-EP algorithm.
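The abstract's first claim, the monotone link between the latent function and the class probability, can be made concrete with a probit link and a Bernoulli likelihood; the EM-EP updates themselves are beyond a short sketch. The latent values and labels below are invented for illustration:

```python
import math
import statistics

# Probit link: a monotone squashing of the latent value into a probability.
probit = statistics.NormalDist().cdf

def log_likelihood(latent, labels):
    """Bernoulli log-likelihood of +/-1 labels given latent values f(x_i):
    P(y_i = +1 | f_i) = probit(f_i), so P(y_i | f_i) = probit(y_i * f_i)."""
    return sum(math.log(probit(y * f)) for f, y in zip(latent, labels))

latent = [1.5, -0.3, 2.0, -1.2]   # invented latent function values f(x_i)
labels = [+1, -1, +1, +1]         # the last label disagrees with its latent sign
ll = log_likelihood(latent, labels)
```

Because this likelihood is not Gaussian in the latent values, the posterior over f is intractable; EP approximates it with a Gaussian, and the EM-EP algorithm alternates that approximation with hyperparameter updates.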
The asymptotic efficiency of randomized nets for quadrature
Math. Comp., 1999
Abstract

Cited by 7 (3 self)
An L2-type discrepancy arises in the average- and worst-case error analyses for multidimensional quadrature rules. This discrepancy is uniquely defined by K(x, y), which serves as the covariance kernel for the space of random functions in the average-case analysis and a reproducing kernel for the space of functions in the worst-case analysis. This article investigates the asymptotic order of the root mean square discrepancy for randomized (0, m, s)-nets in base b. For moderately smooth K(x, y) the discrepancy is O(N^(-1) [log N]^((s-1)/2)), and for K(x, y) with greater smoothness the discrepancy is O(N^(-3/2) [log N]^((s-1)/2)), where N = b^m is the number of points in the net. Numerical experiments indicate that the (t, m, s)-nets of Faure, Niederreiter, and Sobol′ do not necessarily attain the higher order of decay for sufficiently smooth kernels. However, Niederreiter nets may attain the higher order for kernels corresponding to spaces of periodic functions.
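The role of randomization can be illustrated in one dimension with a Cranley-Patterson random shift (a simpler randomization than the net scrambling studied here): shifting the grid {k/N} by a common uniform offset keeps its low-discrepancy structure, makes the estimator unbiased, and lets the root mean square error be measured empirically. The integrand is an invented smooth periodic test case, for which the shifted grid happens to be exact:

```python
import math
import random

rng = random.Random(5)

def shifted_grid_estimate(f, n_pts, shift):
    # equal-weight rule on the randomly shifted grid {k/n + shift mod 1}
    return sum(f((k / n_pts + shift) % 1.0) for k in range(n_pts)) / n_pts

f = lambda x: math.sin(2.0 * math.pi * x) ** 2   # smooth and periodic; integral is 1/2
errors = [shifted_grid_estimate(f, 64, rng.random()) - 0.5 for _ in range(200)]
rms = math.sqrt(sum(e * e for e in errors) / len(errors))
```

For low-frequency periodic integrands the shifted grid integrates exactly, so the empirical RMS error sits at machine precision; the article's question is how fast the analogous RMS discrepancy decays for scrambled (0, m, s)-nets as a function of kernel smoothness.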