Results 11 - 20 of 14,678
Selective sampling using the Query by Committee algorithm
Machine Learning, 1997
Cited by 433 (7 self)
"... We analyze the "query by committee" algorithm, a method for filtering informative queries from a random stream of inputs. We show that if the two-member committee algorithm achieves information gain with a positive lower bound, then the prediction error decreases exponentially with the numbe ..."
Sampling from Log-Concave Distributions
Ann. Appl. Prob., 1994
Cited by 36 (3 self)
"... This paper is concerned with the efficient sampling of random points from R^n, where the underlying density F is log-concave (i.e., log F is concave). This is a natural restriction which is satisfied by many common distributions, for example, the multivariate normal. The algorithm we use generates a s ..."
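A minimal way to see the setting: a random-walk Metropolis sampler targeting a log-concave density (here the standard normal on R). This is a sketch of the general idea only; the paper's algorithm is a different random walk with provable mixing guarantees.

```python
import math
import random

def log_density(x):
    # log of the standard normal density, up to an additive constant;
    # -x^2/2 is concave, so the target is log-concave.
    return -0.5 * x * x

def metropolis(n_steps, step=1.0, seed=0):
    rng = random.Random(seed)
    x, samples = 0.0, []
    for _ in range(n_steps):
        proposal = x + rng.uniform(-step, step)
        delta = log_density(proposal) - log_density(x)
        # Accept with probability min(1, F(proposal) / F(x)).
        if delta >= 0 or rng.random() < math.exp(delta):
            x = proposal
        samples.append(x)
    return samples

samples = metropolis(50_000)
mean = sum(samples) / len(samples)
```

Log-concavity is what makes such walks behave well: the target is unimodal with sub-exponential tails, so the chain cannot get stuck between separated modes.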
Gibbs Sampling Methods for Stick-Breaking Priors
Cited by 388 (19 self)
"... In this paper we present two general types of Gibbs samplers that can be used to fit posteriors of Bayesian hierarchical models based on stick-breaking priors. The first type of Gibbs sampler, referred to as a Polya urn Gibbs sampler, is a generalized version of a widely used Gibbs sampling meth ..."
"... that works by directly sampling values from the posterior of the random measure. The blocked Gibbs sampler can be viewed as a more general approach as it works without requiring an explicit prediction rule. We find that the blocked Gibbs avoids some of the limitations seen with the Polya urn approach ..."
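The stick-breaking construction behind these priors is short enough to write out: break a unit-length stick at Beta-distributed fractions, so weight k is the fraction V_k of whatever stick remains. The truncation at a fixed number of atoms below is the same device the blocked Gibbs sampler works with; the parameter values are illustrative.

```python
import random

def stick_breaking(alpha, n_atoms, seed=0):
    """Truncated stick-breaking weights for a Dirichlet-process-style prior.

    V_k ~ Beta(1, alpha) and p_k = V_k * prod_{j<k} (1 - V_j).
    """
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(n_atoms - 1):
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    weights.append(remaining)  # the last atom absorbs what is left of the stick
    return weights

w = stick_breaking(alpha=2.0, n_atoms=50)
```

Larger alpha breaks the stick into many small pieces (weights decay slowly); small alpha concentrates mass on the first few atoms.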
A Universal Generator for Discrete Log-Concave Distributions
Computing, 1994
Cited by 9 (3 self)
"... We give an algorithm that can be used to sample from any discrete log-concave distribution (e.g., the binomial and hypergeometric distributions). It is based on rejection from a discrete dominating distribution that consists of parts of the geometric distribution. The algorithm is uniformly fast for all discrete log-concave distributions and not much slower than algorithms designed for a single distribution. AMS Subject Classification: 65C10, 68C25. Key Words: Random number generation, log-concave distributions, rejection method, simulation ..."
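The property the generator relies on is easy to check numerically: a discrete distribution is log-concave iff p_k^2 >= p_{k-1} * p_{k+1} for all interior k. The check below (for the binomial, one of the abstract's examples) is a sketch of that precondition, not the rejection generator itself.

```python
from math import comb

def binomial_pmf(n, p):
    """Probability mass function of Binomial(n, p) as a list over k = 0..n."""
    return [comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(n + 1)]

def is_log_concave(pmf):
    """True iff p_k^2 >= p_{k-1} * p_{k+1} for every interior k."""
    return all(pmf[k] ** 2 >= pmf[k - 1] * pmf[k + 1]
               for k in range(1, len(pmf) - 1))

pmf = binomial_pmf(20, 0.3)
```

Log-concavity is exactly what lets geometric tails dominate the target on both sides of the mode, which is where the paper's uniform rejection bound comes from.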
Efficient exact stochastic simulation of chemical systems with many species and many channels
J. Phys. Chem. A, 2000
Cited by 427 (5 self)
"... There are two fundamental ways to view coupled systems of chemical equations: as continuous, represented by differential equations whose variables are concentrations, or as discrete, represented by stochastic processes whose variables are numbers of molecules. Although the former is by far more comm ..."
"... chemical reactions that is also efficient: it (a) uses only a single random number per simulation event, and (b) takes time proportional to the logarithm of the number of reactions, not to the number of reactions itself. The Next Reaction Method is extended to include time-dependent rate constants and non ..."
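The discrete, stochastic view is easiest to see in the simplest case: a Gillespie "direct method" simulation of a single decay reaction A -> B with rate constant c (a one-reaction sketch; the paper's Next Reaction Method improves on this scheme for systems with many channels).

```python
import random

def gillespie_direct(a0, c, t_end, seed=0):
    """Simulate A -> B exactly: exponential waiting times, unit decrements."""
    rng = random.Random(seed)
    t, a, b = 0.0, a0, 0
    while a > 0:
        propensity = c * a                 # total reaction rate at this state
        t += rng.expovariate(propensity)   # time to the next reaction event
        if t > t_end:
            break
        a, b = a - 1, b + 1                # fire the reaction A -> B
    return a, b

a, b = gillespie_direct(a0=100, c=1.0, t_end=10.0)
```

The direct method draws two random numbers per event in general (one for the time, one to pick the channel); the abstract's point is that the Next Reaction Method gets this down to one, with logarithmic-time channel selection.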
Random sampling with a reservoir
ACM Transactions on Mathematical Software, 1985
Cited by 335 (2 self)
"... We introduce fast algorithms for selecting a random sample of n records without replacement from a pool of N records, where the value of N is unknown beforehand. The main result of the paper is the design and analysis of Algorithm Z; it does the sampling in one pass using constant space and in O(n(1 ..."
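The baseline these algorithms improve on is the classic reservoir scheme (Algorithm R): keep the first n records, then replace a random slot with probability n/(i+1) for each later record. The sketch below implements that baseline; Vitter's Algorithm Z reaches the same distribution while skipping over runs of records in bulk.

```python
import random

def reservoir_sample(stream, n, seed=0):
    """One-pass uniform sample of n records without replacement; N unknown."""
    rng = random.Random(seed)
    reservoir = []
    for i, record in enumerate(stream):
        if i < n:
            reservoir.append(record)       # fill the reservoir first
        else:
            j = rng.randrange(i + 1)       # record i survives with prob n/(i+1)
            if j < n:
                reservoir[j] = record
    return reservoir

sample = reservoir_sample(range(1_000_000), n=10)
```

Algorithm R draws one random number per record, hence O(N) time; the paper's O(n(1 + log(N/n))) bound comes from generating the skip lengths directly instead.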
Improved methods for tests of long-run abnormal stock returns
Journal of Finance, 1999
Cited by 375 (12 self)
"... We analyze tests for long-run abnormal returns and document that two approaches yield well-specified test statistics in random samples. The first uses a traditional event study framework and buy-and-hold abnormal returns calculated using carefully constructed reference portfolios. Inference is based ..."
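The buy-and-hold abnormal return (BHAR) the abstract refers to is the compounded return of the event firm minus the compounded return of a reference portfolio over the same window. A minimal sketch with made-up monthly returns (the numbers are illustrative, not from the paper):

```python
def buy_and_hold_return(monthly_returns):
    """Compound a sequence of simple returns into one holding-period return."""
    growth = 1.0
    for r in monthly_returns:
        growth *= 1.0 + r
    return growth - 1.0

# Hypothetical four-month window: event firm vs. reference portfolio.
firm = [0.02, -0.01, 0.03, 0.01]
reference = [0.01, 0.00, 0.01, 0.01]

bhar = buy_and_hold_return(firm) - buy_and_hold_return(reference)
```

Compounding before differencing is the point: averaging monthly abnormal returns instead would ignore how returns accumulate over long horizons, which is one source of the misspecification the paper documents.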
Slice sampling
Annals of Statistics, 2000
Cited by 305 (5 self)
"... Markov chain sampling methods that automatically adapt to characteristics of the distribution being sampled can be constructed by exploiting the principle that one can sample from a distribution by sampling uniformly from the region under the plot of its density function. A Markov chain th ..."
"... to each variable, based on the local properties of the density function. More ambitiously, such methods could potentially allow the sampling to adapt to dependencies between variables by constructing local quadratic approximations. Another approach is to improve sampling efficiency by suppressing random ..."
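The "region under the plot of the density" principle translates directly into code: draw a height uniformly under f(x), then draw the next x uniformly from the horizontal slice at that height. A one-dimensional sketch with stepping out and shrinkage, targeting the standard normal (the target and tuning width w are assumptions for illustration):

```python
import random

def log_f(x):
    return -0.5 * x * x  # log density of N(0, 1), up to a constant

def slice_sample(x0, n_steps, w=2.0, seed=0):
    rng = random.Random(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        # Slice height: log(y) = log f(x) - Exponential(1).
        log_y = log_f(x) - rng.expovariate(1.0)
        # Step out an interval of width w until both ends leave the slice.
        left = x - rng.random() * w
        right = left + w
        while log_f(left) > log_y:
            left -= w
        while log_f(right) > log_y:
            right += w
        # Shrink toward the current point until a candidate lands in the slice.
        while True:
            candidate = rng.uniform(left, right)
            if log_f(candidate) > log_y:
                x = candidate
                break
            if candidate < x:
                left = candidate
            else:
                right = candidate
        samples.append(x)
    return samples

samples = slice_sample(0.0, 20_000)
mean = sum(samples) / len(samples)
```

The interval adapts to the local width of the slice on every step, which is the self-tuning behavior the abstract contrasts with Metropolis samplers that need a hand-chosen step size.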
Using mutual information for selecting features in supervised neural net learning
IEEE Transactions on Neural Networks, 1994
Cited by 358 (1 self)
"... This paper investigates the application of the mutual information criterion to evaluate a set of candidate features and to select an informative subset to be used as input data for a neural network classifier. Because the mutual information measures arbitrary dependencies between random variables, it is ..."
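The criterion itself is just I(X; Y) = sum_{x,y} p(x,y) log2( p(x,y) / (p(x) p(y)) ), estimated from counts and used to rank candidate features against the class label. A plug-in sketch for discrete features (the toy feature/label arrays are assumptions for illustration):

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in bits from paired discrete samples."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# A feature identical to a balanced binary label carries 1 bit;
# a constant feature carries none.
labels = [0, 0, 1, 1, 0, 1, 0, 1]
perfect = labels[:]
constant = [0] * len(labels)
```

Because it captures arbitrary (not just linear) dependence, this ranking can keep features a correlation-based filter would discard.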
Heterogeneous uncertainty sampling for supervised learning
In Proceedings of the 11th International Conference on Machine Learning (ICML), 1994
Cited by 312 (3 self)
"... Uncertainty sampling methods iteratively request class labels for training instances whose classes are uncertain despite the previously labeled instances. These methods can greatly reduce the number of instances that an expert needs to label. One problem with this approach is that the classifier best suit ..."
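The selection step the abstract describes can be sketched as: given the current classifier's predicted probabilities on the unlabeled pool, request labels for the instances closest to the decision boundary (probability nearest 0.5 in the binary case). The scores below are hypothetical, not from the paper's classifier setup.

```python
def most_uncertain(scored_instances, budget):
    """Pick the `budget` instances whose P(class=1) is closest to 0.5.

    scored_instances: iterable of (instance, probability) pairs.
    """
    return sorted(scored_instances,
                  key=lambda pair: abs(pair[1] - 0.5))[:budget]

# Hypothetical unlabeled pool with current-model probabilities.
pool = [("a", 0.95), ("b", 0.51), ("c", 0.10), ("d", 0.48), ("e", 0.70)]
queries = [inst for inst, p in most_uncertain(pool, budget=2)]
```

The loop then retrains on the newly labeled instances and rescores the pool; the paper's "heterogeneous" twist is using a cheap probabilistic classifier to choose queries for a more expensive final learner.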