Results 1 - 10 of 26,246

PROBABILITY INEQUALITIES FOR SUMS OF BOUNDED RANDOM VARIABLES

by Wassily Hoeffding, 1962
"... Upper bounds are derived for the probability that the sum S of n independent random variables exceeds its mean ES by a positive number nt. It is assumed that the range of each summand of S is bounded or bounded above. The bounds for Pr(S-ES> nt) depend only on the endpoints of the ranges of the s ..."
Abstract - Cited by 2215 (2 self) - Add to MetaCart
Upper bounds are derived for the probability that the sum S of n independent random variables exceeds its mean ES by a positive number nt. It is assumed that the range of each summand of S is bounded or bounded above. The bounds for Pr(S - ES > nt) depend only on the endpoints of the ranges
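For context (not part of the CiteSeerX snippet), the bound most commonly cited from this paper can be stated as follows, assuming the standard form with independent summands satisfying a_i ≤ X_i ≤ b_i:

\[
\Pr(S - \mathbb{E}S \ge nt) \;\le\; \exp\!\left(\frac{-2 n^{2} t^{2}}{\sum_{i=1}^{n} (b_i - a_i)^{2}}\right), \qquad t > 0.
\]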

On a test of whether one of two random variables is stochastically larger than the other

by H. B. Mann, D. R. Whitney - ANNALS OF MATHEMATICAL STATISTICS, 1947
"... ..."
Abstract - Cited by 613 (1 self) - Add to MetaCart
Abstract not found

Association of random variables with applications.

by J. D. Esary, F. Proschan, D. W. Walkup - Ann. Math. Statist., 1967
"... ..."
Abstract - Cited by 175 (1 self) - Add to MetaCart
Abstract not found

Random Variables

by Definition Of Random
"... on Function The cumulative distribution function F X (x) for a random variable X(.), or when convenient represented by just X,is defined for all x as follows F X (x)=Pr{.:X(.)#x}Pr{X#x} It is sufficient information to calculate the probabilities of all allowable events and as a result is called a t ..."
Abstract - Add to MetaCart
The cumulative distribution function F_X(x) for a random variable X(·), or, when convenient, represented by just X, is defined for all x as follows: F_X(x) = Pr{ω : X(ω) ≤ x} = Pr{X ≤ x}. It is sufficient information to calculate the probabilities of all allowable events and as a result is called a
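As a brief illustration of the definition above (added for context, not taken from the indexed document), the CDF can be estimated from samples; the function name and the normal-distribution example below are arbitrary choices:

import numpy as np

def empirical_cdf(samples, x):
    """Fraction of samples <= x, an estimate of F_X(x) = Pr{X <= x}."""
    return np.mean(np.asarray(samples) <= x)

rng = np.random.default_rng(0)
draws = rng.normal(size=10_000)       # hypothetical X ~ N(0, 1)
print(empirical_cdf(draws, 0.0))      # close to 0.5 for a standard normal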

Random variables

by Hans-Peter Helfrich
"... We consider each row as a sample of random variables X1, X2,..., X5. In the cited paper N = 252 samples are given. H.-P. Helfrich (University of Bonn) Bayes ’ Statistics Brinkmann School 3 / 19Discrete example Counts Fat / circ. [60,80] (80,100] (100,120] (120,140] (140,160] [0,10] 18 18 0 0 0 (10,2 ..."
Abstract - Add to MetaCart
We consider each row as a sample of random variables X1, X2, ..., X5. In the cited paper N = 252 samples are given. Discrete example (counts, Fat / circ.): circumference bins [60,80], (80,100], (100,120], (120,140], (140,160]; the fat bin [0,10] has counts 18, 18, 0, 0, 0; (10

random variables in

by Amin Gohari, Chandra Nair, Venkat Anantharam
"... cardinality bounds on the auxiliary ..."
Abstract - Add to MetaCart
cardinality bounds on the auxiliary

Random variable

by Hilbert Space Qubit Pure States, Mixed States, Holevo Theorem, Possible Outcomes
"... pure states mixed state possible states probabilities; ..."
Abstract - Add to MetaCart
pure states mixed state possible states probabilities;

Simple Constructions of Almost k-wise Independent Random Variables

by Noga Alon, Oded Goldreich, Johan Håstad, René Peralta , 1992
"... We present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is (2 + o(1))(log log n + k/2 + log k + log 1 ɛ), where ɛ is the statistical difference between the dist ..."
Abstract - Cited by 303 (40 self) - Add to MetaCart
We present three alternative simple constructions of small probability spaces on n bits for which any k bits are almost independent. The number of bits used to specify a point in the sample space is (2 + o(1))(log log n + k/2 + log k + log(1/ɛ)), where ɛ is the statistical difference between the distribution induced on any k bit locations and the uniform distribution. This is asymptotically comparable to the construction recently presented by Naor and Naor (our size bound is better as long as ɛ < 1/(k log n)). An additional advantage of our constructions is their simplicity.
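For context (a standard definition, not quoted from the paper): a distribution X over {0,1}^n is ε-away from k-wise independence in the statistical sense when, for every set I of k bit positions,

\[
\tfrac{1}{2} \sum_{x \in \{0,1\}^{k}} \bigl|\Pr[X_I = x] - 2^{-k}\bigr| \;\le\; \varepsilon,
\]

where X_I denotes the restriction of a sample of X to the positions in I.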

Random forests

by Leo Breiman - Machine Learning, 2001
"... Abstract. Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees in the fo ..."
Abstract - Cited by 3613 (2 self) - Add to MetaCart
Abstract. Random forests are a combination of tree predictors such that each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The generalization error for forests converges a.s. to a limit as the number of trees
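Not from the paper itself: a minimal sketch of the ensemble idea the abstract describes (many trees, each grown from an independently sampled random vector), using scikit-learn's RandomForestClassifier as a stand-in implementation; the dataset and parameter choices below are illustrative only.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic data standing in for any classification task.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each tree is grown on a bootstrap sample and on random feature subsets,
# i.e. on an independently sampled "random vector" per tree.
forest = RandomForestClassifier(n_estimators=200, max_features="sqrt", random_state=0)
forest.fit(X_tr, y_tr)
print(forest.score(X_te, y_te))  # test accuracy; the error stabilizes as trees are added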

On the Variance of Fuzzy Random Variables

by Ralf Körner - Fuzzy Sets and Systems, 1997
"... This paper deals with an expectation and a real-valued variance of fuzzy random variables. The expectation and the variance of a fuzzy random variable is characterized by Fr'echet's principle in a metric space. We study properties of the variance of a fuzzy random variables and compare it ..."
Abstract - Cited by 13 (0 self) - Add to MetaCart
This paper deals with an expectation and a real-valued variance of fuzzy random variables. The expectation and the variance of a fuzzy random variable are characterized by Fréchet's principle in a metric space. We study properties of the variance of a fuzzy random variable and compare
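As background on the Fréchet principle mentioned in the abstract (a standard formulation, not quoted from the paper): for a random element X of a metric space (M, d), the expectation and variance are taken as

\[
\mathbb{E}X \in \operatorname*{arg\,min}_{a \in M} \mathbb{E}\, d^{2}(X, a), \qquad \operatorname{Var}(X) = \inf_{a \in M} \mathbb{E}\, d^{2}(X, a).
\]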