Results 1 - 5 of 5
Prior Probabilities
IEEE Transactions on Systems Science and Cybernetics, 1968
Abstract

Cited by 166 (3 self)
…the case of location and scale parameters, rate constants, and in Bernoulli trials with unknown probability of success. In realistic problems, both the transformation-group analysis and the principle of maximum entropy are needed to determine the prior. The distributions thus found are uniquely determined by the prior information, independently of the choice of parameters. In a certain class of problems, therefore, the prior distributions may now be claimed to be fully as "objective" as the sampling distributions. I. Background of the problem. Since the time of Laplace, applications of probability theory have been hampered by difficulties in the treatment of prior information. In realistic problems of decision or inference, we often have prior information which is highly relevant to the question being asked; to fail to take it into account is to commit the most obvious inconsistency of reasoning, and may lead to absurd or dangerously misleading results. As an extreme example …
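A minimal numerical sketch (illustrative, not from the paper) of the transformation-group idea summarized above: for a scale parameter σ, the invariance argument leads to the prior p(σ) ∝ 1/σ, whose defining property is that the prior mass of an interval is unchanged when the scale of the problem is changed.

```python
import numpy as np

def prior_mass(a, b):
    """Unnormalized mass of the 1/sigma prior on [a, b]: integral of dsigma/sigma."""
    return np.log(b / a)

# Rescaling sigma -> c*sigma maps [a, b] to [c*a, c*b]; the 1/sigma prior
# assigns both intervals the same mass, so no choice of units is privileged.
a, b, c = 2.0, 5.0, 7.3   # arbitrary interval and scale factor
assert np.isclose(prior_mass(a, b), prior_mass(c * a, c * b))
```

A uniform prior on σ fails this check, which is the sense in which the group argument singles out 1/σ for pure scale problems.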
The Well-Posed Problem
Foundations of Physics, 1973
Abstract

Cited by 25 (0 self)
…distributions obtained from transformation groups, using as our main example the famous paradox of Bertrand. Bertrand's problem (Bertrand, 1889) was stated originally in terms of drawing a straight line "at random" intersecting a circle. It will be helpful to think of this in a more concrete way; presumably, we do no violence to the problem (i.e., it is still just as "random") if we suppose that we are tossing straws onto the circle, without specifying how they are tossed. We therefore formulate the problem as follows. A long straw is tossed at random onto a circle; given that it falls so that it intersects the circle, what is the probability that the chord thus defined is longer than a side of the inscribed equilateral triangle? Since Bertrand proposed it in 1889, this problem has been cited to generations of students to demonstrate that Laplace's "principle of indifference" contains logical inconsistencies. For, there appear to be many ways of defining "equally possible" …
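The three classical readings of "at random" in Bertrand's problem give three different answers, which a short Monte Carlo sketch (illustrative, not from the paper) makes concrete for the unit circle; the third convention, uniform midpoint distance along a radius, is the one whose answer (1/2) survives the translation- and scale-invariance requirements of the straw-tossing formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
SIDE = np.sqrt(3.0)   # side of the equilateral triangle inscribed in the unit circle

# Convention A: chord through two uniform endpoints on the circle  ->  P ≈ 1/3
t1, t2 = rng.uniform(0, 2 * np.pi, N), rng.uniform(0, 2 * np.pi, N)
chord_a = 2 * np.sin(np.abs(t1 - t2) / 2)   # chord length from angular separation

# Convention B: chord whose midpoint is uniform over the disk  ->  P ≈ 1/4
r = np.sqrt(rng.uniform(0, 1, N))           # sqrt gives uniform area density
chord_b = 2 * np.sqrt(1 - r ** 2)

# Convention C: midpoint distance uniform along a radius  ->  P ≈ 1/2
d = rng.uniform(0, 1, N)
chord_c = 2 * np.sqrt(1 - d ** 2)

print((chord_a > SIDE).mean(), (chord_b > SIDE).mean(), (chord_c > SIDE).mean())
# ≈ 1/3, 1/4, 1/2 respectively
```

The disagreement is exactly Bertrand's point; the paper's argument is that the physical straw-tossing setup is invariant enough to make C the only well-posed choice.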
How Many Entries of a Typical Orthogonal Matrix Can Be Approximated by Independent Normals?
Ann. Probab. 34(4): 1497–1529, 2006
Abstract

Cited by 25 (8 self)
We solve an open problem of Diaconis that asks what are the largest orders of pn and qn such that Zn, the pn × qn upper-left block of a random matrix Γn which is uniformly distributed on the orthogonal group O(n), can be approximated by independent standard normals. This problem is solved by two different approximation methods. First, we show that the variation distance between the joint distribution of entries of Zn and that of pn·qn independent standard normals goes to zero provided pn = o(√n) and qn = o(√n). We also show that the above variation distance does not go to zero if pn = [x√n] and qn = [y√n] for any positive numbers x and y. This says that the largest orders of pn and qn are o(n^{1/2}) in the sense of the above approximation. Second, suppose Γn = (γij)n×n is generated by performing the Gram-Schmidt algorithm on the columns of Yn = (yij)n×n, where {yij; 1 ≤ i, j ≤ n} are i.i.d. standard normals. We show that εn(m) := max{1≤i≤n, 1≤j≤m} |√n γij − yij| goes to zero in probability as long as m = mn = o(n/log n). We also prove that εn(mn) → 2√α in probability when mn = [nα/log n] for any α > 0. This says that mn = o(n/log n) is the largest order such that the entries of the first mn columns of Γn can be approximated simultaneously by independent standard normals.
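The second construction in the abstract can be sketched numerically (illustrative only; QR factorization stands in for column-by-column Gram-Schmidt, with a sign fix so the two agree): orthonormalize an i.i.d. Gaussian matrix and compare √n times its first few columns with the raw normals they came from.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
Y = rng.standard_normal((n, n))

# QR of Y: Q's columns are the Gram-Schmidt orthonormalization of Y's columns,
# up to column signs; flipping by sign(diag(R)) matches Gram-Schmidt exactly.
Q, R = np.linalg.qr(Y)
Q *= np.sign(np.diag(R))

# In the regime m = o(n/log n), the first m columns of sqrt(n)*Q track Y.
m = 3
eps = np.abs(np.sqrt(n) * Q[:, :m] - Y[:, :m]).max()
print(eps)   # small when m << n / log n
```

Pushing m up toward n·α/log n makes the maximum deviation stabilize near 2√α rather than vanish, which is the sharp threshold the paper proves.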
Asymptotic Distribution of Coordinates on High Dimensional Spheres
Electronic Communications in Probability, 2007
Abstract
The coordinates xi of a point x = (x1, x2, ..., xn) chosen at random according to a uniform distribution on the ℓ2(n)-sphere of radius n^{1/2} have approximately a normal distribution when n is large. The coordinates xi of points uniformly distributed on the ℓ1(n)-sphere of radius n have approximately a double exponential distribution. In these and all the ℓp(n), 1 ≤ p ≤ ∞, convergence of the distribution of coordinates as the dimension n increases is at the rate √n and is described precisely in terms of weak convergence of a normalized empirical process to a limiting Gaussian process, the sum of a Brownian bridge and a simple normal process.
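The ℓ2 case above is easy to check empirically (a sketch, not the paper's method): normalizing a standard Gaussian vector gives a uniform point on the sphere of radius √n, and its coordinates behave like i.i.d. N(0, 1) draws for large n.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Uniform point on the l2 sphere of radius sqrt(n): normalize a Gaussian vector.
g = rng.standard_normal(n)
x = np.sqrt(n) * g / np.linalg.norm(g)

# Coordinate statistics should match a standard normal: mean ~ 0, variance ~ 1,
# and about 68.27% of coordinates inside (-1, 1).
print(x.mean(), x.var(), (np.abs(x) < 1.0).mean())
```

The same normalization trick with i.i.d. double exponential (Laplace) variables and the ℓ1 norm illustrates the ℓ1(n)-sphere statement.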