## Quantum Simulations of Complex Many-Body Systems: From Theory to Algorithms, Lecture Notes

Citations: 3 (1 self)

### BibTeX

@MISC{Lewerenz_quantumsimulations,
  author = {Marius Lewerenz},
  editor = {J. Grotendorst and D. Marx and A. Muramatsu},
  title = {Quantum Simulations of Complex Many-Body Systems: From Theory to Algorithms, Lecture Notes},
  year = {}
}

### Abstract

Permission to make digital or hard copies of portions of this work for personal or classroom use is granted provided that the copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise requires prior specific permission by the publisher mentioned above.

### Citations

2243 | Equation of state calculations by fast computing machines
- Metropolis, Rosenbluth, et al.
- 1953

Citation context: ... G = Σᵢ λᵢ gᵢ(xᵢ) with λᵢ ∈ ℝ (38); E(G) = ⟨G⟩ = Σᵢ λᵢ ⟨gᵢ(xᵢ)⟩ (39). A special choice is to use λᵢ = 1/n for all weights and to consider all gᵢ to be identical: E(G) = E((1/n) Σᵢ g(xᵢ)) = (1/n) Σᵢ E(g) = E(g) (40). We find that the expectation value E(G) for the sum G is identical with the expectation value E(g) for the function. Consequently G can serve as an estimator for E(g). This is in fact the basis o...
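
Equations (38)-(40) say the equal-weight sample mean is an unbiased estimator of E(g). A minimal Python sketch (not part of the original notes; function names are illustrative):

```python
import random

def mc_estimate(g, sampler, n, seed=0):
    """Equal-weight estimator G = (1/n) * sum g(x_i) of eq. (40)."""
    rng = random.Random(seed)
    return sum(g(sampler(rng)) for _ in range(n)) / n

# Example: E(x^2) for x uniform on [0, 1] equals 1/3.
est = mc_estimate(lambda x: x * x, lambda rng: rng.random(), 100_000)
```

For large n the estimate approaches the exact expectation value, as the derivation promises.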

677 | Computer simulation of liquids
- Allen, Tildesley
- 1987

Citation context: ...he expectation value E(x) of a random variable x as E(x) = ⟨x⟩ = Σᵢ pᵢ xᵢ (14). Assume that g is a function of x, g(xᵢ) = gᵢ. Then gᵢ will also be a random variable and we define E(g(x)) = ⟨g(x)⟩ = Σᵢ pᵢ g(xᵢ) (15). Suppose that g(xᵢ) = g(x) = const: E(g(x)) = Σᵢ pᵢ g(xᵢ) = g(x) Σᵢ pᵢ = g(x) (16). We conclude that the expectation value of a constant is a constant. In the next step we prove the linearity of the ex...

413 | Simulation and the Monte Carlo Method
- Rubinstein
- 1981

Citation context: ...ations hold for any pair Eᵢ, Eⱼ, and if Eᵢ and Eⱼ are mutually exclusive (Eᵢ ⇒ ¬Eⱼ, Eⱼ ⇒ ¬Eᵢ): {E} = {E₁, E₂, E₃ … Eₙ} (1); P(Eₖ) = pₖ with 1 ≥ pₖ ≥ 0 (2); P(Eᵢ ∧ Eⱼ) ≤ pᵢ + pⱼ (3); P(Eᵢ ∨ Eⱼ) ≤ pᵢ + pⱼ (4); for mutually exclusive events P(Eᵢ ∧ Eⱼ) = 0 (5) and P(Eᵢ ∨ Eⱼ) = pᵢ + pⱼ (6). For a class of mutually exclusive events which contains all possible events we have: P(some E) = 1 = Σₖ pₖ (7). 2.2 Joint and Marginal Probabilities: Supp...
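
The additivity rules (5)-(7) for mutually exclusive events can be checked directly on a finite event space. A small illustration in Python (event names and probabilities are invented for the example):

```python
# Disjoint outcomes E_k with probabilities p_k, eqs. (1)-(2).
p = {"E1": 0.2, "E2": 0.3, "E3": 0.5}

def P(events):
    """Probability of a union of mutually exclusive events: sum of p_k, eq. (6)."""
    return sum(p[e] for e in events)

total = P(p.keys())       # eq. (7): the complete class has probability 1
pair = P(["E1", "E2"])    # eq. (6): P(E1 or E2) = p1 + p2
```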

277 | Understanding molecular simulation: from algorithms to applications
- Frenkel, Smit
- 1996

Citation context: ...hat g is a function of x, g(xᵢ) = gᵢ. Then gᵢ will also be a random variable and we define E(g(x)) = ⟨g(x)⟩ = Σᵢ pᵢ g(xᵢ) (15). Suppose that g(xᵢ) = g(x) = const: E(g(x)) = Σᵢ pᵢ g(xᵢ) = g(x) Σᵢ pᵢ = g(x) (16). We conclude that the expectation value of a constant is a constant. In the next step we prove the linearity of the expectation value of two random functions g₁(x) and g₂(x) by substitution of the d...

274 | Monte Carlo Simulation
- Binder, Heermann
- 1988

Citation context: ...(10). If the events Eᵢ and Fⱼ are not independent, i.e. pᵢⱼ ≠ p₁ᵢ p₂ⱼ, it is useful to decompose the joint probability as follows: pᵢⱼ = (Σₖ pᵢₖ)(pᵢⱼ / Σₖ pᵢₖ) (11); pᵢⱼ = p(i)(pᵢⱼ / Σₖ pᵢₖ) (12). The quantity p(i) is called the marginal probability for the event Eᵢ, the probability of observing Eᵢ combined with any event in the reservoir {F}. Clearly p(i) = p₁ᵢ and Σᵢ p(i) = Σᵢ Σₖ pᵢₖ = 1...
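
The marginal probability p(i) of eq. (12) is just the row sum of the joint table. A short sketch in Python (the joint probabilities below are invented illustrative numbers):

```python
# Joint probabilities p_ij for combined events (E_i, F_j).
p_joint = [
    [0.10, 0.20],   # row i = 1: p_1k over the reservoir {F}
    [0.30, 0.40],   # row i = 2: p_2k
]

def marginal(i):
    """p(i) = sum_k p_ik, the marginal probability of E_i (eq. (12))."""
    return sum(p_joint[i])

marginals = [marginal(i) for i in range(len(p_joint))]
# The marginals themselves sum to 1, matching sum_i sum_k p_ik = 1.
```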

226 | Quantum Mechanics and Path Integrals
- Feynman, Hibbs
- 1965

Citation context: ...necessary condition for Cov{x, y} to be zero! 2.9 Correlation and Autocorrelation: The correlation coefficient r(x, y) is the normalized version of the covariance: r(x, y) = Cov{x, y}/√(Var{x} Var{y}) (26), with −1 ≤ r(x, y) ≤ 1 (27). If one considers the values of y as copies of x with a constant offset δ (in time or some pseudotime establishing an order), yⱼ = xⱼ₋δ, one can compute a correlation coeffici...
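
Equations (26)-(27) define r(x, y) and bound it by ±1, with the extremes reached for exactly (anti-)linear dependence. A self-contained Python sketch (helper names are illustrative):

```python
def mean(v):
    return sum(v) / len(v)

def cov(x, y):
    """Sample covariance: mean of the products of deviations."""
    mx, my = mean(x), mean(y)
    return mean([(a - mx) * (b - my) for a, b in zip(x, y)])

def corr(x, y):
    """Normalized covariance, eq. (26); lies in [-1, 1] per eq. (27)."""
    return cov(x, y) / (cov(x, x) * cov(y, y)) ** 0.5

x = [1.0, 2.0, 3.0, 4.0]
r_linear = corr(x, [2 * v + 1 for v in x])   # exact linear relation -> +1
r_anti = corr(x, [-v for v in x])            # exact anti-linear relation -> -1
```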

143 | Space-time approach to non-relativistic quantum mechanics
- Feynman
- 1948

Citation context: ...⟨xy⟩ = Σᵢⱼ pᵢⱼ xᵢ yⱼ (23). If the random variables x and y are independent, the pᵢⱼ can be decomposed according to pᵢⱼ = p₁ᵢ p₂ⱼ, so ⟨xy⟩ = (Σᵢ p₁ᵢ xᵢ)(Σⱼ p₂ⱼ yⱼ) = ⟨x⟩⟨y⟩ (24), and hence Cov{x, y} = 0 (25). Independence of two random variables x, y is clearly a sufficient but not a necessary condition for Cov{x, y} to be zero! 2.9 Correlation and Autocorrelation: The correlation coefficient r(x, y) is th...
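
The factorization in eqs. (23)-(25) can be verified numerically for a product distribution p_ij = p1_i p2_j (the values below are invented for the example):

```python
# Two independent discrete variables, joint pmf built as a product.
xs, p1 = [0.0, 1.0, 2.0], [0.5, 0.3, 0.2]
ys, p2 = [1.0, 3.0], [0.4, 0.6]

# <xy> via the double sum of eq. (23), using p_ij = p1_i * p2_j (eq. (24)).
exy = sum(p1[i] * p2[j] * xs[i] * ys[j]
          for i in range(len(xs)) for j in range(len(ys)))
ex = sum(p * v for p, v in zip(p1, xs))
ey = sum(p * v for p, v in zip(p2, ys))
cov_xy = exy - ex * ey   # eq. (25): vanishes for independent variables
```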

86 | Monte Carlo Methods in Statistical Physics
- Newman, Barkema
- 1999

Citation context: ...or a logical decision). If we can associate a numerical value xᵢ with each random event Eᵢ, we call x a random variable. We define the expectation value E(x) of a random variable x as E(x) = ⟨x⟩ = Σᵢ pᵢ xᵢ (14). Assume that g is a function of x, g(xᵢ) = gᵢ. Then gᵢ will also be a random variable and we define E(g(x)) = ⟨g(x)⟩ = Σᵢ pᵢ g(xᵢ) (15). Suppose that g(xᵢ) = g(x) = const: E(g(x)) = Σᵢ pᵢ g(xᵢ) = g(x) Σᵢ p...

72 | The Monte Carlo Method
- Binder, editor
- 1995

Citation context: ...the probability of this event as the joint probability P(Eᵢ, Fⱼ) = pᵢⱼ (9). The events Eᵢ and Fⱼ are called independent if the probability of the combined event can be expressed as pᵢⱼ = p₁ᵢ p₂ⱼ (10). If the events Eᵢ and Fⱼ are not independent, i.e. pᵢⱼ ≠ p₁ᵢ p₂ⱼ, it is useful to decompose the joint probability as follows: pᵢⱼ = (Σₖ pᵢₖ)(pᵢⱼ / Σₖ pᵢₖ) (11); pᵢⱼ = p(i)(pᵢⱼ / Σₖ pᵢₖ) (12)...

49 | Monte Carlo simulations: Hidden errors from "good" random number generators
- Ferrenberg, Landau, et al.
- 1992

Citation context: ...ty a of accepting a move into a practical algorithm is in fact very simple. The probability P{u ≤ x} that a random number u, distributed in the interval [0, 1] with a uniform probability density ρ(u) = 1, is less than or equal to x (x ≤ 1) is given by P{u ≤ x} = ∫₀ˣ ρ(u) du = ∫₀ˣ du = x. Consequently u ≤ a will be true with probability a and we can accept t...
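
The acceptance rule described here (draw u ~ U[0, 1], accept if u ≤ a) is one line of code. A minimal sketch in Python, with an empirical check of the acceptance rate (function name and seed are illustrative):

```python
import random

def accept(a, rng):
    """Accept a proposed move with probability a: true iff u <= a for u ~ U[0,1]."""
    return rng.random() <= a

rng = random.Random(42)
a = 0.3
n = 200_000
# Fraction of accepted trials; should converge to a.
frac = sum(accept(a, rng) for _ in range(n)) / n
```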

37 | Handbook of Stochastic Methods, 2nd ed.
- Gardiner
- 2002

Citation context: ...nt (Eᵢ, Fⱼ). We define the probability of this event as the joint probability P(Eᵢ, Fⱼ) = pᵢⱼ (9). The events Eᵢ and Fⱼ are called independent if the probability of the combined event can be expressed as pᵢⱼ = p₁ᵢ p₂ⱼ (10). If the events Eᵢ and Fⱼ are not independent, i.e. pᵢⱼ ≠ p₁ᵢ p₂ⱼ, it is useful to decompose the joint probability...

35 | Path integrals in the theory of condensed helium
- Ceperley
- 1995

Citation context: ...If one considers the values of y as copies of x with a constant offset δ (in time or some pseudotime establishing an order), yⱼ = xⱼ₋δ (28), one can compute a correlation coefficient for each offset δ: r(x, y; δ) = A(x; δ) (29). This function A(x; δ) is called the autocorrelation function and varies between −1 and +1 as a result of the normalisation by the variances of x and y. The computation of th...
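
The autocorrelation A(x; δ) of eqs. (28)-(29) is the correlation of a series with a δ-shifted copy of itself. A sketch in Python, checked on a strictly periodic series where the result is exact (the series and function name are illustrative):

```python
def autocorr(x, delta):
    """Correlation of x with its copy shifted by delta, eqs. (28)-(29)."""
    a, b = x[delta:], x[:len(x) - delta]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    den = (sum((u - ma) ** 2 for u in a) *
           sum((v - mb) ** 2 for v in b)) ** 0.5
    return num / den

period4 = [0.0, 1.0, 0.0, -1.0] * 50   # period-4 series
a_full = autocorr(period4, 4)          # in phase after one full period -> +1
a_half = autocorr(period4, 2)          # half a period out of phase -> -1
```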

32 | A Guide to Simulation, 2nd ed.
- Bratley, Fox, et al.
- 1987

Citation context: ...of Eᵢ, Eⱼ, and if Eᵢ and Eⱼ are mutually exclusive (Eᵢ ⇒ ¬Eⱼ, Eⱼ ⇒ ¬Eᵢ): {E} = {E₁, E₂, E₃ … Eₙ} (1); P(Eₖ) = pₖ with 1 ≥ pₖ ≥ 0 (2); P(Eᵢ ∧ Eⱼ) ≤ pᵢ + pⱼ (3); P(Eᵢ ∨ Eⱼ) ≤ pᵢ + pⱼ (4); P(Eᵢ ∧ Eⱼ) = 0 (5); P(Eᵢ ∨ Eⱼ) = pᵢ + pⱼ (6). For a class of mutually exclusive events which contains all possible events we have P(some E) = 1 = Σₖ pₖ (7). 2.2 Joint and Marginal Probabilities: Suppose that the events Eᵢ a...

20 | Parallel linear congruential generators with prime moduli
- Mascagni
- 1997

Citation context: ...ributions by the replacement of summations by integrations and of probabilities pᵢ by dF(x). The moments are now given as E(x) = ⟨x⟩ = ∫₋∞^∞ x dF(x) = ∫₋∞^∞ x ρ(x) dx (34), with ∫₋∞^∞ ρ(x) dx = F(∞) = 1 (35) and E(g(x)) = ⟨g(x)⟩ = ∫₋∞^∞ g(x) dF(x) (36); the variance is Var{x} = E(x²) − E(x)² = ∫₋∞^∞ x² ρ(x) dx − (∫₋∞^∞ x ρ(x) dx)². It is important to note that the variance is not a well defined quantity for all probabil...
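
The continuous moments of eqs. (34)-(36) can be approximated by simple numerical quadrature. A sketch in Python using the midpoint rule on an illustrative density ρ(x) = e^(−x) on [0, ∞), for which the norm, mean, and variance are all 1 (the function names and cutoff are assumptions of this example):

```python
import math

def moment(rho, k, lo, hi, n=100_000):
    """Midpoint-rule approximation of the k-th moment, integral x^k rho(x) dx."""
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        total += x ** k * rho(x) * h
    return total

rho = lambda x: math.exp(-x)        # illustrative density on [0, inf)
norm = moment(rho, 0, 0.0, 50.0)    # eq. (35): normalization, ~1
ex = moment(rho, 1, 0.0, 50.0)      # eq. (34): E(x) = 1 for this density
var = moment(rho, 2, 0.0, 50.0) - ex ** 2   # Var{x} = E(x^2) - E(x)^2 = 1
```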

18 | Four-tap shift-register-sequence random-number generators
- Ziff
- 1998

Citation context: ...by integrations and of probabilities pᵢ by dF(x): E(x) = ⟨x⟩ = ∫₋∞^∞ x dF(x) = ∫₋∞^∞ x ρ(x) dx (34); ∫₋∞^∞ ρ(x) dx = F(∞) = 1 (35); E(g(x)) = ⟨g(x)⟩ = ∫₋∞^∞ g(x) dF(x) (36); Var{x} = E(x²) − E(x)² = ∫₋∞^∞ x² ρ(x) dx − (∫₋∞^∞ x ρ(x) dx)². It is important to note that the variance is not a well defined quantity for all probability densities ρ(x). A well known example ...

17 | Monte Carlo Methods in ab initio Quantum Chemistry, World Scientific
- Hammond, Lester, et al.
- 1994

Citation context: ...Cov{x, y} = ⟨xy⟩ − ⟨x⟩⟨y⟩, with ⟨xy⟩ = Σᵢⱼ pᵢⱼ xᵢ yⱼ (23). If the random variables x and y are independent, the pᵢⱼ can be decomposed according to pᵢⱼ = p₁ᵢ p₂ⱼ, so ⟨xy⟩ = (Σᵢ p₁ᵢ xᵢ)(Σⱼ p₂ⱼ yⱼ) = ⟨x⟩⟨y⟩ (24), and hence Cov{x, y} = 0 (25). Independence of two random variables x, y is clearly a sufficient but not a necessary condition for Cov{x, y} to be zero! 2.9 Correlation and Autocorrelation: The correlation coef...

10 | Fixed-node Quantum Monte Carlo for molecules
- Reynolds, Ceperley, et al.
- 1982

Citation context: ...= ⟨xy⟩ − ⟨x⟩⟨y⟩, with ⟨xy⟩ = Σᵢⱼ pᵢⱼ xᵢ yⱼ (23). If the random variables x and y are independent, the pᵢⱼ can be decomposed according to pᵢⱼ = p₁ᵢ p₂ⱼ, so ⟨xy⟩ = (Σᵢ p₁ᵢ xᵢ)(Σⱼ p₂ⱼ yⱼ) = ⟨x⟩⟨y⟩ (24), and hence Cov{x, y} = 0 (25). Independence of two random variables x, y is clearly a sufficient but not a necessary condition for Cov{x, y} to be zero! 2.9 Correlation and Autocorrelation: The correlation...

10 | Physical models as tests of randomness, Phys.
- Vattulainen, Ala-Nissila, Kankaala
- 1995

Citation context: ...or may not be identical. The gᵢ(xᵢ) are then random variables. We define a weighted sum G over these functions and its expectation value E(G): G = Σᵢ λᵢ gᵢ(xᵢ) with λᵢ ∈ ℝ (38); E(G) = ⟨G⟩ = Σᵢ λᵢ ⟨gᵢ(xᵢ)⟩ (39). A special choice is to use λᵢ = 1/n for all weights and to consider all gᵢ to be identical: E(G) = E((1/n) Σᵢ g(xᵢ)) = (1/n) Σᵢ E(g) = E(g) (40). We find that the expectation value E(G) for the sum ...

9 | Monte Carlo calculations of the ground state of three- and four-body nuclei
- Kalos
- 1962

Citation context: ...⟨λ₁²[g₁(x) − ⟨g₁(x)⟩]² + λ₂²[g₂(x) − ⟨g₂(x)⟩]² + 2λ₁λ₂[g₁(x) − ⟨g₁⟩][g₂(x) − ⟨g₂⟩]⟩ = λ₁²⟨[g₁(x) − ⟨g₁(x)⟩]²⟩ + λ₂²⟨[g₂(x) − ⟨g₂(x)⟩]²⟩ + 2λ₁λ₂⟨g₁(x)g₂(x) − g₁(x)⟨g₂(x)⟩ − ⟨g₁(x)⟩g₂(x) + ⟨g₁(x)⟩⟨g₂(x)⟩⟩ (19). In short this result can be expressed through the variances of the random functions g₁ and g₂ and an extra term: Var{λ₁g₁(x) + λ₂g₂(x)} = λ₁² Var{g₁(x)} + λ₂² Var{g₂(x)} + 2λ₁λ₂... 2.7 The Covariance...

8 | A random-walk simulation of the Schrödinger equation
- Anderson
- 1975

Citation context: ...The possibility of negative covariance can be exploited in special sampling techniques (correlated sampling, antithetic variates) to achieve variance reduction: Var{g₁ + g₂} < Var{g₁} + Var{g₂} (22). 2.8 Properties of the Covariance: Cov{x, y} = ⟨xy⟩ − ⟨x⟩⟨y⟩, with ⟨xy⟩ = Σᵢⱼ pᵢⱼ xᵢ yⱼ. If the random variables x and y are independent, the pᵢⱼ can be decomposed...

8 | A fast, high quality, reproducible, parallel, lagged-Fibonacci pseudorandom number generator
- Mascagni, Cuccaro, et al.

Citation context: ...neralised to continuous distributions by the replacement of summations by integrations and of probabilities pᵢ by dF(x): E(x) = ⟨x⟩ = ∫₋∞^∞ x dF(x) = ∫₋∞^∞ x ρ(x) dx (34); ∫₋∞^∞ ρ(x) dx = F(∞) = 1 (35); E(g(x)) = ⟨g(x)⟩ = ∫₋∞^∞ g(x) dF(x) (36); Var{x} = E(x²) − E(x)² = ∫₋∞^∞ x² ρ(x) dx − (∫₋∞^∞ x ρ(x) dx)². It is important to note that the variance is not a well defin...

8 | On correlations in 'good' random number generators
- Grassberger
- 1993

Citation context: ...tity for all probability densities ρ(x). A well known example is the Cauchy-Lorentz distribution with an arbitrary width parameter a, for which E(x) = 0 and E(x²) = ∞: ρ(x) = (1/π) · a/(x² + a²) (37). 2.12 Sums of Random Variables: Suppose we have random variables x₁, x₂, …, xₙ which are distributed according to some probability density function ρ(x). The variable xᵢ may represent a multidim...
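
The divergence of E(x²) for the Cauchy-Lorentz density of eq. (37) can be made visible numerically: the truncated integral over [−L, L] grows roughly linearly with the cutoff L instead of converging. A deterministic midpoint-rule sketch (function name and cutoffs are illustrative):

```python
import math

def second_moment_truncated(a, L, n=100_000):
    """Integral of x^2 * rho(x) over [-L, L] for rho(x) = (1/pi) a/(x^2+a^2)."""
    h = 2 * L / n
    total = 0.0
    for i in range(n):
        x = -L + (i + 0.5) * h
        total += x * x * (a / math.pi) / (x * x + a * a) * h
    return total

# The integrand tends to a/pi for |x| -> inf, so the truncated integral keeps
# growing with the cutoff: E(x^2) does not exist.
m10 = second_moment_truncated(1.0, 10.0)
m100 = second_moment_truncated(1.0, 100.0)
```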

6 | Mission Impossible: Find a Random Pseudorandom Number Generator
- Vattulainen, Ala-Nissila
- 1995

Citation context: ...each Eₖ. Properties of pₖ: {E} = {E₁, E₂, E₃ … Eₙ} (1); P(Eₖ) = pₖ with 1 ≥ pₖ ≥ 0 (2). For any pair Eᵢ, Eⱼ: P(Eᵢ ∧ Eⱼ) ≤ pᵢ + pⱼ (3) and P(Eᵢ ∨ Eⱼ) ≤ pᵢ + pⱼ (4). If Eᵢ and Eⱼ are mutually exclusive (Eᵢ ⇒ ¬Eⱼ, Eⱼ ⇒ ¬Eᵢ): P(Eᵢ ∧ Eⱼ) = 0 (5) and P(Eᵢ ∨ Eⱼ) = pᵢ + pⱼ (6). For a class of mutually exclusive events which contains all possible events we have P(some E) = 1 = ...

6 | Exploiting the isomorphism between quantum theory and classical statistical mechanics of polyatomic fluids
- Chandler, Wolynes
- 1981

Citation context: ...r Cov{x, y} to be zero! 2.9 Correlation and Autocorrelation: The correlation coefficient r(x, y) is the normalized version of the covariance: r(x, y) = Cov{x, y}/√(Var{x} Var{y}) (26), with −1 ≤ r(x, y) ≤ 1 (27). If one considers the values of y as copies of x with a constant offset δ (in time or some pseudotime establishing an order), yⱼ = xⱼ₋δ (28), one can compute a correlation coefficient for each offset δ...

5 | Physical tests for random numbers in simulations, Phys.
- Vattulainen, Ala-Nissila, et al.
- 1994

Citation context: ...ach xᵢ, where the functions gᵢ may or may not be identical. The gᵢ(xᵢ) are then random variables. We define a weighted sum G over these functions and its expectation value E(G): G = Σᵢ λᵢ gᵢ(xᵢ) with λᵢ ∈ ℝ (38); E(G) = ⟨G⟩ = Σᵢ λᵢ ⟨gᵢ(xᵢ)⟩ (39). A special choice is to use λᵢ = 1/n for all weights and to consider all gᵢ to be identical: E(G) = E((1/n) Σᵢ g(xᵢ)) = (1/n) Σᵢ E(g) = E(g) (40). We find that the e...

3 | Quantum Monte Carlo methods in chemistry
- Ceperley, Mitas
- 1996

Citation context: ...d x₁ > y are mutually exclusive and we conclude: P{x₂ > y ≥ x₁} + P{x₁ > y} = P{x₂ > y}, so P{x₂ > y} ≥ P{x₁ > y}. It follows that F(x) is a monotonically increasing function with F(−∞) = 0, F(∞) = 1 (32). The function F(x) is not necessarily smooth. In differentiable regions one can define the probability density function ρ(x): ρ(x) = dF(x)/dx ≥ 0 (33). 2.11 Moments of Continuous Distributions: The co...

2 | Fixed-node quantum Monte Carlo
- Anderson
- 1995

Citation context: ...but generally random variables can also be continuous. For a one-dimensional case we have −∞ ≤ x ≤ ∞ (30). We can define a cumulative distribution function F(x) as F(x) = P{randomly selected y < x} (31). Assume that x₂ > x₁. Then the events x₂ > y ≥ x₁ and x₁ > y are mutually exclusive and we conclude: P{x₂ > y ≥ x₁} + P{x₁ > y} = P{x₂ > y}, so P{x₂ > y} ≥ P{x₁ > y}. It follows that F(x) is a monoto...
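
Because F(x) in eqs. (30)-(31) is monotone with F(−∞) = 0 and F(∞) = 1, it can be inverted to turn uniform random numbers into samples with density ρ = dF/dx. A sketch of this inverse-transform idea (assumed illustration, not the notes' own algorithm), using F(x) = 1 − e^(−x) for x ≥ 0 so that F⁻¹(u) = −ln(1 − u):

```python
import math
import random

def sample_exponential(rng):
    """Inverse-transform sample: x = F^{-1}(u) with u ~ U[0, 1]."""
    return -math.log(1.0 - rng.random())

rng = random.Random(7)
xs = [sample_exponential(rng) for _ in range(100_000)]
# Empirical CDF at x = 1 should approach F(1) = 1 - 1/e ~ 0.6321.
frac_below_1 = sum(x < 1.0 for x in xs) / len(xs)
```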

1 | Die Monte Carlo Methode, 4. Aufl., Deutscher Verlag der Wissenschaften
- Sobol
- 1991

Citation context: ...P(Eᵢ ∧ Eⱼ) ≤ pᵢ + pⱼ (3); P(Eᵢ ∨ Eⱼ) ≤ pᵢ + pⱼ (4); P(Eᵢ ∧ Eⱼ) = 0 (5); P(Eᵢ ∨ Eⱼ) = pᵢ + pⱼ (6). For a class of mutually exclusive events which contains all possible events we have: P(some E) = 1 = Σₖ pₖ (7). 2.2 Joint and Marginal Probabilities: Suppose that the events Eᵢ and Fⱼ satisfy the conditions defined above with associated probabilities p₁ᵢ and p₂ⱼ, P(Eᵢ) = p₁ᵢ, P(Fⱼ) = p₂ⱼ (8), and we are ...

1 | Energy of a boson fluid with Lennard-Jones potentials
- Kalos
- 1970

Citation context: ...Var{λ₁g₁(x) + λ₂g₂(x)} = λ₁² Var{g₁(x)} + λ₂² Var{g₂(x)} + 2λ₁λ₂(⟨g₁(x)g₂(x)⟩ − ⟨g₁(x)⟩⟨g₂(x)⟩). The mixed term in the preceding equation is called the covariance of g₁(x) and g₂(x): Cov{g₁(x), g₂(x)} = ⟨g₁(x)g₂(x)⟩ − ⟨g₁(x)⟩⟨g₂(x)⟩ (20). This term can be positive or negative and we will show in the next section that this term is related to the mutual dependence between the two random functions g₁ and g₂. We have the following special...

1 | Green's function Monte Carlo
- Lee, Schmidt
- 1992

Citation context: ...covariance, the variance of a linear combination of random functions or variables can be larger or smaller than the sum of the individual variances: Var{g₁ + g₂} = Var{g₁} + Var{g₂} + 2 Cov{g₁, g₂} (21). The possibility of negative covariance can be exploited in special sampling techniques (correlated sampling, antithetic variates) to achieve variance reduction: Var{g₁ + g₂} < Var{g₁} + Var{g₂} (22)...
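
The variance reduction of eq. (22) via antithetic variates is easy to demonstrate: for a monotone g, the pair (u, 1 − u) has negative covariance in g, so the pair average fluctuates less than an average of two independent draws. A seeded Python sketch (the test function g(u) = u², whose mean is 1/3, is an invented example):

```python
import random

def mean_var(estimates):
    """Sample mean and (population) variance of a list of estimates."""
    m = sum(estimates) / len(estimates)
    return m, sum((e - m) ** 2 for e in estimates) / len(estimates)

g = lambda u: u * u
rng = random.Random(1)

# Plain pairs: two independent uniforms per estimate.
plain = [(g(rng.random()) + g(rng.random())) / 2 for _ in range(50_000)]

# Antithetic pairs: u and 1 - u share one uniform; Cov{g(u), g(1-u)} < 0.
anti = []
for _ in range(50_000):
    u = rng.random()
    anti.append((g(u) + g(1.0 - u)) / 2)

m_plain, var_plain = mean_var(plain)
m_anti, var_anti = mean_var(anti)
```

Both estimators target E[g(u)] = 1/3, but the antithetic version has markedly smaller variance, in line with eq. (22).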

1 | Exact quantum chemistry by Monte Carlo methods
- Anderson
- 1994

Citation context: ...dom events to simplify the presentation, but generally random variables can also be continuous. For a one-dimensional case we have −∞ ≤ x ≤ ∞ (30). We can define a cumulative distribution function F(x) as F(x) = P{randomly selected y < x} (31). Assume that x₂ > x₁. Then the events x₂ > y ≥ x₁ and x₁ > y are mutually exclusive and we conclude: P{x₂ > y ≥ x₁} + P{x₁ > y} = P{x₂ > y}, so P{x₂ > y} ≥ P{...

1 | Quantum Monte Carlo: Atoms, molecules, clusters, liquids, and solids
- Anderson
- 1999

Citation context: ...F(∞) = 1 (32). The function F(x) is not necessarily smooth. In differentiable regions one can define the probability density function ρ(x): ρ(x) = dF(x)/dx ≥ 0 (33). 2.11 Moments of Continuous Distributions: The concept of moments can be generalised to continuous distributions by the replacement of summations by integrations and of probabilities pᵢ by dF(x). E(x) = ⟨x⟩ = ...