
9. Convergence and Monte Carlo Errors

Understanding convergence to equilibrium in Markov chains via the variation distance and the eigenvalue problem of the transition matrix. Learn about measuring and estimating statistical errors in Monte Carlo simulations, including correlation times and the jackknife method.


Presentation Transcript


  1. 9. Convergence and Monte Carlo Errors

  2. Measuring Convergence to Equilibrium • Variation distance: ||P1 − P2|| = max_A |P1(A) − P2(A)| = (1/2) ∑i |P1(i) − P2(i)|, where P1 and P2 are two probability distributions, A is a set of states, and i is a single state.
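A minimal sketch of this definition in Python (numpy assumed; the two three-state distributions are made-up examples):

```python
import numpy as np

def variation_distance(p1, p2):
    """Variation distance: (1/2) * sum_i |P1(i) - P2(i)|."""
    return 0.5 * np.abs(np.asarray(p1) - np.asarray(p2)).sum()

# Hypothetical example: two distributions over 3 states
p1 = np.array([0.5, 0.3, 0.2])
p2 = np.array([0.25, 0.25, 0.5])
print(variation_distance(p1, p2))  # 0.3
```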

  3. Eigenvalue Problem • Consider the matrix S defined by [S]ij = pi^(½) W(i→j) pj^(−½). If W satisfies detailed balance with respect to p, then S is real and symmetric, and the eigenvalues of S satisfy |λn| ≤ 1. • One of the eigenvalues must be λ0 = 1, with eigenvector pj^(½).

  4. Spectral Decomposition • Then we have U^T S U = Λ, or S = U Λ U^T, where Λ is a diagonal matrix with diagonal elements λk and U is an orthogonal matrix, U U^T = I. • W can be expressed in terms of U, P, and Λ as W = P^(−½) U Λ U^T P^(½), where P is the diagonal matrix with [P]ii = pi.
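To make the last two slides concrete, here is a small numerical check, assuming numpy; the 3-state Metropolis chain is a hypothetical example, not one from the slides:

```python
import numpy as np

# Hypothetical 3-state Metropolis chain with stationary distribution p:
# propose one of the other states uniformly, accept with min(1, p_j/p_i).
# This satisfies detailed balance, p_i W(i->j) = p_j W(j->i).
p = np.array([0.5, 0.3, 0.2])
n = len(p)
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            W[i, j] = (1.0 / (n - 1)) * min(1.0, p[j] / p[i])
    W[i, i] = 1.0 - W[i].sum()          # rejected moves stay at i

Phalf = np.diag(np.sqrt(p))             # P^(1/2)
Pinvh = np.diag(1.0 / np.sqrt(p))       # P^(-1/2)
S = Phalf @ W @ Pinvh                   # [S]_ij = p_i^(1/2) W(i->j) p_j^(-1/2)

lam, U = np.linalg.eigh(S)              # ascending eigenvalues, orthonormal U
print(np.allclose(S, S.T))              # True: detailed balance makes S symmetric
print(lam)                              # all |lambda_n| <= 1, largest is 1
print(np.allclose(np.abs(U[:, -1]), np.sqrt(p)))   # eigenvector of lambda_0 = 1
print(np.allclose(W, Pinvh @ U @ np.diag(lam) @ U.T @ Phalf))  # W = P^(-1/2) U Lambda U^T P^(1/2)
```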

  5. Evolution in Terms of Eigenstates • Pn = P0 W^n = P0 (P^(−½) U Λ U^T P^(½)) (P^(−½) U Λ U^T P^(½)) … = P0 P^(−½) U Λ^n U^T P^(½) • In component form, this means Pn(j) = ∑i P0(i) pi^(−½) pj^(½) ∑k λk^n uik ujk

  6. Discussion • In the limit n → ∞, Pn(j) → ∑i P0(i) pi^(−½) pj^(½) ui0 uj0 = pj, since ui0 = pi^(½) and ∑i P0(i) = 1. • The leading correction to this limit is Pn(j) ≈ pj + a λ1^n = pj + a e^(−n/τ).

  7. Exponential Correlation Time • We define τ in terms of the next-largest eigenvalue λ1: τ = −1/log λ1. This number characterizes the theoretical rate of convergence of a Markov chain.
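A sketch of slides 5–7 on the same hypothetical 3-state chain: evolve Pn = P0 W^n and check that the variation distance to p decays like e^(−n/τ):

```python
import numpy as np

# The same hypothetical 3-state Metropolis chain as above
p = np.array([0.5, 0.3, 0.2])
n = len(p)
W = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            W[i, j] = (1.0 / (n - 1)) * min(1.0, p[j] / p[i])
    W[i, i] = 1.0 - W[i].sum()

# tau from the second-largest eigenvalue; take absolute values, since
# eigenvalues of W can be negative for this chain
lam1 = np.sort(np.abs(np.linalg.eigvals(W).real))[-2]
tau = -1.0 / np.log(lam1)
print("lambda_1 =", lam1, " tau =", tau)

P = np.array([1.0, 0.0, 0.0])            # P_0: start entirely in state 0
for step in range(1, 11):
    P = P @ W                            # P_n = P_{n-1} W
    dist = 0.5 * np.abs(P - p).sum()     # variation distance to equilibrium
    print(step, dist, np.exp(-step / tau))   # both decay like lambda_1^n
```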

  8. Measuring Error • Let Qt be some quantity of interest at time step t; the sample average is then QN = (1/N) ∑t Qt. • We treat QN as a random variable. By the central limit theorem, QN is normally distributed with mean <QN> = <Q> and variance σN² = <QN²> − <QN>², where <…> stands for an average over the exact distribution.

  9. Confidence Interval • The chance that the actual mean <Q> lies in the interval [QN − σN, QN + σN] is about 68 percent. • σN cannot be computed exactly in a single MC run of length N.
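A quick numerical illustration of the 68% coverage, assuming numpy (the mean, width, and run counts are arbitrary choices); the samples here are independent, so σN can be estimated as the sample standard deviation over √N:

```python
import numpy as np

# Repeat many independent runs and count how often the true mean falls inside
# [Q_N - sigma_N, Q_N + sigma_N]; for independent samples sigma_N = sigma/sqrt(N).
rng = np.random.default_rng(0)
true_mean, sigma, N, runs = 1.0, 2.0, 1000, 10000
hits = 0
for _ in range(runs):
    q = rng.normal(true_mean, sigma, N)
    qN = q.mean()
    sigmaN = q.std(ddof=1) / np.sqrt(N)   # error of the mean, independent data
    if qN - sigmaN <= true_mean <= qN + sigmaN:
        hits += 1
print(hits / runs)   # close to 0.68
```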

  10. Estimating Variance • The calculation of var(Q) = <Q²> − <Q>² and τint can be done in a single run of length N.

  11. Error Formula • The above derivation gives the famous error estimate in Monte Carlo: σN² ≈ (τint/N) var(Q), i.e. σN = √(τint var(Q)/N), valid for N ≫ τint, where var(Q) = <Q²> − <Q>² can be estimated by the sample variance of Qt.

  12. Time-Dependent Correlation Function and Integrated Correlation Time • We define the normalized time-dependent correlation function f(t) = (<Qs Qs+t> − <Q>²)/(<Q²> − <Q>²) and the integrated correlation time τint = 1 + 2 ∑t=1…∞ f(t).
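A sketch implementing these definitions together with the error formula of slide 11, assuming numpy; the AR(1) test series is a made-up example whose exact correlation function f(t) = a^t and τint = (1 + a)/(1 − a) are known:

```python
import numpy as np

def f_and_tau_int(q, tmax):
    """Estimate f(t) = (<Q_s Q_{s+t}> - <Q>^2) / var(Q) and
    tau_int = 1 + 2 * sum_{t=1}^{tmax} f(t) from a time series q."""
    q = np.asarray(q, dtype=float)
    N = len(q)
    mean, var = q.mean(), q.var()
    f = np.array([((q[:N - t] * q[t:]).mean() - mean**2) / var
                  for t in range(tmax + 1)])        # f(0) = 1
    return f, 1.0 + 2.0 * f[1:].sum()

# Made-up correlated series: AR(1), q_t = a*q_{t-1} + noise, for which
# f(t) = a^t exactly, so tau_int = (1 + a)/(1 - a) = 9 for a = 0.8
rng = np.random.default_rng(1)
a, N = 0.8, 100000
q = np.zeros(N)
for t in range(1, N):
    q[t] = a * q[t - 1] + rng.normal()
f, tau_int = f_and_tau_int(q, tmax=50)
print(tau_int)                                           # roughly 9
print(q.mean(), "+/-", np.sqrt(q.var() * tau_int / N))   # slide 11's error bar
```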

  13. Circular Buffer for Calculating f(t) • We store the values Qs for the previous M−1 time steps together with the current value Qt, so the buffer covers the times from the earliest, t−(M−1), up to the current time t; each new value overwrites the oldest entry.
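One possible realization of such a buffer in Python (the class name and interface are invented for illustration):

```python
import numpy as np

# Sketch of the circular-buffer scheme: keep the last M values of Q so that
# the products Q_{t-s} * Q_t for s = 0 .. M-1 can be accumulated in a single
# pass over the run.
class CorrelationAccumulator:
    def __init__(self, M):
        self.M = M
        self.buf = np.zeros(M)       # circular buffer of the last M values
        self.count = 0               # total number of values seen so far
        self.prod = np.zeros(M)      # running sums of Q_{t-s} * Q_t
        self.nterm = np.zeros(M)     # number of terms in each running sum

    def add(self, qt):
        self.buf[self.count % self.M] = qt   # overwrite the oldest entry
        self.count += 1
        for s in range(min(self.count, self.M)):
            self.prod[s] += self.buf[(self.count - 1 - s) % self.M] * qt
            self.nterm[s] += 1

    def correlations(self):
        return self.prod / self.nterm        # estimates of <Q_{t-s} Q_t>

acc = CorrelationAccumulator(M=8)
for qt in np.random.default_rng(2).normal(size=1000):
    acc.add(qt)
print(acc.correlations())   # lag 0 ~ <Q^2>; other lags ~ <Q>^2 = 0 here
```

Subtracting <Q>² and dividing by var(Q) then turns these <Qs Qs+t> estimates into f(t) for t = 0 … M−1.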

  14. An Example of f(t) • Time-dependent correlation function for the 3D Ising model at Tc on a 16³ lattice, with Swendsen-Wang dynamics. From J S Wang, Physica A 164 (1990) 240.

  15. Efficient Method for Computing τint • We compute τint from the formula τint = N σN²/var(Q) for small values of N, and then extrapolate N → ∞. From J S Wang, O Kozan and R H Swendsen, Phys Rev E 66 (2002) 057101.
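A sketch of this blocking-and-extrapolation procedure, assuming numpy; the AR(1) series with known τint = 9 is again a made-up test case, and this is a plain reimplementation of the formula above, not code from the cited paper:

```python
import numpy as np

def tau_int_blocking(q, block_sizes):
    """tau_int(N) = N * sigma_N^2 / var(Q), where sigma_N^2 is estimated from
    the scatter of averages over blocks of length N."""
    q = np.asarray(q, dtype=float)
    var = q.var()
    result = []
    for N in block_sizes:
        nblocks = len(q) // N
        means = q[:nblocks * N].reshape(nblocks, N).mean(axis=1)
        result.append((N, N * means.var(ddof=1) / var))
    return result

# The same made-up AR(1) series, with exact tau_int = (1+a)/(1-a) = 9
rng = np.random.default_rng(3)
a, M = 0.8, 200000
q = np.zeros(M)
for t in range(1, M):
    q[t] = a * q[t - 1] + rng.normal()
for N, tau in tau_int_blocking(q, [4, 16, 64, 256, 1024]):
    print(N, tau)   # tau_int(N) rises with N and saturates near 9
```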

  16. Exponential and Integrated Correlation Times • τint ≤ (1 + λ1)/(1 − λ1), where λ1 < 1 is the second-largest eigenvalue of the W matrix, with equality for a purely exponential f(t) = λ1^t. This result says that the exponential correlation time τ (= −1/log λ1) is related to the largest integrated correlation time: for λ1 → 1, (1 + λ1)/(1 − λ1) ≈ 2τ.
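A quick numeric check of the λ1 → 1 limit (assuming the τint convention of slide 12):

```python
import numpy as np

# For lambda_1 close to 1, (1 + lambda_1)/(1 - lambda_1) approaches 2*tau,
# where tau = -1/log(lambda_1).
for lam1 in [0.9, 0.99, 0.999]:
    tau = -1.0 / np.log(lam1)
    print(lam1, (1 + lam1) / (1 - lam1), 2 * tau)
```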

  17. Critical Slowing Down • The correlation time becomes large near Tc. For a finite system, τ(Tc) ∝ L^z, with dynamical critical exponent z ≈ 2 for local moves. (Figure: correlation time τ versus temperature T, peaking sharply at Tc.)

  18. Relaxation towards Equilibrium • Schematic curves of the relaxation of the total magnetization m as a function of time t, for T < Tc, T = Tc, and T > Tc. At Tc relaxation is slow, described by a power law: m ∝ t^(−β/(zν)).

  19. Jackknife Method • Let n be the number of independent samples • Let c be some estimate using all n samples • Let ci be the same estimate but using n − 1 samples, with the i-th sample removed • Then the jackknife error estimate is σc = √( ((n − 1)/n) ∑i (ci − c̄)² ), where c̄ = (1/n) ∑i ci.
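A minimal implementation sketch of this estimate (numpy assumed; the estimator and data are hypothetical examples):

```python
import numpy as np

def jackknife(samples, estimator):
    """Jackknife error estimate for a possibly nonlinear estimator."""
    samples = np.asarray(samples)
    n = len(samples)
    c = estimator(samples)                            # estimate from all n samples
    ci = np.array([estimator(np.delete(samples, i))   # estimate without sample i
                   for i in range(n)])
    sigma = np.sqrt((n - 1) / n * ((ci - ci.mean()) ** 2).sum())
    return c, sigma

# Hypothetical example: error bar on the sample variance of 500 Gaussians
rng = np.random.default_rng(4)
x = rng.normal(0.0, 1.0, 500)
print(jackknife(x, lambda s: s.var(ddof=1)))
```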
