Introduction to Monte Carlo
Carlos Mana, Astrofísica y Física de Partículas, CIEMAT
[Cover image: Cornelia Parker, Colder Dark Matter]
Monte Carlo
What is the Monte Carlo method? A numerical method: the problem is mapped onto a stochastic process, grounded in the Theory of Probability and Statistics, and the solution is read off the outcome.
How does it work? Design the experiment and do it… or simulate it on a computer: outcome → problem.
What kind of problems can we tackle? Particle physics, Bayesian inference,…
Why Monte Carlo? Incapability of other methods?
What do we get? A magic box?,…, more information?…
A bit of history…
1777: G. Louis Leclerc (Comte de Buffon), "Essai d'arithmétique morale"; Supplément à l'Histoire Naturelle, Vol. 4, 1777. Calculated the probability that a needle (of length d) thrown at random onto an array of parallel bands (of width l ≥ d) cuts the dividing lines between the bands: P = 2d/(πl). He actually did the experiment as a cross-check.
1812: P. S. de Laplace, "Théorie Analytique des Probabilités". Points out that the experiment of Buffon could be used to estimate π from a random process (…although with a large uncertainty).
1901: W. Thomson (Lord Kelvin) used a random sampling to evaluate ("try"?) some integrals related to the Kinetic Theory of Gases. His sampling consisted of picking numbered papers from a well-stirred hat. …but he was more worried about the biases induced by the hat not being well stirred, the effect of static electricity,…
1908: W. S. Gosset ("Student"), "Probable Error of a Correlation Coefficient"; Biometrika 6, 302 (1908). He was interested in the distribution of the statistic z = (x̄ − μ)/s for small samples. For this he:
1) calculated the first 4 moments of the distribution of s² and found that they were the same as those of Pearson's curve of type III (χ²); he assumed (correctly) that this was the distribution followed by s²;
2) showed that the statistics x̄ and s² were not correlated and assumed (again correctly) that they were independent;
3) from 1) and 2), derived (a simple integral) the distribution followed by z;
4) made a table of the Distribution Function for sample sizes n = 4 and n = 10, checking that for n = 10 there was good agreement with the Normal Distribution.
Finally, he checked the assumed hypotheses experimentally with a random sampling: he measured the height and middle-finger length of 3000 prisoners, grouping them into 750 samples of size n = 4. The empirical distribution, based on the 750 experimental measurements, was in very good agreement (Pearson's χ² test) with his tables.
1930s: E. Fermi used random samplings to study the diffusion and transport of the recently discovered neutrons and their interactions with condensed matter.
1940s: J. von Neumann, S. Ulam, N. Metropolis, E. Fermi,… developed several techniques of random sampling (the method they named "Monte Carlo") to study different problems related to the atomic bomb at Los Alamos (using the ENIAC); in particular, to analyse the transport of neutrons and to estimate the eigenvalues of the Schrödinger equation.
J. von Neumann and the ENIAC
Important and general feature of Monte Carlo estimations
Example: estimation of π (…à la Laplace). Throw N points uniformly inside a square with an inscribed circumference; each throw is a trial, with event {point falls inside the circumference} of probability p = π/4. With n accepted out of N throws, π ≈ 4n/N, and for large N the uncertainty decreases as 1/√N.
The 1/√N dependence of the uncertainty is a general feature of Monte Carlo estimations, regardless of the number of dimensions of the problem.
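A minimal sketch of this estimation in Python (not part of the original lecture):

```python
import random

# Throw N points uniformly in the unit square and count those falling
# inside the quarter circle of unit radius: n/N estimates pi/4.
def estimate_pi(n_throws: int) -> float:
    n_accepted = sum(
        1 for _ in range(n_throws)
        if random.random() ** 2 + random.random() ** 2 <= 1.0
    )
    return 4.0 * n_accepted / n_throws

if __name__ == "__main__":
    for n in (10**3, 10**5, 10**7):
        print(n, estimate_pi(n))  # the error shrinks roughly like 1/sqrt(N)
```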
Basis of Monte Carlo simulations
Design the experiment and do it… or simulate it on a computer:
• Get a sampling of size n from a stochastic process
• Get a sequence of random numbers
We need sequences of random numbers. Do not do the experiment but… generate sequences of (pseudo-)random numbers on a computer.
[Figure: sampling moments; sample size 10⁶ events]
We can generate pseudo-random sequences uniformly distributed in [0, 1). Great! But life is not uniform… In Physics, Statistics,… we want to generate sequences from an arbitrary density f(x). How?
• Inverse Transform
• Acceptance-Rejection ("Hit-Miss")
• "Importance Sampling",…
Usually a combination of them, plus:
• Tricks
• Normal Distribution
• M(RT)² (Markov Chains)
• Gibbs Sampling,…
Method 1: Inverse Transform
We want to generate a sample of x ~ F(x). How is u = F(x) distributed? Uniformly in [0, 1], so we can invert:
Algorithm (repeat for each of the n events):
i) Sample u ~ Un(0, 1)
ii) Transform x = F⁻¹(u)
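A minimal sketch, assuming an exponential density f(x) = λe^(−λx) (a standard illustration, not necessarily the slide's example): F(x) = 1 − e^(−λx) inverts to x = −ln(1 − u)/λ.

```python
import math
import random

def sample_exponential(lam: float, n: int) -> list[float]:
    """Inverse transform for Ex(x|lam): x = -ln(1 - u)/lam with u ~ Un(0,1)."""
    return [-math.log(1.0 - random.random()) / lam for _ in range(n)]

sample = sample_exponential(lam=2.0, n=10**5)
print(sum(sample) / len(sample))  # should be close to 1/lam = 0.5
```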
Examples: sampling of …; sampling of …; …some fun…
Problem:
1) Generate a sample …
2) Get for each case the sampling distribution of …
3) Discuss the sampling distribution of … in connection with the Law of Large Numbers and the Central Limit Theorem
4) If … and …, how is … distributed?
5) If …, how is … distributed? How is … distributed?
6) If … (assumed to be independent random quantities), …
For discrete random quantities, with P(X = xₖ) = pₖ and F(xₖ) = p₁ + p₂ + ⋯ + pₖ:
Algorithm (repeat for each of the n events):
i) Sample u ~ Un(0, 1)
ii) Find the xₖ such that F(xₖ₋₁) < u ≤ F(xₖ)
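A minimal sketch of the discrete inverse transform; the support and probabilities below are hypothetical:

```python
import bisect
import itertools
import random

values = [0, 1, 2, 3]          # hypothetical support x_k
probs = [0.1, 0.4, 0.3, 0.2]   # hypothetical probabilities p_k (sum to 1)

# Distribution Function F(x_k); bisect finds the first k with u <= F(x_k).
cdf = list(itertools.accumulate(probs))

def sample_discrete() -> int:
    u = random.random()
    return values[bisect.bisect_left(cdf, u)]

counts = [0] * len(values)
for _ in range(10**5):
    counts[sample_discrete()] += 1
print([c / 10**5 for c in counts])  # should approximate probs
```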
[Figures: sampling of … (50,000 events, histogram in k) and of … (50,000 events, histogram in n)]
Even though discrete distributions can be sampled this way,… usually one can find more efficient algorithms…
Example (the classical multiplication algorithm for the Poisson distribution Po(k|μ)):
i) generate u ~ Un(0, 1)
ii) multiply the generated values and go back to i) while u₁·u₂·⋯·uₖ ≥ e^(−μ)
iii) deliver k − 1
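A minimal sketch of this multiplication algorithm, assuming the Poisson reading above:

```python
import math
import random

def sample_poisson(mu: float) -> int:
    """Multiply uniforms until the product drops below e^{-mu}; deliver the count."""
    threshold = math.exp(-mu)
    k, product = 0, random.random()
    while product >= threshold:
        k += 1
        product *= random.random()
    return k

draws = [sample_poisson(3.0) for _ in range(10**5)]
print(sum(draws) / len(draws))  # should be close to mu = 3.0
```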
Generalization to n dimensions: … trivial if x₁, …, xₙ are independent, since f(x₁, …, xₙ) = f₁(x₁)⋯fₙ(xₙ)… but if not, there are ways to decompose the pdf, f(x₁, …, xₙ) = f(x₁)·f(x₂|x₁)⋯f(xₙ|x₁, …, xₙ₋₁), and some are easier to sample than others.
Example: the marginal densities show that the quantities are not independent, so we use the conditional decomposition; one ordering of the conditional densities is difficult to sample, the other is easy.
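A minimal sketch of sampling through the conditional decomposition, with a hypothetical density f(x₁, x₂) = 2 on the triangle 0 < x₂ < x₁ < 1 (not the slide's density): the marginal is f(x₁) = 2x₁ and the conditional f(x₂|x₁) is uniform on (0, x₁).

```python
import math
import random

def sample_triangle() -> tuple[float, float]:
    # Marginal f(x1) = 2*x1 on (0,1): F(x1) = x1^2, so x1 = sqrt(u).
    x1 = math.sqrt(random.random())
    # Conditional f(x2|x1) = 1/x1 on (0, x1): uniform.
    x2 = x1 * random.random()
    return x1, x2

pairs = [sample_triangle() for _ in range(10**5)]
print(all(x2 < x1 for x1, x2 in pairs))  # every point lies in the triangle
```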
Properties: direct and efficient, in the sense that one value of u gives one value of x. Useful for many distributions of interest (Exponential, Cauchy, Weibull, Logistic,…) …but in general F(x) is difficult to invert, and numerical approximations are slow,…
Method 2: Acceptance-Rejection ("Hit-Miss"); J. von Neumann, 1951
We want to sample x ~ f(x):
1) Enclose the pairs (x, f(x)) in a domain (not necessarily a hypercube) that covers the graph of f, for instance [a, b] × [0, c] with c ≥ max f(x)
2) Consider a two-dimensional random quantity (x, y) uniformly distributed in that domain
3) Which is the conditional distribution of x given y ≤ f(x)? Precisely the desired f(x)
[Figure: points below f(x) accepted, above rejected]
Algorithm (repeat for each of the n events):
i) Sample x ~ Un(a, b) and y ~ Un(0, c)
ii) Accept x if y ≤ f(x); reject it if y > f(x) and go back to i)
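A minimal sketch of the hit-miss algorithm; the density f(x) = (3/4)(1 − x²) on [−1, 1] is a hypothetical stand-in:

```python
import random

def f(x: float) -> float:
    """Hypothetical density on [-1, 1], normalised to 1; max f = 0.75."""
    return 0.75 * (1.0 - x * x)

A, B, C = -1.0, 1.0, 0.75  # support [A, B] and bound C >= max f

def sample_hit_miss() -> float:
    while True:
        x = random.uniform(A, B)    # uniform over the covering domain
        y = random.uniform(0.0, C)
        if y <= f(x):               # hit: accept
            return x                # (miss: loop and try again)

sample = [sample_hit_miss() for _ in range(10**5)]
print(sum(sample) / len(sample))    # symmetric density: mean close to 0
```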
Example (pedagogical; low efficiency): density f(x) = … (normalisation: … not needed). Covering adjusted for maximum efficiency; points generated in the domain …
Algorithm (repeat for each of the n events):
i) Sample (x, y) uniformly in the covering domain
ii) Accept-Reject: if y ≤ f(x), accept; otherwise reject and go to i)
… you may hear about weighted events ???
We can do the rejection as follows:
i) Assign a weight w = f(x)/c to each generated value x
ii) Acceptance-rejection with u ~ Un(0, 1): accept if u ≤ w; reject if u > w
• Obviously, events with a larger weight are more likely to be accepted
After step ii), all weights are: 1 if accepted, 0 if rejected
Sometimes, it is interesting not to apply step ii) and keep the weights
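A minimal sketch of the weighted-event variant, reusing the hypothetical density of the previous sketch:

```python
import random

def f(x: float) -> float:
    return 0.75 * (1.0 - x * x)  # same hypothetical density as above

A, B, C = -1.0, 1.0, 0.75

def weighted_events(n: int) -> list[tuple[float, float]]:
    """Generate x uniformly and attach the weight w = f(x)/C."""
    return [(x, f(x) / C) for x in (random.uniform(A, B) for _ in range(n))]

events = weighted_events(10**5)
# Step ii), unweighting: accept with probability w -> weights become 0 or 1.
accepted = [x for x, w in events if random.random() <= w]
# Or skip step ii) and keep the weights, e.g. for a weighted mean:
wmean = sum(x * w for x, w in events) / sum(w for _, w in events)
print(len(accepted) / len(events), wmean)
```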
And suddenly… Many times we do not know c = max f(x) and start with an initial guess c₀ (e.g.: the usual case for the matrix element of n-dimensional final states). After having generated N events in the domain [a, b] × [0, c₀], we find a value f(x) > c₀: the estimation of the maximum was incorrect. Don't throw away all the generated events…
The pairs have been generated in [a, b] × [0, c₀] with constant density, so with the new maximum c₁ > c₀ we have to generate N(c₁ − c₀)/c₀ additional pairs … in the domain [a, b] × [c₀, c₁] with the pseudo-distribution … and proceed with the acceptance-rejection as before.
[Figure: accepted and rejected points]
Properties: easy to implement and to generalize to n dimensions (covering domain).
Efficiency = # accepted events / # generated events = (area under f) / (area of the covering domain).
An exactly adjusted covering (efficiency 1) is equivalent to the Inverse Transform method; the better adjusted the covering is, the higher the efficiency.
Straightforward generalisation to n dimensions.
Algorithm (repeat for each of the k events):
i) Generate n + 1 uniformly distributed random quantities: (x₁, …, xₙ) in the covering domain and u ~ Un(0, 1)
ii) Accept (x₁, …, xₙ) if u·c ≤ f(x₁, …, xₙ); reject if not and go back to i)
3D Example (pedagogical; low efficiency): points inside a toroid of radii R (major) and r (minor) centred at the origin.
Cover the toroid by a parallelepiped: [−(R + r), R + r] × [−(R + r), R + r] × [−r, r].
Algorithm (repeat for each of the n events):
i) Generate (x, y, z) uniformly in the parallelepiped
ii) Reject if (√(x² + y²) − R)² + z² > r² and go back to i); otherwise accept
Knowing the volume of the covering parallelepiped, we can estimate the volume of the toroid from the acceptance: V(toroid) ≈ ε·V(parallelepiped); see the sketch below.
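A minimal sketch of the toroid example with hypothetical radii R = 3 and r = 1 (the exact volume is 2π²Rr² ≈ 59.22):

```python
import math
import random

R, r = 3.0, 1.0  # hypothetical major and minor radii

def inside_toroid(x: float, y: float, z: float) -> bool:
    return (math.hypot(x, y) - R) ** 2 + z * z <= r * r

N, accepted = 10**6, 0
for _ in range(N):
    x = random.uniform(-(R + r), R + r)
    y = random.uniform(-(R + r), R + r)
    z = random.uniform(-r, r)
    if inside_toroid(x, y, z):
        accepted += 1

box_volume = (2 * (R + r)) ** 2 * (2 * r)   # covering parallelepiped
print(accepted / N * box_volume)            # compare with 2*pi^2*R*r^2
```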
3D Problem: sampling of the Hydrogen atom wave functions with quantum numbers (n, l, m) = (3, 1, 0), (3, 2, 0), (3, 2, ±1). Evaluate the energy using the Virial Theorem.
Problems with low efficiency (a problem for the convergence of the estimations,…)
Example in n dimensions: the n-dimensional sphere of radius R centred at the origin.
i) Sample x₁, …, xₙ, each ~ Un(−R, R)
ii) Acceptance-rejection: accept (x₁, …, xₙ) as an inner point of the sphere if x₁² + ⋯ + xₙ² < R², or as a point on the surface if the sum equals R²; reject the n-tuple and go back to i) if x₁² + ⋯ + xₙ² > R²
The acceptance falls rapidly with the dimension n; see the sketch below.
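A minimal sketch showing how the acceptance of the hypercube-over-sphere scheme collapses with the dimension (the exact ratio is π^(n/2) / (2ⁿ Γ(n/2 + 1))):

```python
import random

def acceptance(n_dim: int, n_trials: int = 10**5) -> float:
    """Fraction of points of [-1, 1]^n that fall inside the unit sphere."""
    hits = 0
    for _ in range(n_trials):
        if sum(random.uniform(-1, 1) ** 2 for _ in range(n_dim)) <= 1.0:
            hits += 1
    return hits / n_trials

for n in (2, 5, 10, 15):
    print(n, acceptance(n))  # roughly 0.785, 0.164, 0.0025, 1e-5
```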
Why do we have low efficiency? Most of the samplings are done in regions of the domain of the random quantity with low probability.
Example: sampling of a sharply peaked density by generating values of x uniformly over the whole domain wastes most of the trials where the density is negligible.
…do a more clever sampling… usually used in combination with "Importance Sampling"
Example of inefficiencies, a usual problem in Particle Physics:
i) Sample the phase-space variables
ii) Acceptance-rejection on the sample of events with the dynamics of the process
iii) Estimate the cross-section
Problem: the simple generation of the n-body phase space is usually inefficient.
In the rest-frame of the intermediate states: generate …, then weight each event… + Lorentz boosts to the overall centre-of-mass system …but …, cuts,…!
Method 3: Importance Sampling
We want to sample x ~ f(x):
1) Express f(x) as a product: f(x) = g(x)·h(x), where in particular g(x) is a probability density
2) Consider a sampling of g(x) and apply the acceptance-rejection algorithm to h(x) = f(x)/g(x)
How are the accepted values distributed? As the desired f(x).
Algorithm:
i) Sample x from g(x)
ii) Apply Acceptance-Rejection with h(x) = f(x)/g(x) to the sample drawn from g(x)
…but how do we choose g(x)?
… ways …
• Choose g(x) easy to invert, so we can apply the Inverse Transform Method to sample it
• Easiest choice: g(x) constant… this is just the acceptance-rejection procedure
• We would like to choose g(x) so that, instead of sampling uniformly over the whole domain, we sample with higher probability from those regions where f(x) is "more important": take g(x) as "close" as possible to f(x), so that h(x) = f(x)/g(x) will be "as flat as possible" ("flatter" than f) and we shall have a high efficiency when applying the acceptance-rejection (see the sketch below)
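A minimal sketch with hypothetical choices (not the slide's densities): target f(x) = √(2/π)·e^(−x²/2) (half-normal) with importance density g(x) = e^(−x), which is easy to invert; the ratio h = f/g is bounded by c = √(2e/π).

```python
import math
import random

def f(x: float) -> float:
    """Target: half-normal density on [0, inf)."""
    return math.sqrt(2.0 / math.pi) * math.exp(-0.5 * x * x)

def g(x: float) -> float:
    """Importance density: Ex(x|1), sampled by inverse transform."""
    return math.exp(-x)

C = math.sqrt(2.0 * math.e / math.pi)  # C >= max f/g, attained at x = 1

def sample_half_normal() -> float:
    while True:
        x = -math.log(1.0 - random.random())    # x ~ g by inverse transform
        if random.random() * C * g(x) <= f(x):  # accept with prob f/(C*g)
            return x

draws = [sample_half_normal() for _ in range(10**5)]
print(sum(draws) / len(draws))  # should approach sqrt(2/pi) ~ 0.798
```

The efficiency here is 1/C ≈ 0.76, much higher than covering the same f with a flat g over a wide domain.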
Example: density f(x) = …
1) We did "standard" acceptance-rejection
2) "Importance Sampling":
2.1) Generate x from g(x) = …
2.2) Acceptance-Rejection on h(x) = f(x)/g(x), concave on …
[Figure: plain acceptance-rejection vs. Importance Sampling]
Method 4: Decomposition (… trick)
Idea: decompose the p.d.f. as a sum of simpler densities: f(x) = w₁f₁(x) + w₂f₂(x) + ⋯
Normalisation: w₁ + w₂ + ⋯ = 1; each density fᵢ(x) has a relative weight wᵢ.
Sample from fᵢ(x) with probability wᵢ… thus, more often from those with larger weight.
Algorithm (repeat for each of the n events):
i) Generate u ~ Un(0, 1) to select the component fᵢ(x) with probability wᵢ
ii) Sample x from fᵢ(x)
Note: sometimes the p.d.f. cannot be easily integrated… normalisation unknown… Evaluate the normalisation integrals during the generation process (numerical integration) and eventually assign a weight to each event.
Example: f(x) = … (normalisation: …)
Algorithm:
i) Generate u ~ Un(0, 1)
ii) Sample from f₁(x) 15% of the times, from f₂(x) 75% of the times, …
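A minimal sketch of the decomposition method with a hypothetical mixture (the weights and components are illustrative, not the slide's): f(x) = 0.25·(2x) + 0.75·(3x²) on (0, 1).

```python
import random

def sample_mixture() -> float:
    """f(x) = 0.25*(2x) + 0.75*(3x^2) on (0, 1), sampled by decomposition."""
    if random.random() < 0.25:
        return random.random() ** 0.5      # inverse transform of f1 = 2x
    return random.random() ** (1.0 / 3.0)  # inverse transform of f2 = 3x^2

draws = [sample_mixture() for _ in range(10**5)]
print(sum(draws) / len(draws))  # exact mean: 0.25*(2/3) + 0.75*(3/4) ~ 0.729
```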
Generalisation: extend the decomposition to a continuous family and consider f(x) = ∫ g(x|θ) h(θ) dθ; that is, express f(x) as the marginal density of f(x, θ) = g(x|θ)·h(θ).
Algorithm (repeat for each of the n events):
i) Sample θ ~ h(θ)
ii) Sample x ~ g(x|θ)
This is the structure that appears in Bayesian analysis and in the treatment of experimental resolution.
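A minimal sketch of this marginal structure with a hypothetical resolution model: a "true" value θ ~ Ex(θ|1) observed with Gaussian resolution g(x|θ) = N(x|θ, σ).

```python
import math
import random

SIGMA = 0.2  # hypothetical experimental resolution

def sample_smeared() -> float:
    theta = -math.log(1.0 - random.random())  # theta ~ Ex(1): "true" value
    return random.gauss(theta, SIGMA)         # x ~ N(theta, sigma): observed

draws = [sample_smeared() for _ in range(10**5)]
print(sum(draws) / len(draws))  # mean of the marginal: E[theta] = 1
```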
The Normal Distribution N(x|μ, σ²): the standardisation z = (x − μ)/σ reduces the problem to sampling N(z|0, 1), but…
i) Inverse Transform: the Distribution Function is not an elementary function; it is an entire function with a convergent series expansion, but the inversion is slow,…
ii) Central Limit Theorem: a sum of Un(0, 1) quantities is approximately Normal (only approximate)
iii) Acceptance-Rejection: not efficient, although… it is easy to find good approximations
iv) Importance Sampling
Sometimes, going to higher dimensions helps: in two dimensions…
Inverse Transform in two dimensions:
i) Consider two independent random quantities z₁, z₂ ~ N(0, 1)
ii) Polar transformation: z₁ = r cos φ, z₂ = r sin φ, with r and φ independent
iii) Marginal Distributions: φ ~ Un(0, 2π) and r² ~ Ex(1/2)… trivial inversion
Algorithm (repeat for each of the n events):
i) Generate u₁, u₂ ~ Un(0, 1)
ii) Invert F(φ) and F(r) to get φ = 2πu₁ and r = √(−2 ln u₂)
iii) Obtain z₁ = r cos φ and z₂ = r sin φ (independent)
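A minimal sketch of this two-dimensional inversion (the classical Box-Muller algorithm):

```python
import math
import random

def box_muller() -> tuple[float, float]:
    """Return two independent N(0, 1) deviates from two uniforms."""
    phi = 2.0 * math.pi * random.random()                  # phi ~ Un(0, 2pi)
    r = math.sqrt(-2.0 * math.log(1.0 - random.random()))  # r^2 ~ Ex(1/2)
    return r * math.cos(phi), r * math.sin(phi)

zs = [z for _ in range(10**5) for z in box_muller()]
mean = sum(zs) / len(zs)
var = sum(z * z for z in zs) / len(zs) - mean**2
print(mean, var)  # should be close to 0 and 1
```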
Example: 2-dimensional Normal random quantity, sampling from the Conditional Densities.
Standardisation: zᵢ = (xᵢ − μᵢ)/σᵢ, with correlation coefficient ρ.
i) Generate z₁ ~ N(0, 1)
ii) Generate z₂ from the conditional density N(z₂ | ρz₁, 1 − ρ²), i.e. z₂ = ρz₁ + √(1 − ρ²)·z with z ~ N(0, 1); undo the standardisation to recover (x₁, x₂).
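A minimal sketch of this conditional sampling for the standardised pair, with an illustrative correlation ρ = 0.8:

```python
import math
import random

RHO = 0.8  # illustrative correlation coefficient

def sample_bivariate_normal() -> tuple[float, float]:
    """(z1, z2) standard Normal with correlation RHO, via the conditional density."""
    z1 = random.gauss(0.0, 1.0)
    # z2 | z1 ~ N(RHO*z1, 1 - RHO^2)
    z2 = random.gauss(RHO * z1, math.sqrt(1.0 - RHO * RHO))
    return z1, z2

pairs = [sample_bivariate_normal() for _ in range(10**5)]
print(sum(a * b for a, b in pairs) / len(pairs))  # should be close to RHO
```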