Parallel Monte Carlo and the Quantum Harmonic Oscillator PHYS 244 Final Project Spring 2013 By Raul Herrera and Travis Johnson
Theory of Quantum Oscillator
• Quantum system with potential V(x) = ½ m ω² x², where m is the mass and ω the angular frequency
• Has discrete energy levels, with the lowest (ground-state) energy E₀ = ½ ℏω
• Particle position follows a probability distribution: the probability density is the absolute value squared of the wavefunction, |ψ(x)|²
• For the ground state the wavefunction is a Gaussian, ψ₀(x) ∝ exp(−mωx²/2ℏ)
Path Integral
• A quantum particle can take many paths, but only some are probable
• Turn the path integral from complex (oscillatory) to real by rotating to imaginary time, it → τ (Wick rotation)
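After the Wick rotation, each path is weighted by a real Boltzmann-like factor. As a sketch (assuming m = ℏ = 1 and lattice spacing ε, consistent with the EPSILON and OMEGA constants in the code later in this deck), the discretized Euclidean action over a periodic lattice of N points is:

```latex
S_E[x] \;=\; \sum_{i=0}^{N-1} \left[ \frac{(x_{i+1}-x_i)^2}{2\epsilon} \;+\; \frac{\epsilon}{2}\,\omega^2 x_i^2 \right],
\qquad
P[x] \;\propto\; e^{-S_E[x]}
```

Each term here matches the kinetic and potential pieces that appear in the metropolis() function on the code slides.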
Monte Carlo
• Calculate integrals weighted by a probability distribution
• Generate samples from that distribution using the Metropolis method
(Pseudo)Random Number Generation
• Any MC simulation needs to generate a huge number of random numbers
• Generating random numbers in parallel raises some issues:
  • Is the generator safe to call from multiple threads? (rand() is not thread safe)
  • Are the random number streams independent?
• Solution?
Dynamic Creator of Mersenne Twisters
• Use a library (dcmt) to generate independent random number generators
• Each thread gets its own MT random number generator
• RNG parameters and state are stored in a struct
• Thread safe, and generates independent streams, as desired
Basics of the Code
• Each thread gets its own random number generator to sample paths with
• Parallelized with OpenMP in two ways:
  • Parallel – split the time lattice so all threads update one distribution simultaneously
  • Distributed – each thread updates and samples its own distribution independently
Program Structure
• Path updates are done using Metropolis
• Find an initial path near the lowest-energy path
• Use multiple threads in parallel to update one distribution some number of times:
  parallel_update(distribution, numIterations);
• Create a parallel region in which to generate sample paths quickly using multiple threads; each thread generates independent sample paths
• Calculate the average value of x squared, used to find the ground-state energy
The Metropolis Algorithm

double metropolis(double xminus, double x, double xplus,
                  mt_struct *mts, uint32_t *accepted_count)
{
    double actionOld, actionNew, xnew, prob;

    /* Propose a new point within +/- DELTA of x */
    xnew = x + DELTA * (2.0 * genrand_mt(mts) - UINT32_MAX) / UINT32_MAX;

    actionOld = (1/(2*EPSILON)) * (pow(x - xminus, 2) + pow(x - xplus, 2))
              + (EPSILON/2) * pow(x * OMEGA, 2);
    actionNew = (1/(2*EPSILON)) * (pow(xnew - xminus, 2) + pow(xnew - xplus, 2))
              + (EPSILON/2) * pow(xnew * OMEGA, 2);

    // If the action is lowered, accept the new point.
    if (actionNew < actionOld) {
        (*accepted_count)++;
        return xnew;
    }
    // If not, still accept the new point with probability exp(-(change in action))
    prob = (double)genrand_mt(mts) / UINT32_MAX;
    if (exp(actionOld - actionNew) > prob) {
        (*accepted_count)++;
        return xnew;
    }
    return x;
}
Parallel Update

void parallel_update(double *dist1, int numIterations)
{
    int i, n, tid;
    double *dist2, *inBuf, *outBuf;
    mt_struct *mts;
    uint32_t acceptance = 0;

    /* Scratch buffer: POINTS interior sites plus two ghost endpoints */
    dist2 = malloc((POINTS + 2) * sizeof(double));

    #pragma omp parallel private(i, n, tid, inBuf, outBuf, mts, acceptance) \
                         shared(dist2, dist1, numIterations)
    {
        tid = omp_get_thread_num();
        mts = get_mt_parameter_id_st(32, 521, tid, SEED);
        sgenrand_mt(time(NULL), mts);

        for (n = 1; n <= numIterations; n++) {
            // Alternate which buffer is input and which is output
            if (n % 2) { inBuf = dist1; outBuf = dist2; }
            else       { inBuf = dist2; outBuf = dist1; }

            #pragma omp for schedule(dynamic, 100)
            for (i = 1; i <= POINTS; i++)
                outBuf[i] = metropolis(inBuf[i-1], inBuf[i], inBuf[i+1], mts, ...);

            // Adjust the endpoint (periodic boundary) values once
            #pragma omp single
            {
                outBuf[0] = outBuf[POINTS];
                outBuf[POINTS + 1] = outBuf[1];
            }
        } // end for(n)

        free_mt_struct(mts);
    } // end omp parallel
    free(dist2);
}
Distributed Sampling

#pragma omp parallel private(...) shared(...)
{
    // (Initializations and declarations)
    double dist[POINTS];
    // (Copy the initial distribution into dist for each thread)
    // (Create and seed the RNG mts)
    numThreads = omp_get_num_threads();
    tid = omp_get_thread_num();

    for (n = 0; n < runs_per_thread; n++) {
        for (i = 0; i < updates_per_run; i++)
            update(dist, mts);

        xsquared = 0;
        for (i = 0; i < POINTS; i++)
            xsquared += pow(dist[i], 2);

        // Write the xsquared value to file
        #pragma omp critical
        fprintf(fp_xsquared, "%f\t", xsquared / POINTS);
    } // end of for(n)

    // (Output a sample distribution)
} // end of omp parallel
Results
• Physics
  • Calculated energy: 0.0332071
  • Theoretical energy: 0.0333333 (virial theorem)
• Probability distribution is nearly Gaussian
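The quoted theoretical value comes from the virial theorem. As a sketch of the relation presumably used to turn the measured ⟨x²⟩ into a ground-state energy (the deck does not show the formula explicitly): for the harmonic oscillator the average kinetic and potential energies are equal, so

```latex
\langle T \rangle \;=\; \langle V \rangle \;=\; \frac{E_0}{2}
\quad\Longrightarrow\quad
E_0 \;=\; 2\,\langle V \rangle \;=\; m\,\omega^2\,\langle x^2 \rangle
```

which is why the code only needs to accumulate x² over the sampled paths.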
Results
• Total program speedup
Conclusions
• By parallelizing the Monte Carlo algorithm we can generate more paths in less time, or increase the number of lattice points
• This improves both the accuracy and the precision of the results
• Speedup is not exactly linear in the number of threads; the parallel efficiency depends on the number of lattice points
• For a small lattice, too many threads can actually slow the program down
References
• Creutz, M., & Freedman, B. (1981). A statistical approach to quantum mechanics. Annals of Physics, 132(2), 427–462.
• Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H., & Teller, E. (1953). Equation of state calculations by fast computing machines. The Journal of Chemical Physics, 21(6), 1087–1092.
• Slapik, A., & Serenone, W. (2012). Lattice Monte Carlo Study of the Harmonic Oscillator in the Path Integral Formulation. DESY Summer Student Programme.