
Random-walk solution of the global illumination problem

  1. Random-walk solution of the global illumination problem László Szirmay-Kalos

  2. Expansion solution of the global illumination problem
$$\mathcal{M}(\mathcal{T}^n L^e) = \int \cdots \int R(\omega_1,\ldots,\omega_n)\, L^e(\omega_1,\ldots,\omega_n)\, d\omega_1 \cdots d\omega_n$$
$$R(\omega_1,\omega_2) = \frac{1}{S_p}\, f_r(\omega_1, x, \omega)\cos\theta'_1 \cdot f_r(\omega_2, y, \omega_1)\cos\theta'_2$$
(Figure: a two-bounce path from the emission $L^e$ through points $y$ and $x$ with directions $\omega_2, \omega_1, \omega$ and incident angles $\theta'_1, \theta'_2$; $L(x,\omega)$ is the radiance leaving $x$.)

  3. Expansion
$$R(\omega_1,\ldots,\omega_n, x) = w_1 w_2 \cdots w_n, \qquad w_i = f_r \cos\theta'$$
$$F = \sum_n \int \cdots \int R(\omega_1,\ldots,\omega_n, x)\, L^e(\omega_1,\ldots,\omega_n, x)\, d\omega_1 \cdots d\omega_n\, dx \approx \frac{1}{M} \sum_{i=1}^{M} S(\omega_1^{(i)},\ldots,\omega_n^{(i)}, x^{(i)})$$
(Figure: a random walk started through a pixel, bouncing at directions $\omega_1, \omega_2, \ldots$ until it reaches the emission $L^e$.)
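A minimal sketch of the outer loop this estimator suggests: average the contributions of M independent random walks. Walk() is a hypothetical helper returning the sample value $S(\omega_1,\ldots,\omega_n,x)$ of one walk; Color follows the conventions of the classes on the later slides.

    // Average M independent random-walk samples of the pixel functional F.
    Color Estimate(int M) {
        Color F(0);                  // accumulated sample sum
        for (int i = 0; i < M; i++)
            F += Walk();             // contribution S(...) of the i-th random walk
        return F * (1.0 / M);        // (1/M) * sum_i S(...)
    }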

  4. Why Monte-Carlo expansion?
• Avoid the exponential core: Monte-Carlo or quasi-Monte Carlo integration
• Intuitive: the integrand $R(\omega_1,\ldots,\omega_n,x) \cdot L^e(\omega_1,\ldots,\omega_n,x)$ is a probability times an emission
• The integrand is
  • in $L_2$: MC gives finite variance
  • not of finite variation, but its discontinuities have finite length: QMC still applies

  5. Importance Sampling
$$F = \mathcal{M}\Big(\sum_n \mathcal{T}^n L^e\Big) = \sum_n \int \cdots \int R(\omega_1,\ldots,\omega_n)\, L^e(\omega_1,\ldots,\omega_n)\, d\omega_1 \cdots d\omega_n\, dp$$
$$R(\omega_1,\ldots,\omega_n, p) = \frac{1}{S_p}\, w_1 w_2 \cdots w_n, \qquad w_i = f_r \cos\theta'$$
$$F = \mathcal{M}\big(L^e + \mathcal{T}(L^e + \mathcal{T}(L^e + \ldots))\big) = \mathcal{M}\Big(L^e + \int w_1 \big(L^e + \int w_2 (L^e + \ldots)\, d\omega_2\big)\, d\omega_1\Big)$$

  6. Iterative importance sampling
• Ideal choice: $\Pr(\omega_i) \propto w_i\,(L^e + \ldots)$, i.e. reflection probability times (direct lighting + indirect lighting)
• BRDF sampling: $\Pr(\omega_i) \propto w_i$
• Direct lighting: $\Pr(\omega_i) \propto L^e$
• Global importance sampling: $\Pr(\omega_i) \propto w_i \cdot$ ($L^{in}$ estimate)
• Adaptive importance sampling: find a process that converges to $\Pr(\omega_i) \propto w_i\,(L^e + \ldots)$

  7. BRDF sampling: $\Pr(\omega_{in} \mid \omega_{out}) \propto w = f_r(\omega_{in},\omega_{out}) \cos\theta'$
Transformation of random variables:
1. Use spherical coordinates: $w(\omega_{in} \mid \omega_{out})\, d\omega = w(\phi,\theta') \sin\theta'\, d\phi\, d\theta'$
2. Normalization: $p(\phi,\theta')\, d\phi\, d\theta' = w\, d\omega / \int f_r(\omega_{in},\omega_{out}) \cos\theta'\, d\omega$; the scale is the albedo $a(\omega_{out}) = \int f_r(\omega_{in},\omega_{out}) \cos\theta'\, d\omega$
3. Decomposition: $p(\phi,\theta') = p_\phi(\phi)\, p_{\theta'}(\theta')$
4. Probability distributions: $P_{\theta'}(\theta') = \int_0^{\theta'} p_{\theta'}(\theta)\, d\theta$, $P_\phi(\phi) = \int_0^{\phi} p_\phi(\psi)\, d\psi$
5. Generate uniform random variables $r_1, r_2$.
6. Find the $\phi, \theta'$ sample by transforming $r_1, r_2$ with the inverse probability distributions: $\theta' = P_{\theta'}^{-1}(r_1)$, $\phi = P_\phi^{-1}(r_2)$

  8. Diffuse BRDF sampling
1. Use spherical coordinates: $w(\omega_{in} \mid \omega_{out})\, d\omega = k_d \cos\theta' \sin\theta'\, d\phi\, d\theta'$
2. Normalization: albedo $= k_d \pi$, so $p(\phi,\theta') = \cos\theta' \sin\theta' / \pi$
3. Decomposition: $p(\phi,\theta') = \frac{1}{2\pi} \cdot 2\cos\theta'\sin\theta' = \frac{1}{2\pi} \cdot \sin 2\theta'$
4. Probability distributions: $P_\phi(\phi) = \phi / 2\pi$, $P_{\theta'}(\theta') = \sin^2\theta'$
5. Generate uniform random variables $r_1, r_2$.
6. Find the $\phi, \theta'$ samples: $\phi = 2\pi r_2$, $\theta' = \arcsin\sqrt{r_1}$

  9. Diffuse BRDF class
class Diffuse {
    Color Kd;
public:
    Color BRDF(Vec& L, Vec& N, Vec& V) { return Kd; }
    double NextDir(Vec& L, Vec& N, Vec& V) {
        double theta = asin(sqrt( rnd() ));   // theta' = arcsin sqrt(r1)
        double phi = M_PI * 2.0 * rnd();      // phi = 2*pi*r2
        Vec O = N % Vec(0.0, 0.0, 1.0), P = N % O;   // cheat !!! fails if N is parallel to (0,0,1)
        L = N*cos(theta) + O*sin(theta)*cos(phi) + P*sin(theta)*sin(phi);
        return cos(theta) / M_PI;             // pdf of the generated direction
    }
};

  10. Diffuse BRDF sampling: geometric approach
(Figure: a solid angle $d\omega$ projects onto the base circle as $d\omega\cos\theta$; an area element $dA$ on the circle corresponds to $dA/\cos\theta$ on the hemisphere; points generated outside the circle are rejected samples.)
1. Generate a uniformly distributed point in a square.
2. Reject it and generate a new one if it is not in the base circle: do { x = r1, y = r2 } while (x*x + y*y > 1)
3. Map the accepted point to the sphere to find the direction: z = sqrt(1 - x*x - y*y)
A minimal code sketch follows.
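A hedged sketch of the rejection-based sampling described above, following the conventions of the Diffuse class; rnd() is assumed to return a uniform number in [0,1), and the square is taken as [-1,1]x[-1,1]:

    // Generate a cosine-distributed direction by rejection sampling:
    // pick a uniform point in the square, reject it if it falls outside
    // the unit circle, then lift the accepted point onto the hemisphere.
    void CosineSampleDir(double& x, double& y, double& z) {
        do {
            x = 2.0 * rnd() - 1.0;         // steps 1-2: uniform point in the square
            y = 2.0 * rnd() - 1.0;
        } while (x*x + y*y > 1.0);         // rejected if outside the base circle
        z = sqrt(1.0 - x*x - y*y);         // step 3: map to the hemisphere
    }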

  11. Specular BRDF sampling
1. Use spherical coordinates around the ideal reflection direction: $w(\omega_{in} \mid \omega_{out})\, d\omega = k_s \cos^n\psi \cos\theta' \sin\psi\, d\phi\, d\psi$
2. Compromise: sample only $p(\phi,\psi) \propto k_s \cos^n\psi \sin\psi$
3. Normalization: $\int k_s \cos^n\psi \sin\psi\, d\phi\, d\psi = 2\pi k_s/(n+1)$, so $p(\phi,\psi) = \frac{n+1}{2\pi} \cos^n\psi \sin\psi$
4. Decomposition: $p(\phi,\psi) = \frac{1}{2\pi} \cdot (n+1)\cos^n\psi \sin\psi$
5. Probability distributions: $P_\phi(\phi) = \phi/2\pi$, $P_\psi(\psi) = 1 - \cos^{n+1}\psi$
6. Generate uniform random variables $r_1, r_2$.
7. Find the $\phi, \psi$ samples: $\phi = 2\pi r_2$, $\psi = \arccos r_1^{1/(n+1)}$
8. The radiance is 0 if $\theta'(\phi,\psi) > 90°$ (the direction points below the surface)

  12. Specular BRDF class
class Phong {
    Color Ks;
    double shine;
public:
    double NextDir(Vec& L, Vec& N, Vec& V) {
        double u = rnd(), v = rnd();
        double cos_V_R = pow(u, 1.0/(shine+1));          // psi = arccos(r1^(1/(n+1)))
        double sin_V_R = sqrt(1.0 - cos_V_R*cos_V_R);
        Vec O = V % Vec(0.0, 0.0, 1.0), P = O % V;
        Vec R = O*sin_V_R*cos(2.0*M_PI*v) + P*sin_V_R*sin(2.0*M_PI*v) + V*cos_V_R;
        L = N * (N * R) * 2.0 - R;                       // mirror the sampled direction about the normal
        if (N * L < 0) return 0;                         // below the surface: radiance is 0
        return (shine+1)/2/M_PI * pow(cos_V_R, shine);   // pdf of the generated direction
    }
};

  13. Ideal reflection sampling
• $w(\omega_{in} \mid \omega_{out})$ is non-zero only for the ideal reflection direction.
• Select the reflection direction with probability 1 (a minimal sketch follows).
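One way such a deterministic sampler might look, following the Vec conventions of the Diffuse and Phong classes; this is a sketch under those assumptions, not the author's original Reflector class:

    // Ideal mirror: the only direction with non-zero weight is the
    // reflection of V about the normal, so it is chosen with probability 1.
    class Reflector {
    public:
        double NextDir(Vec& L, Vec& N, Vec& V) {
            L = N * (N * V) * 2.0 - V;   // ideal reflection direction
            return 1.0;                  // deterministic choice: probability 1
        }
    };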

  14. Infinite dimensional integrals: Russian roulette
1. MC integral of a bounce: $\int w_i\,(L^e + \ldots)\, d\omega_i = E[\,w_i\,(L^e + \ldots)/p(\omega_i)\,] = E[L^{refl}]$
2. Compute the reflected radiance only with probability $s$, otherwise assume that it is 0.
3. Compensate the error by dividing the radiance by $s$.
• The expected value is still correct: $E[L^{refl*}] = s\, E[L^{refl}/s] + (1-s)\cdot 0 = E[L^{refl}]$
• The variance is increased: $D^2[L^{refl*}] = s\, E[(L^{refl}/s)^2] + (1-s)\cdot 0 - E^2[L^{refl}] = (1/s - 1)\, E[(L^{refl})^2] + D^2[L^{refl}]$
A code sketch of the roulette step follows.
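A hedged sketch of the roulette step; EstimateBounce() is a hypothetical helper returning the unbiased bounce estimate $w_i\,(L^e+\ldots)/p(\omega_i)$, and rnd() is the uniform generator used on the other slides:

    // Russian roulette: evaluate the bounce only with probability s and
    // divide by s, so the expected value of the estimator is unchanged.
    Color RouletteBounce(double s) {
        if (rnd() < s)
            return EstimateBounce() / s;   // continue: compensate by 1/s
        return Color(0);                   // terminate: contribution assumed to be 0
    }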

  15. Selection of the termination probability: albedo
1. BRDF importance sampling: $\int w_i\,(L^e + \ldots)\, d\omega_i = E[\,w_i(\omega_i,\omega_{i-1})/p(\omega_i)\cdot(L^e + \ldots)\,] = E[\,a(\omega_{i-1})\,(L^e + \ldots)\,]$
2. Continuation with the probability of the albedo:
   if (rnd() < a(w_{i-1})) return Le + ...; else return 0;

  16. Russian roulette with QMC
• Use fewer samples where the integrand $\int w\, L^{in}\, d\omega'$ is smaller.
• Add an indicator (0/Const) function $C(\omega', r)$ such that $w = \int C(\omega', r)\, dr$.
• Substitution: $\int w\, L^{in}\, d\omega' = \int_r \int_{\omega'} C(\omega', r)\, L^{in}\, d\omega'\, dr$

  17. Combined BRDF sampling
1. The BRDF is a sum of elementary BRDFs: $f_r = f_1 + f_2 + \ldots + f_n$, $w = w_1 + w_2 + \ldots + w_n$
2. $\int w\, L^{in}\, d\omega = \int w_1 L^{in}\, d\omega + \ldots + \int w_n L^{in}\, d\omega$
3. Evaluate this sum randomly:
   • probability of term $i$: $s_i$; probability of selecting no term: $(1 - s_1 - s_2 - \ldots - s_n)$
   • if term $i$ is selected, it is divided by $s_i$
• Expected value: $s_1 \int w_1 L^{in}\, d\omega / s_1 + \ldots + s_n \int w_n L^{in}\, d\omega / s_n + (1 - \ldots)\cdot 0 = \int w\, L^{in}\, d\omega$

  18. Low variance estimator
• $s_i$ is the albedo of the $i$-th elementary BRDF.
1. Select a reflection model with probabilities $a_1, a_2, \ldots, a_n$; stop with probability $(1 - a_1 - a_2 - \ldots - a_n)$.
2. Generate the direction using the selected reflection model.
3. $L^{refl} = L^{in}$ from the selected direction, or 0 if stopped.
$$\int w\, L^{in}\, d\omega = \int w_1 L^{in}\, d\omega + \ldots + \int w_n L^{in}\, d\omega = E[\,w_1/p_1\, L^{in}\,] + \ldots + E[\,w_n/p_n\, L^{in}\,] = E[a_1 L^{in}] + \ldots + E[a_n L^{in}]$$

  19. Combined BRDF class
class CombMat : Diffuse, Phong, Reflector, Refractor {
    enum {NO, DIFF, PHONG, REFLECT, REFRACT} selected;
public:
    double SelectModel(Vec& L, Vec& N, Vec& V);
    double NextDir(Vec& L, Vec& N, Vec& V, BOOL out);
    Color BRDF(Vec& L, Vec& N, Vec& V);
};

  20. SelectModel
double CombMat::SelectModel(Vec& L, Vec& N, Vec& V) {
    double akd = Diffuse::Albedo(N, V),   aks = Phong::Albedo(N, V),
           akr = Reflector::Albedo(N, V), akt = Refractor::Albedo(N, V);
    double r = rnd();
    if ((r -= akd) < 0) { selected = DIFF;    return akd; }
    if ((r -= aks) < 0) { selected = PHONG;   return aks; }
    if ((r -= akr) < 0) { selected = REFLECT; return akr; }
    if ((r -= akt) < 0) { selected = REFRACT; return akt; }
    selected = NO; return 0.0;   // Russian roulette: terminate the walk
}

  21. CombMat::NextDir
double CombMat::NextDir(Vec& L, Vec& N, Vec& V, BOOL out) {
    switch (selected) {
    case DIFF:    return Diffuse   :: NextDir(L, N, V);
    case PHONG:   return Phong     :: NextDir(L, N, V);
    case REFLECT: return Reflector :: NextDir(L, N, V);
    case REFRACT: return Refractor :: NextDir(L, N, V, out);
    default:      return 0.0;    // NO: the walk was terminated by Russian roulette
    }
}

  22. CombMat::BRDF
Color CombMat::BRDF(Vec& L, Vec& N, Vec& V) {
    switch (selected) {
    case DIFF:    return Diffuse :: BRDF(L, N, V);
    case PHONG:   return Phong   :: BRDF(L, N, V);
    case REFLECT: return (N*L > 0)  ? kr() / (N*L)  : Color(0);
    case REFRACT: return (-N*L > 0) ? kt() / (-N*L) : Color(0);
    default:      return Color(0);
    }
}

  23. Requirements of BRDF models
• Local illumination and ray tracing: calculation of the $L^{out}$ radiance from $L^{in}$
• Random-walk global illumination: generation of a random $\omega'$ and computation of the albedo
  • diffuse, ideal reflection: yes
  • Phong, Ward, Schlick, ideal refraction: almost
  • Cook-Torrance, He-Torrance: no
(A sketch of the implied interface follows.)
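The requirements above suggest a common interface; a possible sketch, with names assumed to follow the Diffuse/Phong classes used earlier:

    // Abstract material interface for random-walk global illumination:
    // BRDF evaluation, random direction generation, and albedo query.
    class Material {
    public:
        virtual Color  BRDF(Vec& L, Vec& N, Vec& V) = 0;     // evaluate f_r
        virtual double NextDir(Vec& L, Vec& N, Vec& V) = 0;  // generate a random w', return its pdf
        virtual double Albedo(Vec& N, Vec& V) = 0;           // a(w_out) for Russian roulette / model selection
    };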

  24. Colored scenes
• The rendering or potential equation should be solved on several wavelengths: $\lambda_1, \lambda_2, \ldots, \lambda_m$
• Solving each wavelength separately wastes the visibility computation
• Solving all wavelengths simultaneously: a scalar importance should be defined on the radiance vector
• Luminance-like weighting, e.g.: $k_d^{eff} = \sum_i k_d(\lambda_i)\, \mathrm{weight}(\lambda_i)$
(A small sketch of this weighting follows.)
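A small sketch of the effective diffuse coefficient; the array layout and the weight values (e.g. luminous efficiency samples) are assumptions, not part of the original slides:

    // Scalar importance of a spectral diffuse coefficient:
    // kd_eff = sum_i kd(lambda_i) * weight(lambda_i)
    double EffectiveKd(const double kd[], const double weight[], int nLambda) {
        double kdeff = 0.0;
        for (int i = 0; i < nLambda; i++)
            kdeff += kd[i] * weight[i];
        return kdeff;
    }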

  25. Global importance sampling
• Two-pass methods (estimation + random walk)
• Preprocessing: photon shooting
• Data structure storing the radiance estimate (a 4-5D table):
  • 5D adaptive tree
  • photon map
  • links
  • wavelets
• On-the-fly importance generation from the data structure

  26. Photon shooting + importance generation
(Figure: the hemisphere above a patch is discretised into bins; the photon impacts visible from the patch are counted per bin, giving a discrete pdf and the corresponding discrete CDF of the random direction variable.)
A sketch of sampling from such a discrete CDF follows.
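A hedged sketch of drawing a hemisphere bin from the discrete pdf built from the photon hits; the pdf array and the rnd() generator are assumptions following the earlier slides:

    // Sample a bin index proportionally to the discrete pdf by inverting
    // the discrete CDF with a single uniform random number.
    int SampleBin(const double pdf[], int nBins) {
        double r = rnd(), cdf = 0.0;
        for (int i = 0; i < nBins; i++) {
            cdf += pdf[i];            // build the discrete CDF on the fly
            if (r < cdf) return i;    // first bin whose CDF exceeds r
        }
        return nBins - 1;             // guard against floating-point rounding
    }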

  27. Adaptive importance sampling
• Single-pass methods
• Use the previous walks to guide the importance of the future walks
• Explicit storage of importance, VEGAS sampling: approximate high-dimensional importance functions by products of 1D functions
• Implicit: use a process that converges to the desired probability density (Metropolis)

  28. Metropolis sampling
• Adaptation is guided by a Markov chain.
(Figure: from the current sample $z_i$ the mutation $T$ proposes a tentative sample $z_t$, which becomes $z_{i+1}$ with acceptance probability $f(z_t)/f(z_i)$.)

  29. Design of a Metropolis method
• "Arbitrary" mutation function $T(z_i \to z_t)$
• Construct the acceptance probability $a(z_i \to z_t)$ so that the limiting probability is $p(z) \propto f(z)$
• Detailed balance: $p(z_1)\, T(z_1 \to z_2)\, a(z_1 \to z_2) = p(z_2)\, T(z_2 \to z_1)\, a(z_2 \to z_1)$, hence
$$\frac{a(z_1 \to z_2)}{a(z_2 \to z_1)} = \frac{f(z_2)\, T(z_2 \to z_1)}{f(z_1)\, T(z_1 \to z_2)}$$

  30. Metropolis algorithm
FOR i = 1 TO M DO
    Using z_i choose another random, tentative point z_t
    a(z_i → z_t) = (f(z_t) · T(z_t → z_i)) / (f(z_i) · T(z_i → z_t))
    IF a(z_i → z_t) > 1 THEN accept: z_{i+1} = z_t
    ELSE   // accept with probability a(z_i → z_t)
        Generate a random number r in [0,1]
        IF r < a(z_i → z_t) THEN z_{i+1} = z_t ELSE z_{i+1} = z_i
    ENDIF
ENDFOR
(A C++ transcription follows.)
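A direct C++ transcription of the pseudocode above, for a 1D state used only as illustration; f, T, Mutate and rnd are assumed to be given:

    // One run of the Metropolis chain: propose a tentative point, accept it
    // with probability min(a, 1), otherwise keep the current sample.
    double Metropolis(int M, double z) {
        for (int i = 1; i <= M; i++) {
            double zt = Mutate(z);                              // tentative point drawn from T(z -> zt)
            double a  = f(zt) * T(zt, z) / (f(z) * T(z, zt));   // acceptance ratio
            if (a > 1.0 || rnd() < a)
                z = zt;                                         // z_{i+1} = z_t
            // otherwise z_{i+1} = z_i
        }
        return z;
    }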

  31. Convergence of the probability density
(Figure: starting from the initial sample $z_0$, the densities $p_1, p_2, \ldots$ of the chain approach the stationary density $p \propto f(z)$.)

  32. Definition of a Metropolis algorithm
• Perturbation (mutation) strategy: how to find a new tentative sample in the neighborhood of the actual sample
• Requirement: any state can be reached
• Benefits: adaptive, exact importance sampling
• Drawbacks: the adaptation phase causes start-up bias

  33. Metropolis method for the rendering equation
(Figure: a complete light path from the emission $L^e$ through the scene to a pixel.)
• Mutations: perturbing the directions, adding and deleting steps
