Lightness
• Digression from boundary detection.
• Vision is about recovering properties of scenes; lightness is about recovering material properties.
• The simplest such property is how light or dark a material is (i.e., its reflectance).
• We'll see how boundaries are critical in solving other vision problems.
Basic problem of lightness
Luminance (the amount of light striking the eye) depends on illuminance (the amount of light striking the surface) as well as on reflectance.
Basic problem of lightness
[Figure: two surface patches, A and B.]
Is B darker than A because it reflects a smaller proportion of light, or because it is further from the light?
Planar, Lambertian material.
L = r*cos(θ)*e
where r is the reflectance (a.k.a. albedo), θ is the angle between the light direction and the surface normal n, and e is the illuminance (strength of the light).
If we combine θ and e at each point into E(x,y), then:
L(x,y) = R(x,y)*E(x,y)
L(x,y) = R(x,y)*E(x,y)
Think of E as the appearance of white paper under the given illuminance, and R as the appearance of the planar object under constant lighting; L is what we see.
Problem: we measure L, but we want to recover R. How is this possible?
Answer: we must make additional assumptions.
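The ambiguity is easy to see numerically: two different (reflectance, illuminance) pairs can produce exactly the same measured luminance. A minimal sketch (the particular numbers are made up for illustration):

```python
# Two different (reflectance, illuminance) pairs that yield the same
# measured luminance L = R * E. The numbers are illustrative only.
R1, E1 = 0.8, 0.5   # light material under dim light
R2, E2 = 0.4, 1.0   # dark material under bright light

L1 = R1 * E1
L2 = R2 * E2
assert abs(L1 - L2) < 1e-12   # both 0.4: L alone cannot determine R
```

This is why extra assumptions are unavoidable: the measurement equation has one known (L) and two unknowns (R and E) at every point.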
Illusions
• It seems as if the visual system is making a mistake.
• But perhaps the visual system is making assumptions to solve an underconstrained problem; illusions are artificial stimuli that reveal these assumptions.
Assumptions
• Light is slowly varying.
• This is reasonable for a planar world: nearby image points come from nearby scene points with the same surface normal.
• Within an object, reflectance is constant or slowly varying.
• Between objects, reflectance varies suddenly.
L(x,y) = R(x,y)*E(x,y)
• Formally, we assume that the illuminance, E, is low frequency.
L(x,y) = R(x,y)*E(x,y)
[Figure: an image decomposed into the product of a reflectance image and a lighting image.]
Smooth variations in the image are due to lighting; sharp ones are due to reflectance.
So, we remove the slow variations from the image. There are many approaches to this. One is:
• Take logs: log L(x,y) = log R(x,y) + log E(x,y).
• High-pass filter this (say, with a derivative).
• Why is the derivative a high-pass filter? d sin(nx)/dx = n cos(nx): frequency n is amplified by a factor of n.
• Threshold to remove the small, low-frequency variations.
• Then invert the process: integrate and exponentiate.
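The steps above can be sketched in 1D. The particular signal (a reflectance step edge under a slowly ramping light) and the threshold value (0.05) are illustrative assumptions, not part of the original method:

```python
import numpy as np

# 1D sketch of log -> derivative -> threshold -> integrate -> exponentiate.
n = 200
x = np.arange(n)
reflectance = np.where(x < n // 2, 0.9, 0.3)   # step edge between two materials
illuminance = 0.5 + 0.4 * x / n                # slowly varying lighting
L = reflectance * illuminance                  # measured luminance

d = np.diff(np.log(L))                         # log, then derivative (high-pass)
d[np.abs(d) < 0.05] = 0.0                      # threshold kills the slow lighting term
log_R = np.concatenate([[0.0], np.cumsum(d)])  # integrate
R_est = np.exp(log_R)                          # exponentiate: R up to overall scale
```

The recovered `R_est` is constant within each material region, and the ratio across the edge closely matches the true ratio 0.3/0.9; the absolute scale is lost, as noted below.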
[Figure: reflectances, reflectances*lighting, and the restored reflectances.] (Note that the overall scale of the reflectances is lost, because we take a derivative and then integrate.)
These operations are easy in 1D, tricky in 2D. • For example, in which direction do you integrate? • Many techniques exist.
These approaches fail on 3D objects, where illuminance can change quickly as well.
To solve this, we need to compute reflectance over the right region. This means that lightness depends on surface perception, i.e., a different kind of boundary detection.
What is the Question? (based on work of Basri and Jacobs, ICCV 2001)
Given an object described by its surface normal and albedo at each point (we will focus on Lambertian surfaces):
1. What is the dimension of the space of images that this object can generate under all possible lighting conditions?
2. How can we generate a basis for this space?
Empirical Study (Epstein, Hallinan and Yuille; see also Hallinan; Belhumeur and Kriegman)
Percentage of image energy captured, by dimension:

Dimension   Ball   Face   Phone   Parrot
#1          48.2   53.7   67.9    42.8
#3          94.4   90.2   88.2    76.3
#5          97.9   93.5   94.1    84.7
#7          99.1   95.3   96.3    88.5
#9          99.5   96.3   97.2    90.7
Domain
• Lambertian surfaces.
• No cast shadows (“convex” objects).
• Lights are distant.
Lighting to Reflectance: Intuition
[Figure: lighting decomposed as a sum of point sources.]
Lambert's law: k(θ) = max(cos θ, 0)
Lighting to Reflectance: Intuition
Three point light sources, l(θ,φ), illuminating a sphere, and its reflection r(θ,φ). [Figure: profiles of l(θ) and r(θ).]
[Figure: lighting, reflectance, and images. The reflectance function is the lighting “convolved” with the Lambertian kernel k; images sample the reflectance function at the object's surface normals.]
Spherical Harmonics (S.H.)
• An orthonormal basis, Ynm, for functions on the sphere.
• The n'th order harmonics have 2n+1 components.
• Rotation = phase shift (same n, different m).
• In spatial coordinates: polynomials of degree n.
S.H. analog to the convolution theorem
• Funk-Hecke theorem: “convolution” in the function domain is multiplication in the spherical harmonic domain.
Energy of the Lambertian kernel in low-order harmonics
k is a low-pass filter.
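The low-pass claim can be checked numerically by expanding k(u) = max(u, 0), with u = cos θ, in Legendre polynomials (the m = 0 harmonics); a sketch with our own variable names:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# Expand the Lambertian kernel in Legendre polynomials P_n and measure
# how much of its energy sits in each order n (midpoint-rule integrals).
N = 200_000
du = 2.0 / N
u = -1.0 + (np.arange(N) + 0.5) * du          # midpoints on [-1, 1]
k = np.clip(u, 0.0, None)                     # k(u) = max(u, 0)

total = np.sum(k * k) * du                    # total energy of the kernel
fracs = []
for n in range(5):
    Pn = Legendre.basis(n)(u)
    c = (2 * n + 1) / 2.0 * np.sum(k * Pn) * du    # Legendre coefficient
    fracs.append(c * c * (2.0 / (2 * n + 1)) / total)

# Orders 0, 1, 2 hold about 37.5%, 50%, and 11.7% of the energy,
# roughly 99.2% in total: k is indeed a low-pass filter.
print([round(f, 4) for f in fracs])
```

Odd orders above 1 vanish exactly, and the even orders decay rapidly, which is why truncating at second order loses so little.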
Reflectance functions lie near a low-dimensional linear subspace
Truncating the harmonic expansion at second order (1 + 3 + 5 components) yields a 9D linear subspace.
Forming Harmonic Images
[Figure: harmonic images λ, λz, λx, λy, λxy, λxz, λyz, …]
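One way to form the matrix of harmonic images is to evaluate the first nine harmonics, in their unnormalized polynomial forms, at each surface normal and scale by the albedo. A sketch (function and argument names are ours, and normalization constants are omitted):

```python
import numpy as np

def harmonic_images(normals, albedo):
    """Nine harmonic images: albedo times the first nine spherical
    harmonics evaluated at the surface normals (unnormalized
    polynomial forms). normals: (P, 3) unit vectors, albedo: (P,).
    Returns the (P, 9) matrix B whose columns are the harmonic images."""
    x, y, z = normals[:, 0], normals[:, 1], normals[:, 2]
    basis = np.stack([
        np.ones_like(x),        # order 0
        z, x, y,                # order 1
        x * y, x * z, y * z,    # order 2
        x * x - y * y,
        3.0 * z * z - 1.0,
    ], axis=1)
    return albedo[:, None] * basis

# Usage with random unit normals and albedos:
rng = np.random.default_rng(0)
nrm = rng.normal(size=(100, 3))
nrm /= np.linalg.norm(nrm, axis=1, keepdims=True)
alb = rng.uniform(0.2, 1.0, size=100)
B = harmonic_images(nrm, alb)
print(B.shape)   # (100, 9)
```

The first column is just the albedo image, and the next three are albedo-scaled components of the normal, matching the figure's labels.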
How accurate is the approximation? Point light source: the 9D space captures 99.2% of the energy.
How accurate is the approximation? Worst case:
• The DC component can be as big as any other.
• The 1st and 2nd harmonics of the light could have zero energy.
Even so, the 9D space captures 98% of the energy.
How accurate is the approximation?
• Accuracy depends on the lighting.
• For a point source: the 9D space captures 99.2% of the energy.
• For any lighting: the 9D space captures >98% of the energy.
Accuracy of approximation of images
• Normals are present in varying amounts.
• Albedo makes some pixels more important than others.
• The worst-case approximation can be arbitrarily bad.
• The “average” case approximation should be good.
Summary
• For convex, Lambertian objects, a 9D linear space captures >98% of the reflectance energy.
• This explains the previous empirical results.
• For lighting, it justifies low-dimensional methods.
Recognition
[Pipeline: Models → Find Pose → Harmonic Images (matrix B); Query (vector I) → Compare.]
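The compare step can be sketched as a least-squares projection of the query onto each model's 9D harmonic subspace; the model with the smallest residual wins. A sketch (function names are ours, and the matrices below are random stand-ins for real harmonic images):

```python
import numpy as np

def subspace_distance(B, I):
    """Distance from query image I (length P) to the subspace spanned
    by the columns of the harmonic-image matrix B (P x 9):
    min over lighting coefficients a of ||B a - I||."""
    a, *_ = np.linalg.lstsq(B, I, rcond=None)
    return np.linalg.norm(B @ a - I)

# Recognition sketch: an image rendered from one model's subspace is
# matched far better by that model than by a different one.
rng = np.random.default_rng(1)
B_right = rng.normal(size=(500, 9))   # stand-in harmonic images, model 1
B_wrong = rng.normal(size=(500, 9))   # stand-in harmonic images, model 2
I = B_right @ rng.normal(size=9)      # query lies in model 1's subspace
print(subspace_distance(B_right, I) < subspace_distance(B_wrong, I))  # True
```

The non-negative-light variant mentioned in the results constrains the recovered lighting to be physically realizable, at a small cost in accuracy here.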
Experiments (Basri & Jacobs)
• 3-D models of 42 faces acquired with a scanner.
• 30 query images for each of 10 faces (300 images).
• Pose automatically computed using manually selected features (Blicher and Roy).
• The best lighting is found for each model; the best-fitting model wins.
Results
• 9D linear method: 90% correct.
• 9D non-negative light: 88% correct.
• Ongoing work: most errors seem due to pose problems. With better poses, results seem near 100%.