
Introduction to Image-Space Methods in Computer Graphics

This presentation explores digital image manipulation techniques, including position changes, compositing, segmentation, and texture synthesis, using image-space methods. It also discusses the differences between real and digital images and the core concepts of digital image representation.




Presentation Transcript


  1. CS 395: Advanced Computer Graphics. Introduction to Image-Space Methods.
  Readings: Watt & Watt, Chapter 8; "A Pixel Is Not a Little Square!" by A. R. Smith.
  Jack Tumblin, jet@cs.northwestern.edu

  2. Image Space Methods
  • A 'Digital Image' = a data structure for displayed images
  • What can it do besides display? What can we change?
  • Position: offset, stretch, squeeze, smear, cut, ... Try it (Java applets): http://www.mrl.nyu.edu/~hertzman/nudge/
  • Combine: +, -, *; compositing (depth images?)
  • Separate: matte or 'segmenting'
  • Elaborate: make 'more of the same' picture

  3. 'Real' Images: Lens-Focused Light
  • 3D → 2D light intensity map I(x,y)
  • Angle(θ,φ) → Position(x,y)
  • 'Blurring': sharpness set by focus and lens quality
  [Figure: viewed scene focused onto the image plane; intensity I(x,y) maps Angle(θ,φ) to Position(x,y)]

  4. 'Digital' Images: 2D Grid of Numbers
  • NO intrinsic meaning, but ...
  • Widely assumed to represent point samples of a 'smoothed' 2D intensity surface
  • Uniform sampling pattern (but not always) (!weasel-word!)

  5. Real vs. Digital
  • Real Image == 2D light intensity map: I(x,y)
  • Digital Image == 2D grid of numbers: I(m,n) (pixels)

  6. Digital Images As Vectors
  • 'Stack up' the pixel values I(m,n) into one VERY LONG vector: I = [I00, I01, I02, ..., I10, I11, I12, ...]
  • 1 digital image == 1 point in N-dimensional space
  • Nearby points == similar images
  • All possible digital images: a grid of N-D points
  • Space of all practical digital images: ~8M dimensions (2 Mpix * RGBA), discrete, quantized
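
  A minimal sketch of this image-as-vector view, assuming NumPy and a made-up 2-megapixel RGBA array (the array shape and names are illustrative, not from the slides):

    import numpy as np

    # Hypothetical 2-megapixel RGBA image: height x width x 4 channels
    img = np.zeros((1024, 2048, 4), dtype=np.uint8)

    # 'Stack up' the pixel values into one very long vector
    v = img.reshape(-1)                    # ~8.4 million dimensions

    # Nearby points in this space correspond to similar images:
    other = img.copy()
    dist = np.linalg.norm(v.astype(float) - other.reshape(-1).astype(float))
    print(v.shape, dist)                   # (8388608,) 0.0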

  7. Digital Images As Vectors
  • Sensible element-by-element operations:
  • Add, subtract, scale two images: out = 0.5 + (I1 - I2)
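
  For instance, the difference image above is computed element by element; a small sketch assuming NumPy arrays with values in [0, 1]:

    import numpy as np

    I1 = np.random.rand(256, 256)          # placeholder images in [0, 1]
    I2 = np.random.rand(256, 256)

    # Difference image, re-centered around mid-gray and clamped to range
    out = np.clip(0.5 + (I1 - I2), 0.0, 1.0)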

  8. Digital Images As Vectors
  • Sensible element-by-element operations:
  • 'Alpha blending' or 'dissolve': an alpha-weighted sum of two images, with 0 < α < 1:
    out = α·I1 + (1-α)·I2

  9. Digital Images As Vectors
  • Compositing: use an image as 'opacity':
    out = α .* I1 + (1-α) .* I2    (element-wise products, with α a per-pixel alpha matte)
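
  A sketch of both operations, assuming NumPy and that alpha is either a scalar (dissolve) or a per-pixel matte the same size as the images; premultiplied-alpha details of the 'over' operator are omitted:

    import numpy as np

    I1 = np.random.rand(256, 256)
    I2 = np.random.rand(256, 256)

    # 'Dissolve': constant alpha, 0 < alpha < 1
    alpha = 0.3
    dissolve = alpha * I1 + (1.0 - alpha) * I2

    # Compositing: alpha is itself an image (a matte), applied per pixel
    matte = np.random.rand(256, 256)       # placeholder opacity image
    composite = matte * I1 + (1.0 - matte) * I2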

  10. Extended Compositing Methods
  • Environment Mattes
  [Figure: photograph, alpha matte, and environment matte]

  11. Environment matting project: http://grail.cs.washington.edu/projects/envmatte/

  12. Digital Images As Vectors
  • Exploration of the space-of-all-images:
  • Image shift, scale, rotation, warp, etc. as vector operations? Awkward...
  • 'Video Textures' (Schodl et al., SIGGRAPH 2000): http://www.gvu.gatech.edu/perception/projects/videotexture/
  • 'Face Space', 'eigenfaces', etc.: gather many example images of faces, then use PCA to find desirable axes: gender, age, facial expression...
  • Depth images: is this still 2D?

  13. Texture Synthesis: Learned Vectors
  • Texture synthesis / elaboration: Efros '98, Wei/Levoy '99, Ashikhmin 2000, ...
  [Figure: input sample and synthesized result]

  14. Texture Synthesis: Learned Vectors
  • Core idea:
  • For each pixel, make a vector of neighborhood pixel values.
  • Make a 'dictionary' of these vectors: given a neighborhood vector, what are the most likely values of the center pixel? The dictionary is crude, local pattern learning.
  • Use the dictionary to build a new image from 'seed' pixels/regions chosen at random, as sketched below.
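
  A minimal sketch of this idea under stated assumptions: grayscale NumPy arrays, a causal (already-synthesized) neighborhood, and a brute-force nearest-neighbor 'dictionary' lookup rather than the exact procedures of Efros '98 or Wei/Levoy '99:

    import numpy as np

    def synthesize(sample, out_h, out_w, k=5):
        """Grow a new texture from a grayscale sample, pixel by pixel."""
        h, w = sample.shape
        half = k // 2
        # Causal neighborhood: rows above the center plus pixels to its left,
        # i.e. only pixels already known when filling in scanline order.
        mask = np.zeros((k, k), dtype=bool)
        mask[:half, :] = True
        mask[half, :half] = True

        # 'Dictionary': every causal neighborhood in the sample -> center value
        keys, values = [], []
        for y in range(half, h - half):
            for x in range(half, w - half):
                patch = sample[y - half:y + half + 1, x - half:x + half + 1]
                keys.append(patch[mask].astype(float))
                values.append(sample[y, x])
        keys = np.array(keys)
        values = np.array(values)

        # Seed the output with a random patch, then grow it in scanline order.
        out = np.zeros((out_h, out_w), dtype=sample.dtype)
        sy, sx = np.random.randint(0, h - k), np.random.randint(0, w - k)
        out[:k, :k] = sample[sy:sy + k, sx:sx + k]
        for y in range(half, out_h - half):
            for x in range(half, out_w - half):
                if y < k and x < k:
                    continue                     # seed region already filled
                query = out[y - half:y + half + 1, x - half:x + half + 1][mask]
                best = np.argmin(((keys - query.astype(float)) ** 2).sum(axis=1))
                out[y, x] = values[best]
        return out

  Calling synthesize(sample, 128, 128) on a small texture swatch would return a larger image built from the learned local patterns; real implementations add randomized candidate selection and multi-scale search.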

  15. Texture Synthesis: Learned Vectors
  • Refinements & extensions:
  • Faster, simpler methods (Ashikhmin: real-time!): http://www.cs.utah.edu/~michael/ts/
  • Hertzmann (2001): Image Analogies, http://mrl.nyu.edu/projects/image-analogies/
  • Given images A, B, C, 'learn' the A→B mapping, then find D: solves 'A is to B as C is to D' problems
  • Recreates some local effects: brush strokes, blur, etc.
  • Wider variety of user-guided reconstruction...
  • Recent extensions: video textures, depth, curves, shapes, textures on surfaces, colorizing B/W images...

  16. Texture Synthesis: Modified Vectors
  • 'Image Inpainting': repair damaged images, http://www.ece.umn.edu/users/marcelo/restoration.html
  • 'Reaction/Diffusion' textures: http://www.gvu.gatech.edu/people/faculty/greg.turk/reaction_diffusion/reaction_diffusion.html
    Iteratively compute new pixel intensity from neighbors + rules; the image evolves over time.
  • Anisotropic diffusion, curvature flow: iterative rules (PDEs) for progressive edge-preserving image smoothing (see the sketch below)
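
  As one concrete instance of such an iterative rule, here is a small sketch of a Perona-Malik-style anisotropic diffusion step (an illustrative example, not code from the projects linked above), assuming a grayscale NumPy array:

    import numpy as np

    def diffuse_step(I, kappa=0.1, lam=0.2):
        """One update: smooth the image while discouraging flow across edges."""
        # Differences to the four neighbors (borders wrap via roll)
        n = np.roll(I,  1, axis=0) - I
        s = np.roll(I, -1, axis=0) - I
        e = np.roll(I, -1, axis=1) - I
        w = np.roll(I,  1, axis=1) - I
        # Edge-stopping 'conductance': small where the local gradient is large
        g = lambda d: np.exp(-(d / kappa) ** 2)
        return I + lam * (g(n) * n + g(s) * s + g(e) * e + g(w) * w)

    # The image evolves over time: apply the rule repeatedly
    I = np.random.rand(128, 128)
    for _ in range(50):
        I = diffuse_step(I)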

  17. Digital Image == Real Image? NOT EQUIVALENT!
  • Pixel-by-pixel operations on a digital image are not the same as point-by-point operations on a real image: a source of much frustration and error.
  • Also see Alvy Ray Smith's article, "A Pixel Is NOT a Little Square, a Pixel Is NOT a Little Square, a Pixel Is NOT a Little Square!"

  18. Real vs. Digital
  • Real images are smooth, continuous, variable, complete, unlimited...
  • Digital images are sampled, discrete, quantized, fixed, limited...
  • The contrast holds for: image intensity; image spectral distribution ('color'); image geometry (position, angle, distortion, ...); image resolution ('sharpness' or 'focus').

  19. Conversion Method?
  • Real Image == 2D light intensity map: I(x,y)
  • Digital Image == 2D grid of numbers: I(m,n)
  • How do we convert between the two?

  20. Digital → Real: 'Reconstruction'
  • A pixel sets the strength of a real basis function (aka the 'reconstruction filter's impulse response').
  • 'Box' basis: constant height 1.0 over [-0.5, 0.5].

  21. Digital → Real: 'Reconstruction'
  • A pixel sets the strength of a real basis function (aka the 'reconstruction filter's impulse response').
  • 'Linear' (tent) basis: peak of 1.0 at 0, falling to zero at -1.0 and 1.0.

  22. Digital → Real: 'Reconstruction'
  • A pixel sets the strength of a real basis function (aka the 'reconstruction filter's impulse response').
  • 'Cubic' basis: smooth curve with peak 1.0 at 0 (figure axis ticks at -1.0, 0, 1.0, 2.0).
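
  A small 1-D sketch of reconstruction under these definitions, using box and linear (tent) bases; the function names and sample values are illustrative, and the cubic case is omitted for brevity:

    import numpy as np

    def box(t):                       # 'Box' basis: 1 on [-0.5, 0.5), else 0
        return np.where(np.abs(t) < 0.5, 1.0, 0.0)

    def tent(t):                      # 'Linear' basis: peak 1 at 0, zero at +/-1
        return np.maximum(0.0, 1.0 - np.abs(t))

    def reconstruct(samples, x, basis=tent):
        """I(x) = sum over pixels m of I(m) * basis(x - m)."""
        return sum(s * basis(x - m) for m, s in enumerate(samples))

    pixels = np.array([0.0, 1.0, 0.5, 0.25])      # a tiny digital image row
    x = np.linspace(0.0, 3.0, 100)                # continuous positions
    I_box  = reconstruct(pixels, x, basis=box)
    I_tent = reconstruct(pixels, x, basis=tent)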

  23. Real → Digital: 'Sampling'
  • If I(x,y) is 'smooth enough' already, why not just grab I(x,y) values at (m,n)?

  24. 'Smooth Enough' to Sample?
  • Because 'smooth enough' is undefined;
  • Samples may hit spurious peaks and valleys. BAD!

  25. Real → Digital: 'Pre-filter'
  • 'Smoothing' defined: a linear pre-filter.
  • Pixel = local weighted average of the real image around the pixel's sample point (e.g. 'cubic' weights).

  26. Pre-Filter Functions
  • 'Box', 'bi-linear', and 'bi-cubic' filters (figure shows their weight curves).
  • Pre-filters == anti-aliasing in computer graphics.
  • SHOULD be done by continuous integration; USUALLY done by super-sampling.
  • Quality measures?
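
  A 1-D sketch of the usual super-sampling shortcut, assuming the 'real' image can be evaluated anywhere and using a simple box average over each pixel's footprint (the function and parameter names are illustrative):

    import numpy as np

    def prefiltered_sample(f, n_pixels, oversample=16):
        """Pixel = average of f over its footprint (an approximate box pre-filter)."""
        out = np.zeros(n_pixels)
        for m in range(n_pixels):
            # Evaluate f at many sub-positions inside pixel m's footprint [m-0.5, m+0.5)
            xs = m - 0.5 + (np.arange(oversample) + 0.5) / oversample
            out[m] = f(xs).mean()
        return out

    # A 'real' signal too wiggly to point-sample safely at this resolution
    f = lambda x: np.sin(2.0 * np.pi * 39.0 * x / 36.0)
    pixels = prefiltered_sample(f, 36)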

  27. 'Smoothness Measures'
  • Sine wave magic: ALL (continuous) functions are a (possibly infinite) weighted sum of sinusoids:
    f(x) = A0·sin(w0·x) + B0·cos(w0·x) + A1·sin(w1·x) + B1·cos(w1·x) + ...
  • For ALL FILTERS (box, linear, cubic, ..., anything): sinusoid in → scaled, shifted sinusoid out.
    THUS: compute the response to ALL signals from sinusoids alone.
  • The high-frequency sinusoids in a signal hold ALL of its sampling/aliasing troubles!
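
  A quick sketch of this decomposition with NumPy's FFT, showing that a made-up test signal really is just two sinusoids and identifying the high frequency that causes the sampling trouble below:

    import numpy as np

    N = 256
    x = np.linspace(0.0, 1.0, N, endpoint=False)
    signal = np.sin(2 * np.pi * 3.0 * x) + 0.25 * np.sin(2 * np.pi * 39.0 * x)

    spectrum = np.fft.rfft(signal)                 # weights of the sinusoids
    freqs = np.fft.rfftfreq(N, d=1.0 / N)          # cycles per unit length
    present = freqs[np.abs(spectrum) > 1.0]        # frequencies actually in the signal
    print(present)                                 # [ 3. 39.]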

  28. 1-D Aliasing Example
  • Real image: I(x,y) = sin(2πfx), f = 3.0
  • Aliasing: when bad conversion scrambles the frequencies of sinusoids.

  29. 1-D Aliasing Example
  • Real image: I(x,y) = sin(2πfx), f = 3.0
  • Sample at: x(n) = n / 36.0 (~12 samples per cycle)

  30. Aliasing Example
  • Real image: I(x,y) = sin(2πfx), f = 3.0
  • Sample at: x(n) = n / 36.0 (~12 samples per cycle)
  • Reconstruct (using a 'box' filter) → approximates f = 3.0

  31. Aliasing Example
  • Real image: I(x,y) = sin(2πfx), f = 3.0
  • Sample at: x(n) = n / 36.0 (~12 samples per cycle)
  • Reconstruct (using a 'box' filter) → approximates f = 3.0
  • Another real image: I(x,y) = sin(2πfx), where f = 39.0

  32. Aliasing Example
  • Real image: I(x,y) = sin(2πfx), f = 3.0
  • Sample at: x(n) = n / 36.0 (~12 samples per cycle)
  • Reconstruct (using a 'box' filter) → approximates f = 3.0
  • Another real image: I(x,y) = sin(2πfx), where f = 39.0
  • Sample at: x(n) = n / 36.0 (~0.92 samples per cycle)

  33. Aliasing Example
  • Real image: I(x,y) = sin(2πfx), f = 3.0
  • Sample at: x(n) = n / 36.0 (~12 samples per cycle)
  • Reconstruct (using a 'box' filter) → approximates f = 3.0
  • Another real image: I(x,y) = sin(2πfx), where f = 39.0
  • Sample at: x(n) = n / 36.0 (~0.92 samples per cycle)
  • Reconstruct (using a 'box' filter) → approximates f = 3.0 !!

  34. Aliasing Example
  • Real image: I(x,y) = sin(2πfx), f = 3.0
  • Sample at: x(n) = n / 36.0 (~12 samples per cycle)
  • Reconstruct (using a 'box' filter) → approximates f = 3.0
  • Another real image: I(x,y) = sin(2πfx), where f = 39.0
  • Sample at: x(n) = n / 144.0 (~3.7 samples per cycle)
  • Reconstruct (using a 'box' filter) → approximates f = 39.0

  35. Aliasing Example
  • How much sampling is enough?
  • Depends on the quality of the reconstruction filter.
  • Rule of thumb: at least 4 samples per cycle of the highest-frequency sinusoid.
  • Nyquist criterion: 2 samples per cycle is the theoretical minimum.
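
  The whole example can be checked numerically; a small sketch assuming NumPy, with the sampling rates taken from the slides above:

    import numpy as np

    n = np.arange(36)
    x = n / 36.0                                     # sample positions
    low  = np.sin(2 * np.pi * 3.0  * x)              # ~12 samples per cycle
    high = np.sin(2 * np.pi * 39.0 * x)              # ~0.92 samples per cycle

    # f = 39 aliases onto f = 3: the two signals give identical samples,
    # because 39 = 36 + 3 and the extra 36 cycles fall exactly between samples.
    print(np.allclose(low, high))                    # True

    # Sampling at 144 per unit (~3.7 samples per cycle of f = 39) resolves them
    x2 = np.arange(144) / 144.0
    print(np.allclose(np.sin(2 * np.pi * 3.0 * x2),
                      np.sin(2 * np.pi * 39.0 * x2)))  # False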

  36. Conclusion
  • Digital images are compact, indirect, ambiguous descriptions of light distributions.
  • Digital images are vectors.
  • Digital image patterns can be learned.
  • Image-space manipulations must explicitly address real-digital (continuous-discrete) conversions to avoid aliasing artifacts.

  37. Aliasing Example (recap)
  • Real image: I(x,y) = sin(2πfx), f = 3.0; sample at x(n) = n / 36.0 (~12 samples per cycle); reconstruct with a 'box' filter → approximates f = 3.0.
  • Another real image: I(x,y) = sin(2πfx), where f = 39.0; sample at x(n) = n / 36.0; reconstruct with a 'box' filter → approximates f = 3.0 (aliased).

  38. Image Space Methods
  • Intro: What can we do with images?
  • Re-sampling: continuous & discrete filters
  • Interpolation: What's 'in between' a pixel?
  • Antialiasing: What makes a good sampling?
  • Math rigor: the Fourier transform
  • Examples: box, triangle, Mitchell-Netravali
  • Compositing: transparency
  • Un-compositing: matte separations
  • Further extensions: environment matte, polynomial texture map, texture elaboration, image inpainting (SIGGRAPH 2000)

  39. OLD
  • 'Image' == a 2D map of light intensities from a lens
  • 'Digital Image' == a 2D grid of numbers (pixels)
  • PROBLEMS:
  • What is between all those points?
  • Why go digital? Why is it better than film?
  • How can I combine the best of many pictures? Edit easily?
  • GOAL: more flexibility; let me do more than just display!

  40. Image-Space Techniques
  • GOAL: more flexibility!
  • Compositing / matte: cut-and-paste, transparency, the 'digital optical bench' (Pixar '84)...
  • Warp: image as a 'rubber sheet' you can cut, stretch, and change at will
  • Environment Matte (Salesin '99...)
  • Video Textures (Schodl 2000)
  • Polynomial Texture Map (Malzbender 2001)
