Agenda
• Project 2: due this Thursday
• Office hours: Wed 10:30-12
• Image blending
• Background
• Constrained optimization
Formulation: find the best patch f
• Given a vector field v (the pasted gradient), find the values of f in the unknown region that optimize:

    min over f of ∫∫_W |∇f − v|²   with f = f* on the border dW

[Figure: source patch pasted into the background; labels show the pasted gradient v, the mask / unknown region W, and the background f*]
Notation
• Destination image: f* (table)
• Source image: g (table)
• Output image: f (table)
• W: list of (i,j) pixel coordinates from f* we want to replace
• dW: list of (i,j) pixel coordinates on the border of W
• We'll use p = (i,j) to denote a pixel location
• gp is the pixel value at p = (i,j) from the source image
• fW is the set of pixel values we're trying to find
Notation
• Destination image: f* (table)
• Source image: g (table)
• Output image: f (table)
• W: set of (i,j) pixel coordinates from f* we want to replace (list of pairs)
• dW: set of (i,j) pixel coordinates on the border of W (list of pairs)
• We'll use p = (i,j) to denote a pixel location
• gp is the pixel value at p = (i,j) from the source image
• fW is the set of pixel values we're trying to find

    S(f) = sum over all pairs of neighbors (p,q) in W of ((fp − fq) − vpq)²

where vpq is the pasted gradient (vpq = gp − gq), with the constraint that fp = f*p for p in dW.
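As a concrete reference, here is a minimal numpy sketch of this score. It assumes 4-connected neighbors, a boolean mask for W, and a source image g already aligned with the destination; the function name poisson_energy and the "count every neighbor pair that touches W" convention are my illustrative choices, not from the slides.

```python
import numpy as np

def poisson_energy(f, f_star, g, mask):
    """Score S(f): sum over 4-connected pixel pairs that touch W of
    ((f_p - f_q) - v_pq)^2, where v_pq = g_p - g_q is the pasted gradient.
    f, f_star, g: 2-D float arrays of equal shape; mask: True inside W.
    Pixels outside W are taken from the destination f_star (the constraint)."""
    h, w = f.shape
    fc = np.where(mask, f, f_star)               # enforce f_p = f*_p outside W
    score = 0.0
    for i in range(h):
        for j in range(w):
            for di, dj in ((0, 1), (1, 0)):      # each unordered pair once
                ni, nj = i + di, j + dj
                if ni >= h or nj >= w:
                    continue
                if mask[i, j] or mask[ni, nj]:   # pair has an endpoint in W
                    v = g[i, j] - g[ni, nj]      # pasted gradient v_pq
                    score += ((fc[i, j] - fc[ni, nj]) - v) ** 2
    return score
```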
Optimization
Variational formulation of the solution: the best patch fW is the one that produces the lowest score S(f), subject to the constraint fp = f*p for all p in dW (we drop the W subscript on f where the meaning is clear).
• What is known versus unknown?
• What is the optimal fW without the above constraint?
Optimization
Pretend the constraint wasn't there: how do we find the lowest-scoring fW?
1) Brute-force search
   - Keep guessing different patches f and scoring them
   - Output the best-scoring one
2) Gradient descent
   - Guess a patch f, then update the guess with f = f − η ∇S(f) (for some step size η)
How to estimate the gradient?
• In general, we can always estimate it numerically
• For the quadratic score above, we can calculate it in closed form:
  ∂S/∂fp = 2 Σ over q in Np of ((fp − fq) − vpq), where Np are the neighbors of p
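A sketch of both options, continuing the energy sketch above (same assumptions; the helper names are illustrative): a one-sided finite-difference estimate for the numerical route, and the per-pixel closed-form derivative of the quadratic score.

```python
def numerical_grad(f, f_star, g, mask, p, eps=1e-4):
    """Finite-difference estimate of dS/df_p at one pixel p = (i, j)."""
    f_plus = f.copy()
    f_plus[p] += eps
    return (poisson_energy(f_plus, f_star, g, mask)
            - poisson_energy(f, f_star, g, mask)) / eps

def closed_form_grad(f, f_star, g, mask, p):
    """Closed-form dS/df_p = 2 * sum over neighbors q of ((f_p - f_q) - v_pq).
    Valid for a pixel p inside W (outside W, f_p is fixed to f*_p)."""
    i, j = p
    h, w = f.shape
    fc = np.where(mask, f, f_star)
    grad = 0.0
    for di, dj in ((0, 1), (1, 0), (0, -1), (-1, 0)):
        ni, nj = i + di, j + dj
        if 0 <= ni < h and 0 <= nj < w:
            v = g[i, j] - g[ni, nj]              # pasted gradient v_pq
            grad += 2.0 * ((fc[i, j] - fc[ni, nj]) - v)
    return grad
```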
Constrained optimization
1) Brute-force search
   - Keep guessing different patches f and scoring them
   - Output the best-scoring one
2) Gradient descent
   - Guess a patch f, then update the guess with f = f − η ∇S(f)
• What happens when the gradient is zero?
Optimization
1) Brute-force search
   - Keep guessing different patches f and scoring them
   - Output the best-scoring one
2) Gradient descent
   - Guess a patch f, then update the guess with f = f − η ∇S(f)
3) Closed-form solution (for simple functions): set the gradient to zero and solve
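A tiny illustration of the closed-form option (my own one-variable example, not from the slides): for a simple quadratic, setting the derivative to zero gives the minimizer directly.

```latex
% Minimize a simple quadratic by setting its derivative to zero
S(x) = (x - a)^2 + (x - b)^2, \qquad
\frac{dS}{dx} = 2(x - a) + 2(x - b) = 0
\;\Longrightarrow\; x = \frac{a + b}{2}
```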
Constrained optimization
How to handle the constraints?
1) Brute-force search
   - Keep guessing different patches f and scoring them
   - Output the best-scoring one
2) Gradient descent
   - Guess a patch f, then update the guess with f = f − η ∇S(f)
   - Correct fp = f*p after each gradient update (see the sketch below)
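A minimal sketch of this "gradient step, then correct the constrained pixels" loop, reusing the helpers from the earlier sketches; the step size and iteration count are arbitrary illustrative choices, not values from the slides.

```python
def blend_gradient_descent(f_star, g, mask, step=0.1, iters=500):
    """Projected gradient descent: take a gradient step on S, then
    re-impose the constraint f_p = f*_p on pixels outside W (including dW)."""
    f = f_star.astype(float).copy()            # initial guess: the destination
    inside = np.argwhere(mask)
    for _ in range(iters):
        grad = np.zeros_like(f)
        for i, j in inside:
            grad[i, j] = closed_form_grad(f, f_star, g, mask, (i, j))
        f -= step * grad                       # gradient update
        f[~mask] = f_star[~mask]               # correct f_p = f*_p
    return f
```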
Constrained optimization
How to handle the constraints?
1) Brute-force search
   - Keep guessing different patches f and scoring them
   - Output the best-scoring one
2) Gradient descent
   - Guess a patch f, then update the guess with f = f − η ∇S(f)
• What happens when the gradient is zero?
Lagrangian optimization
• If there were no constraint, we'd have a closed-form solution
• Is there a way to get a closed-form solution that respects the constraint?
Lagrangian optimization
min f(x,y) such that g(x,y) = 0
• Imagine we want to synthesize a "two-pixel" patch
Lagrangian optimization
min f(x,y) such that g(x,y) = 0
Write the objective and the constraint as a single equation (just for convenience):

    F(x, y, λ) = f(x, y) + λ g(x, y)

At the minimum of F, its gradient is 0. Therefore, the following conditions hold:

    ∂F/∂x = ∂f/∂x + λ ∂g/∂x = 0
    ∂F/∂y = ∂f/∂y + λ ∂g/∂y = 0
    ∂F/∂λ = g(x, y) = 0
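As a worked instance of the "two-pixel" patch from a few slides back (my own example, not from the slides: one free pixel f1, one boundary pixel f2 constrained to f*2, and pasted gradient v):

```latex
F(f_1, f_2, \lambda) = \bigl((f_1 - f_2) - v\bigr)^2 + \lambda\,(f_2 - f^*_2)
\\[4pt]
\frac{\partial F}{\partial f_1} = 2\bigl((f_1 - f_2) - v\bigr) = 0, \quad
\frac{\partial F}{\partial f_2} = -2\bigl((f_1 - f_2) - v\bigr) + \lambda = 0, \quad
\frac{\partial F}{\partial \lambda} = f_2 - f^*_2 = 0
\\[4pt]
\Rightarrow\quad f_2 = f^*_2, \qquad f_1 = f^*_2 + v, \qquad \lambda = 0
```

In this toy case the free pixel ends up copying the boundary value plus the pasted gradient, which is exactly the behavior blending is after.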
Multiple constraints
min f(x,y) such that g1(x,y) = 0, g2(x,y) = 0
• What is f(x,y) in our case? What is g1(x,y)?
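One way to read the question on this slide (my interpretation of the correspondence): the blending score plays the role of f, and each border pixel contributes one equality constraint.

```latex
f(x, y) \;\longleftrightarrow\;
S(f_W) = \sum_{\text{neighbors } (p,q)\ \text{in } W} \bigl((f_p - f_q) - v_{pq}\bigr)^2,
\qquad
g_p(f_W) = f_p - f^*_p = 0 \quad \text{for each } p \in dW
```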
Lagrangian optimization
Setting the gradient of the Lagrangian to zero gives:

    fp = f*p                                              for p in dW (border pixels)
    Σ over q in Np of (fp − fq) = Σ over q in Np of vpq   for all other p in W

Since S is quadratic in f, the above yields a set of linear equations:

    A f = b,    f = inv(A) b
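For reference, a sketch of setting up and solving this linear system with scipy's sparse solver (in practice the system is solved directly rather than by forming inv(A)). It uses the same 4-neighbor discretization and mask conventions as the earlier sketches; the function name and structure are my illustrative choices.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def poisson_blend(f_star, g, mask):
    """Solve, for each p in W:  sum_q (f_p - f_q) = sum_q v_pq,
    with f_q = f*_q whenever the neighbor q lies outside W."""
    h, w = f_star.shape
    inside = np.argwhere(mask)
    idx = -np.ones((h, w), dtype=int)
    idx[mask] = np.arange(len(inside))         # unknown index for each p in W

    A = sp.lil_matrix((len(inside), len(inside)))
    b = np.zeros(len(inside))
    for k, (i, j) in enumerate(inside):
        for di, dj in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            ni, nj = i + di, j + dj
            if not (0 <= ni < h and 0 <= nj < w):
                continue
            A[k, k] += 1.0
            b[k] += g[i, j] - g[ni, nj]        # pasted gradient v_pq
            if mask[ni, nj]:
                A[k, idx[ni, nj]] -= 1.0       # unknown neighbor in W
            else:
                b[k] += f_star[ni, nj]         # known boundary value f*_q

    f = f_star.astype(float).copy()
    f[mask] = spla.spsolve(A.tocsr(), b)       # solve A f = b sparsely
    return f
```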