Image Super-resolution Using Statistical Learning Preliminary Exam Presentation Karl Ni Professor Truong Nguyen Professor Nuno Vasconcelos Professor William Hodgkiss
Outline • Problem Description: Image super-resolution • Background Information • Non-statistical Techniques • Statistical Techniques • Classification-based Approaches • Contributions • Regression-based Approach and Rationale • Spatial Domain SVM Superresolution • Frequency Domain SVM Superresolution • Results • Conclusion
Problem Description • Convert a low-resolution image to a high-resolution one • Requires the addition of pixels • Single-frame image super-resolution fills in the missing information for the larger image, specifically the values these new pixels take on.
Non-statistics-based Interpolation Techniques • B-Spline Methods • Bilinear • Bicubic • Cosine Domain Upscaling: Zero Padding [Figure: frequency-domain view – the low-frequency coefficients are kept and zero-padded, so the added high-frequency coefficients are all zeros]
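To make the zero-padding idea concrete, here is a minimal sketch using SciPy's DCT routines; the 2x factor and the brightness-preserving rescale are illustrative assumptions, not details given on the slide.

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_zero_pad_upscale(img: np.ndarray, factor: int = 2) -> np.ndarray:
    """Keep the low-frequency DCT-II coefficients of the small image and
    embed them in a larger, zero-filled coefficient array, then invert."""
    h, w = img.shape
    coeffs = dctn(img, norm="ortho")             # 2-D DCT-II of the low-res image
    padded = np.zeros((h * factor, w * factor))  # high-frequency coefficients stay all zeros
    padded[:h, :w] = coeffs
    # assumed rescale so the mean brightness survives the larger inverse transform
    return idctn(padded, norm="ortho") * factor
```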
Current Statistical Learning Techniques • Instead of blindly guessing or filling in information, we can use prior knowledge included in a training set. • Nearest Neighbor: Freeman, Jones, Pasztor • Expectation Maximization: Atkins and Bouman • Image estimation lowers MSE
On the correlation between known and unknown information • There is some relationship between the even and odd samples of a signal, just as there is some relationship between its low and high frequencies.
Using past observations for future decisions [Diagram: Observations and a Knowledge Base feed Decision Operations, which produce an Informed Decision] • Prior knowledge of the values and locations of the missing information • Exploit this knowledge as a relationship between the known and the unknown
Application to Machine Perception • We would like a pattern-recognition framework that applies the knowledge base to observed data • Call the input random variables X, the data. • Call the associated labels Y. • We have pairs: { (x1, y1), (x2, y2), …, (xN, yN), … }
Statistical Learning (Classification) • Two types of variables: • X : vector of observations (features) in the world • Y : state (class) of the world • X and Y are related by an unknown function f : X → Y
Statistical Learning Goal [Diagram: Observations = xobs and the Knowledge Base { (x1, y1), (x2, y2), …, (xN, yN) } feed Decision Operations based on a cost function h(xobs), producing the Informed Decision ydec] • Goal: make h(x) = f(x) given the training data as knowledge base
Application to Super-resolution [Figure: pixel locations in the low- and high-resolution grids] • What is the feature set x? • Can be a single pixel • Can be a vector of all the pixels • Can be a linear transformation of pixels • Can be a kernelized transformation of pixel values • Can be anything! (reasonable) • What is the h(x) that we are trying to learn? • Can be filter coefficients with input x = original pixels • Can be actual pixel values • Can be anything! (reasonable)
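One possible instantiation of these choices, sketched below: the feature x is a 3x3 low-resolution neighborhood and the label y is one of the missing high-resolution pixels. The patch size and the particular target pixel are assumptions for illustration; the slide deliberately leaves the choice open.

```python
import numpy as np

def patch_training_pairs(lowres: np.ndarray, highres: np.ndarray, scale: int = 2):
    """Pair each interior 3x3 low-res patch with an assumed missing high-res pixel."""
    xs, ys = [], []
    h, w = lowres.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            xs.append(lowres[r - 1:r + 2, c - 1:c + 2].ravel())   # 9-dim feature vector x
            ys.append(highres[r * scale + 1, c * scale + 1])      # a pixel absent from the low-res grid
    return np.array(xs), np.array(ys)
```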
Expectation Maximization For Filter Design C. B. Atkins, 1998, "Classification-Based Methods in Optimal Image Interpolation", Ph.D. Dissertation, Purdue University, West Lafayette, IN, USA
Minimizing the Risk Function (1/2) • We wish to learn the relationship f(x) = y and model it as best we can with h(x) • "0-1" loss function: L[y, h(x, α)] = 0 if y = h(x, α), 1 if y ≠ h(x, α) • Minimize the risk, defined as the expected loss: R(α) = EX,Y{ L[y, h(x, α)] } = ∫ PX,Y(x, y) L[y, h(x, α)] dx dy = 0 · PX,Y[y = h(x, α)] + 1 · PX,Y[y ≠ h(x, α)] = PX,Y[y ≠ h(x, α)]
Minimizing the Risk Function (2/2) • What function h(x, α) minimizes the risk? h* = argminh R(α) = argminh EX,Y{ L[y, h(x, α)] } = argminh PX,Y[y ≠ h(x)] • Pointwise, h*(x) = argminh PY|X[y ≠ h(x) | x] = argminh 1 – PY|X[y = h(x) | x] = argmaxh PY|X[h(x) | x] = argmaxi PY|X[i | x] • In other words, the optimal value of h(x) = i, given an observation x, is the value that maximizes the posterior PY|X(i | x)
Learning Algorithms • Determine the value i that maximizes PY|X(i | x) • All methods must assume a model • Two different philosophies: • Generative methods: model p(x, y) and use Bayes' rule to calculate p(y|x); possibly biased due to the model assumptions • Discriminant methods: model p(y|x) directly and map accordingly; possibly highly variable with few data points • A relatively new field over the past decade, now widely applied
Linear Discriminants • The underlying concept is to use discriminant functions to estimate a boundary or regression in feature space. • Use a hyperplane as the class boundary: wTx + b = 0. • A correct decision satisfies y·g(x) = y·(wTx + b) > 0
Pictorial Reference • wTx + b = 0 divides the space into two half-spaces. • Its distance to the origin is |b| / ||w||, where ||w|| is the norm of w. • The distance to the closest point is γ = mini (wTxi + b) / ||w||
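A small numeric check of these quantities (the hyperplane and points are made-up toy values):

```python
import numpy as np

w, b = np.array([2.0, 1.0]), -1.0
points = np.array([[1.0, 2.0], [3.0, 0.5], [0.5, 0.2]])

dist_to_origin = abs(b) / np.linalg.norm(w)        # |b| / ||w||
distances = (points @ w + b) / np.linalg.norm(w)   # (w^T x_i + b) / ||w|| for each point
gamma = distances.min()                            # the margin: the smallest such distance
print(dist_to_origin, gamma)
```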
Support Vector Machines: Classification • Recall the minimum distance to the nearest point: γ = mini (wTxi + b) / ||w|| • This is called the margin. • It is natural to wish to maximize the margin (maximize this minimum distance). • The SVM classifier is the hyperplane (w, b) that maximizes the margin under some normalization.
The Soft Margin • Perhaps the data is not well behaved and cannot all be separated. • Introduce an extra "slack variable" ξi for each point.
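A minimal soft-margin classification sketch with scikit-learn (an assumed toolkit here; the slides themselves use LibSVM later on). The parameter C controls how much slack is tolerated.

```python
import numpy as np
from sklearn.svm import SVC

X = np.array([[0.0, 0.0], [0.2, 0.1], [0.5, 0.6], [1.0, 1.0], [0.9, 1.2]])
y = np.array([-1, -1, 1, 1, 1])      # toy labels, not perfectly clean

clf = SVC(kernel="linear", C=1.0)    # smaller C -> softer margin, more slack allowed
clf.fit(X, y)
print(clf.coef_, clf.intercept_)     # w and b of the learned hyperplane
```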
Support Vector Machine Regression • Classification has a tendency to "discretize" the output • Regression can be thought of as a continuous version of classification, which would otherwise need an infinite number of classes • It will approximate a function of the known image information • That function is the relationship between the known and unknown elements
Support Vector Regression • Soft-margin SVMs for classification: Wish to minimize { ||w||² + C Σi ξi } subject to yi (wTxi + b) ≥ 1 – ξi, ξi ≥ 0, for all i • Soft-margin SVMs for regression: Wish to minimize { ||w||² + C Σi ( ξi+ + ξi- ) } subject to -ξi- ≤ | yi - (wTxi + b) | – ε ≤ ξi+, with ξi-, ξi+ ≥ 0 • Rearranging, the Lagrangian can be written: L(w, b, ξ+, ξ-) = wTw + Σi αi-((wTxi + b) - yi - ε + ξi-) + Σi αi+(yi - (wTxi + b) - ε + ξi+) + Σi ri-ξi- + Σi ri+ξi+
SVR Cost Function and Dual (2/2) • The optimization is, for an ε and C chosen a priori: max W(α+, α-) = -ε Σi (αi+ + αi-) + Σi (αi+ - αi-) yi - ½ Σi Σk (αi+ - αi-)(αk+ - αk-) k(xi, xk) subject to 0 ≤ αi+, αi- ≤ C and Σi (αi+ - αi-) = 0 • The regression estimate then has the form: f(x) = Σi (αi+ - αi-) k(x, xi) + b
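A sketch of this ε-insensitive regression with scikit-learn's SVR (assumed here as a stand-in for the LibSVM/LS-SVM packages mentioned later); ε and C are the a-priori choices from this slide, and dual_coef_ holds the (αi+ - αi-) of the resulting expansion.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=(200, 1))
y = np.sin(2 * np.pi * x).ravel() + 0.05 * rng.standard_normal(200)

svr = SVR(kernel="rbf", C=10.0, epsilon=0.01)
svr.fit(x, y)

# f(x) = sum_i (alpha_i+ - alpha_i-) k(x, x_i) + b
print(svr.dual_coef_.shape, svr.support_vectors_.shape, svr.intercept_)
print(svr.predict([[0.25]]))
```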
Contributions • Unsupervised learning using SVM • Application of Support Vector Regression to the super-resolution problem • Direct application (i.e. pixel prediction) • Indirect application (i.e. filter coefficients) • Use of additional equations to add structure and improve results
SVR Spatial Filter Selection [Figure: regression f maps an input patch to the 3x3 filter coefficients c11 … c33] • Regression to find spatial-domain filters • Direct regression actually works better
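A rough sketch contrasting the two options on this slide, under stand-in training pairs (X holds 3x3 patch features, y the target pixel values); the synthetic data and parameter settings are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 255, size=(500, 9))   # stand-in for 3x3 low-resolution patches
y = X.mean(axis=1)                       # stand-in for the target pixel values

# Indirect view: a linear SVR's weight vector plays the role of the filter c11..c33
linear_svr = SVR(kernel="linear", C=10.0, epsilon=1.0).fit(X, y)
filter_3x3 = linear_svr.coef_.reshape(3, 3)

# Direct view: a nonlinear SVR predicts the pixel value itself (reported as working better)
direct_svr = SVR(kernel="rbf", C=10.0, epsilon=1.0).fit(X, y)
pixel_estimate = direct_svr.predict(X[:1])
```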
Support Vector Regression:Frequency Domain • DCT Domain for statistical purposes (regression) • A downsampled version of an image is all the even samples. • From the DCT-II of a downsampled image, can we reconstruct the DCT-II of the original image?
Decimation in Time K. R. Rao and P. Yip, 1988, "Discrete Cosine Transform: Algorithms, Advantages, Applications", San Diego, CA, USA: Academic Press
Applying Learning to DIT & DIS • We can rewrite decimation in time for DCT as a linear combination of the time/spatial domain terms corresponding to even and odd samples • Decimation in Time DCT-II(x) = k(m) (DCT-I(xeven) + DCT-I(xodd) + DCT-II(xeven) + DCT-II(xodd)) • Decimation in Time DCT-II(x) = k(m) { X1 + X2 + X3 + X4 } • Overall Idea DCT-II(x) = Input Signal + f (Input Signal) + g (Remaining Terms), f is exactly known and g is to be estimated • DCT-II(X2N) = DCT-II(XN) + Known + Estimated.
Support Vector Regression • DCT-II(X2N) = DCT-II(XN) + Known + Estimated • Given DCT-II(XN), can we determine the estimated terms that will give us DCT-II(X2N)? • Our regression is thus Estimated = Σi (αi+ – αi-) K(xi, DCT-II[XN]) + b • This is done for all lower coefficients • LibSVM & LS-SVM regression packages
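A simplified end-to-end sketch of the frequency-domain idea: one small SVR per high-resolution DCT-II coefficient, each regressed from the DCT-II of the downsampled block. Regressing every coefficient directly (rather than only the terms labelled "Estimated" above), the 4x4 block size, the toy training blocks, and the SVR parameters are all assumptions made to keep the example self-contained.

```python
import numpy as np
from scipy.fft import dctn, idctn
from sklearn.svm import SVR

N = 4
rng = np.random.default_rng(0)

def make_pair(block2n):
    """Return (DCT-II of the even-sample downsampled block, DCT-II of the full block)."""
    small = block2n[::2, ::2]
    return dctn(small, norm="ortho").ravel(), dctn(block2n, norm="ortho").ravel()

# Toy training set of smooth 2N x 2N blocks
train = [np.outer(np.linspace(0, 1, 2 * N), rng.uniform(0, 255, 2 * N)) for _ in range(200)]
X, Y = map(np.array, zip(*(make_pair(b) for b in train)))

# One regressor per high-resolution DCT coefficient
models = [SVR(kernel="rbf", C=10.0, epsilon=0.5).fit(X, Y[:, j]) for j in range(Y.shape[1])]

def upscale_block(small_block):
    x = dctn(small_block, norm="ortho").ravel().reshape(1, -1)
    coeffs = np.array([m.predict(x)[0] for m in models]).reshape(2 * N, 2 * N)
    return idctn(coeffs, norm="ortho")
```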
Spatial Domain (3x3 Filter) vs Bilinear Filtering [Side-by-side images: bilinear filtering vs SVR spatial filtering]
Close-up of Spatial Filtering (filter size = 3x3) [Side-by-side crops: SVR filtering vs bilinear]
SVR Frequency Reconstruction vs Bilinear Interpolation (small training set, 10 frames) [Side-by-side images: bilinear interpolation vs SVR frequency regression]
SVM Frequency Reconstruction (small training set, 10 frames) [Result image]
Comparison of SVR Algorithms • PSNR values: • Bilinear: 23.301 dB • Bicubic: 22.209 dB • Spatial: 25.995 dB • Frequency: 26.843 dB
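For reference, PSNR figures like those above are conventionally computed from the mean squared error against the ground-truth image (standard formula for 8-bit images; the exact test sequence is not specified here).

```python
import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between a reference image and a reconstruction."""
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```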
SVM Frequency Reconstruction vs Bilinear Interpolation (small training set, 10 frames) [Side-by-side images: bilinear interpolation vs SVR frequency regression]
Effect of Dimensionality [Side-by-side results: 4x4 features vs 8x8 features]
Structured versus Direct SVR Regression [Side-by-side results: structured frequency regression vs direct frequency regression]
Future Work • Apply superresolution to the error residual in video • Denoising algorithms using Support Vector Regression • Markov Random Fields, or some interrelationship between predicted values • Motion Prediction Values