A Two-Step Approach to Hallucinating Faces: Global Parametric Model and Local Nonparametric Model
Ce Liu, Heung-Yeung Shum, Chang-Shui Zhang
CVPR 2001
Face hallucination: inferring a high-resolution face image from a low-resolution input
Figure: (a) input 24×32; (b) hallucinated result; (c) original 96×128
Why study face hallucination?
Applications
• Video conferencing
  • Transmit face image sequences over a very low-bandwidth channel
  • Repair images damaged in transmission
• Face image recovery
  • Recover low-quality faces in old photos
  • Recover low-resolution surveillance video
Research
• Information recovery
  • How to formulate and learn prior knowledge of the face
  • How to apply the face prior to infer the lost high-frequency details
• Super-resolution
  • How to model the mapping from low resolution to high resolution
Difficulties and solution strategy
Difficulties
• Sanity constraint: the result must be close to the input image when smoothed and down-sampled
• Global constraint: the result must have the common characteristics of a human face, e.g., eyes, mouth, nose, and symmetry
• Local constraint: the result must have the specific characteristics of this face image, with photorealistic local features
Solution strategy
We choose a learning-based method, aided by a large training set of face images, to hallucinate faces.
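The sanity constraint can be sketched numerically: the hallucinated result, once smoothed and down-sampled, should reproduce the low-resolution input. A minimal sketch, assuming a simple box-filter down-sampling operator (the exact smoothing kernel is an assumption, not the paper's):

```python
import numpy as np

def downsample(img, factor=4):
    """Box-filter smoothing plus subsampling: average each
    factor-by-factor block. A stand-in for the paper's
    smooth-and-down-sample operator."""
    h, w = img.shape
    h2, w2 = h - h % factor, w - w % factor
    blocks = img[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))

def sanity_violation(high_res, low_res_input, factor=4):
    """Mean absolute drift between the down-sampled hallucination and
    the input; the sanity constraint says this should be small."""
    return np.abs(downsample(high_res, factor) - low_res_input).mean()
```

For a 96×128 hallucination and a 24×32 input, `factor=4` makes the two grids comparable.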
Previous learning-based super-resolution methods
• Multi-resolution texture synthesis. De Bonet, SIGGRAPH 1997
• Markov network. Freeman and Pasztor, ICCV 1999
• Face hallucination. Baker and Kanade, AFGR 2000, CVPR 2000
• Image analogies. Hertzmann, Jacobs, Oliver, Curless, and Salesin, SIGGRAPH 2001
All of these use local feature transfer or inference in a Markov random field, without taking any global correspondence into account.
Our method
Decouple the high-resolution face image into two parts: a global face and a local face.
Two-step Bayesian inference:
1. Infer the global face
2. Infer the local face
Finally, add the two parts together.
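Under assumed notation (the symbols below are illustrative: I_H for the high-resolution face, I_L for the low-resolution input, I_g and I_l for the global and local parts), the two steps can be written as successive MAP inferences:

```latex
I_H = I_g + I_l, \qquad
I_g^{*} = \arg\max_{I_g}\; p(I_L \mid I_g)\, p(I_g), \qquad
I_l^{*} = \arg\max_{I_l}\; p(I_l \mid I_g^{*})
```

The first maximization is the global step (Gaussian prior, linear-regression solution); the second is the local step (Markov-network energy minimization).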
Flowchart of hallucinating face
Learning process (from the training dataset)
(a) Learn the prior of the global face by PCA
(b) Build a Markov network between global and local faces
Inference process (from input to output)
(c) Infer the global face by linear regression
(d) Infer the local face by the Markov network
Inferring global face
• Prior: assume the prior of the global face is Gaussian and learn it by PCA. The global face is the principal-component part of the high-resolution face image. (Many other models, such as Gaussian mixtures, ICA, kernel PCA, or TCA, could be used for the face prior; we choose PCA because it yields a simple solution.)
• Likelihood: treat the low-resolution input as a soft constraint on the global face. The likelihood again turns out to be Gaussian.
• Posterior: the energy of the posterior has a quadratic form, so the MAP solution reduces to linear regression, solved by SVD.
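Because the posterior energy is quadratic, the global-face MAP step can be sketched as an augmented least-squares problem. Everything below (variable names, the noise scale `sigma`, the diagonal Gaussian prior on PCA coefficients) is an assumed illustration, not the paper's exact formulation:

```python
import numpy as np

def infer_global(y, A, B, mu, eigvals, sigma=0.05):
    """MAP estimate of PCA coefficients x for the global face.

    Assumed quadratic energy (a sketch of the form described above):
        E(x) = ||A (B x + mu) - y||^2 / sigma^2  +  sum_i x_i^2 / eigvals_i
    where y is the low-res input, A the smooth-and-down-sample matrix,
    B the PCA basis, mu the mean face, eigvals the PCA eigenvalues.
    E is quadratic in x, so the MAP solution is linear regression; we
    solve it via lstsq, which uses the SVD internally.
    """
    M = A @ B / sigma                        # data-term design matrix
    r = (y - A @ mu) / sigma                 # data-term target
    prior = np.diag(1.0 / np.sqrt(eigvals))  # prior-term rows
    design = np.vstack([M, prior])
    target = np.concatenate([r, np.zeros(B.shape[1])])
    x, *_ = np.linalg.lstsq(design, target, rcond=None)
    return B @ x + mu                        # reconstructed global face
```

With a weak prior (large eigenvalues) and no down-sampling, the estimate collapses to the observation, as expected from the energy.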
Inferring local face by Markov network
An inhomogeneous, patch-based, nonparametric Markov network.
• The local face is obtained by minimizing the energy of the Markov network
• Two energy terms:
  • External potential: models the connective statistics between two linked patches in the global face and the local face
  • Internal potential: makes adjacent patches in the local face connect well
• Energy minimization by simulated annealing
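A toy sketch of the annealing step, with assumed cost tables standing in for the learned external and internal potentials, and a 1-D chain of patch sites rather than the paper's 2-D network:

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(labels, ext_cost, int_cost):
    """Total Markov-network energy for a 1-D chain of patch sites.

    ext_cost[i, k]    : external potential, cost of candidate patch k at
                        site i (how well it fits the global face there).
    int_cost[i, k, l] : internal potential, cost of candidates k and l at
                        adjacent sites i and i+1 (boundary mismatch).
    Both tables are illustrative stand-ins for learned statistics."""
    e = sum(ext_cost[i, k] for i, k in enumerate(labels))
    e += sum(int_cost[i, labels[i], labels[i + 1]]
             for i in range(len(labels) - 1))
    return e

def anneal(ext_cost, int_cost, steps=5000, t0=1.0, t1=0.01):
    """Simulated annealing: relabel one random site per step, accept
    uphill moves with Metropolis probability under a cooling schedule."""
    n, k = ext_cost.shape
    labels = list(rng.integers(0, k, size=n))
    e = energy(labels, ext_cost, int_cost)
    for s in range(steps):
        t = t0 * (t1 / t0) ** (s / steps)   # geometric cooling schedule
        i = rng.integers(0, n)              # propose: relabel one site
        old = labels[i]
        labels[i] = rng.integers(0, k)
        e_new = energy(labels, ext_cost, int_cost)
        if e_new > e and rng.random() >= np.exp((e - e_new) / t):
            labels[i] = old                 # reject uphill move
        else:
            e = e_new                       # accept
    return labels, e
```

In the actual method, each site's candidates would be local-face patches drawn nonparametrically from the training set.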
Experimental results (1)
Figure: (a) input low-resolution 24×32; (b) inferred global face; (c) hallucinated result; (d) original high-resolution 96×128
Comparison with other methods
Figure: (a) input; (b) hallucinated by our method; (c) cubic B-spline; (d) Hertzmann et al.; (e) Baker et al.; (f) original
Summary
• Hybrid modeling of the face (global plus local)
  • Global: the major information of the face, lying in the middle and low frequency bands
  • Local: the residue between the real data and the global model, lying in the high frequency band
• The sanity constraint is imposed on the global part
• The global face is modeled by PCA and inferred by linear regression
• The conditional distribution of the local face given the global face is modeled by a patch-based nonparametric Markov network and inferred by energy minimization
• Both inference steps are globally optimal
  • Global part: a quadratic energy function optimized by SVD
  • Local part: the network energy optimized by simulated annealing