Generalization Error of Linear Neural Networks in an Empirical Bayes Approach. Shinichi Nakajima, Sumio Watanabe. Tokyo Institute of Technology / Nikon Corporation.
Contents • Backgrounds • Regular models • Unidentifiable models • Superiority of Bayes to ML • What’s the purpose? • Setting • Model • Subspace Bayes (SB) Approach • Analysis • (James-Stein estimator) • Solution • Generalization error • Discussion & Conclusions
Regular models (conventional learning theory)
det(Fisher information) > 0 everywhere, e.g.:
 - Mean estimation
 - Linear regression
The likelihood is (asymptotically) normal for ANY true parameter.
Notation: K: dimensionality of the parameter space; n: # of samples; x: input; y: output.
1. Asymptotic normality of the distribution of the ML estimator and of the Bayes posterior holds; the generalization error (GE) and free energy (FE) take their standard asymptotic forms (see the display below), which underlie model selection methods (AIC, BIC, MDL).
2. Asymptotic generalization error: λ(ML) = λ(Bayes) = K/2.
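For reference, the standard regular-model asymptotics referred to above, written in the K, n notation of this slide (this display is a textbook form assumed here, not copied from the slide):

```latex
\text{GE:}\quad G(n) = \frac{K}{2n} + o\!\left(\frac{1}{n}\right),
\qquad
\text{FE:}\quad F(n) = \frac{K}{2}\log n + O(1).
```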
Unidentifiable models
Singularities exist where det(Fisher information) = 0, e.g.:
 - Neural networks
 - Bayesian networks
 - Mixture models
 - Hidden Markov models
(H: # of components)
Unidentifiable set: the likelihood is NON-normal when the true parameter is on the singularities.
1. Asymptotic normality does NOT hold, so there is no (penalized-likelihood-type) information criterion.
Superiority of Bayes to ML
(Unidentifiable models: singularities exist where det(Fisher information) = 0.)
How do singularities work in learning? When the true parameter is on the singularities:
 - In ML, the increase of the neighborhood of the true parameter accelerates overfitting.
 - In Bayes, the increase of the population (of parameters) denoting the true distribution suppresses overfitting (only in Bayes).
1. Asymptotic normality does NOT hold; no (penalized-likelihood-type) information criterion.
2. Bayes has an advantage: G(Bayes) < G(ML).
What’s the purpose? • Bayes provides good generalization, but is expensive (it needs Markov chain Monte Carlo). Is there any approximation with both good generalization and tractability? • Variational Bayes (VB) [Hinton & van Camp 93; MacKay 95; Attias 99; Ghahramani & Beal 00]: analyzed in another paper [Nakajima & Watanabe 05]. • Subspace Bayes (SB): the approach analyzed here.
Contents • Backgrounds • Regular models • Unidentifiable models • Superiority of Bayes to ML • What’s the purpose? • Setting • Model • Subspace Bayes (SB) Approach • Analysis • (James-Stein estimator) • Solution • Generalization error • Discussion & Conclusions
Linear Neural Networks (LNNs)
LNN with M input, N output, and H hidden units: f(x; A, B) = BAx, where
 A : input parameter (H x M) matrix
 B : output parameter (N x H) matrix
Essential parameter dimensionality: K = H(M + N) - H^2.
Trivial redundancy: BA is unchanged under A -> TA, B -> BT^{-1} for any nonsingular T.
True map: B*A* with rank H* (H* <= H).
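As a concrete illustration (a minimal sketch, not taken from the slides), the LNN map and its essential parameter count in numpy; the shapes follow the A (H x M) / B (N x H) convention above, and the Gaussian output noise of the learner is omitted:

```python
import numpy as np

def lnn_forward(A, B, x):
    """Linear neural network map f(x; A, B) = B A x."""
    return B @ (A @ x)

def essential_dim(M, N, H):
    """Dimensionality of the set of N x M matrices of rank at most H."""
    return H * (M + N) - H ** 2

# Example with M = 50 inputs, N = 30 outputs, H = 10 hidden units.
M, N, H = 50, 30, 10
rng = np.random.default_rng(0)
A = rng.standard_normal((H, M))   # input parameter matrix
B = rng.standard_normal((N, H))   # output parameter matrix
x = rng.standard_normal(M)
print(lnn_forward(A, B, x).shape)  # (30,)
print(essential_dim(M, N, H))      # 700
```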
Maximum likelihood estimator [Baldi & Hornik 95]
With Q = (1/n) Σ_i x_i x_i^T (input covariance) and R = (1/n) Σ_i y_i x_i^T (cross-covariance), the ML estimator of the map BA is the rank-H reduced-rank regression solution
 (BA)^ML = Σ_{h=1}^{H} γ_h ω_{b,h} ω_{a,h}^T Q^{-1/2},
where γ_h is the h-th largest singular value of RQ^{-1/2}, ω_{a,h} its right singular vector, and ω_{b,h} its left singular vector (a numerical sketch follows below).
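A minimal numpy sketch of this reduced-rank regression computation (an illustration under the assumption that Q is nonsingular; variable names are mine, not the slides'):

```python
import numpy as np

def lnn_ml_estimator(X, Y, H):
    """Rank-H ML estimate of the map W = BA for y = BAx + Gaussian noise.

    X: (n, M) inputs, Y: (n, N) outputs, H: number of hidden units.
    Returns the N x M matrix W of rank at most H.
    """
    n = X.shape[0]
    Q = X.T @ X / n          # input covariance, M x M
    R = Y.T @ X / n          # cross-covariance, N x M
    # Symmetric inverse square root of Q (assumed nonsingular).
    evals, evecs = np.linalg.eigh(Q)
    Q_inv_sqrt = evecs @ np.diag(evals ** -0.5) @ evecs.T
    # SVD of R Q^{-1/2}; keep the H largest singular components.
    U, s, Vt = np.linalg.svd(R @ Q_inv_sqrt, full_matrices=False)
    truncated = U[:, :H] @ np.diag(s[:H]) @ Vt[:H, :]
    return truncated @ Q_inv_sqrt
```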
Bayes estimation
n training samples; x: input; y: output; plus a parameter, a true distribution, a learner, and a prior.
The marginal likelihood, the posterior, and the predictive distribution are defined as usual (one standard transcription is displayed below).
In ML (or MAP): predict with one model.
In Bayes: predict with an ensemble of models.
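The quantities named above have the standard forms; writing the parameter as w, the prior as φ(w), and the data as (X^n, Y^n) = {(x_i, y_i)} (these symbols are my labels, not necessarily the slides'), one common transcription is:

```latex
\begin{aligned}
Z(Y^n \mid X^n) &= \int \varphi(w)\,\prod_{i=1}^{n} p(y_i \mid x_i, w)\, dw
  && \text{(marginal likelihood)}\\
p(w \mid X^n, Y^n) &= \frac{\varphi(w)\,\prod_{i=1}^{n} p(y_i \mid x_i, w)}{Z(Y^n \mid X^n)}
  && \text{(posterior)}\\
p(y \mid x, X^n, Y^n) &= \int p(y \mid x, w)\, p(w \mid X^n, Y^n)\, dw
  && \text{(predictive)}
\end{aligned}
```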
Empirical Bayes (EB) approach [Efron & Morris 73]
Same setup as the previous slide (n training samples; true distribution, learner, prior; marginal likelihood, posterior, predictive), except that the prior now carries a hyperparameter.
The hyperparameter is estimated by maximizing the marginal likelihood (see the sketch below).
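In the same notation, with a prior φ(w; c) carrying a hyperparameter c (the symbol c is mine), the EB estimate of the hyperparameter is the maximizer of the marginal likelihood:

```latex
\hat{c} \;=\; \operatorname*{arg\,max}_{c}\; \int \varphi(w; c)\,\prod_{i=1}^{n} p(y_i \mid x_i, w)\, dw .
```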
Subspace Bayes (SB) approach
SB is an EB approach in which part of the parameters are regarded as hyperparameters: the learner is p(y|x, A, B), and a prior is placed only on the marginalized part (a sketch follows below).
 a) MIP (Marginalizing in Input Parameter space) version: A is a parameter (marginalized), B is a hyperparameter.
 b) MOP (Marginalizing in Output Parameter space) version: A is a hyperparameter, B is a parameter (marginalized).
The marginalization can be done analytically in LNNs.
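As a sketch of the MIP version (the concrete learner and prior below are assumptions: a unit-variance Gaussian LNN likelihood and a prior φ(A) on the input matrix), the quantity maximized over the hyperparameter B would be:

```latex
Z_B(Y^n \mid X^n) \;=\; \int \varphi(A)\,\prod_{i=1}^{n} p(y_i \mid x_i, A, B)\, dA,
\qquad
p(y \mid x, A, B) \;\propto\; \exp\!\left(-\tfrac{1}{2}\,\lVert y - BAx \rVert^{2}\right).
```

The MOP version is obtained by exchanging the roles of A and B.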
Intuitive explanation
(Figure: the Bayes posterior compared with the SB posterior; for redundant components, the SB posterior is optimized with respect to the hyperparameter.)
Contents • Backgrounds • Regular models • Unidentifiable models • Superiority of Bayes to ML • What’s the purpose? • Setting • Model • Subspace Bayes (SB) Approach • Analysis • (James-Stein estimator) • Solution • Generalization error • Discussion & Conclusions
Free energy (a.k.a. evidence, stochastic complexity)
The free energy is an important quantity used for model selection [Akaike 80; MacKay 92] (see the display below).
We minimize the free energy when optimizing the hyperparameter.
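In the notation of the Bayes estimation display above, a standard form of the free energy is:

```latex
F(Y^n \mid X^n) \;=\; -\log Z(Y^n \mid X^n)
\;=\; -\log \int \varphi(w)\,\prod_{i=1}^{n} p(y_i \mid x_i, w)\, dw ,
```

so minimizing F with respect to the hyperparameter is the same as maximizing the marginal likelihood, i.e., the EB procedure above.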
Generalization error
The generalization error G(n) is the Kullback-Leibler divergence between the true distribution q and the predictive distribution p, averaged over the training samples V drawn from q (see the display below).
Asymptotic expansion: G(n) = λ/n + o(1/n), where λ is the generalization coefficient.
In regular models λ = K/2; in unidentifiable models λ depends on the learning method and on the true parameter.
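A standard way to write this definition (an assumed transcription; V denotes the n training samples, q(x) the input distribution, and p(y|x, V) the predictive distribution trained on V):

```latex
G(n) \;=\; \mathbb{E}_{V}\!\left[\, \int q(x)\, q(y \mid x)\,
      \log \frac{q(y \mid x)}{p(y \mid x, V)}\, dx\, dy \right]
 \;=\; \frac{\lambda}{n} + o\!\left(\frac{1}{n}\right).
```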
James-Stein (JS) estimator [James & Stein 61]
K-dimensional mean estimation (a regular model) from n samples; the ML estimator is the arithmetic mean.
Domination of an estimator a over an estimator b: a's risk is no larger than b's for any true parameter and strictly smaller for a certain true parameter.
The ML estimator is efficient (never dominated by any unbiased estimator), but is inadmissible (dominated by a biased estimator) when K ≥ 3 [Stein 56]; the standard JS form is displayed below.
(Figure: ML and JS estimates, K = 3, relative to the true mean.)
A certain relation between EB and JS was discussed in [Efron & Morris 73].
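The JS estimator in its standard form for K-dimensional mean estimation from n samples x_1, ..., x_n ~ N(μ, σ² I_K) (this explicit display is assumed, not copied from the slide):

```latex
\hat{\mu}_{\mathrm{ML}} = \bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i,
\qquad
\hat{\mu}_{\mathrm{JS}} = \left(1 - \frac{(K-2)\,\sigma^{2}}{n\,\lVert \bar{x} \rVert^{2}}\right)\bar{x},
```

and the JS estimator dominates the ML estimator when K ≥ 3.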
Positive-part JS type (PJS) estimator
The PJS estimator truncates the JS shrinkage factor at zero (see the sketch below): when the factor would become negative, the estimate is set exactly to zero (thresholding), which amounts to model selection.
PJS is therefore a model-selecting, shrinkage estimator.
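A minimal numpy sketch of positive-part JS shrinkage of a sample mean (illustrative only, not the slides' exact formulation), showing how the shrinkage factor is truncated at zero:

```python
import numpy as np

def positive_part_js(x_bar, n, sigma2=1.0):
    """Positive-part James-Stein estimate of a K-dimensional mean.

    x_bar: sample mean of n i.i.d. samples with known per-coordinate
    variance sigma2. If the shrinkage factor would be negative, the
    estimate is thresholded to the zero vector (model selection).
    """
    K = x_bar.shape[0]
    shrink = 1.0 - (K - 2) * sigma2 / (n * np.sum(x_bar ** 2))
    return max(shrink, 0.0) * x_bar

# Toy run: the true mean is at the origin, so PJS shrinks strongly.
rng = np.random.default_rng(1)
n, K = 100, 10
samples = rng.normal(loc=0.0, scale=1.0, size=(n, K))
x_bar = samples.mean(axis=0)
print(np.linalg.norm(x_bar), np.linalg.norm(positive_part_js(x_bar, n)))
```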
Hyperparameter optimization
Assume orthonormality (the corresponding matrix is the d x d identity matrix).
The optimization can be solved analytically in LNNs, yielding the optimum hyperparameter value in closed form.
SB solution (Theorem 1, Lemma 1)
L: dimensionality of the marginalized subspace (per component), i.e., L = M in MIP or L = N in MOP.
Theorem 1: The SB estimator is given in closed form.
Lemma 1: The posterior is localized, so that the model at the SB estimator can be substituted for the predictive distribution.
Consequently, SB is asymptotically equivalent to PJS estimation.
Generalization error (Theorem 2)
Theorem 2: The SB generalization coefficient is given by an expectation over the Wishart distribution, in terms of the h-th largest eigenvalue of a matrix subject to W_{N-H*}(M-H*, I_{N-H*}).
Large scale approximation (Theorem 3)
Theorem 3: In the large scale limit, the generalization coefficient converges to a closed-form expression.
Results 1 (true rank dependence)
(Figure: generalization coefficients of ML, Bayes, SB(MIP), and SB(MOP) versus the true rank, for N = 30, M = 50.)
SB provides good generalization.
Note: this does NOT mean that SB dominates Bayes; a discussion of domination needs consideration of a delicate situation (see the paper).
Results 2 (redundant rank dependence)
(Figure: generalization coefficients of ML, Bayes, SB(MIP), and SB(MOP) versus the redundant rank, for N = 30, M = 50.)
The SB generalization coefficient depends on H similarly to that of ML, so SB also has a property similar to ML.
Contents • Backgrounds • Regular models • Unidentifiable models • Superiority of Bayes to ML • What’s the purpose? • Setting • Model • Subspace Bayes (SB) Approach • Analysis • (James-Stein estimator) • Solution • Generalization error • Discussion & Conclusions
Feature of SB • SB provides good generalization; in LNNs it is asymptotically equivalent to PJS. • SB requires lower computational cost: the marginalized space is reduced, and in some models the marginalization can be done analytically. • SB is related to the variational Bayes (VB) approach.
Variational Bayes (VB) solution [Nakajima & Watanabe 05] • VB results in the same solution as MIP. • VB automatically selects the larger dimension to marginalize. (Figure: the Bayes posterior and the VB posterior; the VB posterior is similar to the SB posterior.)
Conclusions • We have introduced a subspace Bayes (SB) approach. • We have proved that, in LNNs, SB is asymptotically equivalent to a shrinkage (PJS) estimation. • Even asymptotically, the SB estimator for redundant components converges not to the ML estimator but to a smaller value, which means suppression of overfitting. • Interestingly, the MIP version of SB is asymptotically equivalent to VB. • We have clarified the SB generalization error. • SB has both Bayes-like and ML-like properties, namely shrinkage and acceleration of overfitting by basis selection, respectively.
Future work • Analysis of other models (neural networks, Bayesian networks, mixture models, etc.). • Analysis of variational Bayes (VB) in other models.