Explore the challenges of gaps in data modelling and how to overcome them. Topics covered include sampling, density and distribution, representation, regularization, and cross-validation. Learn how to infer behavior when data is scarce and the importance of validation and generalization. Discover the impact of dimensionality and the use of basis functions. Understand the concepts of overfitting and underfitting, and learn techniques for performance evaluation and generalization. Cross-validation is introduced as a method for realistic performance estimation.
GAPS IN OUR KNOWLEDGE? • a central problem in data modelling and how to get round it • Rob Harrison, AC&SE
Agenda • sampling • density & distribution • representation • accuracy vs generality • regularization • trading off accuracy and generality • cross validation • what happens when we can’t see
The Data Modelling Problem • y = f(x), z = y + e • multivariate & non-linear • measurement errors • {x_i, z_i}, i = 1…N, z_i = f(x_i) + e_i • infer behaviour everywhere from a few examples • little or no prior information on f(x)
A Simple Example • one cycle of a sine wave • “well-spaced” samples • enough data (N = 6) • noise-free
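A minimal sketch of this setup, assuming Python with NumPy (the slides do not specify any tooling): draw z_i = f(x_i) + e_i from one cycle of a sine wave, with N = 6 well-spaced samples and the noise term switched off to match the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # the "unknown" target: one cycle of a sine wave on [0, 1]
    return np.sin(2 * np.pi * x)

N = 6                      # six well-spaced samples
noise_sd = 0.0             # noise-free here; raise to e.g. 0.1 to add measurement error
x = np.linspace(0.0, 1.0, N)
z = f(x) + noise_sd * rng.standard_normal(N)   # z_i = f(x_i) + e_i
```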
Sparsely Sampled • two well-spaced samples
Densely Sampled • 200 well-spaced samples
Large N • 200 poorly-spaced samples
What’s So Hard? • the gaps • get more (well-spaced) data • lack of prior knowledge • can’t see • dimension!!
Two Dimensions • same sampling density (N = 6^2 = 36) • well-spaced?
Dimensionality • lose ability to see the shape of f(x) • try it in 13-D • number of samples exponential in d • if N OK in 1-D, N^d needed in d-D • how do we know if “well-spaced”? • how can we sample where the action is? • observational vs experimental data!
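A rough illustration of that exponential growth (a sketch in Python; the particular dimensions printed are not from the slides): if N well-spaced points per axis are enough in 1-D, keeping the same per-axis spacing in d dimensions needs about N^d points.

```python
# grid size needed to keep 6 points per axis as the dimension grows
N = 6
for d in (1, 2, 3, 5, 13):
    print(f"d = {d:2d}: about N**d = {N**d:,} samples")
# d = 2 gives 36; d = 13 already gives ~1.3e10
```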
Generic Structure • use e.g. a power series • Stone–Weierstrass • other bases, e.g. Fourier series … • y = a_5x^5 + a_4x^4 + a_3x^3 + … + a_1x + a_0 • degree 5 & six samples – no error! • degree > 5: still no error, but poor inter-sample behaviour – how can we know without looking? • measurement error • model is as complex as the data
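A sketch of the power-series fit, again assuming NumPy: with six samples a degree-5 polynomial has exactly six coefficients, so the error at the samples is zero, and the only way to judge the gaps is to compare against the (normally unknown) target between them.

```python
import numpy as np

def f(x):
    return np.sin(2 * np.pi * x)      # the target we normally cannot see

x = np.linspace(0.0, 1.0, 6)
z = f(x)                              # six noise-free samples

coef = np.polyfit(x, z, deg=5)        # degree 5: six coefficients, exact interpolation
fit_at_samples = np.polyval(coef, x)
print("RMSE at the samples:", np.sqrt(np.mean((fit_at_samples - z) ** 2)))   # ~0

xg = np.linspace(0.0, 1.0, 200)       # look between the samples
print("RMSE between samples:", np.sqrt(np.mean((np.polyval(coef, xg) - f(xg)) ** 2)))
```

Raising the degree above 5, or adding measurement noise, still gives (near-)zero error at the samples, which is exactly why sample error alone says nothing about the gaps.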
Curse Of Dimension • we can still use the idea but … • in 2-D (degree 5) we get 21 terms • direct and cross products • in d-D, a degree-p polynomial has (d+p)!/(d!p!) terms • e.g. transform a 16×16 bitmap (256 inputs) with a degree-3 polynomial and get ~3 million terms • sample size / distribution • practical for “small” problems
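To check those counts, the number of terms in a full polynomial of degree p in d variables is the binomial coefficient (d+p)!/(d!p!); a small sketch using Python's math.comb:

```python
from math import comb

def n_terms(d, p):
    # terms in a full degree-p polynomial in d variables, constant included
    return comb(d + p, p)             # = (d + p)! / (d! p!)

print(n_terms(2, 5))      # 21        -- the 2-D, degree-5 case
print(n_terms(256, 3))    # 2862209   -- 16x16 bitmap (256 inputs), degree 3: ~3 million
```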
Other Basis Functions • Gaussian radial basis functions • additional design choices • how many?, where?, how wide? • adaptive sigmoidal basis functions • how many?
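A minimal sketch of a Gaussian RBF model fitted by linear least squares (assumptions: Python/NumPy, one centre per sample, a hand-picked width; these choices are not from the slides, they are exactly the design decisions listed above).

```python
import numpy as np

def rbf_design(x, centres, width):
    # one Gaussian bump per centre: phi_j(x) = exp(-(x - c_j)^2 / (2 * width^2))
    return np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2.0 * width ** 2))

x = np.linspace(0.0, 1.0, 6)
z = np.sin(2 * np.pi * x)

centres = x.copy()        # how many? where?  -> here: one on every sample
width = 0.15              # how wide?         -> here: picked by hand
Phi = rbf_design(x, centres, width)
w, *_ = np.linalg.lstsq(Phi, z, rcond=None)   # weights enter linearly, so least squares

xg = np.linspace(0.0, 1.0, 200)
pred = rbf_design(xg, centres, width) @ w     # behaviour between the samples
```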
Overfitting (sample data) • zero error? • rough components
Underfitting (sample data) • over-smooth components
Goldilocks • very small error • “just right” components
Restricting “Flexibility” • can we use data to tell the estimator how to behave? • regularization/penalization • penalize roughness, e.g. minimise SSE + rQ, where Q measures roughness and r sets the trade-off • use a potentially complex structure • data constrains it where it can • Q constrains it elsewhere
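One concrete version of this, sketched with a simple quadratic penalty (an assumption: here Q is taken as wᵀw, i.e. ridge-style shrinkage, rather than whatever roughness measure the original slides used): minimising SSE + r·wᵀw has the closed-form solution below.

```python
import numpy as np

def penalised_fit(Phi, z, r):
    # minimise ||Phi @ w - z||^2 + r * (w @ w)  -- SSE plus a quadratic penalty Q
    k = Phi.shape[1]
    return np.linalg.solve(Phi.T @ Phi + r * np.eye(k), Phi.T @ z)

# usage with the RBF design matrix from the earlier sketch:
#   w = penalised_fit(Phi, z, r=1e-2)
# larger r -> smoother (Q dominates); smaller r -> closer to the data (SSE dominates)
```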
Four narrow GRBFs • RMSE = 0.80
Four narrow GRBFs + penalty • RMSE = 0.24
How Well Are We Doing? • so far we know the answer • for d > 2 we are “blind” • what’s happening between samples? • test on new samples in between → VALIDATION • compute a performance index → GENERALIZATION
Hold-out Method • keep back P% for testing • wasteful • sample dependent • Training RMSE 0.23, Testing RMSE 0.38
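A hedged sketch of the hold-out estimate (Python/NumPy; the fit/predict callables and the 25% split are illustrative, not from the slides): hold back P% of the sample, train on the rest, and quote the RMSE on the held-back part. Because the split is random, the estimate is sample-dependent.

```python
import numpy as np

def holdout_rmse(x, z, fit, predict, test_frac=0.25, seed=0):
    # keep back a fraction of the sample purely for testing
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_test = max(1, int(round(test_frac * len(x))))
    test, train = idx[:n_test], idx[n_test:]
    model = fit(x[train], z[train])
    resid = predict(model, x[test]) - z[test]
    return np.sqrt(np.mean(resid ** 2))

# e.g. for the degree-5 power series:
#   holdout_rmse(x, z, fit=lambda xt, zt: np.polyfit(xt, zt, 5), predict=np.polyval)
```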
Cross Validation • leave-one-out CV • train on all but one • test that one • repeat N times • compute performance • m-fold CV • divide sample into m non-overlapping sets • proceed as above • all data used for training and testing • more work but realistic performance estimates • used to choose “hyper-parameters” • e.g. r, number, width …
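A sketch of m-fold cross-validation as described above (leave-one-out is the special case m = N); same assumptions as the hold-out sketch: NumPy plus user-supplied fit/predict callables.

```python
import numpy as np

def mfold_cv_rmse(x, z, fit, predict, m=5, seed=0):
    # m non-overlapping folds: each fold is tested once on a model trained on
    # the other m - 1 folds, so every point is used for training and for testing
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(x)), m)
    sq_errs = []
    for k in range(m):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(m) if j != k])
        model = fit(x[train], z[train])
        sq_errs.append((predict(model, x[test]) - z[test]) ** 2)
    return np.sqrt(np.mean(np.concatenate(sq_errs)))

# leave-one-out CV is m = len(x); in practice the CV score is used to choose
# hyper-parameters such as r or the number and width of basis functions
```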
Conclusion • gaps in our data cause most of our problems • noise • more data is only part of the answer • distribution / parsimony • restricting flexibility helps • if there is no evidence to the contrary • cross validation is a window into d-D • it estimates inter-sample behaviour