Part I: Classifier Performance
Mahesan Niranjan
Department of Computer Science, The University of Sheffield, M.Niranjan@Sheffield.ac.uk
& Cambridge Bioinformatics Limited, Mahesan.Niranjan@ntlworld.com
Relevant Reading
• Bishop, Neural Networks for Pattern Recognition
• http://www.ncrg.aston.ac.uk/netlab
• David Hand, Construction and Assessment of Classification Rules
• Lovell et al., CUED/F-INFENG/TR.299
• Scott et al., CUED/F-INFENG/TR.323
Reports linked from http://www.dcs.shef.ac.uk/~niranjan
Pattern Recognition Framework
Two Approaches to Pattern Recognition
• Probabilistic, via explicit modelling of the probabilities encountered in Bayes' formula
• A parametric form for the class boundary, optimised directly
• In some specific cases (often not) both reduce to the same answer
Pattern Recognition: Simple Case
• Gaussian distributions, isotropic, equal variances
• Optimal classifier: distance to the mean; linear class boundary
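A one-line derivation of why this case gives a linear boundary (notation is mine, not from the slide): with means μ1, μ2 and a common isotropic covariance, assigning x to the nearer mean gives

```latex
\|x - \mu_1\|^2 < \|x - \mu_2\|^2
\;\Longleftrightarrow\;
(\mu_1 - \mu_2)^{\top} x > \tfrac{1}{2}\left(\|\mu_1\|^2 - \|\mu_2\|^2\right),
```

which is a hyperplane in the input space.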
Mahalanobis Distance
• Euclidean distance to the mean can be misleading when the covariance is not isotropic
• The optimal classifier for this case is the Fisher Linear Discriminant (sketch below)
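A minimal numerical sketch of the Fisher Linear Discriminant direction, w proportional to Sw^{-1}(m1 - m2); the data, names and the shared covariance are illustrative assumptions, not taken from the slides:

```python
import numpy as np

def fisher_discriminant(X1, X2):
    """Fisher linear discriminant direction w ~ Sw^{-1} (m1 - m2)."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Within-class scatter: unnormalised sum of the two class covariances
    Sw = (len(X1) - 1) * np.cov(X1, rowvar=False) + (len(X2) - 1) * np.cov(X2, rowvar=False)
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)

# Two classes sharing a deliberately non-isotropic covariance
rng = np.random.default_rng(0)
C = np.array([[3.0, 1.2], [1.2, 0.8]])
X1 = rng.multivariate_normal([0.0, 0.0], C, size=200)
X2 = rng.multivariate_normal([2.0, 1.0], C, size=200)
print("discriminant direction:", fisher_discriminant(X1, X2))
```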
Support Vector Machines: Maximum Margin Perceptron
[Figure: two linearly separable classes (x and o) with the maximum-margin separating boundary]
Support Vector Machines: Nonlinear Kernel Functions
[Figure: two classes (x and o) that are not linearly separable, separated by a nonlinear boundary]
Support Vector Machines: Computations
• Quadratic programming
• Class boundary defined only by the data that lie close to it, the support vectors
• Kernels in data space equal scalar products in a higher dimensional space (sketch below)
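To make the last bullet concrete, here is a small check (my own example, not from the slides) that a degree-2 polynomial kernel evaluated in the two-dimensional data space equals an ordinary scalar product after an explicit map into a six-dimensional feature space:

```python
import numpy as np

def poly2_kernel(x, y):
    """Degree-2 polynomial kernel (x.y + 1)^2, computed in the input space."""
    return (x @ y + 1.0) ** 2

def poly2_features(v):
    """Explicit feature map whose scalar product reproduces poly2_kernel for 2-D inputs."""
    v1, v2 = v
    return np.array([v1 * v1, v2 * v2,
                     np.sqrt(2.0) * v1 * v2,
                     np.sqrt(2.0) * v1, np.sqrt(2.0) * v2,
                     1.0])

x, y = np.array([0.3, -1.2]), np.array([2.0, 0.5])
print(poly2_kernel(x, y))                     # kernel in data space
print(poly2_features(x) @ poly2_features(y))  # same value as a scalar product in feature space
```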
Support Vector Machines: The Hypes
• Strong theoretical basis: computational learning theory; complexity controlled by the Vapnik-Chervonenkis dimension
• Not many parameters to tune
• High performance on many practical problems, high dimensional problems in particular
Support Vector Machines: The Truths
• Worst-case bounds from learning theory are not very practical
• Several parameters to tune: what kernel? internal workings of the optimiser; noise in the training data
• Performance? Depends on who you ask
SVM: Data-Driven Kernel
• Fisher kernel [Jaakkola & Haussler], defined below
• Kernel based on a generative model of all the data
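For reference, the Fisher kernel of Jaakkola & Haussler in its usual form (notation mine): given a generative model p(x | θ) fitted to all the data,

```latex
U_x = \nabla_{\theta} \log p(x \mid \theta), \qquad
K(x, x') = U_x^{\top} F^{-1} U_{x'}, \qquad
F = \mathbb{E}\left[ U_x U_x^{\top} \right],
```

where U_x is the score vector and F the Fisher information matrix.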
Classifier Performance
• Error rates can be misleading
• Imbalance in training/test data: e.g. 98% of the population healthy, 2% has the disease
• The cost of misclassification can change after the classifier has been designed
[Figure: scores for adverse and benign outcomes along a projection, with the class boundary set by a threshold]
Area under the ROC Curve: Neat Statistical Interpretation
[Figure: ROC curve, true positive rate against false positive rate]
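The interpretation referred to here is that the area under the ROC curve equals the probability that a randomly chosen positive example is scored above a randomly chosen negative one (the Mann-Whitney statistic). A small sketch with made-up scores, assuming a higher score indicates the positive class:

```python
import numpy as np

def auc_mann_whitney(pos_scores, neg_scores):
    """AUC = P(score of a random positive > score of a random negative); ties count 1/2."""
    pos = np.asarray(pos_scores)[:, None]
    neg = np.asarray(neg_scores)[None, :]
    return (pos > neg).mean() + 0.5 * (pos == neg).mean()

rng = np.random.default_rng(1)
pos = rng.normal(1.0, 1.0, size=1000)   # e.g. diseased cases, scoring higher on average
neg = rng.normal(0.0, 1.0, size=1000)   # healthy cases
print("AUC estimate:", auc_mann_whitney(pos, neg))
```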
Convex Hull of ROC Curves
[Figure: several ROC curves (true positive rate against false positive rate) and their convex hull]
Yeast Gene Example: MATLAB Demo here
Part II: Particle Filters for Tracking and Sequential Problems
Mahesan Niranjan
Department of Computer Science, The University of Sheffield
Overview
• Motivation
• State Space Model
• Kalman Filter and Extensions
• Sequential MCMC Methods
• Particle Filter & Variants
Motivation
• Neural networks for learning:
  • Function approximation
  • Statistical estimation
  • Dynamical systems
  • Parallel processing
• Guaranteeing generalisation:
  • Regularise / control complexity
  • Cross-validate to detect / avoid overfitting
  • Bootstrap to deal with model / data uncertainty
• Many of the above tricks won't work in a sequential setting
Interesting Applications
• Speech signal processing
• Medical signals: monitoring liver transplant patients
• Tracking the prices of options contracts in computational finance
Good References
• Bar-Shalom and Fortmann, Tracking and Data Association
• Jazwinski, Stochastic Processes and Filtering Theory
• Arulampalam et al., "Tutorial on Particle Filters…", IEEE Transactions on Signal Processing
• Arnaud Doucet, Technical Report 310, Cambridge University Engineering Department
• Benveniste, A. et al., Adaptive Algorithms and Stochastic Approximation
• Simon Haykin, Adaptive Filters
Matrix Inversion Lemma
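The lemma in its standard Sherman-Morrison-Woodbury form (the slide's own notation is not preserved here):

```latex
\left( A + BCD \right)^{-1} = A^{-1} - A^{-1} B \left( C^{-1} + D A^{-1} B \right)^{-1} D A^{-1}
```

It is what allows the least-squares solution below to be updated one observation at a time without re-inverting a growing matrix.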
Linear Regression
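The batch solution that the next slide makes recursive, written in standard notation (mine, not the slide's): for observations y_n = x_n^T θ + e_n stacked into X and y,

```latex
\hat{\theta} = \left( X^{\top} X \right)^{-1} X^{\top} y
```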
Recursive Least Squares
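A minimal RLS sketch, assuming the linear model above; applying the matrix inversion lemma to (X^T X)^{-1} gives a per-sample update of the estimate and its covariance. Variable names and the toy data are mine:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=1.0):
    """One recursive least-squares step for y = x^T theta + noise.
    theta: (d,) current estimate; P: (d, d) its covariance; lam: forgetting factor."""
    xc = x.reshape(-1, 1)
    k = P @ xc / (lam + float(xc.T @ P @ xc))   # gain vector
    err = y - float(x @ theta)                  # prediction error (innovation)
    theta = theta + k.ravel() * err             # parameter update
    P = (P - k @ xc.T @ P) / lam                # covariance update via the inversion lemma
    return theta, P

# Streaming recovery of theta_true = [2, -1]
rng = np.random.default_rng(2)
theta_true = np.array([2.0, -1.0])
theta, P = np.zeros(2), 1e3 * np.eye(2)
for _ in range(500):
    x = rng.normal(size=2)
    y = float(x @ theta_true) + 0.1 * rng.normal()
    theta, P = rls_update(theta, P, x, y)
print("estimate:", theta)
```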
State Space Model
[Diagram: the hidden state evolves driven by process noise; observations are generated from the state with measurement noise]
Simple Linear Gaussian Model
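A generic statement of such a model, in the notation used for the filter equations below (standard textbook form rather than the slide's own):

```latex
x_t = A\, x_{t-1} + w_t, \quad w_t \sim \mathcal{N}(0, Q); \qquad
y_t = C\, x_t + v_t, \quad v_t \sim \mathcal{N}(0, R)
```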
Kalman Filter: Prediction and Correction
Kalman Filter: Innovation and Kalman Gain
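The recursions behind these two slides, in textbook form for the linear Gaussian model above (not a transcript of the slides):

```latex
\begin{aligned}
\text{Prediction:} \quad & \hat{x}_{t|t-1} = A \hat{x}_{t-1|t-1}, &
P_{t|t-1} &= A P_{t-1|t-1} A^{\top} + Q \\
\text{Innovation:} \quad & e_t = y_t - C \hat{x}_{t|t-1}, &
S_t &= C P_{t|t-1} C^{\top} + R \\
\text{Kalman gain:} \quad & K_t = P_{t|t-1} C^{\top} S_t^{-1} & & \\
\text{Correction:} \quad & \hat{x}_{t|t} = \hat{x}_{t|t-1} + K_t e_t, &
P_{t|t} &= (I - K_t C) P_{t|t-1}
\end{aligned}
```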
Bayesian Setting: Prior, Likelihood, Innovation Probability
• Run multiple models and switch (Bar-Shalom)
• Set noise levels to their maximum likelihood values (Jazwinski)
Extended Kalman Filter
• Taylor series expansion around the operating point: first order or second order
• Iterated Extended Kalman Filter
• Lee Feldkamp @ Ford: successful training of recurrent neural networks
Iterated Extended Kalman Filter
• Local linearization of the state and/or observation equations
• Propagation and update
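In the first-order case the linearization amounts to replacing A and C in the recursions above by the Jacobians of the state and observation functions, evaluated at the current estimates (standard EKF form, notation mine):

```latex
x_t = f(x_{t-1}) + w_t, \qquad y_t = h(x_t) + v_t, \qquad
F_t = \left. \frac{\partial f}{\partial x} \right|_{\hat{x}_{t-1|t-1}}, \qquad
H_t = \left. \frac{\partial h}{\partial x} \right|_{\hat{x}_{t|t-1}}
```

The iterated variant re-linearizes h about each successively updated estimate.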
Unscented Kalman Filter
• Generate a set of sigma points at time t, chosen so that they represent the current mean and covariance
• Propagate these points through the state equations
• Recompute the predicted mean and covariance from the propagated points (recipe below)
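The usual recipe for these sigma points, in the Julier-Uhlmann form (this is the standard construction, not necessarily the exact parameterisation shown on the slide): for an n-dimensional state with mean x̂ and covariance P,

```latex
\begin{aligned}
\chi_0 &= \hat{x}, & W_0 &= \kappa / (n + \kappa), \\
\chi_i &= \hat{x} + \left( \sqrt{(n+\kappa) P} \right)_i, & W_i &= 1 / \bigl( 2 (n+\kappa) \bigr), \\
\chi_{i+n} &= \hat{x} - \left( \sqrt{(n+\kappa) P} \right)_i, & W_{i+n} &= 1 / \bigl( 2 (n+\kappa) \bigr), \qquad i = 1, \dots, n,
\end{aligned}
```

after which the predicted mean is the weighted sum of the propagated points f(χ_i) and the predicted covariance is the weighted sum of their outer-product deviations from that mean, plus Q.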
Formant Tracking Example
[Diagram: source-filter model of speech, an excitation signal passed through a linear filter to produce speech]
Formant Tracking Example
[Figures: formant tracking results]
Grid-Based Methods
• Discretize the continuous state into "cells"
• Integrate probabilities over each partition
• Fixed partitioning of the state space
Sampling Methods: Bayesian Inference
• Uncertainty over the parameters is represented by a distribution
• Inference is carried out by integrating over this parameter uncertainty
Basic Tool: Composition [Tanner]
To generate samples from a marginal p(x) = ∫ p(x|y) p(y) dy: first draw y ~ p(y), then draw x ~ p(x|y).
Importance Sampling
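A minimal sketch of the idea, with target, proposal and test function chosen purely for illustration: samples drawn from a proposal q are reweighted by p/q so that weighted averages estimate expectations under the target p.

```python
import numpy as np

def gauss_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(3)
# Target p = N(0, 1), proposal q = N(1, 2); estimate E_p[x^2], whose true value is 1
x = rng.normal(1.0, 2.0, size=100_000)                 # samples from q
w = gauss_pdf(x, 0.0, 1.0) / gauss_pdf(x, 1.0, 2.0)    # importance weights p/q
w /= w.sum()                                           # self-normalise
print("estimate of E_p[x^2]:", np.sum(w * x ** 2))
```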
Particle Filters
• Bootstrap Filters (Gordon et al., tracking)
• CONDENSATION Algorithm (Isard et al., vision)
• Prediction of the samples, followed by computation of their weights
Sequential Importance Sampling
• Recursive update of the weights
• Known only up to a constant of proportionality
Degeneracy in SIS
• The variance of the weights increases monotonically
• All except one weight decay to zero very rapidly
• Remedy: resample if the effective number of particles falls below a threshold
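The usual degeneracy measure computed from the normalised weights, with resampling triggered below a chosen threshold N_T (standard form, not copied from the slide):

```latex
\hat{N}_{\mathrm{eff}} = \frac{1}{\sum_{i=1}^{N} \left( w_t^{(i)} \right)^{2}}, \qquad
\text{resample if } \hat{N}_{\mathrm{eff}} < N_T
```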
Sampling Importance Resampling (SIR)
• Multiply samples of high weight; kill off samples in parts of the space that are not relevant
• Risk of "particle collapse": loss of diversity when a few samples dominate
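A minimal bootstrap (SIR) filter on a one-dimensional linear Gaussian model; the model, its parameters and all names are illustrative, chosen so the result could be checked against a Kalman filter:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy model: x_t = 0.9 x_{t-1} + w_t,  y_t = x_t + v_t
a, q, r, T, N = 0.9, 0.5, 1.0, 100, 1000

# Simulate a state trajectory and noisy observations
x_true, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + np.sqrt(q) * rng.normal()
    y[t] = x_true[t] + np.sqrt(r) * rng.normal()

# Bootstrap filter: predict with the state equation, weight by the likelihood,
# then resample (multiply high-weight particles, kill off low-weight ones)
particles = rng.normal(0.0, 1.0, size=N)
est = np.zeros(T)
for t in range(T):
    particles = a * particles + np.sqrt(q) * rng.normal(size=N)   # predict
    logw = -0.5 * (y[t] - particles) ** 2 / r                     # log-likelihood weights
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = np.sum(w * particles)                                # posterior mean estimate
    particles = particles[rng.choice(N, size=N, p=w)]             # resample

print("RMS tracking error:", np.sqrt(np.mean((est - x_true) ** 2)))
```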
Marginalizing Part of the State Space (Rao-Blackwellization)
• When it is possible to analytically integrate with respect to part of the state space, sample only the remaining part and integrate the rest out exactly
Variations to the Basic Algorithm
• Integrate out part of the state space: Rao-Blackwellized particle filters (e.g. a multi-layer perceptron with a linear output layer)
• Variational Importance Sampling (Lawrence et al.)
• Auxiliary Particle Filters (Pitt et al.)
• Regularized Particle Filters
• Likelihood Particle Filters
Regularised PF: Basic Idea
• Replace the discrete set of samples by a kernel density estimate, resample from this smoothed distribution, then propagate in time
Conclusion / Summary
• A collection of powerful algorithms
• New and interesting signal processing problems