






Presentation Transcript


  1. Development of Node-Decoupled Extended Kalman Filter (NDEKF) Training Method to Design Neural Network Diagnostic/Prognostic Reasoners EE645 Final Project Kenichi Kaneshige Department of Electrical Engineering University of Hawaii at Manoa 2540 Dole St. Honolulu, HI 96822 Email: kkanesh@spectra.eng.hawaii.edu

  2. Contents
  • Motivation
  • What is a Kalman Filter?
  • Linear Kalman Filter
    • Simulation with Linear KF
  • Kalman Filter and Neural Network
  • Extended Kalman Filter (EKF)
  • Node-Decoupled Extended Kalman Filter (NDEKF)
    • Node-decoupling
    • NDEKF Algorithm
  • Training the network
    • Result
  • Detecting the fault condition
    • Diagnosis/prognosis
  • Simulation
    • Create a situation
    • Train the neural network
    • Result
  • Conclusion and possible future work
  • References

  3. Motivation
  • Detect the system's condition in real time
  • Use the node-decoupled extended Kalman filter (NDEKF) training algorithm
  • The reasoner should work even when the input changes (robustness)
  • Exploit the strengths of neural networks

  4. What is a Kalman Filter?
  • Kalman filter = an optimal recursive data-processing algorithm
  • Used for stochastic estimation from noisy sensor measurements
  • A predictor-corrector type estimator
  • Optimal because it incorporates all the provided information (measurements), regardless of their precision

  5. Linear Kalman Filter
  • Used for linear system models
  • Time-update equations (predictor equations):
    • Responsible for projecting the current state and the error covariance estimates ahead
    • Obtain a priori estimates
  • Measurement-update equations (corrector equations):
    • Responsible for feedback: incorporate a new measurement into the a priori estimate
    • Obtain a posteriori estimates

  6. Linear Kalman Filter Algorithm
  Time update (predict):
    (1) Project the state ahead:             x^-(k) = A x(k-1) + B u(k-1)
    (2) Project the error covariance ahead:  P^-(k) = A P(k-1) A^T + Q
  Measurement update (correct):
    (1) Compute the Kalman gain:             K(k) = P^-(k) H^T [H P^-(k) H^T + R]^{-1}
    (2) Update the estimate with measurement z(k):  x(k) = x^-(k) + K(k) [z(k) - H x^-(k)]
    (3) Update the error covariance:         P(k) = [I - K(k) H] P^-(k)
  The recursion starts from initial estimates for x(0) and P(0).
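The predict/correct cycle above can be sketched in a few lines of NumPy. This is an illustrative translation using the standard A, H, Q, R notation, with the control input term omitted for brevity; it is not the project's own code.

```python
import numpy as np

def kalman_step(x, P, z, A, H, Q, R):
    """One predict/correct cycle of the linear Kalman filter."""
    # Time update (predict): project the state and error covariance ahead
    x_prior = A @ x
    P_prior = A @ P @ A.T + Q
    # Measurement update (correct): compute the gain, then fold in measurement z
    S = H @ P_prior @ H.T + R                  # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_post = x_prior + K @ (z - H @ x_prior)   # a posteriori state estimate
    P_post = (np.eye(len(x)) - K @ H) @ P_prior
    return x_post, P_post
```

For example, estimating a constant scalar from noisy measurements (A = H = 1) drives the error covariance down and the estimate toward the true value as measurements accumulate.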

  7. Simulation with Linear KF

  8. The downside of LKF
  • The LKF seems to work fine, so what's wrong?
  • It works only for linear models of a dynamical system
  • When the system is nonlinear, we may extend Kalman filtering through a linearization procedure

  9. Kalman Filter and Neural Network
  • Want better training methods in terms of:
    • Training speed
    • Mapping accuracy
    • Generalization
    • Overall performance
  • The most promising training methods for the above are weight-update procedures based on second-order derivative information (standard BP uses only first derivatives)
  • Popular second-order methods:
    • Quasi-Newton
    • Levenberg-Marquardt
    • Conjugate gradient techniques
  • However, these often converge to local optima because their weight-update procedures lack a stochastic component

  10. Extended Kalman Filter (EKF)
  • A second-order neural network training method
  • During training, not only the weights but also an error covariance matrix that encodes second-order information is maintained
  • A practical and effective alternative to the batch-oriented second-order methods above
  • Developed to enable the application of Kalman filtering to feedforward and recurrent neural networks (late 1980s)
  • Shown to be substantially more effective than standard BP (in training epochs)
  • Downside of the EKF: computational complexity, because the second-order information correlates every pair of network weights

  11. Decoupled Extended Kalman Filter (DEKF)
  • EKF: develops and maintains correlations between every pair of network weights
  • DEKF: develops and maintains second-order information only between weights that belong to mutually exclusive groups
  • The DEKF family:
    • Layer-decoupled EKF
    • Node-decoupled EKF
    • Fully decoupled EKF

  12. Neural Network's Behavior
  Process equation:      w(k+1) = w(k) + ω(k)
    (w(k): weight parameter vector, ω(k): process noise)
  Measurement equation:  d(k) = h(w(k), u(k)) + v(k)
    (d(k): desired response vector, h: nonlinear function computed by the network, u(k): input vector, v(k): measurement noise)

  13. Node-decoupling
  • Perform the Kalman recursion on a smaller part of the network at a time, and continue until every part of the network is updated
  • Reduces the error covariance matrix to a block-diagonal matrix, with one block per node's weight group
  • The state-variable representation is the process/measurement model above, restricted to each node's weight group

  14. NDEKF Algorithm
  Notation:
    ŵ_i(k): estimated value of node i's weight vector
    P_i(k): approximate conditional error covariance matrix of ŵ_i(k)
    K_i(k): Kalman filter gain matrix for node i
    ξ(k):   error between desired and actual outputs
    R(k):   measurement noise covariance matrix
    Q(k):   process noise covariance matrix
  The weight-update recursion (the standard decoupled-EKF form; the H_i(k) matrices are obtained by taking partial derivatives of the network outputs with respect to node i's weights — for details, please refer to the paper):
    A(k)     = [ R(k) + Σ_i H_i(k)^T P_i(k) H_i(k) ]^{-1}
    K_i(k)   = P_i(k) H_i(k) A(k)
    ŵ_i(k+1) = ŵ_i(k) + K_i(k) ξ(k)
    P_i(k+1) = P_i(k) - K_i(k) H_i(k)^T P_i(k) + Q(k)
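As an illustration, one decoupled update step might look like the following NumPy sketch, assuming the standard DEKF recursion published by Puskorius and Feldkamp [11, 12]. The function and variable names are illustrative, not from the project code, and the process noise is simplified to a scalar q added on the diagonal.

```python
import numpy as np

def dekf_step(weights, covs, H_list, err, R, q):
    """One decoupled EKF weight update.

    weights : list of per-group weight vectors w_i (node-decoupling: one group per node)
    covs    : list of per-group error covariance matrices P_i
    H_list  : list of per-group derivative matrices H_i (n_i x n_out),
              H_i = d(outputs)/d(w_i)
    err     : output error vector (desired - actual), length n_out
    R       : measurement noise covariance (n_out x n_out)
    q       : scalar process noise added to each P_i's diagonal
    """
    # Global scaling matrix A(k): the only place the groups interact
    A = R.copy()
    for P, H in zip(covs, H_list):
        A += H.T @ P @ H
    A_inv = np.linalg.inv(A)
    new_w, new_P = [], []
    for w, P, H in zip(weights, covs, H_list):
        K = P @ H @ A_inv                              # per-group Kalman gain
        new_w.append(w + K @ err)                      # weight update
        new_P.append(P - K @ H.T @ P + q * np.eye(len(w)))  # covariance update
    return new_w, new_P
```

With scalar per-group weights and a linear "network", each H_i is just the corresponding input, and the recursion behaves like a decoupled recursive least-squares fit.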

  15. NDEKF with Neural Network
  • A 1-20-50-1 MLFFNN is used (1 input, 20 nodes in the first hidden layer, 50 nodes in the second hidden layer, 1 output)
  • Bipolar activation functions in the two hidden layers
  • Linear activation function at the output node
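A minimal sketch of this network's forward pass, assuming tanh as the bipolar activation (the slide does not specify which bipolar function was used) and illustrative random weight initialization:

```python
import numpy as np

def init_mlp(sizes=(1, 20, 50, 1), seed=0):
    """Random weight matrices for a 1-20-50-1 feedforward network, biases included."""
    rng = np.random.default_rng(seed)
    return [rng.normal(0, 0.1, (m + 1, n))      # +1 row for the bias weight
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(weights, x):
    """tanh (bipolar) activations in both hidden layers, linear output node."""
    a = np.atleast_1d(np.asarray(x, dtype=float))
    for W in weights[:-1]:
        a = np.tanh(np.append(a, 1.0) @ W)      # hidden layers: bipolar activation
    return np.append(a, 1.0) @ weights[-1]      # output layer: linear activation
```

Because tanh is bounded in (-1, 1), the hidden activations stay bipolar while the linear output node can produce any real value.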

  16. Training the Network
  [Diagram: the same input drives the actual system and a bank of neural networks, one trained for each condition — the system in normal condition and in failed conditions 1 through n.]

  17. Simulation
  • Assume the system (plant) is characterized by the nonlinear difference equation given on the slide
  • The inputs are defined piecewise:
    • one input signal for k = 1 to 249
    • a different input signal for k = 250 to 500

  18. Why Neural Network?
  • Input independency
  • Robustness
  • High accuracy for fault detection and identification

  19. Result for the normal condition and failure condition 1 (MSE = 4.7777)

  20. Result for failure conditions 2 and 3 (MSE = 7.8946)

  21. The actual outputs of the system with its inputs.

  22. Diagnosis of the system in actual condition
  • Create an actual situation using one of the conditions (normal, failed 1, failed 2, failed 3)
  • Feed the same input to the actual system and to the neural network trained for each condition, and compute the MSE between each network's output and the actual output
  • Take the minimum of the MSEs (MMSE); the corresponding condition is the most probable condition of the actual system
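The diagnosis steps above reduce to an argmin over per-condition MSEs. A minimal sketch, with hypothetical model callables standing in for the trained per-condition reasoners:

```python
import numpy as np

def diagnose(models, u, y_actual):
    """Pick the condition whose reasoner best reproduces the actual output.

    models   : dict mapping condition name -> callable predicting y from input u
    u        : input sequence driven into the actual system
    y_actual : measured output sequence of the actual system
    """
    # MSE between each condition model's output and the actual output
    mse = {name: float(np.mean((model(u) - y_actual) ** 2))
           for name, model in models.items()}
    best = min(mse, key=mse.get)   # minimum MSE (MMSE) -> most probable condition
    return best, mse
```

In practice each callable would be one of the NDEKF-trained networks; here simple stand-in functions suffice to show the selection rule.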

  23. Result
  Here, the actual condition was set to failed condition 2. The MMSE shows that the actual system is most probably in failed condition 2.

  24. Conclusion and possible future work
  • The downside is that there has to be a priori knowledge of the fault conditions
  • Work in the frequency domain (FFT)
  • Implement with different algorithms and compare (SVM, BP, perceptron, etc.)
  • Work with heavy noise
  • Work with an actual model
  • OSA/CBM (Open Systems Architecture / Condition-Based Maintenance)
    • Use XML and make the real-time report available on the internet

  25. References
  • [1] Haykin, S., Kalman Filtering and Neural Networks, John Wiley & Sons, New York, 2001.
  • [2] Murtuza, S.; Chorian, S., "Node Decoupled Extended Kalman Filter Based Learning Algorithm for Neural Networks", IEEE International Symposium on Intelligent Control, August 1994.
  • [3] Maybeck, P., Stochastic Models, Estimation, and Control, Vol. 1, Academic Press, 1979.
  • [4] Narendra, K.S.; Parthasarathy, K., "Identification and Control of Dynamical Systems Using Neural Networks", IEEE Transactions on Neural Networks, Vol. 1, No. 1, March 1990.
  • [5] Welch, G.; Bishop, G., "An Introduction to the Kalman Filter", SIGGRAPH 2001 Course, University of North Carolina at Chapel Hill, 2001.
  • [6] Ruchti, T.L.; Brown, R.H.; Garside, J.J., "Kalman Based Artificial Neural Network Training Algorithms for Nonlinear System Identification", Proceedings of the 1993 IEEE International Symposium on Intelligent Control, 25-27 Aug 1993, pp. 582-587.
  • [7] Bengtsson, M., "Condition Based Maintenance on Rail Vehicles", Technical Report, 2002.
  • [8] Wetzer, J.M.; Rutgers, W.R.; Verhaat, H.F.A., "Diagnostic- and Condition Assessment-Techniques for Condition Based Maintenance", Conference on Electrical Insulation and Dielectric Phenomena, 2000.

  26. References (cont'd)
  • [9] Engel, S.; Gilmartin, B., "Prognostics, the Real Issues Involved with Predicting Life Remaining", IEEE Aerospace Conference Proceedings, Vol. 6, 2000, pp. 457-469.
  • [10] Hu, X.; Vian, J.; Choi, J.; Carlson, D.; Il, D.C.W., "Propulsion Vibration Analysis Using Neural Network Inverse Modeling", Proceedings of the 2002 International Joint Conference on Neural Networks (IJCNN '02), Vol. 3, 2002, pp. 2866-2871.
  • [11] Puskorius, G.V.; Feldkamp, L.A., "Neurocontrol of Nonlinear Dynamical Systems with Kalman Filter Trained Recurrent Networks", IEEE Transactions on Neural Networks, Vol. 5, No. 2, March 1994, pp. 279-297.
  • [12] Puskorius, G.V.; Feldkamp, L.A., "Model Reference Adaptive Control with Recurrent Networks Trained by the Dynamic DEKF Algorithm", International Joint Conference on Neural Networks (IJCNN 1992), Vol. 2, 7-11 Jun 1992, pp. 106-113.
  • [13] Iiguni, Y.; Sakai, H.; Tokumaru, H., "A Real-Time Learning Algorithm for a Multilayered Neural Network Based on the Extended Kalman Filter", IEEE Transactions on Signal Processing, Vol. 40, No. 4, April 1992, pp. 959-966.
  • [14] Jin, L.; Nikiforuk, P.N.; Gupta, M.M., "Decoupled Recursive Estimation Training and Trainable Degree of Feedforward Neural Networks", International Joint Conference on Neural Networks (IJCNN 1992), Vol. 1, 7-11 Jun 1992, pp. 894-900.
