
Robust Neural Networks using Motes



Presentation Transcript


  1. Robust Neural Networks using Motes James Hereford, Tüze Kuyucu Department of Physics and Engineering Murray State University

  2. Outline • Introduction • Goal • System overview • Background information • Results • Neural net with independent nodes (motes) • Training of distributed neural net • Demonstration of fault recovery

  3. Why we're interested • Ultimate goal: • Build devices that never fail • Applications: • NASA: Long journeys (e.g., a round trip to Mars), space probes • Hazardous/dangerous operations – difficult for humans to repair

  4. System Overview One approach to fault tolerance: redundant components. [Figure: multiple redundant sensors T1, T2, T3 feed a processing circuit that outputs the average Tavg] • Steps: • Derive/evolve a circuit to average N sensors. • Detect a failure. • Re-evolve to average the remaining sensors. Simple processing nodes provide redundancy, which opens the possibility of fault tolerance in the processing circuit.
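A minimal Python sketch of the three steps above (illustrative assumptions throughout: the median-distance failure test and the 10-degree threshold are placeholders for the evolved detection/averaging circuit described on the slide):

```python
from statistics import median

# Sketch: average N redundant sensors, then recover from a failure by
# re-averaging the survivors. The median-distance failure test and the
# threshold are assumed placeholders, not the authors' evolved circuit.

def average(readings, failed):
    live = [r for i, r in enumerate(readings) if i not in failed]
    return sum(live) / len(live)

readings = [20.1, 19.8, 55.0]          # T1, T2, T3; T3 is stuck high
failed = set()
t_avg = average(readings, failed)      # step 1: average all N sensors

med = median(readings)                 # step 2: detect the failure
for i, r in enumerate(readings):
    if abs(r - med) > 10.0:
        failed.add(i)

t_avg = average(readings, failed)      # step 3: re-average the survivors
print(failed, t_avg)                   # failed == {2}, t_avg ≈ 19.95
```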

  5. System Overview - neural net • Neural net: Each artificial neuron (node) receives weighted information from the previous layer, sums it, then passes the result on to the next layer. • Neural nets are trained with an iterative technique that determines the interconnection weights. • Challenge: reprogram the neural net when it is unknown a priori which node failed. Idea: If one of our processing units fails, re-train the whole network using evolvable programming techniques.
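As a concrete illustration of the node computation just described, a hedged Python sketch (the sigmoid activation and the 2x2x1 layer sizes are assumptions, not taken from the slides):

```python
import math

def neuron(inputs, weights, bias):
    """One node: weighted sum of the previous layer's outputs,
    passed through a sigmoid (assumed activation)."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

def forward(x, hidden, output):
    """Forward pass through a 2x2x1 net: each layer feeds the next."""
    h = [neuron(x, w, b) for w, b in hidden]
    return neuron(h, *output)

hidden = [([0.5, -0.4], 0.1), ([-0.3, 0.8], 0.0)]  # (weights, bias) per node
output = ([1.2, -1.1], 0.2)
print(forward([1.0, 0.0], hidden, output))
```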

  6. System Overview 2 key questions: • How to build the neural net? What hardware device to use for each of the nodes? • How to program the neural net? We need a programming technique that does not require a priori information.

  7. System Overview - hardware components [Figure: the redundant sensors T1, T2, T3 and Tavg diagram, with "Devices?" marking the processing nodes] • Required node characteristics • Multiply/add • Memory (store weights) • Internode communication • Power • Mote characteristics • Processor • Memory • Transmit/receive (wireless) • Power • Interface to sensor boards • Software infrastructure (TinyOS, nesC)

  8. Background information The function of each node is performed by a mote. [Photos: Mica2 and Mica2Dot motes] Practicalities: Crossbow, $125 (Mica2), range of tens of meters, event-driven OS, 433 MHz/900 MHz/2.4 GHz, programming is non-intuitive!

  9. Background information - Particle Swarm Optimization Use PSO to train and re-train the neural net. [Figure: the classic "corn field" example – a 2-D search space with coordinates 0 to 40 on each axis, searched by a swarm of particles]

  10. Background information - PSO • 2 update equations: • Velocity: v_(n+1) = v_n + c1*rand*(pbest_n – p_n) + c2*rand*(gbest_n – p_n) • Position: p_(n+1) = p_n + v_(n+1) • Advantages for our application: • Simple update equations • No "hard" functions (e.g., sqrt, sin, fft) • Can "tune" the algorithm via constants c1 and c2
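A minimal Python sketch of these two update rules (the swarm size, iteration count, and the sphere test function are illustrative assumptions; note the slide's form has no inertia weight, and the sketch follows that):

```python
import random

def pso(fitness, dim, n_particles=20, n_iters=100, c1=2.0, c2=2.0):
    """Particle Swarm Optimization using the slide's two update equations."""
    pos = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    gbest = min(pbest, key=fitness)[:]

    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                # v_(n+1) = v_n + c1*rand*(pbest_n - p_n) + c2*rand*(gbest_n - p_n)
                vel[i][d] += (c1 * random.random() * (pbest[i][d] - pos[i][d])
                              + c2 * random.random() * (gbest[d] - pos[i][d]))
                # p_(n+1) = p_n + v_(n+1)
                pos[i][d] += vel[i][d]
            f = fitness(pos[i])
            if f < pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f < fitness(gbest):
                    gbest = pos[i][:]
    return gbest

# Example: minimize a simple sphere function (assumed test problem).
print(pso(lambda p: sum(x * x for x in p), dim=2))
```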

  11. Results • Results in 3 major areas: • Training of neural network with PSO (simulation) • Fault recovery (simulation + hardware) • Building neural net with independent (physically distinct) nodes

  12. Neural Net - Training Results Comparison of classical NN training techniques vs. PSO [Charts: training results for 2-layer and 3-layer networks]

  13. Neural Net - Training Results Used PSO to re-train the neural net for a different operation. [Figure: net trained to NAND, then re-trained to XOR] Successful in all cases!
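To make the train/re-train targets concrete, a hedged sketch of a truth-table fitness function that PSO could minimize (the squared-error measure, the 2x2x1 weight encoding, and the pso helper from the earlier sketch are all assumptions, not the authors' code):

```python
import math

# Assumed 2x2x1 net with weights packed as a flat list of 9 values
# (the slides do not give the actual encoding).
def net(x, w):
    h1 = 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + w[2])))
    h2 = 1 / (1 + math.exp(-(w[3] * x[0] + w[4] * x[1] + w[5])))
    return 1 / (1 + math.exp(-(w[6] * h1 + w[7] * h2 + w[8])))

NAND = {(0, 0): 1, (0, 1): 1, (1, 0): 1, (1, 1): 0}
XOR  = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def make_fitness(table):
    """Sum of squared errors over a truth table; PSO minimizes this."""
    return lambda w: sum((net(x, w) - t) ** 2 for x, t in table.items())

# Train to NAND, then re-train the same net to XOR:
#   w = pso(make_fitness(NAND), dim=9)
#   w = pso(make_fitness(XOR),  dim=9)
```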

  14. Neural Net - Training Results Failure recovery – failure in hidden node(s). [Charts: recovery results for 2x2x1, 2x3x1, and 2x4x1 networks] Showed fault recovery for NAND and XOR operations. Successful in all cases – again! Concern: Highly variable number of evaluations needed to reprogram; the extreme case showed a 200:1 variation.
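One way the hidden-node failure could be modeled in simulation (an assumed mechanism: zero the dead node's output and let PSO re-optimize the surviving weights, mirroring the net and make_fitness sketches above):

```python
import math

# Sketch: simulate a failed hidden node by forcing its output to zero
# (an assumed failure model), then recover by re-running PSO.
def net_with_failure(x, w, dead=None):
    h1 = 0.0 if dead == 0 else 1 / (1 + math.exp(-(w[0]*x[0] + w[1]*x[1] + w[2])))
    h2 = 0.0 if dead == 1 else 1 / (1 + math.exp(-(w[3]*x[0] + w[4]*x[1] + w[5])))
    return 1 / (1 + math.exp(-(w[6] * h1 + w[7] * h2 + w[8])))

# Recovery: re-train around the dead node; no a priori knowledge of
# which node failed is needed on the hardware side (dead=0 here just
# drives the simulation):
#   fitness = lambda w: sum((net_with_failure(x, w, dead=0) - t) ** 2
#                           for x, t in XOR.items())
#   w = pso(fitness, dim=9)
```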

  15. Neural Net - Hardware • Built a hardware NN out of motes • 2-layer (no hidden layer) neural net • Training times were shorter with PSO than with the perceptron rule • Demonstrated fault recovery
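For reference, the classical perceptron update that PSO is compared against looks like this (a standard textbook rule shown on the NAND task; not code from the authors):

```python
# Classic perceptron learning rule for a 2-layer (no hidden layer) net:
# w <- w + eta * (target - output) * input, with a trailing bias input.
def perceptron_train(samples, dim, eta=0.1, epochs=100):
    w = [0.0] * (dim + 1)                 # last entry is the bias weight
    for _ in range(epochs):
        for x, target in samples:
            xb = list(x) + [1.0]          # append the bias input
            out = 1 if sum(wi * xi for wi, xi in zip(w, xb)) > 0 else 0
            err = target - out
            for i, xi in enumerate(xb):
                w[i] += eta * err * xi
    return w

# NAND is linearly separable, so the rule converges:
print(perceptron_train([((0, 0), 1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)], dim=2))
```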

  16. Neural Net - Hardware [Photo: the 2-layer neural net built from motes, demonstrating fault recovery]

  17. Neural Net - Hardware 3-layer neural net built using motes • Programmed successfully "off-line" • Weights taken from simulation and stored on the motes worked fine.

  18. Neural Net - Hardware • Embedded programming • Programmed using the base mote as "master" • All training done by the output (base) node, with weight updates sent to the hidden-layer nodes • Developing a distributed training method • Experiments ongoing; mote-to-mote "feedback" communication is problematic. Once programmed, the system is able to withstand the hammer test.
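A hedged sketch of the kind of weight-update message the base ("master") mote might send to a hidden-layer node during training. The packet layout and field names are entirely hypothetical; the real TinyOS/nesC messaging is not described in the slides:

```python
import struct

# Hypothetical weight-update packet: destination node id, weight index,
# and the new weight value. Layout is an illustrative assumption.
UPDATE_FMT = "<BBf"   # little-endian: uint8 node, uint8 index, float32 weight

def pack_update(node_id, weight_index, value):
    return struct.pack(UPDATE_FMT, node_id, weight_index, value)

def unpack_update(payload):
    node_id, weight_index, value = struct.unpack(UPDATE_FMT, payload)
    return node_id, weight_index, value

msg = pack_update(node_id=2, weight_index=0, value=0.73)
print(unpack_update(msg))   # -> (2, 0, 0.7300000190734863)
```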

  19. Acknowledgements • David Gwaltney – NASA-MSFC • James Humes • Funding: • Murray State: Committee on Institutional Studies and Research • KY NASA EPSCoR program

  20. Conclusions • Simulated and built a neural network with an independent processor for each node • Trained (and re-trained) the neural net with PSO • Used motes to do processing, not just sensing • Failure recovery approach – every node is identical and replaceable
