
PANN Testing

Test and compare the training time and error rates of various artificial neural networks (ANNs) including Perceptron, Back Propagation, and Progressive ANN (PANN). Also, explore the potential of PANN for building high-speed computers and electronics.

Presentation Transcript


  1. PANN Testing

  2. Training Time and Error Benchmarking. Target error: 0.02 (Mean Squared Error).

  3. Training Time and Error Benchmarking
  • Progress ANN reduces its training error to the desired minimum in less than one second.
  • Perceptron and back-propagation ANNs retain substantial error, which decreases slowly; they show no tendency to reach the target error.
  • Tested training set: 30,000 images.
  [Chart: mean squared error (x 100) vs. training time in milliseconds (log scale, 50 to 781,250 ms) for Perceptron, back propagation, and Progress ANN; target error 0.02.]
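  As a rough illustration of this kind of benchmark, the sketch below times how long a small back-propagation network needs to reach the 0.02 MSE target. It is a minimal sketch, not the deck's actual test: the numpy network, synthetic data, layer sizes, and learning rate are all illustrative assumptions.

# Minimal sketch (assumed setup, not the authors' benchmark): time how long a
# plain numpy back-propagation network takes to reach a target mean squared error.
import time
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task standing in for the 30,000-image set: 1,000 samples.
X = rng.normal(size=(1000, 64))
y = np.tanh(X @ rng.normal(scale=0.1, size=(64, 1)))

# One hidden layer trained with plain gradient descent.
W1 = rng.normal(scale=0.1, size=(64, 32))
W2 = rng.normal(scale=0.1, size=(32, 1))
lr, target_mse, budget_s = 0.01, 0.02, 10.0

start = time.perf_counter()
mse = float("inf")
while mse > target_mse and time.perf_counter() - start < budget_s:
    h = np.tanh(X @ W1)                  # forward pass, hidden activations
    err = h @ W2 - y                     # prediction error
    mse = float(np.mean(err ** 2))
    grad_out = 2 * err / len(X)          # dMSE / dprediction
    gW2 = h.T @ grad_out                 # back-propagate to output weights
    gW1 = X.T @ (grad_out @ W2.T * (1 - h ** 2))  # and to hidden weights
    W1 -= lr * gW1
    W2 -= lr * gW2

elapsed_ms = (time.perf_counter() - start) * 1000
print(f"MSE {mse:.4f} after {elapsed_ms:.0f} ms (target {target_mse})")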

  4. Additional Comparison to Alternatives
  [Screenshots: the Progress P-Net Creator and the NeuroSolutions Data Manager, comparing the Progress P-network with NeuroSolutions.]

  5. Additional Comparison to Alternatives
  The terms "number of records", "images", "lines", "samples", "data set", and "sample data" are used synonymously.
  Comparison of training time between IBM SPSS Statistics 22 and the P-network on the same problem, tested on an Apple iMac 27" (3.5 GHz quad-core Intel Core i7, 8 GB of 1600 MHz DDR3 memory, SSD).
  • IBM SPSS Statistics 22: exponential growth of training time (time axis up to 14,000 seconds).
  • Progress P-network: linear growth of training time (time axis up to 4 seconds).
  [Charts: training time in seconds vs. number of images (0 to 7,000) for each tool; a "training advantage factor" is indicated on the chart.]
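  The contrast the slide draws is between linear and exponential growth of training time with dataset size. The sketch below only illustrates that contrast; the constants are arbitrary assumptions, not fitted to the measurements in the chart.

# Illustrative scaling curves (assumed constants, not the slide's measured data).
def linear_time(n_images: int, seconds_per_image: float = 0.0006) -> float:
    """Linear scaling: training time grows proportionally to the number of images."""
    return seconds_per_image * n_images

def exponential_time(n_images: int, base_seconds: float = 1.0,
                     doubling_every: int = 1000) -> float:
    """Exponential scaling: training time doubles for every fixed batch of images."""
    return base_seconds * 2 ** (n_images / doubling_every)

for n in (1000, 2000, 4000, 7000):
    lin, exp = linear_time(n), exponential_time(n)
    print(f"{n:>5} images: linear ~{lin:6.1f} s, exponential ~{exp:9.1f} s, "
          f"advantage factor ~{exp / lin:,.0f}x")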

  6. PANN vs. Classical Neural Network
  • Classical neural network: theoretically unlimited intelligence, but long training time.
  • PANN: practically unlimited intelligence and quick training.
  Network intelligence is proportional to the number of elements and the problems at hand.
  PANN requires minutes (at most hours) to reach a level of network intelligence that remains unreachable for classical neural networks even after thousands of years of training.
  [Chart: network intelligence vs. training time (log scale in seconds, 0 to 10^14, with reference marks from 1 minute to 1 million years) for the classical neural network, PANN software, PANN hardware, and the human brain level.]

  7. Test on Image Compression
  Files with images from the CIFAR-10 test set.
  [Chart: number of color images in a file (up to 20,000) vs. file size on disk in MB (up to 300) for no compression, PANN with 50 weights on a synapse, and PANN with 100 weights on a synapse; points are annotated with reconstruction MSE values from 0.004 to 0.013, showing the economy of disk space.]
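  The two quantities reported on the chart, per-image reconstruction MSE and the saving in disk space, can be computed as in the sketch below. PANN itself is not reproduced here; a simple channel quantizer stands in for the compressor and random images stand in for CIFAR-10, so the printed numbers are only illustrative.

# Sketch of the two metrics on the slide (stand-in compressor, not PANN).
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a batch of CIFAR-10 images: 1,000 RGB images, 32x32, 8-bit.
images = rng.integers(0, 256, size=(1000, 32, 32, 3), dtype=np.uint8)
normalized = images.astype(np.float32) / 255.0

# Stand-in "compression": quantize each channel to 8 levels (3 bits instead of 8).
levels = 8
codes = np.round(normalized * (levels - 1)).astype(np.uint8)
reconstructed = codes.astype(np.float32) / (levels - 1)

mse = float(np.mean((normalized - reconstructed) ** 2))
bits_original = images.size * 8
bits_compressed = images.size * int(np.ceil(np.log2(levels)))
print(f"Reconstruction MSE: {mse:.4f}")
print(f"Disk space economy: {1 - bits_compressed / bits_original:.0%}")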

  8. #2 GPUs Breakthrough: Amdahl's Law and P-net
  PANN's simple matrix-algebra mathematics allows 100% parallel processing, so speed increases linearly with additional GPUs and CPUs. Progress US patent application 15/449,614, which covers the application of matrix algebra with PANN, was filed on March 3, 2017.
  This allows building computers and other electronics with:
  • very high processing speed, and
  • a reduced number of GPUs and CPUs.
  [Diagram: P-net application.]
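  The slide's appeal to Amdahl's law can be made concrete: speedup on N processors is 1 / ((1 - p) + p / N), where p is the parallelizable fraction of the work. With p = 1, as claimed for PANN's matrix algebra, the speedup is exactly N, i.e. linear in the number of GPUs and CPUs. A short sketch:

# Amdahl's law: speedup(N) = 1 / ((1 - p) + p / N).
def amdahl_speedup(parallel_fraction: float, n_processors: int) -> float:
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_processors)

# With p = 1.00 (the slide's 100% parallel claim) the speedup equals N;
# even a small serial fraction caps the achievable speedup.
for p in (0.90, 0.99, 1.00):
    speedups = [amdahl_speedup(p, n) for n in (2, 8, 64, 1024)]
    print(f"p = {p:.2f}: " + ", ".join(f"{s:8.1f}x" for s in speedups))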

  9. #2 GPUs Breakthrough: Training Speed, Comparison of PANN and NVIDIA cuDNN
  PANN training speed on CPUs and GPUs is thousands of times higher than that of existing ANNs. This allows:
  • improving ANN training speed thousands of times,
  • building a supercomputer on GPUs, and
  • building a hypercomputer on GPUs.
  PANN provides 60x (threads on a GPU), for a total of 201,000x. Acceleration is proportional to the number N of GPUs: 201,000x × N.
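  The linear-scaling claim on this slide amounts to a single multiplication, shown below. The 201,000x figure is the slide's own number and is not verified here.

# Claimed acceleration scaling (slide's figure, not an independent measurement).
SINGLE_GPU_ACCELERATION = 201_000  # claimed PANN speedup factor on one GPU

def total_acceleration(n_gpus: int) -> int:
    """Linear scaling: acceleration is proportional to the number N of GPUs."""
    return SINGLE_GPU_ACCELERATION * n_gpus

for n in (1, 4, 16, 64):
    print(f"{n:>2} GPUs -> claimed acceleration ~{total_acceleration(n):,}x")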

  10. Comparison of P-net with CPU/GPU
  CPU - central processing unit; GPU - graphics processing unit; 1 sec = 1,000 msec.
  Testing computer with CPU speed / GPU speed = 4.
  [Chart: execution time, logarithmic scale.]

  11. Comparison of P-net with CPU/GPU
  CPU - central processing unit; GPU - graphics processing unit.
  [Chart: execution time, logarithmic scale.]
