
Modeling and Visualizing Dynamic Associative Networks: Towards Developing a More Robust and Biologically-Plausible Cognitive Model

Based on Dr. Anthony Beavers’ ongoing research. By Michael Zlatkovsky, dual-major in Computer Science and Cognitive Science.


Presentation Transcript


  1. Modeling and Visualizing Dynamic Associative Networks: Towards Developing a More Robust and Biologically-Plausible Cognitive Model Based on Dr. Anthony Beavers’ ongoing research By Michael Zlatkovsky, dual-major in Computer Science and Cognitive Science

  2. I’m a PC... I’m a neural net

  3. Why Neural Nets? • Pattern recognition • Inferring a function by observation • Robustness against errors • Parallel nature

  4. Artificial Neural Networks

  5. Artificial Neural Networks • The adjustment is artificial: connection weights are set during training
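
A minimal sketch of that idea, not code from the presentation: a single artificial neuron computes a weighted sum of its inputs, and "learning" consists of artificially nudging those weights toward a target (a classic perceptron-style update). All names here are hypothetical.

  public class SimpleNeuron {
      private final double[] weights;
      private final double learningRate;

      public SimpleNeuron(int inputCount, double learningRate) {
          this.weights = new double[inputCount];
          this.learningRate = learningRate;
      }

      // Weighted sum of inputs, squashed to 0 or 1 by a step function.
      public int activate(double[] inputs) {
          double sum = 0.0;
          for (int i = 0; i < weights.length; i++) {
              sum += weights[i] * inputs[i];
          }
          return sum > 0 ? 1 : 0;
      }

      // Perceptron-style learning: adjust each weight toward the target output.
      public void train(double[] inputs, int target) {
          int error = target - activate(inputs);
          for (int i = 0; i < weights.length; i++) {
              weights[i] += learningRate * error * inputs[i];
          }
      }
  }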

  6. Dr. Beavers’ Dynamic Associative Network Model • Dr. Beavers, Director of UE’s Cognitive Science Department, is exploring a different model of cognition.

  7. Dr. Beavers’ Dynamic Associative Network Model • No more mystery “hidden layer” • Learning through the order and structure of experience • No “unnatural” training • Organic network • Can incorporate new information

  8. DAN’s Cognitive Abilities come from Long-Term Learning and Current State

  9. Translation into a Node-Centric Model
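
One plausible reading of the node-centric translation, sketched below as hypothetical Java (the class and method names are invented, not taken from the actual suite): each node owns its own activation and a map of weighted associative links, so the network is simply the collection of nodes.

  import java.util.HashMap;
  import java.util.Map;

  // Hypothetical node-centric representation: every node carries its own
  // activation and its weighted associative links to other nodes.
  public class DanNode {
      private final String label;
      private double activation;
      private final Map<DanNode, Double> links = new HashMap<>();

      public DanNode(String label) {
          this.label = label;
      }

      public void associate(DanNode other, double weight) {
          links.put(other, weight);
      }

      // Pull activation in from associated nodes, scaled by link weight.
      public double recomputeActivation() {
          double sum = 0.0;
          for (Map.Entry<DanNode, Double> link : links.entrySet()) {
              sum += link.getKey().activation * link.getValue();
          }
          activation = sum;
          return activation;
      }

      public double getActivation() { return activation; }
      public String getLabel() { return label; }
  }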

  10. Early Excel Prototype

  11. The DAN Software Suite • Based on prototype, create a self-contained DAN Model • Written in Java; object-oriented approach • Expand on features of Excel Model (various activation modes, learning mode, settings) • Most importantly: focus on design fundamentals to ensure speedy operation and high capacity. • Create visualization routines

  12. Re-Calculations • Most frequent operation • DANs are massively parallel • Re-computing from scratch: O(n²) • Example: in a 1,000-node network, a change in 2 nodes that each impact 5 others needs only 10 re-calculations, yet a full recompute performs 1,000,000 • My scheme: buffered, change-propagating, dependency-driven re-calculation (sketched below)
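
The slide's contrast (10 needed re-calculations versus a 1,000,000-operation brute-force recompute) points to a dirty-propagation scheme. The sketch below shows one hypothetical way to buffer changes and re-calculate only dependent nodes; the suite's actual scheme may buffer and schedule work differently.

  import java.util.ArrayDeque;
  import java.util.ArrayList;
  import java.util.Deque;
  import java.util.HashMap;
  import java.util.HashSet;
  import java.util.List;
  import java.util.Map;
  import java.util.Set;

  // Buffered, dependency-driven re-calculation: a change is queued, and only
  // nodes that actually depend on it are revisited, instead of recomputing
  // all n^2 connections from scratch.
  public class RecalcEngine {
      private final double[] activation;
      private final List<Map<Integer, Double>> incoming;   // node -> (source -> weight)
      private final List<Set<Integer>> dependents;         // source -> nodes reading it

      public RecalcEngine(int nodeCount) {
          activation = new double[nodeCount];
          incoming = new ArrayList<>();
          dependents = new ArrayList<>();
          for (int i = 0; i < nodeCount; i++) {
              incoming.add(new HashMap<>());
              dependents.add(new HashSet<>());
          }
      }

      public void connect(int source, int target, double weight) {
          incoming.get(target).put(source, weight);
          dependents.get(source).add(target);
      }

      // Change one node, then propagate through its dependents via a change buffer.
      public void setActivation(int node, double value) {
          activation[node] = value;
          Deque<Integer> buffer = new ArrayDeque<>();
          Set<Integer> visited = new HashSet<>();
          buffer.add(node);
          visited.add(node);
          while (!buffer.isEmpty()) {
              int changed = buffer.poll();
              for (int dependent : dependents.get(changed)) {
                  recompute(dependent);
                  if (visited.add(dependent)) {
                      buffer.add(dependent);
                  }
              }
          }
      }

      // Weighted sum over the node's incoming connections.
      private void recompute(int node) {
          double sum = 0.0;
          for (Map.Entry<Integer, Double> link : incoming.get(node).entrySet()) {
              sum += activation[link.getKey()] * link.getValue();
          }
          activation[node] = sum;
      }
  }

For the slide's example, a change to 2 nodes that each feed 5 others touches roughly 10 entries in the buffer rather than the full 1,000 × 1,000 connection matrix.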

  13. Other Design Considerations • General separation of concerns (59 classes) • Model-View-Controller • “Core framework” with “helper” controllers & GUI views/wrappers • GUI look, cross-platform
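
A hypothetical illustration of that separation of concerns (the class names are invented, not the suite's actual 59 classes): the core model knows nothing about the GUI, views subscribe through a listener interface, and a controller translates user actions into model calls.

  import java.util.ArrayList;
  import java.util.List;

  interface NetworkListener {
      void activationChanged(int node, double newValue);
  }

  // "Core framework": the model publishes changes but has no GUI dependencies.
  class NetworkModel {
      private final double[] activation;
      private final List<NetworkListener> listeners = new ArrayList<>();

      NetworkModel(int nodes) { activation = new double[nodes]; }

      void addListener(NetworkListener listener) { listeners.add(listener); }

      void setActivation(int node, double value) {
          activation[node] = value;
          for (NetworkListener listener : listeners) {
              listener.activationChanged(node, value);   // views update themselves
          }
      }
  }

  // Stand-in for a GUI view/wrapper; a Swing view would implement the same interface.
  class ConsoleView implements NetworkListener {
      @Override
      public void activationChanged(int node, double newValue) {
          System.out.printf("node %d -> %.3f%n", node, newValue);
      }
  }

  // "Helper" controller: turns user actions into model calls.
  class NetworkController {
      private final NetworkModel model;

      NetworkController(NetworkModel model) { this.model = model; }

      void userSetsActivation(int node, double value) {
          model.setActivation(node, value);   // validation would go here
      }
  }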

  14. Visualization • PREFUSE framework • Radial tree layout (PREFUSE) • Color nodes based on activation • Color edges based on connection type • Highlighting, animation, etc.
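
A rough sketch of how such a view might be wired up with the prefuse toolkit, loosely following its standard demos; the tiny graph, the "label" and "activation" columns, and the color palette are invented here, and the suite's real visualization code certainly differs.

  import javax.swing.JFrame;
  import prefuse.Constants;
  import prefuse.Display;
  import prefuse.Visualization;
  import prefuse.action.ActionList;
  import prefuse.action.RepaintAction;
  import prefuse.action.assignment.ColorAction;
  import prefuse.action.assignment.DataColorAction;
  import prefuse.action.layout.graph.RadialTreeLayout;
  import prefuse.data.Graph;
  import prefuse.data.Node;
  import prefuse.render.DefaultRendererFactory;
  import prefuse.render.LabelRenderer;
  import prefuse.util.ColorLib;
  import prefuse.visual.VisualItem;

  public class DanVisualizationSketch {
      public static void main(String[] args) {
          // Toy graph with a numeric "activation" column to drive node color.
          Graph graph = new Graph();
          graph.getNodeTable().addColumn("label", String.class);
          graph.getNodeTable().addColumn("activation", double.class);
          Node a = graph.addNode(); a.setString("label", "boy");  a.setDouble("activation", 0.9);
          Node b = graph.addNode(); b.setString("label", "woke"); b.setDouble("activation", 0.4);
          Node c = graph.addNode(); c.setString("label", "up");   c.setDouble("activation", 0.1);
          graph.addEdge(a, b);
          graph.addEdge(b, c);

          Visualization vis = new Visualization();
          vis.add("graph", graph);
          vis.setRendererFactory(new DefaultRendererFactory(new LabelRenderer("label")));

          // Color nodes by activation, gray out text and edges, lay out radially.
          int[] palette = { ColorLib.rgb(200, 200, 255), ColorLib.rgb(255, 80, 80) };
          ActionList draw = new ActionList();
          draw.add(new DataColorAction("graph.nodes", "activation",
                  Constants.NUMERICAL, VisualItem.FILLCOLOR, palette));
          draw.add(new ColorAction("graph.nodes", VisualItem.TEXTCOLOR, ColorLib.gray(0)));
          draw.add(new ColorAction("graph.edges", VisualItem.STROKECOLOR, ColorLib.gray(180)));
          draw.add(new RadialTreeLayout("graph"));
          draw.add(new RepaintAction());
          vis.putAction("draw", draw);

          Display display = new Display(vis);
          display.setSize(600, 600);
          JFrame frame = new JFrame("DAN visualization sketch");
          frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
          frame.add(display);
          frame.pack();
          frame.setVisible(true);
          vis.run("draw");
      }
  }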

  15. Results: DAN Software Suite • Overall successful • Quick • Convenient UI • Adaptable • True to model

  16. Results: DAN Model • Promising results: various rudimentary cognitive abilities: • “Initial Intelligence”: pattern recognition, feature detection, memorization of simple sequences, identification of similarities and differences, storage of relational data, comparison and classification, etc. • Possibly, building blocks of more sophisticated intelligence.

  17. Results: DAN Model • Has not gone unchanged:

  19. Results: DAN Model • Has not gone unchanged: training: “the boy woke up” “the boy fell asleep” “theboywoke up” “theboyfell asleep”

  21. Results: Overall • More robust? • Don’t know... yet. • Received with curiosity and some enthusiasm by researchers working in the field. • More biologically plausible? • Absolutely. • Hebbian neurological principle: nodes that “fire together, wire together” (a minimal update rule is sketched below) • Contrast with ANNs’ statistically-based learning
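
The Hebbian rule quoted above can be written as a weight change proportional to the product of the two nodes' activations. A minimal, hypothetical sketch (the learning rate is an illustrative parameter, not one taken from the DAN model):

  // Hebbian update: the link between two nodes strengthens in proportion to
  // how strongly both fire at the same time ("fire together, wire together").
  public class HebbianLink {
      private double weight;
      private final double learningRate;

      public HebbianLink(double learningRate) {
          this.learningRate = learningRate;
      }

      // Strengthen the link when both ends are active simultaneously.
      public void update(double preActivation, double postActivation) {
          weight += learningRate * preActivation * postActivation;
      }

      public double getWeight() { return weight; }
  }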

  22. I’m a DAN
