
Toward a Great Class Project: Discussion of Stoianov & Zorzi’s Numerosity Model

This article discusses the relevance of Stoianov & Zorzi's Numerosity Model in addressing theoretical and computational issues in computational intelligence and computational neuroscience. It explores the nature of computations underlying visual numerosity and the potential for unsupervised learning to capture behavioral and neural data.





Presentation Transcript


  1. Toward a Great Class Project: Discussion of Stoianov & Zorzi’s Numerosity Model Psych 209 – Feb 14, 2019

  2. What makes a good topic? • A good topic is one that is relevant to theoretical issues in a broad field of science while also making contact with empirical data or addressing a computational issue. • Where does knowledge come from? Do we acquire it through evolution, or might it be learned? • What principles or mechanisms are sufficient to explain some aspect of behavior or cognition? • What computations or principles explain the receptive field properties of neurons in various parts of the brain? • Many excellent papers begin by stating a specific question like one of those above, grounded in a particular topic, or implicitly ask such a question.

  3. Introduction to S&Z Many animal species have evolved a capacity to estimate the number of objects seen1. Numerosity estimation is foundational to mathematical learning in humans2,3, and susceptibility to adaptation suggests that numerosity is a primary visual property4. Nonetheless, the nature of the computations underlying this “visual sense of number”4 remains controversial5. Variability in object size prevents a simple solution based on summing object surface areas (cumulative area), which is a main perceptual correlate of numerosity. A prominent theory6 requires object size normalization as a key preprocessing stage for numerosity estimation. Others circumvent the problem, assuming the use of “occupied area” independent of object size7. Here we show that visual numerosity emerges as a statistical property of images through unsupervised learning.

  4. Modeling Human Behavior, Addressing Issues in Computational Intelligence, and Computational Neuroscience • You are welcome to do a project of any of these three types, or ideally, some sort of combination • S&Z addresses all three to some extent • Other papers may address only one of these three topics • For example, many in the deep learning literature focus primarily on computational intelligence

  5. Questions about S&Z • What are the target issues and findings? • How do the authors show that their model addresses the findings? • What is the network architecture, learning algorithm, and training set? • What conclusions might we want to reach from the findings? • E.g., what does the paper say about whether Numerosity sensitivity is an evolved ability? • What are the limitations and remaining questions and how can they be addressed in future research?

  6. Capturing Behavioral and Neural Data

  7. S&Z’s Deep RBM network and Learning Algorithm • Greedy layer-wise training using the contrastive-divergence rule: Δw_{ij} = ε(⟨v_i h_j⟩_data − ⟨v_i h_j⟩_recon) • Essentially, each layer is trained to find a representation that allows it to reproduce the input it receives from the layer below.
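To make the rule concrete, here is a minimal sketch of one contrastive-divergence (CD-1) update for a single binary RBM layer in NumPy. The function name cd1_update, the omission of bias terms, and the learning rate are illustrative choices, not S&Z’s exact implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, v0, rng, epsilon=0.1):
    """One CD-1 step for a binary RBM (bias terms omitted for brevity).

    W  : (n_visible, n_hidden) weight matrix
    v0 : (batch, n_visible) batch of binary input vectors
    """
    # Positive phase: hidden probabilities given the data
    h0_prob = sigmoid(v0 @ W)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Negative phase: one step of Gibbs sampling (reconstruction)
    v1_prob = sigmoid(h0 @ W.T)
    h1_prob = sigmoid(v1_prob @ W)
    # Contrastive-divergence rule:
    #   dW = epsilon * (<v h>_data - <v h>_reconstruction)
    dW = epsilon * (v0.T @ h0_prob - v1_prob.T @ h1_prob) / v0.shape[0]
    return W + dW
```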

  8. Why a Deep Network? • As we know, a single layer is limited in the computations it can perform. • With even just one hidden layer, it is possible to compute any computable function • More layers are thought to provide a more abstract solution, one that throws away unimportant information while still capturing the underlying abstract elements and their relationships. • How can we learn a deep network? • Supervised learning – CNNs • Unsupervised learning – DBNs

  9. The deep belief network vision (Hinton) • Consider some sense data D • We imagine our goal is to construct a generative model of the process that generated it • Search for the most probable ‘cause’ C of the data • The one in which p(D|C)p(C) is greatest • How do we find C? • Minimize contrastive divergence or KL divergence between the distributions of generated (Q) and observed (P) states. • The KL divergence of Q from P is given by: D_KL(P‖Q) = Σ_i P(i) log(P(i)/Q(i)) • For us, P(i) indexes the actual probabilities of states of the world, Q(i) indexes our estimates of the probabilities of these states. [Diagram: Cause → Data]
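A small worked example of this divergence, assuming simple discrete distributions (the function name and the epsilon smoothing are illustrative conveniences):

```python
import numpy as np

def kl_divergence(P, Q, eps=1e-12):
    """D_KL(P || Q) = sum_i P(i) * log(P(i) / Q(i)).

    P: the actual distribution over states of the world
    Q: our model's estimated distribution over those states
    """
    P = np.asarray(P, dtype=float)
    Q = np.asarray(Q, dtype=float)
    return float(np.sum(P * np.log((P + eps) / (Q + eps))))

# A model (Q) that matches the world (P) closely has a small divergence;
# one that misallocates probability mass has a large divergence.
P = np.array([0.7, 0.2, 0.1])
print(kl_divergence(P, np.array([0.65, 0.25, 0.10])))  # small
print(kl_divergence(P, np.array([0.10, 0.25, 0.65])))  # large
```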

  10. ‘Greedy’ layerwise learning of RBMs: first learn H0 based on the input, then learn H1 based on H0, etc. Then ‘fine tune’, says Hinton – but maybe the fine tuning is unnecessary? Stacking RBMs
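A minimal sketch of the greedy stacking procedure, reusing the hypothetical cd1_update() from the earlier RBM sketch; layer sizes, epoch counts, and function names are illustrative, not S&Z’s settings:

```python
import numpy as np

def train_rbm_layer(data, n_hidden, epochs, rng):
    """Train one RBM on `data`; return its learned weights."""
    W = 0.01 * rng.standard_normal((data.shape[1], n_hidden))
    for _ in range(epochs):
        W = cd1_update(W, data, rng)
    return W

def train_stack(data, layer_sizes, epochs=50, rng=None):
    """Greedy stacking: each RBM is trained on the hidden activations
    of the (frozen) layer below -- first H0 from the input, then H1
    from H0, and so on."""
    rng = rng if rng is not None else np.random.default_rng(0)
    weights, x = [], data
    for n_hidden in layer_sizes:
        W = train_rbm_layer(x, n_hidden, epochs, rng)
        weights.append(W)
        x = 1.0 / (1.0 + np.exp(-(x @ W)))  # propagate up to the next layer
    return weights
```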

  11. See it in action: http://www.cs.toronto.edu/~hinton/adi/index.htm

  12. Questions about S&Z • What are the target issues and findings? • How do the authors show that their model addresses the issues and findings? • What is the network architecture, learning algorithm, and training set? • What conclusions might we want to reach from the findings? • E.g., what does the paper say about whether Numerosity sensitivity is an evolved ability? • What are the limitations and remaining questions and how can they be addressed in future research?

  13. S&Z’s Deep RBM network and Learning Algorithm, and Training Data • Greedy layer-wise training using the contrastive-divergence rule: Δw_{ij} = ε(⟨v_i h_j⟩_data − ⟨v_i h_j⟩_recon) • Training set constructed by independently varying: • Numerosity (1:32) • Cumulative area (32*[1:8]) • Then placing blobs with slightly varying sizes randomly within the 30x30 input array • Consider this to be a generative model • We can think of the network as learning to extract it • Numerosity and Area Units capture the 2 independent factors
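A rough sketch of how such a training set might be generated, assuming square blobs and uniform random placement; S&Z’s exact blob shapes and placement constraints differ in detail, so treat this only as an illustration of independently varying numerosity and cumulative area:

```python
import numpy as np

def make_stimulus(numerosity, cum_area, size=30, rng=None):
    """Place `numerosity` square blobs in a size x size binary image.

    Blob areas vary slightly around cum_area / numerosity. Overlap is
    not prevented here, so the realized cumulative area is approximate.
    """
    rng = rng if rng is not None else np.random.default_rng()
    img = np.zeros((size, size), dtype=np.uint8)
    for _ in range(numerosity):
        area = max(1, round(cum_area / numerosity * rng.uniform(0.8, 1.2)))
        side = max(1, int(round(np.sqrt(area))))
        r = rng.integers(0, size - side + 1)
        c = rng.integers(0, size - side + 1)
        img[r:r + side, c:c + side] = 1
    return img

# Independently vary numerosity (1..32) and cumulative area (32*[1:8])
stimulus = make_stimulus(numerosity=8, cum_area=32 * 4)
```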

  14. Stoianov and Zorzi Model and unit analysis results

  15. Basis Functions and Numerosity Detectors

  16. [Figure: 13x13 first-layer “off” detectors (response: logistic of a weighted sum of inputs) and 6x6 second-layer filters (response: logistic of a weighted sum of first-layer activations), where c is a normalization factor and c_max is the largest cumulative area]

  17. S&Z’s Deep RBM network and Learning Algorithm • Greedy layer-wise training using the contrastive-divergence rule: Δw_{ij} = ε(⟨v_i h_j⟩_data − ⟨v_i h_j⟩_recon) • How does the network produce a numerosity judgment?
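One common answer, and roughly the approach taken in the paper, is to train a simple linear read-out on the deepest hidden layer’s activations, e.g., to judge whether a display contains more or fewer items than a reference number. The sketch below assumes precomputed hidden activations H and binary labels; it is a generic logistic-regression read-out, not S&Z’s exact classifier:

```python
import numpy as np

def train_readout(H, labels, epochs=100, lr=0.1):
    """Linear (logistic-regression) read-out for a comparison task.

    H      : (n_samples, n_hidden) deepest-layer activations
    labels : (n_samples,) in {0, 1} for fewer/more than the reference
    """
    w = np.zeros(H.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(H @ w + b)))  # predicted P("more")
        grad = p - labels                        # cross-entropy gradient
        w -= lr * (H.T @ grad) / len(labels)
        b -= lr * grad.mean()
    return w, b
```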

  18. Results of simulation of numerosity judgment task

  19. [Figure: 13x13 first-layer detectors (response: logistic of a weighted sum of inputs) and 6x6 second-layer filters (response: logistic of a weighted sum of first-layer activations), where c is a normalization factor and c_max is the largest cumulative area]

  20. Questions about S&Z • What are the target issues and findings? • How do the authors show that their model addresses the issues and findings? • What is the network architecture, learning algorithm, and training set? • What conclusions might we want to reach from the findings? • E.g., what does the paper say about whether Numerosity sensitivity is learned or innate? • What are the limitations and remaining questions and how can they be addressed in future research?
