


Presentation Transcript


  1. Subject wearing a VR helmet, immersed in the virtual environment shown on the right, with obstacles and targets. Subjects follow the path, avoid the obstacles, and touch the targets.

  2. Top-down view of the experiment. A sequence of fixations is shown, directed at the path (green), obstacles (red), and targets. The subject’s path is shown by the thin red line and shows two instances of avoidance of an obstacle (pink). In both cases avoidance occurs while the subject is fixating the next target (blue).

  3. General question: How is vision used when walking? 1. How far away is the subject when the obstacle is fixated? (Choose the end of the fixation.) 2. Does this distance vary for chairs vs. people vs. bins? Why? 3. Are some obstacles not fixated at all? Why? 4. What location on the obstacle is fixated? E.g., is it the edge closest to the body? 5. Is there any change in gaze behavior as the path becomes more familiar? Why?
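
Question 1 can be answered directly from the logged data. Below is a minimal analysis sketch in Python, assuming head position and obstacle positions are recorded in shared ground-plane (x, z) coordinates; the field names are placeholders, not the lab's actual log format.

```python
import numpy as np

# Hypothetical analysis sketch for question 1: distance to the obstacle
# at the end of each fixation. Field names are placeholders.

def distance_at_fixation_end(fixations, head_track):
    """fixations: list of dicts with 't_end' (s) and 'obstacle_xz' (x, z).
    head_track: dict with 't' (timestamps) and 'xz' (N x 2 positions)."""
    distances = []
    for fix in fixations:
        # Head position at the moment the fixation ends (linear interpolation).
        hx = np.interp(fix["t_end"], head_track["t"], head_track["xz"][:, 0])
        hz = np.interp(fix["t_end"], head_track["t"], head_track["xz"][:, 1])
        ox, oz = fix["obstacle_xz"]
        distances.append(np.hypot(ox - hx, oz - hz))
    return np.array(distances)
```

Grouping these distances by obstacle type (chairs vs. people vs. bins) then answers question 2 directly.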

  4. A basic visual task: walking

  5. Where do we look when we walk? Why do we look there?
  • Suppose that complex behavior can be broken down into simpler sub-tasks.
  • What sub-tasks can be identified when walking?
    - Control heading
    - Avoid obstacles
    - Update location
    - Remember surroundings
  • Can we identify the function of the fixations made while walking?
  • Each sub-task can be accomplished either by bottom-up capture of attention or by top-down search for relevant visual information. The top-down factors reflect learning.
  • In this lab we would like to:
    - Describe gaze behavior as a first step to understanding how vision is used.
    - See if we can find evidence for learning where obstacles are. Such learning is critical for top-down control.

  6. Looming – a potential bottom-up cue that attracts gaze. Neurons in area MT are sensitive to looming stimuli.
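
To make the looming cue concrete: for an object approaching head-on, the ratio of its angular size to its rate of expansion approximates time-to-contact (Lee's tau). A minimal sketch with illustrative numbers, not data from this experiment:

```python
import numpy as np

# Time-to-contact from looming: tau = theta / (d theta / dt).
# Numbers below are illustrative, not measurements.

def time_to_contact(theta, theta_dot):
    """Estimated seconds to contact for a directly approaching object."""
    return theta / theta_dot

r, d, v = 0.25, 5.0, 1.4                 # object radius (m), distance (m), speed (m/s)
theta = 2 * np.arctan(r / d)             # angular size on the retina (rad)
theta_dot = 2 * r * v / (d**2 + r**2)    # expansion rate for this geometry (rad/s)
print(f"tau ~ {time_to_contact(theta, theta_dot):.2f} s, true d/v = {d / v:.2f} s")
```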

  7. Neural Circuitry for Saccades. (Diagram labels: motion-sensitive area MT, target selection, saccade decision, saccade command, inhibits SC (superior colliculus), substantia nigra pc (dopamine), signals to muscles.)

  8. Optic Flow – the pattern of motion on the retina when walking

  9. Optic Flow might be used for controlling heading direction. This is commonly thought to be a bottom-up mechanism.
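
As an illustration of how heading could be recovered bottom-up: under pure forward translation, flow vectors radiate from the focus of expansion (FOE), which marks the heading point. The sketch below recovers the FOE from synthetic flow by least squares; it is a toy illustration, not a claim about the visual system's algorithm.

```python
import numpy as np

# Each flow vector (u, v) at image point (x, y) must be parallel to
# (x, y) - FOE under pure translation; that gives one linear constraint
# per point, solved here in a least-squares sense on synthetic data.

def estimate_foe(points, flows):
    """points: N x 2 image positions; flows: N x 2 flow vectors."""
    x, y = points[:, 0], points[:, 1]
    u, v = flows[:, 0], flows[:, 1]
    A = np.column_stack([v, -u])     # parallelism constraint: v*fx - u*fy = v*x - u*y
    b = v * x - u * y
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(200, 2))
true_foe = np.array([0.2, -0.1])
flow = (pts - true_foe) + rng.normal(0, 0.01, size=pts.shape)  # noisy radial flow
print(estimate_foe(pts, flow))       # ~ [0.2, -0.1]
```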

  10. How Gaze Patterns Are Learned: Neuroeconomics

  11. Fixation on Collider

  12. Learning to Adjust Gaze • Changes in fixation behavior are fairly fast, happening over 4-5 encounters (fixations on the Rogue get longer, on the Safe shorter).

  13. Shorter Latencies for Rogue Fixations • Rogues are fixated earlier after they appear in the field of view. This change is also rapid.

  14. Neural Substrate for Learning. Neurons at all levels of the saccadic eye movement circuitry are sensitive to reward. Neurons in the substantia nigra pc in the basal ganglia release dopamine. These neurons signal expected reward. This provides the neural basis for learning gaze patterns in natural behavior, and for modeling these processes using Reinforcement Learning.

  15. Neural Circuitry for Saccades. (Diagram labels: planning movements, target selection, saccade decision, saccade command, inhibits SC, substantia nigra pc, signals to muscles; substantia nigra pc modulates the caudate.)

  16. Neurons at all levels of saccadic eye movement circuitry are sensitive to reward.
  • LIP: lateral intraparietal cortex. Neurons involved in initiating a saccade to a particular location have a bigger response if reward is bigger or more likely.
  • SEF: supplementary eye fields
  • FEF: frontal eye fields
  • Caudate nucleus in the basal ganglia

  17. This provides the neural basis for learning gaze patterns in natural behavior, and for modeling these processes using Reinforcement Learning (e.g., Sprague, Ballard, & Robinson, 2007).

  18. Modelling Natural Behavior in Virtual Environments. Virtual environments allow direct comparison of human behavior and model predictions in the same natural context. Use Reinforcement Learning models with an embodied agent acting in the virtual environment.
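
As a concrete illustration of the learning rule such models build on, here is a minimal tabular Q-learning sketch. The states, actions, and reward are placeholders; this is not the Sprague et al. model itself, only the core temporal-difference update.

```python
import numpy as np

# Minimal tabular Q-learning for one sub-task (e.g., avoid obstacles).
# States, actions, and the toy environment below are placeholders.

rng = np.random.default_rng(0)
n_states, n_actions = 10, 3
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.9, 0.1    # learning rate, discount, exploration

def step(s, a):
    """Toy environment: random next state; action 1 happens to be rewarded."""
    return rng.integers(n_states), (1.0 if a == 1 else 0.0)

s = rng.integers(n_states)
for _ in range(5000):
    # Epsilon-greedy action selection.
    a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
    s_next, r = step(s, a)
    # TD update: move Q(s, a) toward reward plus discounted future value.
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    s = s_next
```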

  19. Modelling behaviors using virtual agents (Sprague, Ballard, & Robinson, 2007; Rothkopf, 2008). Assume behavior is composed of a set of sub-tasks.

  20. Model agent after learning. Sub-task modules: pick up litter, follow walkway, avoid obstacles.

  21. Controlling the Sequence of Fixations. Choose the task that most reduces the uncertainty of reward. (Figure labels: obs, can, side, i.e., the obstacle, litter, and walkway sub-tasks.)
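
A sketch of this scheduling rule, in the spirit of the model above: for each sub-task, estimate how much expected reward is lost by acting on an uncertain state rather than looking first, then fixate the sub-task with the largest loss. The value tables and beliefs below are random placeholders, not model output.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_loss(Q, state_samples):
    """Reward lost by not looking: value of acting with perfect knowledge
    minus value of the single best action under the uncertain belief.
    Q: n_states x n_actions table; state_samples: draws from the belief."""
    informed = Q[state_samples].max(axis=1).mean()    # look first, then act
    uninformed = Q[state_samples].mean(axis=0).max()  # act on the average belief
    return informed - uninformed

tasks = {name: rng.random((5, 3)) for name in ("obstacle", "can", "sidewalk")}
beliefs = {name: rng.integers(5, size=100) for name in tasks}
losses = {name: expected_loss(tasks[name], beliefs[name]) for name in tasks}
print(max(losses, key=losses.get))   # sub-task that receives the next fixation
```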

  22. Humans walk in the avatar’s environment

  23. Avatar path vs. human path. Reward weights estimated from human behavior.
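
One simple way to estimate such weights, sketched below under loose assumptions: search a grid of sub-task weightings and keep the one whose simulated avatar path lies closest to the recorded human path. The `simulate_path` stand-in for rolling out the learned agent is hypothetical, as is the toy data.

```python
import numpy as np

# Grid-search sketch for reward weights (obstacle, litter, walkway).
# simulate_path is a hypothetical stand-in for rolling out the agent.

def path_error(weights, human_path, simulate_path):
    return np.mean(np.linalg.norm(simulate_path(weights) - human_path, axis=1))

def fit_reward_weights(human_path, simulate_path, grid):
    return min(grid, key=lambda w: path_error(w, human_path, simulate_path))

grid = [(a, b, 1 - a - b) for a in np.arange(0, 1.01, 0.1)
        for b in np.arange(0, 1.01 - a, 0.1)]

# Toy stand-in so the sketch runs: path shape depends on the first weight.
t = np.linspace(0, 1, 50)
human = np.column_stack([t, 0.3 * t])
simulate = lambda w: np.column_stack([t, w[0] * t])
print(fit_reward_weights(human, simulate, grid))   # first weight ~ 0.3
```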

  24. Detection of signs at intersections results from frequent looks (Shinoda et al., 2001). Instructions: “Follow the car.” or “Follow the car and obey traffic rules.” (Figure: time spent fixating each region: road, car, roadside, intersection.)

  25. How well do human subjects detect unexpected events? (Shinoda et al., 2001.) Detection of briefly presented stop signs: at an intersection, P = 1.0; mid-block, P = 0.3. The greater probability of detection in probable locations suggests subjects learn where to attend/look.
