Restoring vision to the blind, Part II: What will the patients see?
Gislin Dagnelie, Ph.D.
Lions Vision Research & Rehabilitation Ctr, Wilmer Eye Institute, Johns Hopkins Univ Sch of Medicine
Department of Veterans Affairs Rehabilitation Center, Augusta, GA
April 15, 2005
Lines of attack • Systems engineering (“brute force” or maybe just pragmatic) • Electrode/tissue engineering (“remodeling the interface”) • Likely limitations (space and time) • (Low) vision science/rehab
Spatial limits: retinal rewiring (Robert Marc) • Ultrastructural evidence from donor RP/AMD retinas: • Extensive rewiring of inner retinal cells • Neurite processes spread over long distances (~300 μm) • Glial cells migrate into the choroid • Injected electrical current may spread through the neurite tangle Marc RE, Prog Retin Eye Res 22:607-655 (2003)
Spatial limits: implications of retinal rewiring • Stimulating degenerated retina may be like writing on tissue paper with a fountain pen: • Charge diffusion over distances up to 1° • Phosphenes likely to be blurry (Gaussian blobs), not sharp • Minor effect if electrodes are widely spaced (≥ 2°) • Phosphenes from closely spaced electrodes may overlap/fuse Retinal prosthetic vision may be pretty blurry…
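To make the spatial argument concrete, here is a minimal sketch (my own illustration, not the project's code) that models each phosphene as a Gaussian blob whose width stands in for charge diffusion; the pixel scale, blob sigma, and electrode positions are assumed values.

```python
import numpy as np

DEG_PER_PIXEL = 0.05      # visual degrees per simulation pixel (assumed)
SIGMA_DEG = 0.5           # assumed charge-diffusion radius, giving ~1 deg of spread

def render_phosphenes(activations, positions_deg, field_deg=10.0):
    """Sum one Gaussian blob per electrode into a simulated percept image."""
    n = int(field_deg / DEG_PER_PIXEL)
    y, x = np.mgrid[0:n, 0:n] * DEG_PER_PIXEL
    percept = np.zeros((n, n))
    for a, (ex, ey) in zip(activations, positions_deg):
        percept += a * np.exp(-((x - ex) ** 2 + (y - ey) ** 2) / (2 * SIGMA_DEG ** 2))
    return np.clip(percept, 0.0, 1.0)

# Electrodes ~1 deg apart produce blobs that fuse into one blurry patch;
# at >= 2 deg spacing the two phosphenes stay separable.
fused    = render_phosphenes([1.0, 1.0], [(4.5, 5.0), (5.5, 5.0)])
separate = render_phosphenes([1.0, 1.0], [(3.0, 5.0), (7.0, 5.0)])
```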
Temporal limits: persistence (Humayun et al.) • Single electrode, acute testing: • Flicker fusion occurs at 25-40 Hz • Multi-electrode implant testing: • Rapid changes are hard to detect • Flicker fusion at lower frequency? Maybe prosthetic vision will be not just blurry, but also streaky…
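A simple way to picture this persistence is a leaky temporal integrator; the sketch below is illustrative only, and the time constant is an assumed value rather than a figure from the acute testing.

```python
import numpy as np

def persist(frames, tau_ms=150.0, frame_ms=33.3):
    """Leaky-integrator persistence: each percept frame decays into the next."""
    decay = np.exp(-frame_ms / tau_ms)
    trace = np.zeros_like(frames[0], dtype=float)
    out = []
    for f in frames:
        trace = decay * trace + (1.0 - decay) * f
        out.append(trace.copy())
    return out

# A one-frame flash keeps "glowing" for several frames, smearing rapid changes.
flash_sequence = [np.zeros((8, 8))] * 3 + [np.ones((8, 8))] + [np.zeros((8, 8))] * 6
smeared = persist(flash_sequence)
```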
And then there is background noise: many blind RP patients see “flashes” like this…
…so reading with a (high-resolution) retinal prosthesis may look like this…
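These spontaneous flashes can be approximated in simulation by adding random, image-independent dot activations on top of the pixelized frame; the sketch below is a rough illustration with an assumed flash probability.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_flashes(dot_frame, flash_prob=0.02, flash_gain=1.0):
    """Turn a random subset of dots on, independent of the camera image."""
    flashes = (rng.random(dot_frame.shape) < flash_prob) * flash_gain
    return np.clip(dot_frame + flashes, 0.0, 1.0)

blank = np.zeros((16, 16))
noisy = add_flashes(blank)   # even an empty scene shows spontaneous sparkles
```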
Caution It is naïve to expect that we will implant a retinal prosthesis, turn on the camera, and just send the patient home to practice
Lines of attack • Systems engineering (“brute force” or maybe just pragmatic) • Electrode/tissue engineering (“remodeling the interface”) • Likely limitations (space and time) • (Low) vision science/rehab
Developing an implantable prosthesis • How does it work? • Why should it work? • What did blind patients see in the OR? • What do the first implant recipients tell us? • What could the future look like? • What’s up next?
Simulation techniques • “Pixelized” images shown to normally-sighted and low vision observers wearing video headset • Images are gray-scale only, no color • Layout of dots in crude raster, similar to (current and anticipated) retinal implants • Subject scans raster across underlying image through: • Mouse/cursor movement, or • Head movement (camera or head tracker)
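The core pixelization step is straightforward to sketch; the helper below (function name and default parameters are mine, not the lab's headset software) block-averages a frame to a dot grid, quantizes the gray levels, and drops a random fraction of dots.

```python
import numpy as np

def pixelize(image, grid=16, gray_levels=8, dropout=0.0, rng=None):
    """Reduce a gray-scale image (values in 0..1) to a coarse dot raster."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    bh, bw = h // grid, w // grid
    # Average each block down to a single dot value.
    dots = image[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw).mean(axis=(1, 3))
    # Quantize to the requested number of gray levels (no color).
    dots = np.round(dots * (gray_levels - 1)) / (gray_levels - 1)
    # Simulate non-functional electrodes by switching off a random subset of dots.
    dots[rng.random(dots.shape) < dropout] = 0.0
    return dots

frame = np.random.rand(256, 256)                     # stand-in for one video frame
raster = pixelize(frame, grid=16, gray_levels=4, dropout=0.3)
```

Grid size, gray levels, and dropout map directly onto the parameters varied in the experiments described next.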
Performance under “idealized” conditions Subjects performed the following tasks: • Use live video images to perform “daily activities” • Walk around an office floor • Discriminate a face in a 4-alternative forced-choice task • Read meaningful text
Face identification: Methods • 4 groups (M/F, B/W) of 15 models (Y/M/O, 5 each) • Face width 12° • Parameters (varied one by one from standard): • Dot size: 23-78 arcmin • Gap size: 5-41 arcmin • Grid size: 10×10, 16×16, 25×25, 32×32 • Random dropout: 10%, 30%, 50%, 70% • Gray levels: 2, 4, 6, 8 • Tests performed at 98% and 13% contrast • Each parameter combination presented 6 times • Data from 4 normally-sighted subjects
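For bookkeeping, this one-parameter-at-a-time design can be laid out as a trial list; the sketch below uses the levels quoted above, listing only the stated endpoints for dot and gap size (the intermediate levels are not given), and the trial-record fields are my own naming.

```python
from itertools import product

# Parameter levels from the slide; dot and gap sizes list only the stated endpoints.
parameters = {
    "dot_size_arcmin": [23, 78],
    "gap_size_arcmin": [5, 41],
    "grid_size": ["10x10", "16x16", "25x25", "32x32"],
    "dropout": [0.10, 0.30, 0.50, 0.70],
    "gray_levels": [2, 4, 6, 8],
}
CONTRASTS = [0.98, 0.13]
REPEATS = 6          # each parameter combination presented 6 times
CHANCE = 0.25        # 4-alternative forced choice

# One parameter is varied at a time from the standard condition, at both contrasts.
trials = [
    {"vary": name, "value": value, "contrast": contrast, "repeat": rep}
    for name, levels in parameters.items()
    for value, contrast, rep in product(levels, CONTRASTS, range(REPEATS))
]
```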
Face identification: Summary • Performance well above chance, except for: • large dots and/or gaps (i.e., fewer than 6 cycles per face width) • small grid or small dots (grid covering < 0.5 face width) • >50% drop-out • <4 gray levels • Low contrast does not seriously reduce performance • Significant between-subject variability (unfamiliar task?)
Reading test: Methods • Novel, meaningful text; grade 6 level • Scored for reading rate and accuracy • Font size 31, 40, 50, 62 points (2-4° characters) • Parameters (varied separately from standard): • Dot size: 23-78 arcmin • Gap size: 5-41 arcmin • Grid size: 10×10, 16×16, 25×25, 32×32 • Random dropout: 10%, 30%, 50%, 70% • Gray levels: 2, 4, 6, 8 • Tests performed at 98% and 13% contrast
Reading test: Summary • Reading adequate, but drops off for: • Small fonts (<6 dots/char) • Small grid (plateau beyond 25X25 dots) • >30% drop-out (esp. low contrast) • Note: even 2 gray levels adequate • Low contrast reduces performance, but reading still adequate • Much less intersubject variability than for face identification (familiar task?)
Introducing Virtual Reality • Flexible tasks: • Object and maze properties can be varied “endlessly” • Difficulty level can be adjusted (even automatically) • Precise response measures: • Subjects’ actions can be logged automatically • Constant response criteria can be built in • It’s safe!
Virtual mobility task • Ten different “floor plans” in a virtual building • Pixelized and stabilized view, 6×10 dots • Drop-out percentage and dynamic noise varied • Use cursor keys to maneuver through 10 rooms
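A stripped-down version of such a task might look like the sketch below; the floor plan, noise model, and function names are illustrative assumptions, not the actual virtual-environment code.

```python
import numpy as np

rng = np.random.default_rng(1)
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(position, key, floor_plan):
    """Move one cell with the cursor keys; cells marked 1 are walls."""
    dr, dc = MOVES[key]
    r, c = position[0] + dr, position[1] + dc
    return (r, c) if floor_plan[r, c] == 0 else position

def degrade_view(view_6x10, dropout=0.3, noise_sd=0.1):
    """Apply electrode dropout plus dynamic, per-frame noise to the 6x10 dot view."""
    noisy = view_6x10 + rng.normal(0.0, noise_sd, view_6x10.shape)
    noisy[rng.random(view_6x10.shape) < dropout] = 0.0
    return np.clip(noisy, 0.0, 1.0)

floor_plan = np.zeros((12, 12), dtype=int)
floor_plan[[0, -1], :] = 1          # outer walls
floor_plan[:, [0, -1]] = 1
position = step((5, 5), "right", floor_plan)
view = degrade_view(np.random.rand(6, 10), dropout=0.3)
```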
Prosthetic vision simulations: Visual inspection/coordination • Playing checkers: a challenge for visually guided performance
Introducing Eye Movements • Until now, free viewing conditions: • Subject can scan eye across dot raster • Mouse or camera movement used to scan raster across scene • Electrodes will be stabilized on the retina: • When the eyes move, dots move along • Mouse or camera used to move scene “behind” dots • Tough task!
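The difference between the two viewing conditions can be expressed in a few lines; in the sketch below (names and coordinate handling are my simplifications), the scene content under the dots depends only on camera/mouse position, while the gaze-locked condition merely re-centers the raster on the measured gaze.

```python
import numpy as np

def raster_screen_position(gaze_xy, fixed_xy=(320, 240), gaze_locked=True):
    """Where the dot raster is drawn on the display for the current frame."""
    # Free viewing: the raster stays put, so the subject can scan it with the eyes.
    # Gaze-locked: the raster is re-centered on gaze every frame, so the dots travel
    # with the eye and only mouse/camera movement brings new scene content under them.
    return gaze_xy if gaze_locked else fixed_xy

def scene_under_raster(scene, camera_xy, window=64):
    """Scene content mapped onto the dots; it depends on the camera, not on gaze."""
    x, y = camera_xy
    return scene[y:y + window, x:x + window]

scene = np.random.rand(480, 640)
patch = scene_under_raster(scene, camera_xy=(200, 100))
where = raster_screen_position(gaze_xy=(300, 260), gaze_locked=True)
```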
Video pair: Face identification task, free-viewing vs. gaze-locked
Face identification, free-viewing vs. gaze-locked: Learning (FV = free viewing, FX = fixation controlled)
Prosthetic vision simulations: Low Vision Science • Reading with pixelized vision, stabilized vs. free-viewing: • Accuracy falls off a little sooner, and reading rate is 5× lower, BUT • Spatial processing properties (dots per character width and characters per window drop-off) do not change • At low contrast, window restriction is more severe (not shown)
Prosthetic vision simulations: Rehabilitation • Learning makes all the difference: • Accuracy increases over time, both for high and for low contrast • Reading speed increases over time, for high and low contrast • Stabilized reading takes longer to learn, but improves relative to free viewing, both in accuracy and speed
So what’s the use of simulations? Simulating prosthetic vision can help in: • Determining requirements for vision tasks • Exploring and understanding wearers’ reports • Helping to find solutions for wearers’ problems • Conveying the “prosthetic experience” to clinicians and public AND: • Designing rehabilitation programs to help future prosthesis recipients
Functional prosthetic vision: How far off? • Our subjects perform quite well with 16×16 (or more) electrodes • They can learn to perform most tasks with 6×10 • They can learn to avoid obstacles with 4×4 • Typical daily living activities will require larger numbers of electrodes (at least 10×10), and intensive rehabilitation