Optimal Therapy After Stroke: Insights from a Computational Model
Cheol Han, June 12, 2007
What is rehabilitation? • Behavioral compensation • Use the unparalyzed arm instead of the paralyzed arm • Develop an alternative strategy • Neural recovery • Use the paralyzed arm so that performance approaches that of a healthy person • Exploit neuroplasticity to recruit peri-lesion neurons
Learned non-use Dr. Taub (1966): "Right after a stroke, a limb is paralyzed. Whenever the person tries to move an arm, it simply doesn't work. Even when not all of the cells that represent the arm in the brain are dead, the patient, expecting failure, stops trying to move it. We call it learned non-use." (from http://www.mult-sclerosis.org/news/Aug2001/RehabTherapy.html)
The first question: Is "learned non-use" a myth or a reality? • Hypothesis (learned non-use is a myth): The arm is used less because of lower performance after stroke, but use remains roughly proportional to performance. • Alternative hypothesis (learned non-use is a reality): The percentage of spontaneous hand use is very small even when performance is non-zero.
Learned non-use " How to define or measure Motor performance? Another possible explanation From Dr Gordon and Dr Winstein % of spontaneous hand use Learned Non-use Motor Performance
The second question: How do we find an optimal rehabilitation schedule? • Rehabilitation programs are expensive • The optimal duration of rehabilitation may differ when • the speed of learning differs • the size of the stroke differs • and so on. • One-size-fits-all rehabilitation is not cost-efficient. • -> Optimal therapy tailored to the individual.
Approach • Find the "optimal therapy schedule" using a SIMPLE computational model that has TWO components: • 1. Motor cortex for arm reaching • Motor learning and re-learning • Motor lesion due to stroke • Error-based learning • 2. Adaptive spontaneous arm use • "Action chooser" • Reward-based learning
Error-driven learning vs. reward-driven learning • Error-driven learning (supervised learning): feedback is a specific error. • "Therapist": Your initial direction is off 20 degrees to the left, and your final hand position is 5 cm to the left of the target. • Specifies how much, and in which direction, the patient should update. • Reward-driven learning (reinforcement learning): feedback is an overall grade only (a reward). • "Therapist": Your movement was better than before! Great, you are making progress. • Tells the patient only whether the movement was good or not. • A toy contrast of the two update rules is sketched below.
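To make the contrast concrete, here is a minimal, illustrative sketch (not the model's actual update rules): a supervised learner steps directly along a signed error, while a reinforcement learner only keeps exploratory perturbations that improve a scalar reward. The learning rates, noise level, and trial counts are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)
target = 0.0   # desired reach direction (radians)
lr = 0.2       # learning rate (assumed value)

# Error-driven (supervised): the "therapist" reports a signed,
# directional error, so each update steps straight toward the target.
aim = 0.5
for _ in range(20):
    error = target - aim
    aim += lr * error

# Reward-driven (reinforcement): only a scalar "better or worse" signal,
# so the learner must explore and keep perturbations that raise reward.
aim_rl = 0.5
for _ in range(200):
    trial = aim_rl + rng.normal(0.0, 0.1)        # exploratory perturbation
    if -abs(target - trial) > -abs(target - aim_rl):
        aim_rl = trial                           # keep what improved reward
```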
Experimental setup for simulation • Each hand starts at the same position. • Reach to a randomly selected target (all targets equidistant). • Two conditions after stroke • Free choice (no rehabilitation) • Rehabilitation: forced use of the affected arm in all directions ("constraint-induced therapy"). • A protocol sketch follows below.
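The sketch below shows how the two conditions could be simulated. The stand-in functions, noise levels, and trial count are assumptions; the actual motor cortex and action chooser are described on the following slides.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for the components detailed on later slides (assumed values).
def spontaneous_choice(theta):
    return 'affected' if rng.random() < 0.5 else 'unaffected'

def reach_error(arm, theta):
    noise = 0.4 if arm == 'affected' else 0.1    # assumed noise levels
    return abs(rng.normal(0.0, noise))

def run_condition(condition, n_trials=500):
    """Free choice: the model picks an arm. Rehabilitation: the affected
    arm is forced for every target (constraint-induced therapy)."""
    errors = []
    for _ in range(n_trials):
        theta = rng.uniform(0, 2 * np.pi)        # random equidistant target
        arm = 'affected' if condition == 'rehab' else spontaneous_choice(theta)
        errors.append(reach_error(arm, theta))
    return float(np.mean(errors))

print(run_condition('free'), run_condition('rehab'))
```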
Motor cortex model: simplifying assumptions • Assumption 1: The motor cortex has directional coding neurons with signal-dependent noise (Georgopoulos et al., 1982; Reinkensmeyer, 2003). • Todorov (2000) showed with a simple model that directional coding is correlated with muscle movements. • Assumption 2: Stroke lesions part of the preferred-direction coding. • Based on Beer et al.'s data. • Assumption 3: Rehabilitation retunes the preferred directions of the remaining cells. • Li et al.'s (2001) data showed that the directional tuning of muscle EMG is retuned during motor training. • Following Todorov's (2000) idea above, retuning of the directional tuning of muscle EMG (Li et al., 2001) may be interpreted as retuning of the directional tuning of motor cortex neurons.
Each neuron in the motor cortex has directional coding (Georgopoulos et al., 1982)
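A minimal sketch of the cosine directional tuning assumed here; the baseline, gain, and cell count are illustrative parameters, not values from the talk.

```python
import numpy as np

def cosine_activation(theta, preferred_dirs, baseline=0.0, gain=1.0):
    """Georgopoulos-style cosine tuning: a cell fires most when the
    movement direction matches its preferred direction."""
    return baseline + gain * np.cos(theta - preferred_dirs)

# 100 cells with preferred directions spread evenly around the circle
preferred_dirs = np.linspace(0, 2 * np.pi, 100, endpoint=False)
rates = cosine_activation(np.pi / 4, preferred_dirs)
```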
Population coding is a vector sum of each neuron's activation (Georgopoulos et al., 1986)
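Continuing the sketch above, the population vector decodes the movement direction as the rate-weighted vector sum of the cells' preferred directions; the cell count is an assumed value.

```python
import numpy as np

def population_vector(rates, preferred_dirs):
    """Decode the movement direction as the rate-weighted vector sum of
    the cells' preferred directions (Georgopoulos et al., 1986)."""
    vx = np.sum(rates * np.cos(preferred_dirs))
    vy = np.sum(rates * np.sin(preferred_dirs))
    return np.arctan2(vy, vx)

preferred_dirs = np.linspace(0, 2 * np.pi, 100, endpoint=False)
rates = np.clip(np.cos(np.pi / 4 - preferred_dirs), 0, None)  # cosine-tuned
print(population_vector(rates, preferred_dirs))               # ~ pi/4
```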
Stroke impairs movements in part of the workspace. Thin line: unaffected arm; solid line: affected arm. (RF Beer, JPA Dewald, ML Dawson, WZ Rymer, 2004, Exp Brain Res)
Motor learning induces changes in the directional tuning of muscle EMG (Li et al., Neuron, 2001)
Motor cortex model • Cosine coding extended with signal-dependent noise (Reinkensmeyer, 2003). • Each cell has its own preferred direction. • Same activation rule as in Georgopoulos et al. • Stroke lesions preferred directions with an equal (uniform) distribution. • The number of surviving cells affects the motor variance. • A sketch of this model follows below.
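A minimal sketch of this cortex model, assuming a multiplicative form for the signal-dependent noise; the class name, cell count, noise scale, and lesion fraction are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

class MotorCortex:
    """Sketch of the cosine-coding cortex with signal-dependent noise."""

    def __init__(self, n_cells=500, noise_scale=0.2):
        # each cell's preferred direction, initially uniform on the circle
        self.pd = rng.uniform(0, 2 * np.pi, n_cells)
        self.alive = np.ones(n_cells, dtype=bool)
        self.noise_scale = noise_scale

    def lesion_uniform(self, fraction):
        """Kill a random fraction of cells, independent of preferred
        direction (the 'equal distribution' stroke of this slide)."""
        self.alive &= rng.random(self.pd.size) >= fraction

    def reach(self, theta):
        """Population-vector output with signal-dependent noise:
        fewer surviving cells means a noisier, more variable reach."""
        pd = self.pd[self.alive]
        rates = np.clip(np.cos(theta - pd), 0.0, None)
        rates = rates + rng.normal(0.0, self.noise_scale * rates)
        return np.arctan2(np.sum(rates * np.sin(pd)),
                          np.sum(rates * np.cos(pd)))

m = MotorCortex()
m.lesion_uniform(0.5)
print(m.reach(np.pi / 4))   # noisier than before the lesion
```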
Supervised learning in the motor cortex • We extended the model with a different simulation of the stroke and of the learning process. • Stroke lesions preferred directions with an unequal distribution. • Rehabilitation retunes the preferred directions of the remaining cells. • How are the preferred directions retuned? The activation rule is combined with error-driven (supervised) learning, as sketched below.
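Continuing the MotorCortex sketch above, here is one way the extension could look: a directional (unequal) lesion plus an error-driven retuning rule. The rule's form and the learning rate are assumptions, not the talk's actual equations.

```python
import numpy as np

def directional_lesion(cortex, center, width):
    """The extended model's unequal stroke: kill cells whose preferred
    direction lies within `width` of `center` (radians)."""
    d = np.angle(np.exp(1j * (cortex.pd - center)))   # wrapped difference
    cortex.alive &= np.abs(d) > width / 2

def retune(cortex, theta_target, theta_actual, lr=0.05):
    """Error-driven (supervised) retuning of the surviving cells."""
    error = np.angle(np.exp(1j * (theta_target - theta_actual)))  # signed
    pd = cortex.pd[cortex.alive]
    activity = np.clip(np.cos(theta_target - pd), 0.0, None)
    # cells most active for this target shift their tuning the most
    cortex.pd[cortex.alive] = pd + lr * error * activity
```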
Action chooser: action value • An "action value" is the expected cumulative sum of rewards obtained by performing a specific action. • Here, for each target, we have two action values: one for the left arm, V_L(theta), and one for the right arm, V_R(theta). • The arm selected is the one corresponding to the higher value. • Three types of rewards (sketched below): • 1. Directional reward (a transformation of the directional error) • 2. Reward for workspace efficiency • Use of the right arm in the right-hand workspace is rewarded • Use of the left arm in the left-hand workspace is rewarded • 3. Possible learned non-use negative rewards (punishments)
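A minimal sketch of a composite reward with these three components and of a running action-value update; the weights, functional forms, and learning rate are illustrative assumptions.

```python
import numpy as np

def reward(directional_error, arm, theta_target,
           w_dir=1.0, w_space=0.5, nonuse_penalty=0.0):
    """Composite reward with the three components named on this slide."""
    r = -w_dir * abs(directional_error)          # 1. directional reward
    on_right_side = np.cos(theta_target) > 0
    if (arm == 'right') == on_right_side:        # 2. workspace efficiency
        r += w_space
    return r - nonuse_penalty                    # 3. optional punishment

def update_value(V, target_idx, r, lr=0.1):
    """Move the per-target action value toward the latest reward."""
    V[target_idx] += lr * (r - V[target_idx])

V_L = np.zeros(8)                                # e.g. 8 targets
update_value(V_L, 0, reward(0.1, 'left', np.pi))
```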
Action chooser: probabilistic selection • Based on the action values, the arm used to generate the movement is selected probabilistically (see the sketch below). • The probabilistic formulation implies a competition between the two arms.
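One standard way to realize this is a two-action softmax; the temperature parameter beta is an assumed value, not from the talk.

```python
import numpy as np

rng = np.random.default_rng(3)

def choose_arm(v_left, v_right, beta=5.0):
    """Two-action softmax: the probability of choosing the left arm
    grows with its value advantage. beta sets how deterministically
    the higher-valued arm wins the competition."""
    p_left = 1.0 / (1.0 + np.exp(-beta * (v_left - v_right)))
    return 'left' if rng.random() < p_left else 'right'
```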
Preferred direction redistribution [Figure: distributions of preferred directions shown initially, after stroke (with the affected range marked), and after rehabilitation, for the free-choice and rehabilitation conditions]
Future work • Model "learned non-use" by modeling "expected failures" (adding negative rewards). • Motor cortex model • More realistic lesions • Unsupervised learning to account for spontaneous recovery • Mapping the direction coding to the muscle coding • Experiments with stroke subjects • Using the new VR system • Updating the model parameters based on real data
Acknowledgements • Dr Arbib • Dr Schweighofer • Dr Winstein • Jimmy Bonaiuto