
Unit 7: Base-Level Activation



Presentation Transcript


  1. Unit 7: Base-Level Activation February 25, 2003

  2. Activation Revisited
  • Both latency and probability of recall depend on activation
  • Activation equation:
    A_i = B_i + \sum_j W_j S_{ji} + \sum_k P_k M_{ki} + \epsilon
    (base-level activation + spreading activation + partial matching + noise)
  • Base-level learning is turned on with (sgp :bll t)

  3. Base-Level Activation
  • Base-level activation depends on the history of usage of a chunk
  • Memory strength depends on:
    – how recently you used it in the past
    – how much you practiced it

  4. Base-Level Learning
  [Timeline figure: presentations 1, 2, …, k, …, n of a chunk, from time 0 to now]

  5. Base-Level Learning
  [Timeline figure: presentations 1, 2, …, k, …, n of a chunk, from time 0 to now]
  B_i = \ln\left(\sum_{k=1}^{n} t_k^{-d}\right)
  • t_k: time since the k-th presentation of chunk i
  • d: decay parameter, set with (sgp :bll 0.5)
  • Mathematically, this transforms the ages to conform to the functions that are optimal in the environment
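A worked numeric example (hypothetical presentation times, not from the slides): if chunk i was presented 10, 20, and 30 seconds ago, with d = 0.5,

    B_i = \ln(10^{-0.5} + 20^{-0.5} + 30^{-0.5}) = \ln(0.316 + 0.224 + 0.183) = \ln(0.722) \approx -0.33

The most recent presentation contributes the most strength.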

  6. Power Law of Forgetting
  • Strength of memories decreases with time, e.g.:
    – speed to recognize a sentence at various delays
    – number of paired associates that subjects recall
    – people’s ability to recognize the name of a TV show for varying numbers of years after it has been canceled
  • More and more delay produces smaller and smaller losses
  • This is the idea that individual events are forgotten according to a power function
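To see why the losses shrink, consider the strength contribution t^{-d} of a single event with the standard decay d = 0.5:

    t^{-0.5} = 1.0 \text{ at } t = 1, \quad 0.5 \text{ at } t = 4, \quad 0.2 \text{ at } t = 25, \quad 0.1 \text{ at } t = 100

Quadrupling the delay from 1 to 4 costs 0.5 of the strength; the entire stretch from 25 to 100 costs only 0.1.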

  7. p = probability of recall
  • p is a decreasing function of retention time
  • p/(1-p) is a power function of retention time with exponent -d
  • ln(p/(1-p)) is a linear function of ln(retention time) with slope -d
  • Accounts for the fact that each event age (t_k) decays at rate d
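Where the linearity comes from, assuming the standard ACT-R recall-probability equation and a single presentation of age t:

    p = \frac{1}{1 + e^{-(B - \tau)/s}}, \qquad B = \ln(t^{-d}) = -d\,\ln t

    \ln\frac{p}{1-p} = \frac{B - \tau}{s} = -\frac{d}{s}\,\ln t - \frac{\tau}{s}

so the log odds are linear in ln(t) with slope -d/s (slope -d when s = 1).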

  8. Power Law of Learning
  • Memory improves with practice; recall often approaches perfection, but speed keeps increasing with practice even after that
  • This is the idea that the accumulating sum of events is also a power function
  • A proof (omitted on the slide; see the sketch below) shows that this holds for evenly spaced presentations
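A sketch of that omitted proof, assuming the n presentations are spread evenly over the chunk’s lifetime L (so t_k \approx kL/n) and d < 1:

    \sum_{k=1}^{n} t_k^{-d} \approx \sum_{k=1}^{n} \left(\frac{kL}{n}\right)^{-d} = \left(\frac{n}{L}\right)^{d} \sum_{k=1}^{n} k^{-d} \approx \left(\frac{n}{L}\right)^{d} \frac{n^{1-d}}{1-d} = \frac{n\,L^{-d}}{1-d}

The summed strength is a power function of n; taking the log gives exactly the optimized-learning equation of slide 13.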

  9. p = need probability, n = number of occurrences
  • p is an increasing function of n
  • p/(1-p) is approximately a power function of n
  • ln(p/(1-p)) is a linear function of ln(n)
  • Accounts for the sum of all the event ages (t_k’s) contributing

  10. How many t_k’s are there at time 40, 10, and 100? What are they? [This exercise refers to a presentation-time figure that is not reproduced in the transcript]

  11. What Is an Event Presentation?
  • Creating a new chunk, e.g. by a production such as (the chunk type it assumes is sketched below):
    (p my-production
       =goal>
          isa associate
          term1 vanilla
          term2 3
    ==>
       +goal>
          isa associate)
  • Re-creating an old chunk
  • Retrieving and harvesting a chunk
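For the production above to load, a matching chunk type must be defined; a minimal sketch (the slide does not show it):

    (chunk-type associate term1 term2)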

  12. Optimized Learning
  • At each moment when chunk i could potentially be retrieved, ACT-R needs to compute a new B_i → n computations; for each chunk, ACT-R needs to store all the presentation times
  • Optimized learning is a fast approximation → 1 operation per potential retrieval
  • (sgp :ol t)

  13. Optimized Learning Equation
  B_i = \ln\left(\frac{n}{1-d}\right) - d\,\ln L
  • n: number of presentations of chunk i
  • L: the chunk’s lifetime (time since its first presentation)
  [Timeline figure: presentations 1, 2, …, k, …, n of a chunk, from time 0 to now]
  • Optimized learning works when the n presentations are spaced approximately evenly (compare the two equations in the sketch below)
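One way to compare the exact and optimized equations is to code both directly; this is a standalone Common Lisp sketch with hypothetical presentation times, not ACT-R’s internal code:

    ;; Exact base level: B = ln(sum_k t_k^-d); needs every age t_k.
    (defun exact-b (ages d)
      (log (reduce #'+ (mapcar (lambda (tk) (expt tk (- d))) ages))))

    ;; Optimized approximation: B = ln(n/(1-d)) - d*ln(L); needs only n and L.
    (defun optimized-b (n life d)
      (- (log (/ n (- 1 d))) (* d (log life))))

    ;; 20 evenly spaced presentations, one every 10 s (ages 10, 20, ..., 200):
    (let ((ages (loop for k from 1 to 20 collect (* 10 k))))
      (list (exact-b ages 0.5)          ; => ~0.88
            (optimized-b 20 200 0.5)))  ; => ~1.04

The approximation tracks the exact value more closely as n grows, and it drifts when presentations are unevenly spaced.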

  14. [Figure-only slide; no text content in the transcript]

  15. Paired-Associates Example
  • Study and recall word–digit pairs, e.g. vanilla – 3
  • Each digit was used as a response twice
  • 20 paired associates; 8 trials

  16. Paired Associates: Results
  • Accuracy: items fall below the retrieval threshold if they are not rehearsed soon enough
  • Latency: shows the power law of learning

  17. Homework: Zbrodoff’s Experiment
  • True or false? A + 3 = D (true), G + 2 = H (false)
  • Possible addends: 2, 3, or 4
  • Frequency manipulation:
    – Control: each problem × 2
    – Standard: 2-add × 3, 3-add × 2, 4-add × 1
    – Reverse: 2-add × 1, 3-add × 2, 4-add × 3
  • 3 blocks

  18. Zbrodoff’s Data (response times in seconds)

  Control Group (each problem equally frequent)
              Two      Three    Four
  Block 1     1.840    2.460    2.820
  Block 2     1.210    1.450    1.420
  Block 3     1.140    1.420    1.170

  Standard Group (smaller problems more frequent)
              Two      Three    Four
  Block 1     1.840    2.650    3.550
  Block 2     1.080    1.450    1.920
  Block 3     0.910    1.080    1.430

  Reverse Group (larger problems more frequent)
              Two      Three    Four
  Block 1     2.250    2.530    2.420
  Block 2     1.470    1.460    1.110
  Block 3     1.240    1.120    0.870

  19. Tips
  • Compute the addition result when it is not available for retrieval
  • You may add extra effort to the productions that do the computation (articulation): (spp my-production :effort .1)
  • (setallbaselevels <n> <T>)
  • (sgp :ga 0 :pm nil) to turn off spreading activation and partial matching
  • Change the retrieval threshold, latency factor, and noise (a combined sketch follows)
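Putting several tips together, a hypothetical parameter block for the homework model; every value here is a placeholder to tune, not a prescribed setting:

    (sgp :bll 0.5   ; base-level learning, standard decay
         :ol t      ; optimized learning
         :ga 0      ; no spreading activation
         :pm nil    ; no partial matching
         :rt 0.0    ; retrieval threshold
         :lf 1.0    ; latency factor
         :ans 0.5)  ; activation noise
    (spp my-production :effort .1)  ; extra effort for the computing production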

  20. Activation, Latency, and Recall
  Activation: A_i = B_i + \sum_j W_j S_{ji} + \sum_k P_k M_{ki} + \epsilon
  Probability of retrieval: P_i = \frac{1}{1 + e^{-(A_i - \tau)/s}}
  Base level: B_i = \ln\left(\sum_{k=1}^{n} t_k^{-d}\right)
  Retrieval latency: \text{Time} = F e^{-A_i}
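A worked example with illustrative values (A_i = 0.5, \tau = 0, s = 0.4, F = 1; none of these come from the slides):

    P_i = \frac{1}{1 + e^{-(0.5 - 0)/0.4}} = \frac{1}{1 + e^{-1.25}} \approx 0.78, \qquad \text{Time} = 1 \cdot e^{-0.5} \approx 0.61 \text{ s}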
