Integrated Episodic and Semantic Memory in Robotics
Steve Furtwangler, sfurtwangler@soartech.com
with Robert Marinier, Jacob Crossman
Introduction
• The robotics domain has some unique challenges
• General patterns and issues we encountered working in robotics
• Specifically, I will talk about:
  • Measuring similarity in semantic memory
  • Using episodic and semantic memory together
  • Required or prohibited query conditions
  • Recreation of state
Using Episodic Memory for Partial Matches
• The agent creates a statistical model of its world
  • The statistics are stored in semantic memory
  • Long-term identifiers are created for each thing we are modeling
  • Statistics are kept on these identifiers
• Sometimes we need to find similar things
  • Semantic memory doesn't support partial matches
  • We decided to leverage episodic memory to do this instead
• Example:
  • If the agent has no (or little) statistical data for this exact situation
  • It can ask if it was ever in a situation like this one
  • If so, it looks up the statistical data for that situation in semantic memory
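The fallback described above can be sketched in Python. This is an illustration of the idea, not SoarTech's implementation: `SITUATION_STATS`, `EPISODES`, and all function names are hypothetical stand-ins for semantic-memory lookups and episodic cue matching.

```python
# Sketch (illustrative, not the talk's code): fall back to the most
# similar past episode when semantic memory has no statistics for the
# exact situation. All names here are hypothetical.

# Semantic-memory analogue: statistics keyed by exact situation features.
SITUATION_STATS = {
    (("road", "high"), ("trees", "low")): {"success": 12, "failure": 2},
}

# Episodic-memory analogue: past situations the agent has been in.
EPISODES = [
    {"road": "high", "trees": "low"},
    {"road": "med", "trees": "high"},
]

def situation_key(situation):
    """Canonical key for an exact semantic-memory lookup."""
    return tuple(sorted(situation.items()))

def match_score(cue, episode):
    """Graded match: how many cue features the episode shares."""
    return sum(1 for k, v in cue.items() if episode.get(k) == v)

def stats_for(situation):
    """Try an exact lookup first; otherwise use the most similar episode."""
    exact = SITUATION_STATS.get(situation_key(situation))
    if exact is not None:
        return exact
    best = max(EPISODES, key=lambda ep: match_score(situation, ep))
    return SITUATION_STATS.get(situation_key(best))

# A novel situation close to a known one reuses that situation's stats.
stats = stats_for({"road": "high", "trees": "med"})
```

The key point is that the graded `match_score` plays the role of episodic memory's partial matching, which semantic memory's exact queries cannot provide.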
Episode Representation - Unique Cues - Cell
[Diagram: a cell in working memory with invariable attributes (x, y) and variable attributes (road, trees, with High/Low values). A cue is built from the invariable attributes; find the LTI with that cue, or create a new one if it is not found.]
Episode Representation - Unique Cues - Path
[Diagram: a path whose cells may have complex, deep working memory structures. Create a one-level-deep cue for the path, using the unique IDs of the cells and the order of the cells.]
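The flat path cue can be sketched as follows. The `@Q`-style strings stand in for Soar long-term identifiers; the dictionary shape is an illustrative assumption, not the deck's exact representation.

```python
# Sketch: build a one-level-deep cue for a path from the unique IDs of
# its cells, rather than matching each cell's deep attribute structure.
# "@Q1" etc. stand in for Soar long-term identifiers (illustrative).

def path_cue(cell_ltis):
    """Flatten an ordered list of cell LTIs into {position: lti}."""
    return {position: lti for position, lti in enumerate(cell_ltis, start=1)}

cue = path_cue(["@Q1", "@Q2", "@Q3"])
# The cue references only the cells' LTIs and their order, so matching
# never has to descend into each cell's attribute structure.
```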
Measuring Similarity
[Diagram: tables of cue match scores for candidate results, with features taking High/Med/Low values and match scores ranging from V.Low to V.High.]
• One dimension doesn't capture similarity
• Adding a second dimension helps
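One way to read the two-dimension point is sketched below: one dimension counts exact feature matches, and a second measures how close non-matching ordinal values are, so "Med vs. High" scores better than "Low vs. High". The scale and weighting here are illustrative assumptions, not the talk's exact scheme.

```python
# Sketch of two-dimensional similarity (illustrative weighting):
# dimension 1 counts exact feature matches; dimension 2 credits
# ordinal closeness between non-matching High/Med/Low values.

ORDINAL = {"low": 0, "med": 1, "high": 2}

def similarity(cue, candidate):
    """Return (exact_matches, closeness) for two feature dicts."""
    exact = 0
    closeness = 0.0
    for feature, value in cue.items():
        other = candidate.get(feature)
        if other == value:
            exact += 1
            closeness += 1.0
        elif other in ORDINAL and value in ORDINAL:
            span = max(ORDINAL.values())
            closeness += 1.0 - abs(ORDINAL[value] - ORDINAL[other]) / span
    return exact, closeness

cue = {"road": "high", "trees": "low"}
near = similarity(cue, {"road": "med", "trees": "low"})
far = similarity(cue, {"road": "low", "trees": "low"})
# Both candidates tie on exact matches (the first dimension);
# the closeness dimension separates "med" from "low" on road.
```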
Episodic and Semantic Memory Conflicts
• The objects in memory are identified in semantic memory
• Some of the attributes on these objects (statistics) change over time
• These long-term identifiers are referenced on the topstate
  • So they show up in episodic memory
• However, when episodic memory recreates the episode
  • It recreates the attributes and values that the LTI had at the time
Example of the Problem
[Diagram: a query retrieves an episode containing LTI @L1 with its old value; the retrieval recreates that old value alongside the current one, so "value" becomes a multivalued attribute on the LTI.]
• Problem: cannot distinguish the value in the episode from the current value
Solution: Long-Term Identifier Usage Pattern
[Diagram: the problem space splits LTI usage between episodic and semantic memory; cues and results reference cell LTIs (e.g. @Q5, @Q6) and invariant attributes (x, y), while success/failure statistics stay in semantic memory.]
• Conceptually, there are two kinds of LTIs
• Statistics are not stored with episodes
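The usage pattern can be sketched as follows: the LTI that appears on the topstate (and hence in episodes) carries only invariant identity, while mutable statistics live in a separate semantic-memory table keyed by that LTI. The dictionary names are hypothetical stand-ins for the Soar structures.

```python
# Sketch of the usage pattern (names hypothetical): episodes capture only
# invariant identity; changing statistics are keyed by LTI in a separate
# semantic-memory table, so retrievals can never create stale values.

# Topstate / episodic side: only invariant identity is recorded.
topstate = {"cell": {"lti": "@Q5", "x": 1, "y": 2}}

# Semantic side: mutable statistics, keyed by LTI, never on the topstate.
semantic_stats = {"@Q5": {"success": 2, "failure": 1}}

def record_outcome(lti, success):
    """Update statistics in semantic memory without touching the topstate."""
    stats = semantic_stats.setdefault(lti, {"success": 0, "failure": 0})
    stats["success" if success else "failure"] += 1

episode_snapshot = dict(topstate["cell"])  # what an episode would capture
record_outcome("@Q5", success=True)
# The snapshot still shows only identity; the current statistics live
# elsewhere, so old and new values can never be confused in a retrieval.
```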
Required (or Prohibited) Query Conditions
• Queries to episodic memory often have two different kinds of conditions
  • Things that have to exist in the episode (or must not exist)
    • This tends to decide if the episode is even relevant or not
  • Things which are optional, but should be as similar as possible
• Example:
  • Query for a similar situation where the agent decided to go right
    • In order to reason about what might happen if I turn right now
  • The result is a situation like the current one, only the agent went left
    • It has to be prohibited, until the agent gets a memory of going right
• Leads to a common pattern…
Solution: Episodic Memory Loop Pattern
[Flowchart: Construct Query → Retrieve Episode → Filter Episode → Continue? — if the episode is rejected, Annotate Input and loop back to Construct Query; otherwise Map Result.]
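The loop pattern can be sketched in Python. `retrieve` is a stand-in for an epmem retrieval that returns the best remaining match; the episode dictionaries and the recency rule are illustrative assumptions.

```python
# Sketch of the loop pattern: construct a query, retrieve the best
# episode, filter it against required/prohibited conditions, and if it
# fails, annotate the input (here: an exclusion set) and retry.

EPISODES = [
    {"id": 1, "situation": "fork", "action": "right"},
    {"id": 2, "situation": "fork", "action": "left"},
]

def retrieve(query, excluded):
    """Stand-in for epmem: most recent match not yet excluded."""
    matches = [ep for ep in EPISODES
               if ep["situation"] == query["situation"]
               and ep["id"] not in excluded]
    return matches[-1] if matches else None

def query_loop(query, keep, max_tries=10):
    """Retrieve, filter with `keep`, annotate and retry on rejection."""
    excluded = set()                      # the "annotate input" step
    for _ in range(max_tries):
        episode = retrieve(query, excluded)
        if episode is None:
            return None                   # no acceptable episode exists
        if keep(episode):                 # the "filter episode" step
            return episode                # "map result" would happen here
        excluded.add(episode["id"])       # reject and continue the loop
    return None

# Require an episode where the agent actually went right: the most
# recent match went left, so it is filtered out and the loop retries.
result = query_loop({"situation": "fork"}, lambda ep: ep["action"] == "right")
```

Note that this filtering is exactly what a built-in require/prohibit query condition would make unnecessary, as the wish list below points out.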
Time Spent Recreating State
• We often create episodic memory queries to answer a specific question
  • "When I was last in this location, what time of day was it?"
• Retrieving the episode creates a lot of WMEs to recreate the whole state
  • "Last time you were at this location, it was a Tuesday, it was raining, your fuel was at 90%… yada yada yada… oh, and it was 5:35pm."
• The time to recreate a state is, in part, based on the size of that state
• We often look at one small piece of that result and throw it all away
  • Causing all of those WMEs to immediately be removed
• The filtering loop may cause this to happen many times
Nuggets
• Reduced instances of repeated failure
  • The agent doesn't do the same dumb thing twice
• Constructed a model of the environment/plans
  • Accuracy of estimations improves with experience
• Incorporated models of similar environments/plans
  • The agent came to useful conclusions for new (untested) plans
Wish List (Coal)
• Partial matching for semantic memory
  • Using episodic memory to achieve this is a hack
• Metric/custom comparison functions
  • Necessary for queries about similarity in space, or to weight features
• Safeguards for episodic memory retrievals of long-term identifiers
  • To reason about what an LTI looked like in the past, as opposed to now
• Require/prohibit queries in episodic and semantic memory
  • Would eliminate the epmem loop pattern used to filter out bad results
• Ability to specify a sub-section of state to retrieve from episodic memory
  • We often only care about a few key WMEs
  • Reconstructing the entire state takes time