Aren’t agents just objects? • There is a tendency to think of objects as “actors” and endow them with human-like intentions and abilities. It’s tempting to think about objects “deciding” what to do about a situation, “asking” other objects for information … Objects are not passive containers for state and behavior, but are said to be the agents of a program’s activity. (NeXT Computer Inc., 1993, p. 7)
Locus of Control • Objects • Exhibit autonomy over private state • Exhibit no control over their behavior • The decision lies with the object that invokes the method • Agents • Exhibit autonomy over both state and behavior • No guarantee that an agent will perform an action that another agent wants it to perform (say, if it’s not in the requestee’s best interest) • The decision lies with the agent that receives the request • Objects do it for free, agents do it because they want to • What about building the decision on whether or not to execute a method into the method itself?
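The contrast above can be sketched in code. In this hypothetical example (the printer classes and the toner rule are invented purely for illustration), the object's method always runs when invoked, while the agent weighs a request against its own state before agreeing to act:

```python
class PrinterObject:
    """An object: the caller decides; the invoked method always runs."""
    def print_document(self, doc):
        return f"printed {doc}"


class PrinterAgent:
    """An agent: the receiver decides whether to act at all."""
    def __init__(self, toner_level):
        self.toner_level = toner_level  # private state the agent controls

    def request_print(self, doc):
        # The agent weighs the request against its own interests
        # (here: preserving toner) before agreeing to act.
        if self.toner_level < 10:
            return "request refused: toner too low"
        self.toner_level -= 1
        return f"printed {doc}"


obj = PrinterObject()
agent = PrinterAgent(toner_level=5)
print(obj.print_document("report"))   # objects do it "for free"
print(agent.request_print("report"))  # agents do it if they want to
```

The decision logic lives inside `request_print` itself, which is exactly the question the slide raises: moving the decision about whether to execute into the method.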
Flexible Behavior • Agents • Reactive • Deliberative • Social • Reflective • Objects • The OOP model has nothing to do with this kind of flexible behavior • What about implementing such behavior using an object-oriented language?
Thread of Control • Agents have their own thread of control • The standard OOP model doesn’t require that • What about OO languages that support concurrent programming (e.g., Java)? • The OOP concept that comes closest to agents is that of “Active Objects” • Encapsulate their own thread of control • Generally autonomous, as they can exhibit some behavior without being operated on by other objects • They are essentially agents without the ability for flexible autonomous behavior
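A rough sketch of an active object, assuming a message-queue design (the `ActiveCounter` class and its message names are invented for illustration): the object encapsulates its own thread of control, and callers post asynchronous requests instead of executing the object's behavior directly.

```python
import queue
import threading


class ActiveCounter:
    """A minimal 'active object': it owns a thread that consumes
    requests from an inbox, so callers never run its behavior."""

    def __init__(self):
        self.value = 0
        self._inbox = queue.Queue()
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def _run(self):
        # The object's own thread of control: handle one request at a time.
        while True:
            msg = self._inbox.get()
            if msg == "stop":
                break
            if msg == "increment":
                self.value += 1

    def send(self, msg):
        # Callers post messages; execution happens in the object's thread.
        self._inbox.put(msg)

    def stop(self):
        self.send("stop")
        self._thread.join()


counter = ActiveCounter()
for _ in range(3):
    counter.send("increment")
counter.stop()
print(counter.value)  # 3
```

Note what is missing compared to an agent: the counter processes every request it receives; it has no deliberation step that could decide a request is not worth honoring.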
Agent UML • Extending UML • Agents are active / autonomous • Agents are social • Interaction Diagrams (sequence & collaboration diagrams) • Not suitable for describing complex social interactions • State Diagrams • Not appropriate for describing the behavior of a group of collaborating entities • Protocol Diagrams • Agent UML class diagrams
Main Agent Control Loop

Version 1 – deliberation as a single step:

B := B0
I := I0
while true do
    get next percept ρ
    B := brf(B, ρ)
    I := deliberate(B)
    π := plan(B, I)
    execute(π)
end-while

Version 2 – deliberation split into option generation and filtering:

B := B0
I := I0
while true do
    get next percept ρ
    B := brf(B, ρ)
    D := options(B, I)
    I := filter(B, D, I)
    π := plan(B, I)
    execute(π)
end-while
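The second loop can be rendered as runnable Python. All of `brf`, `options`, `filter` and `plan` below are toy stand-ins (real implementations are domain-specific), and the unbounded while-true is replaced by a finite percept stream for demonstration:

```python
def brf(beliefs, percept):
    """Belief revision: fold the new percept into the belief base."""
    return beliefs | {percept}

def options(beliefs, intentions):
    """Generate candidate desires from beliefs and current intentions."""
    return {f"achieve-{b}" for b in beliefs}

def filter_(beliefs, desires, intentions):
    """Deliberation: commit to one desire as the new intention set."""
    return {sorted(desires)[0]} if desires else intentions

def plan(beliefs, intentions):
    """Means-ends reasoning: a plan is simply a list of actions here."""
    return [f"do({i})" for i in sorted(intentions)]

def execute(pi):
    for action in pi:
        print("executing", action)

B, I = set(), set()                          # B := B0, I := I0
for percept in ["low-battery", "obstacle"]:  # stand-in percept stream
    B = brf(B, percept)                      # B := brf(B, ρ)
    D = options(B, I)                        # D := options(B, I)
    I = filter_(B, D, I)                     # I := filter(B, D, I)
    pi = plan(B, I)                          # π := plan(B, I)
    execute(pi)
```

The weakness this version exposes is that `execute(pi)` runs the whole plan blindly: the agent does not re-perceive until the plan is finished, which motivates the refinements on the following slides.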
Reacting to failed plans

B := B0
I := I0
while true do
    get next percept ρ
    B := brf(B, ρ)
    D := options(B, I)
    I := filter(B, D, I)
    π := plan(B, I)
    while not empty(π) do
        α := hd(π)
        execute(α)
        π := tail(π)
        get next percept ρ
        B := brf(B, ρ)
        if not sound(π, I, B) then
            π := plan(B, I)
        end-if
    end-while
end-while
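A minimal executable sketch of the inner loop, with invented stand-ins for `plan` and `sound` (the "blocked"/"detour" scenario is hypothetical: a plan counts as unsound when it still tries to advance while beliefs say the path is blocked):

```python
def brf(beliefs, percept):
    return beliefs | ({percept} if percept else set())

def sound(pi, intentions, beliefs):
    # Toy soundness check: advancing is invalid once the path is blocked.
    return not ("blocked" in beliefs and "advance" in pi)

def plan(beliefs, intentions):
    # Toy planner: route around the blockage if beliefs report one.
    if "blocked" in beliefs:
        return ["detour", "arrive"]
    return ["advance", "advance", "arrive"]

percepts = iter(["blocked", None, None])  # stand-in percept stream
B, I = set(), {"reach-goal"}              # B := B0, I := I0
pi = plan(B, I)                           # π := plan(B, I)
executed = []

while pi:                                 # while not empty(π) do
    alpha, pi = pi[0], pi[1:]             # α := hd(π); π := tail(π)
    executed.append(alpha)                # execute(α)
    B = brf(B, next(percepts, None))      # get next percept ρ; revise B
    if not sound(pi, I, B):               # if not sound(π, I, B) then
        pi = plan(B, I)                   #     π := plan(B, I)

print(executed)
```

After the first `advance` the agent perceives the blockage, discards the rest of the stale plan, and replans, so the trace ends via the detour rather than a failed second advance.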
Dropping Intentions – Achieved or Impossible

B := B0
I := I0
while true do
    get next percept ρ
    B := brf(B, ρ)
    D := options(B, I)
    I := filter(B, D, I)
    π := plan(B, I)
    while not (empty(π) or succeeded(I, B) or impossible(I, B)) do
        α := hd(π)
        execute(α)
        π := tail(π)
        get next percept ρ
        B := brf(B, ρ)
        if not sound(π, I, B) then
            π := plan(B, I)
        end-if
    end-while
end-while
Always Reconsidering – Cautious

B := B0
I := I0
while true do
    get next percept ρ
    B := brf(B, ρ)
    D := options(B, I)
    I := filter(B, D, I)
    π := plan(B, I)
    while not (empty(π) or succeeded(I, B) or impossible(I, B)) do
        α := hd(π)
        execute(α)
        π := tail(π)
        get next percept ρ
        B := brf(B, ρ)
        D := options(B, I)
        I := filter(B, D, I)
        if not sound(π, I, B) then
            π := plan(B, I)
        end-if
    end-while
end-while
Between Bold and Cautious

B := B0
I := I0
while true do
    get next percept ρ
    B := brf(B, ρ)
    D := options(B, I)
    I := filter(B, D, I)
    π := plan(B, I)
    while not (empty(π) or succeeded(I, B) or impossible(I, B)) do
        α := hd(π)
        execute(α)
        π := tail(π)
        get next percept ρ
        B := brf(B, ρ)
        if reconsider(I, B) then
            D := options(B, I)
            I := filter(B, D, I)
        end-if
        if not sound(π, I, B) then
            π := plan(B, I)
        end-if
    end-while
end-while
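The meta-level control can be sketched as follows. Every helper is a toy stand-in (the "fire"/"flee" scenario is invented), and `reconsider` here fires only on a significant percept; a `reconsider` that always returned False would give the bold agent, and one that always returned True would give the cautious agent of the previous slide:

```python
def brf(B, p):         return B | ({p} if p else set())
def succeeded(I, B):   return False          # goal detection elided
def impossible(I, B):  return False
def sound(pi, I, B):   return not ("fire" in B and pi and pi[0] != "flee")
def options(B, I):     return {"flee"} if "fire" in B else I
def filter_(B, D, I):  return D
def plan(B, I):        return ["flee"] if "flee" in I else ["step", "step"]
def reconsider(I, B):  return "fire" in B    # deliberate only on alarm

percepts = iter(["fire", None])              # stand-in percept stream
log = []

# One pass through the outer loop (the slide's while-true runs forever):
B, I = set(), {"move-forward"}
D = options(B, I)
I = filter_(B, D, I)
pi = plan(B, I)
while pi and not succeeded(I, B) and not impossible(I, B):
    alpha, pi = pi[0], pi[1:]            # α := hd(π); π := tail(π)
    log.append(alpha)                    # execute(α)
    B = brf(B, next(percepts, None))     # get next percept ρ; revise B
    if reconsider(I, B):                 # meta-level control
        D = options(B, I)
        I = filter_(B, D, I)
    if not sound(pi, I, B):
        pi = plan(B, I)

print(log)
```

On the fire alarm the agent pays the cost of reconsidering, switches its intention to fleeing, and replans; on quiet percepts it keeps executing without deliberating again.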
Case Study: JAM – A BDI Agent Architecture • Intention Structure • Stack of goals and plans • Incorporating utilities • Goals • Achieve, Perform, Maintain • Plans • World Model (belief base)