
Understanding the Notions of Self-Awareness: Social Agents and Monitoring Executives.

This discussion explores the distinction between two notions of self-awareness: the social agent sense and the monitoring executive sense. It examines how humans attribute agency to themselves and others based on self-observation and the role of empathy in applying insights gained from self-experience to understand others. The applications of accurate modeling of agency, including the self as a social agent, in extended human conversations and fail-soft systems are also discussed.

Presentation Transcript


  1. Notes from the breakout group on social stuff. Pat Hayes, Michael Cox, Thomas Hinrichs, Owen Holland, Jim van Overschelde, Marvin Minsky (approx. 10%), Chris Welty, Michael Witbrock, and various others from time to time.

  2. Distinguish two notions of 'self-aware', roughly the 'social-agent' sense and the 'monitoring-executive' sense. The first sees the person primarily as an agent in a community of agents, with the self as the locus of social commitment. The second sees the person as an isolated body/mind, with the self as a locus of internal observation (interoception) and executive control. In humans these are identified, but they are conceptually distinct and so could (must?) be distinguished in artificial systems; e.g., consider a QA system with no body or bodily location, or a distributed robotic system.
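The conceptual separation above can be made concrete as two independent interfaces that an artificial system may implement separately or together. This is a minimal illustrative sketch, not anything proposed in the notes; all class and method names (`SocialAgentSelf`, `MonitoringExecutiveSelf`, `QASystem`, `commitments_to`, `introspect`) are hypothetical.

```python
from abc import ABC, abstractmethod

class SocialAgentSelf(ABC):
    """The 'social-agent' sense: the self as a locus of social
    commitment within a community of agents."""

    @abstractmethod
    def commitments_to(self, other: str) -> list[str]:
        """Commitments this agent holds toward another agent."""

class MonitoringExecutiveSelf(ABC):
    """The 'monitoring-executive' sense: the self as a locus of
    internal observation (interoception) and executive control."""

    @abstractmethod
    def introspect(self) -> dict[str, float]:
        """Report measurements of the system's own internal state."""

# A disembodied QA system might implement only the social-agent
# sense, illustrating that the two notions need not coincide:
class QASystem(SocialAgentSelf):
    def commitments_to(self, other: str) -> list[str]:
        return [f"answer {other}'s questions truthfully"]

qa = QASystem()
print(qa.commitments_to("user"))
```

A distributed robotic system might, conversely, implement the monitoring-executive sense across many bodies while having no single social locus.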

  3. The primary concept is agency/personhood. There was debate about whether we (humans) attribute agency to others based on a concept arising from internal self-observation, or whether we attribute it to ourselves after recognizing it in others. Evidence suggests the latter: e.g., recognition of agency occurs early in childhood, before evidence of a self/other contrast; also, agency seems conceptually/ontologically prior to selfhood. Either way, when mature, the self is the person-concept we "know most about": we have a richer theory of ourselves than of others. ("Richer" ≠ more accurate.) Empathy is (?) the application to others of insights gained from interoceptive experience of the self-person.

  4. A side comment. The view of ourselves as rational agents with an 'executive' which 'plans' etc. may be a fiction arising from an inaccurate human self-theory. (An interesting question for this POV: what is the utility of this inaccurate (naïve, folk, non-veridical) self-theory? A possible answer might be the maintenance of social relationships, for which there is independent strong evolutionary pressure; cf. the "evolution of cooperation".)

  5. Applications. Natural extended human conversation seems (?) likely to require accurate modeling of agency, including the self as a social agent, especially in areas where success depends on establishing an enduring relationship with human participants, e.g. medical advisers for long-term care, artificial pets, personal assistants, 'companions'. Another area would be fail-soft systems which can communicate problems to humans early, before failure. This obviously requires self-monitoring, but it also requires the ability to maintain natural conversations with humans, which in turn requires an adequate social self. However, for most other applications it is not clear that artificial systems need to be particularly social most of the time, or that a social artifact would necessarily live in a society composed only of humans. Intuitions derived from thinking about 'human-level AI' as artifacts conceptualized as little robot people may be misleading in these other contexts.
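The fail-soft idea above combines self-monitoring with human-directed communication. The following is a minimal sketch of that combination, assuming hypothetical names (`SelfMonitor`, `report`) and arbitrary threshold values; the notes do not specify any implementation.

```python
class SelfMonitor:
    """Watches an internal health metric and reports degradation to
    a human in plain language before outright failure (fail-soft)."""

    def __init__(self, warn_threshold: float = 0.7,
                 fail_threshold: float = 0.3):
        # Thresholds are illustrative assumptions, not from the notes.
        self.warn_threshold = warn_threshold
        self.fail_threshold = fail_threshold

    def report(self, component: str, health: float) -> str:
        """Turn a raw health score in [0, 1] into a human-directed
        message, warning early when performance is merely degraded."""
        if health < self.fail_threshold:
            return f"{component} has failed; I can no longer rely on it."
        if health < self.warn_threshold:
            return (f"I am noticing degraded performance in {component}; "
                    f"you may want to intervene before it fails.")
        return f"{component} is operating normally."

monitor = SelfMonitor()
print(monitor.report("memory subsystem", 0.5))
```

The conversational surface here is trivially canned; the point of the slide is that doing this naturally, over an extended relationship, is what demands an adequate social self on top of the monitoring machinery.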
