
Socially Adept Technologies




Presentation Transcript


  1. Socially Adept Technologies Steve Marsh National Research Council Canada steve.marsh@nrc.ca http://www.stephenmarsh.ca/ March 21st, 2002

  2. Motivation • An introduction to the field • Pointers to relevant work • Questions about ‘suitability’ • Suggestions for future projects • A wake-up call • A ‘call to arms’ • Propaganda :-)

  3. Outline • Introduction • What is Social Adeptness? • What is a Socially Adept Technology? • Examples of work in Social Adeptness • Questions • Problems • Answers? • Conclusions and more questions (from you? :-)

  4. Here’s a thought...

  5. Introductions When an individual enters the presence of others, they commonly seek to acquire information about him or bring into play information about him already possessed ... Information about the individual helps to define the situation, enabling others to know in advance what he will expect of them and what they may expect of him. GOFFMAN, E., page 13, 1959; “The Presentation of Self in Everyday Life”, Penguin: Middlesex.

  6. So? What does this mean? • Humans have a sense of self (Mead) and through this they adapt to situations and decide how to interact with others (by trying to figure out the ‘self’ of the other person) • Human-human interaction is social • It is also cultural • Our culture dictates what is and what is not acceptable in given situations • Novel situations are handled by reference to prior similar situations where possible (cf. scripts) • And there may be rules, implicit or explicit, we follow...

  7. Fair enough, but so what? We’re AI (agents?!) people • This ability to behave socially and culturally correctly towards people (or entities) with whom we are interacting is what we call ‘social adeptness’ • What’s an entity? • People, animals, agents… • So what? • So, we argue that if this social adeptness works so well for humans, why shouldn’t it work just as well for machines? • (and we’re not the only ones, as you’ll see) • So, in other words, human-human, human-machine, and machine-machine interactions are social

  8. Social Adeptness • One sensible definition of social adeptness is: The ability to behave correctly in any given situation according to the culture of the agents with whom one is interacting in any social setting. • And correctly means sensibly, carefully, and in an ‘expected’ manner (which is, granted, a slight tautology…)

  9. What is it, in a practical sense? • Social adeptness is an understanding of, reasoning about, and behaviour according to social norms such as: • Ethics • Emotions • Morality • Trust • Personality • A sense of self • A sense of others • Cultural awareness • Social awareness (which is often the same thing) (Marsh, 1995)

  10. Socially Adept Technologies (or Agents)? • A Socially Adept Technology is capable of reasoning with these norms in order to determine correct behaviour in any given interaction • The word interaction is important here… • Correct social behaviour may or may not be necessary in private (Remember the tree falling in an empty forest?) • But, interactions can be asynchronous (as with email) • Basically, the onus is on us to behave correctly toward those we are interacting with

  11. ? • It should be clear that, in order to allow technologies (agents, interfaces, etc.) to reason with these social norms, we need to have formal, or at least computationally tractable, models of them • Obtaining these models is the goal of research in Social Adeptness… • Interdisciplinary • Wide ranging • And hard… :-)
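As a flavour of what ‘computationally tractable’ might mean here, the sketch below scores candidate actions against a set of norm models and picks the most acceptable one. It is a minimal, hypothetical illustration only: the SocialNorm interface, the additive scoring, and the theatre example are assumptions, not an existing system.

```java
import java.util.List;

// Hypothetical sketch: a socially adept agent scores candidate actions against
// a set of social-norm models and picks the most acceptable one.
interface SocialNorm {
    // How acceptable 'action' is in 'context', on a [-1, +1) scale.
    double evaluate(String context, String action);
}

class SociallyAdeptAgent {
    private final List<SocialNorm> norms;

    SociallyAdeptAgent(List<SocialNorm> norms) {
        this.norms = norms;
    }

    // Choose the candidate action the norm models rate most acceptable overall.
    String chooseAction(String context, List<String> candidates) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (String action : candidates) {
            double score = 0.0;
            for (SocialNorm norm : norms) {
                score += norm.evaluate(context, action);
            }
            if (score > bestScore) {
                bestScore = score;
                best = action;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        // Illustrative norm: ringing loudly at the theatre is unacceptable.
        SocialNorm quietInPublic = (context, action) ->
                context.equals("theatre") && action.equals("ring") ? -0.9 : 0.5;
        SociallyAdeptAgent phone = new SociallyAdeptAgent(List.of(quietInPublic));
        System.out.println(phone.chooseAction("theatre", List.of("ring", "vibrate"))); // vibrate
    }
}
```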

  12. Actually, it’s quite easy to imagine... • Imagine: • Cellphones that don’t ring when you’re at the theatre (or at a lecture…) • Robotic vacuum cleaners that don’t vacuum when you’re in the middle of a dinner party • … or a good movie… • Interfaces that can adapt to your mood • Tools that can help you interact with people from different cultures, in various situations • Agents that organise meetings according to your personal requirements • … and all without ever having to be told what is right… • All of these are Socially Adept Technologies...

  13. … whilst quite hard to do • Work in the field is inherently multidisciplinary, ranging over topics such as • Philosophy • Sociology • Computer science • AI • Psychology • etc...

  14. Work in Social Adeptness • Several researchers are working in and around the topic. To name some (and discuss fewer)... • Socially Intelligent Agents • Dautenhahn (Hertfordshire, England) • Artificial Morality • Danielson (UBC) • Trust • Marsh, Dibben (St Andrews, Scotland), Davenport (Napier, Scotland), Esfandiari, Chandrasekharan (Carleton), Castelfranchi (NRC Italy) • Personality • Meech (AmikaNow!), Reeves & Nass (Stanford) • Interface Agents • Extempo (Hayes-Roth), Microsoft

  15. Socially Intelligent Agents • Principally, this is Kerstin Dautenhahn’s work • Dautenhahn has organised several workshops in this field, and a book is forthcoming • See her web pages for details • The basic premise is similar to that of SATs • Make systems that can interact properly with humans • Robotics in fact plays a large part in this work • Dautenhahn’s web pages can be found at: http://homepages.feis.herts.ac.uk/~comqkd/

  16. Artificial Morality • Peter Danielson, UBC (see his book: Artificial Morality: Virtuous Robots for Virtual Worlds, Routledge, 1992) • Danielson provides simple agents with an understanding of morality and its workings • He has extended his work in several very interesting areas (including ecology and the business world) • See his web pages at: http://www.ethics.ubc.ca/pad/

  17. Trust • (A topic dear to my heart…!) • Introduced for autonomous agents in 1991-2 (Marsh) • Trust is the basis of sociability • Without trust, society would cease to exist (Bok) • Thus, an understanding of and concrete implementations of trust are vitally important to the study of Social Adeptness, acting as the keystone of a Socially Adept (or Intelligent) Technological thrust • As evidence of its importance, it has received more attention than most other SA attributes, especially recently (because of E-Commerce, which we’ll come to sooner or later…)

  18. Trust contd. - Other work • For the past 3 years, workshops on Deception, Fraud and Trust in Agent Societies have been organised at the Autonomous Agents conferences • The proceedings from these workshops are an invaluable aid to finding out more

  19. Personality • An understanding of and subsequent representation of personality is an important part of any interaction • This applies to human-technology interactions just as much as human-human • The most visible work in this area is that of Reeves and Nass from Stanford, reported in their book, The Media Equation • A significant result from this work was that people like to interact with systems that show the same personality as them • e.g., dominant with dominant, submissive with submissive, etc.

  20. Personality contd. - Reeves and Nass • The Media Equation presents compelling evidence for this and other findings • Although subsequent work may have put this in doubt… • Quite simply however, people anthropomorphise • They ascribe personalities to technology • (do you talk to your car?) • … and because of this they find it easier to interact with (and put up with) the technology • This is powerful stuff - understanding it gives us a key to designing more acceptable systems and interfaces

  21. Personality contd. - Meech • Meech, for his thesis, looks into the Media Equation’s results and takes them further into the design of human-computer interfaces • The conclusions drawn are similar - that people like to interact with like personalities • Such personalities can also be promoted, even in textual interfaces, with different wording, emphasis, etc. • We are using this work in a novel web site architecture, as you’ll see later

  22. Time for a look back... • There are many more examples of work in Social Adeptness • Too many to cover in this talk, including: • Emergent behaviour (Artificial Life, studies of societies…) • Emotions (e.g. Roz Picard’s Affective Computing, MIT) • Attention-based systems (e.g. Roel Vertegaal, Queen’s U) • Narrative and communicative systems (Bickmore & Cassell, MIT; Mateas and Sengers, CMU) • Systems that make jokes… (Kim Binsted, Sony) • etc. • Bringing them together under a single moniker is worthwhile and informative

  23. …and forward, and ... • Given what we have seen so far in the area, it’s time to think about the ultimate goal (implicit or explicit, worked towards or not) of this combined research: The creation of (potentially physically) embodied social agents capable of existing in the ‘real’ human social world, behaving correctly according to the norms of society and culture. • Such agents may not be artificially intelligent, but they will undoubtedly be socially intelligent • But we’re a way away from that yet

  24. … sideways • Given the theoretical work, what’s being done practically?(Now we come to the promotional part…) • My work at NRC is specifically concerned with Socially Adept Technology, its uses and how to apply it to different avenues of work • This will proceed in a new lab, with the code name Project Mole Rat :-) • I’ll discuss some of the relevant work here

  25. A Prolegomenon for all Future Social Technologies Research… • (with apologies to Immanuel Kant) • I believe (and you’re free to disagree…): • Technology should be seen as a social actor (cf. Reeves and Nass) • Incorporation of social norms into technology can result in increased user comfort and efficiency (no second guessing) • Social norms can be incorporated not only in the interface between human and technology, but also • Within the technology itself • In the interface between technologies • Many of these beliefs stem from my focus on Multi-Agent Systems (in itself a ‘European’ concept)

  26. Trust, contd. - Marsh • My own work in trust was devoted to • Better understanding how cooperative trust worked • Developing a computationally tractable formalisation of Trust • Allowing for trust reasoning agents • Allowing for social science studies involving formal models of trust • Implementing and testing the model • For in-depth details, see the website

  27. Formalising Trust? • Some basic terminology: • Trusting entities have 3 kinds of trust: • Basic trust, T_x • The amount of trust you might have in the world • General trust, T_x(y) • Trust you have in a specific person in general • Situational trust, T_x(y, a) • Trust you have in a specific person in a specific situation • Trust values are in the range [-1, +1) (now, is that odd?)

  28. Formalising Trust? • Formal models can be used to model trust through interactions: T_x(y, a) = U_x(a) * I_x(a) * T_x(y) • Cooperation threshold: CT_x(y, a) = (R_x(a) / C_x(y, a)) * I_x(a) • Marsh (1994); see: http://www.iit.nrc.ca/~steve/pubs/Trust
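A minimal sketch of these formulae in code, assuming the usual readings from the thesis (U = utility of the situation, I = its importance, T(y) = general trust in y, R = perceived risk, C = perceived competence) and the rule that x cooperates when situational trust exceeds the cooperation threshold; names and example values are illustrative only.

```java
// Sketch of the slide's formulae; variable readings and the decision rule are
// assumed from Marsh (1994), and the example numbers are illustrative.
class TrustModel {
    // Situational trust: T_x(y, a) = U_x(a) * I_x(a) * T_x(y)
    static double situationalTrust(double utility, double importance, double generalTrust) {
        return utility * importance * generalTrust;
    }

    // Cooperation threshold: CT_x(y, a) = (R_x(a) / C_x(y, a)) * I_x(a)
    static double cooperationThreshold(double perceivedRisk, double perceivedCompetence,
                                       double importance) {
        return (perceivedRisk / perceivedCompetence) * importance;
    }

    // x cooperates with y in situation a when situational trust exceeds the threshold.
    static boolean willCooperate(double utility, double importance, double generalTrust,
                                 double perceivedRisk, double perceivedCompetence) {
        return situationalTrust(utility, importance, generalTrust)
                > cooperationThreshold(perceivedRisk, perceivedCompetence, importance);
    }

    public static void main(String[] args) {
        // Example: a fairly important situation, a well-trusted partner, modest risk.
        System.out.println(willCooperate(0.6, 0.8, 0.7,   // utility, importance, general trust
                                         0.3, 0.9));      // perceived risk, perceived competence
        // prints true (situational trust 0.336 > threshold 0.267)
    }
}
```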

  29. But • The models aren’t perfect • They were never meant to be • But they are simple • And they do work (even with humans (Dibben, 1998)) • We’re applying them to E-Commerce, as you’ll see later

  30. ACORN Portions of this work were carried out in collaboration with researchers at University of New Brunswick’s Faculty of Computer Science… Thanks to Profs Ali Ghorbani and Virendra Bhavsar in particular, who have taken ACORN from its humble beginnings to new heights… Students and programmers that have worked on this project are: Youssef Masrour, Hui Yu, Leigh Wetmore, and Jonathan Carter.

  31. ACORN - Introduction and Motivation • ACORN is a ‘tool-based’ SAT, whose relevance becomes clearer with some thought • ACORN is a peer-to-peer multi-mobile-agent architecture based on community-oriented communication paths in human society - Stanley Milgram’s Small World Problem • (how many buzzwords do you need in a sentence…?) • ACORN was conceived as a replacement for ‘static’ information systems such as bog standard email and static web servers • We see information in this sense as a dynamic entity which has to work to exist in the world… • ACORN is ‘one of those’ acronyms: Agent-based Community Oriented Routing Network

  32. ACORN - Basics • In ACORN, every piece of information is (potentially represented by) an autonomous mobile agent - the InfoAgent • sounds, images, movies, frames, documents and parts of documents, files, links, and so on… • Note - anything you can send via email, you can send in ACORN too • Every InfoAgent carries with it • metadata for, and a link to, its information (not necessarily the information itself) • owner information (it is given) • community information (it learns and can be given) • community ‘paths’ (it builds itself and can be given)
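A hypothetical sketch of the state an InfoAgent carries, following the list above; the class and field names are illustrative and not ACORN’s actual API.

```java
import java.net.URI;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of an InfoAgent's state, following the slide's list;
// names are illustrative, not ACORN's actual classes.
class InfoAgent {
    // Metadata for, and a link to, its information (not necessarily the information itself).
    Map<String, String> metadata;
    URI informationLink;

    // Owner information (given to the agent at creation).
    String ownerId;

    // Community information (learned, and optionally given).
    List<String> communities = new ArrayList<>();

    // Community 'paths' (built by the agent itself, and optionally given):
    // sequences of peers through which the information has travelled or should travel.
    List<List<String>> communityPaths = new ArrayList<>();

    InfoAgent(Map<String, String> metadata, URI informationLink, String ownerId) {
        this.metadata = metadata;
        this.informationLink = informationLink;
        this.ownerId = ownerId;
    }
}
```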

  33. Uses of ACORN • As an email replacement - ‘email with attitude…’ • As community building and enhancing technology • As a people finder • As a novel peer review system • As a personalised directed information architecture • (directed ads, anyone...?) • B2B and B2C applications

  34. Current Status • ACORN is fully implemented in Java (uses JSP) • There will be a port to C this summer • Development is ongoing in privacy, anonymity, and ‘thin’ InfoAgents • Integration of summarisation and additional search technologies is also ongoing

  35. Socially Adept Web Sites • An application of SAT to adaptive web site technology, this project aims to show how some simple rules can be applied to already existing technology in order to facilitate its better usage and integration in society • It’s also an approach to answering Etzioni’s (1997) call for adaptive webs • Finally, although it is applied presently to eCommerce and web interfaces, we believe at least some of what we’ve learned can be applied to other interfaces. • This work was carried out jointly with John Meech. Our thanks also go to Ala’a Dabbour for a first implementation of the prototype site

  36. Static Trust Factors in E-Commerce (Studio Archetype/Sapient): Seals of Approval, Brand, Navigation, Fulfillment, Presentation, Technology

  37. Web Site As Agent • Web site acts as an intelligent, adaptive interface - can be viewed as an agent • Constructs a user profile from interactions, history and other data • Uses models of trust, personality and context to evaluate user behaviour • Adapts web page content/structure accordingly • A prototype site has been developed for this paradigm
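A rough sketch of the evaluate-then-adapt loop described above; the profile fields, heuristics, and page variants are hypothetical, not the prototype’s code.

```java
// Rough sketch of the evaluate-then-adapt loop; profile fields, heuristics and
// page variants are hypothetical, not the prototype's code.
class UserProfile {
    double trust = 0.0;             // estimated user trust in the site, e.g. in [-1, +1)
    String personality = "unknown"; // e.g. "dominant" or "submissive"
    String context = "first-visit"; // e.g. "first-visit", "returning", "checkout"
}

class AdaptiveSiteAgent {
    // Update the profile from the latest interaction, history and other data.
    UserProfile evaluate(UserProfile profile, String interactionEvent) {
        if (interactionEvent.equals("abandoned-checkout")) {
            profile.trust -= 0.1;   // crude illustrative heuristic
        } else if (interactionEvent.equals("completed-purchase")) {
            profile.trust += 0.1;
        }
        return profile;
    }

    // Adapt page content/structure to the evaluated profile.
    String renderPage(UserProfile profile) {
        if (profile.trust < 0.0) {
            // Low trust: foreground static trust factors (seals, brand, fulfilment details).
            return "reassurance-layout";
        }
        // Match the user's personality (cf. Reeves & Nass, Meech).
        return "dominant".equals(profile.personality) ? "concise-direct-layout"
                                                      : "guided-supportive-layout";
    }
}
```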

  38. Web Site Architecture

  39. SociAware • SociAware is ‘Socially Aware’ technology - a simple means of thinking realistically about SATs in general. • It was first introduced at MICON in August 2001 • The most basic aspect of SociAware is the extension of the trust model in simple ways to enable social trust reasoning (that is, to allow society to reason about how it trusts ‘things’ such as information).

  40. SociAware Applications: infoDNA • A standard of Trust in information agents, implemented as an extension to the ACORN architecture • Problem: agents judging information in ACORN… • i.e., which pieces of information to forward to owner, and which to discard • Solution: each piece of information is socially rated • Then each agent can use these ratings in decision making • Note that this solution is not perfect • Societies can be fooled into believing things that are not true…

  41. ACORN and infoDNA • Each piece of information carries with it additional infoDNA: • Originator and signature • Set of reader ratings and signatures • Ratings in our system are [-1, +1) but any suitable representation would work • Agents can judge information based on these societal rankings • Naturally, much more information is also available • Owner of information • Metadata • This is a simple but worthwhile application; it also gives us a set of results to work with when implementing more complex approaches
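A hypothetical sketch of an infoDNA record and one simple way an agent might use the societal ratings to decide whether to forward a piece of information; the signature handling and the average/threshold rule are assumptions, not the actual implementation.

```java
import java.util.List;

// Hypothetical sketch of an infoDNA record; signature handling and the
// average/threshold rule are assumptions, not the actual implementation.
class ReaderRating {
    String readerId;
    byte[] signature;   // the reader's signature over the rating
    double rating;      // in [-1, +1)

    ReaderRating(String readerId, byte[] signature, double rating) {
        this.readerId = readerId;
        this.signature = signature;
        this.rating = rating;
    }
}

class InfoDNA {
    String originatorId;
    byte[] originatorSignature;
    List<ReaderRating> readerRatings;

    InfoDNA(String originatorId, byte[] originatorSignature, List<ReaderRating> readerRatings) {
        this.originatorId = originatorId;
        this.originatorSignature = originatorSignature;
        this.readerRatings = readerRatings;
    }

    // One simple societal judgement: the average of the reader ratings.
    double societalRating() {
        return readerRatings.stream().mapToDouble(r -> r.rating).average().orElse(0.0);
    }

    // Forward the information to the owner only if society, on average,
    // rates it above some threshold; discard it otherwise.
    boolean worthForwarding(double threshold) {
        return societalRating() > threshold;
    }
}
```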

  42. Social Web Technology • Socially Adept Web Site adapts to User personality, trust • However, initial stages of adaptation are problematic • Unknown user, unknown requirements • Site strange to user • Potential privacy concerns with adaptation, user profile • Using SociAware technology, we will be addressing these concerns

  43. Social Web Technology • User represented by SociAware Agent • This maintains user profile • Site represented by Site Agent • At first visit, negotiations between user and site agents result in pre-built user profile with no identifying capacity (except through user agent, which reveals only what is necessary) • In addition, because it is SociAware, the user agent can query society (e.g. via SociAware server) for views on site policies, etc. • SociAware server maintains the data. Also becomes indispensable in browsing new unknown sites • Other value added - negotiation via SociAware server preserves even more privacy/control
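A sketch of the first-visit negotiation described above, assuming the user agent discloses only fields that the site requests and the user’s policy allows, and that it first consults society’s view of the site (e.g. obtained from a SociAware server); all class and method names are hypothetical.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of the first-visit negotiation; names and policy checks
// are illustrative, not the SociAware implementation.
class SociAwareUserAgent {
    private final Map<String, String> fullProfile;  // kept private to the user agent

    SociAwareUserAgent(Map<String, String> fullProfile) {
        this.fullProfile = fullProfile;
    }

    // Build a pre-built profile with no identifying capacity: reveal only the
    // fields the site requests AND the user's policy allows, and only if
    // society's view of the site's policies (from the SociAware server) is not negative.
    Map<String, String> negotiate(Set<String> requestedFields,
                                  Set<String> policyAllowedFields,
                                  double societalViewOfSite) {
        Map<String, String> disclosed = new HashMap<>();
        if (societalViewOfSite < 0.0) {
            return disclosed;   // society distrusts the site: reveal nothing
        }
        for (String field : requestedFields) {
            if (policyAllowedFields.contains(field) && fullProfile.containsKey(field)) {
                disclosed.put(field, fullProfile.get(field));
            }
        }
        return disclosed;
    }
}
```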

  44. Table Manners - a ‘physical’ SAT • For this implementation of the SAT concept, we wanted to take some physical aspect of collaborative technology and use it as a base toolset with which to experiment on various topics • Group formation in distributed settings • The detection and facilitation of group dynamics in local and distributed settings • The locus of ‘command’ in a collaborative technology • Remote control and tele-haptic technologies • Computer Supported Collaborative Play • For this, we chose to implement two HI-Space tables over two sites in Ottawa (CRC’s Virtual Classroom and NRC’s MoleRat lab), with an option to network further tables • The umbrella name of this technology toolset is ‘Table Manners’ (thanks to Monica…) • We’re still building the tables - they will be online by June

  45. Architecture • Table Manners will use agents to represent individual users • Each user will then have a model the agent can use to predict and analyse behaviour, and to come up with worthwhile group-building and reinforcing structures amongst the other users and their agents • This raises interesting questions of privacy, sensing, avatar potential, etc.

  46. Table Manners continued • We will have • 2 networked tables over a dedicated research fibre for high bandwidth • 2 large plasma displays for video conferencing • Palm pilot control and personalisation of table and associated information • Wireless Haptic capabilities (prototypes based on MindStorms stuff, with more to come) • Several potential projects to implement and observe • (and I’m very excited about the potentials!)

  47. Potential Table Research Approaches • Remote control of robotics • Agent based user modeling - truly socially adept agents… • Avatars • Advanced video conferencing • Active environments • Collaborative gaming • Privacy and trust • Ubiquitous individual information handling

  48. The Wacky Idea File • Wouldn’t it be nice if… • Your PDA could guide you in real time about the customs and social expectations of the new country you’ve just arrived in… • Your PDA could link with an active environment and show you how what you see now relates to what you saw (perhaps in another country) last week • Wireless was so ubiquitous you really were always wired, and your machines were always contextually and socially aware • Your ‘machines’ really were invisible, really were ‘personal’ and really did ‘know’ what to do at any given time • We really could trust our technology to do the ‘right’ thing • Technology simply faded away when you didn’t need it…

  49. Questions • Work on Socially Adept Technologies raises its own questions • Ironically, of ethics and morality amongst others • Some of them I mention briefly here, others may be plain to you • Is this a good thing to do? • Are we hurting people by deceiving them? • Is anthropomorphising technology good for people? • Can ‘bad people’ use this in naughty ways? How? • What does this give us that regular AI (or even dumb) technologies don’t? • Do we need Artificial Intelligence if we have Social Intelligence? • Where is all this going?
