
Code 342 Cognitive and Neural Sciences Office of Naval Research Aug 16, 2005






Presentation Transcript


  1. Passing the Bubble: Cognitive Efficiency of Augmented Video for Collaborative Transfer of Situational Understanding. Review of Human Factors Discovery and Invention Projects, Code 342, Cognitive and Neural Sciences, Office of Naval Research, Aug 16, 2005

  2. Research Team • David Kirsh – PI • Bryan Clemons – run experiments, data analysis • Mike Lee – run experiments • Dr. Thomas Rebotier – statistical analysis • Dr. Alan Calvitti • Undergraduate assistants • Jason • Brian • Mehmet • Ryan • Charles Koo • Mike Butler • Kelly Miyashiro • Brian Iligan • Rigie Chang • Brian Balderson • Bryan Clemons • Mike Dawson • ……

  3. I. Project Summary Overview • Long-term goals • Expected final product, potential impact, applications • Project objective, approach

  4. Long-term Goals • Determine how to annotate videos or images to share situational awareness better: • strategic conditions • tactical conditions • environmental conditions • commander's intent and future plans • distributed team members • Guidelines for annotating video and well-chosen stills

  5. Long-term Goals • Develop a methodology for quantitatively measuring the impact of asynchronous briefing presentations on performance • Deepen the theoretical framework • Dynamical representations • What is shared understanding? • Distributed cognition and annotation • Annotation and attention management

  6. This Year's Goals • Slice away at the value of co-presence, gesture, and real-time interactivity • How good can remote 'over the shoulder' observation of a face-to-face presentation be? • How important is interactivity, even if not face to face (i.e., asking questions)?

  7. Expected Final Products • Guidelines: when and how to use annotation to improve shared understanding. Major factors: • Annotating stills vs. annotating videos: costs and benefits • Using annotation types (circles, arrows, moving ellipses) for specific information needs • Annotation and expertise level – who needs it most and when • How to tell good from bad (pointless) annotation • Metrics: cognitive efficiency of different annotation and presentation techniques • Relativized to expertise • Relativized to knowledge types • Articles & theoretical models • Dynamic representations • Annotation and sharing understanding • Empirical findings and relation to illustration

  8. II. Technical Plan • Experimental plan, data collection

  9. Experimental Plan • Removed some new conditions – see factorial table • Focus more on differences between live and canned presentations • Analyze whether certain graphic objects are more effective at conveying certain knowledge objects

  10. Conditions: 192 total, 182 completed

  11. Examples: experts talking from different venues; intermediates face to face

  12. Experimental Design

  13. Original Factorial Design • Expertise pairing: Intermediate–Intermediate, Expert–Expert • Annotation: No Annotations, Static Annotations, Dynamic Annotations, Control (random stills) • Media: Stills, Video Snippets • Annotating video snippets and stills takes a long time, so presentations are slow to create
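The factorial design above can be enumerated programmatically; a minimal sketch in Python, where the 12-sessions-per-cell replication count is purely an assumption, chosen only to illustrate how a 192-condition total (slide 10) could arise from these factors:

```python
from itertools import product

# Factor levels as listed in the original factorial design (slide 13)
pairings = ["Intermediate-Intermediate", "Expert-Expert"]
annotations = ["None", "Static", "Dynamic", "Control (random stills)"]
media = ["Stills", "Video snippets"]

# Cartesian product of factor levels gives the design cells
cells = list(product(pairings, annotations, media))
print(len(cells))       # 16 design cells

# With a hypothetical 12 sessions per cell, totals would match slide 10
print(len(cells) * 12)  # 192
```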

  14. Added Live Presentation • Expertise pairing: Intermediate–Intermediate, Expert–Expert • Annotation: No Annotations, Static Annotations, Dynamic Annotations, Control (random stills) • Media: Stills, Video Snippets, Live (gesture & questions) • Live was a new condition to support face-to-face presentation and near real-time production – improvised debriefing

  15. Separate Live Factors • Same factorial layout, with Live as an added media level • Face to face = synchronous, co-located, gesture, questioning

  16. Lab set-up (diagram) • Stations: Lapsang, Oolong, Hyson, Boba • Venue A provides context and gesture • B plays; A then passes to C over wireless audio • Timeline: presentation, then game (0–15 min)

  17. Project Status Main Results

  18. Research Hypotheses • Live not better than good canned – Wrong • FF > Live Remote > Taped – True (unfortunately) • Experts get less – Not quite significant • Graphical annotation adds to performance – Usually • Video is better than stills – Usually not, unless temporal information

  19. Research Hypotheses • Annotation is always helpful – Depends on type and background • Dynamic annotations on video not helpful – True • Being didactic with other experts is a bad thing – True • Info about the enemy is more valuable than about our own side – False

  20. Live vs Canned

  21. Live presentations • Surprise: live presentations are better than all forms of canned presentations • They are faster and easier to make • Live presentations contain gestures that function like graphic annotation • They also contain mouse pointing, which serves as an attention-management mechanism, much like many of the graphical annotations found in our canned presentations

  22. LIVE versus CANNED (bar chart of % score difference, roughly –200% to 300%, for well-chosen stills, video snippets, random stills, and live presentations)

  23. Live versus Canned significance tests • Live versus all Canned: p = .001 • Live versus Random: p = .003 • Live versus Chosen Stills: p = .006 • Live versus Video: p = .0004
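The comparisons above presumably rest on two-sample tests over per-subject scores. A minimal sketch of one such comparison using Welch's two-sample t statistic; the score lists below are hypothetical illustration data, not the study's data:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    va, vb = variance(a), variance(b)          # sample variances
    se = math.sqrt(va / len(a) + vb / len(b))  # standard error of mean difference
    return (mean(a) - mean(b)) / se

# Hypothetical per-subject scores for two conditions (illustrative only)
live = [210, 250, 190, 230, 270, 240]
canned = [120, 90, 150, 110, 130, 100]

t = welch_t(live, canned)
print(round(t, 2))  # large t here would correspond to a small p-value
```

A p-value would then be read off the t distribution with Welch-corrected degrees of freedom (e.g., via `scipy.stats.ttest_ind(..., equal_var=False)`).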

  24. Why Live? • Is the performance boost coming from: • giving more relevant information • interaction with presenters – asking questions or showing interest in certain areas • the pace that presenters adopt as a result of subtle interpersonal cues apparent in the face to face condition • gesture

  25. Expert to Expert – we tested live transfer in two ways • 4 min–unlimited: Live good, Recorded Live good • 2 min: Live good, Recorded Live not good • Finding: presentations have to be long enough for canned to be effective; canned is more sensitive to length (less adaptive) • Live is better than canned at 2 min, p = .0003

  26. Why are live stimuli better? • They have more good knowledge objects (KOs) • They have virtually no didactic KOs

  27. Observations - Conjectures • From surveys we found that intermediates claim briefings are more valuable (than experts claim): • They need more explanation • They still benefit from teaching • Results from new conditions support this

  28. Observations - Conjectures • Experts find live briefings more valuable than highly prepared canned ones because: • Experts like to drive information transfer more • Hence they strongly prefer interactive (live) transfers • and respond less well to attention management (more headstrong) • Experts strongly prefer strategic information, and stimulus makers did not include enough of it

  29. Methodology and Analysis

  30. Score grows exponentially (figure: score vs. time alongside log(score) vs. time)

  31. Growth Curves (figure: log(score) vs. time over 13–18 min, showing mean growth with better players above and worse players below)

  32. Conjectured effect of bubble (figure: log(score) vs. time; the passer plays until takeover time t_b at 13 min, then the subject plays to 18 min; the subject's actual score is compared with the score projected from the passer's growth curve)

  33. Information from control (figure: log(score) vs. time; over 13–18 min the subject's score is compared with a control subject's score and with the projected score from the passer's curve)
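The projection idea in slides 32–33 can be sketched as: fit a line to the passer's log(score) before takeover, extrapolate it to the end of the game, and read the bubble-transfer effect as the gap between the subject's actual log score and that projection. All numbers below are hypothetical, chosen only to illustrate the arithmetic:

```python
import math

def fit_line(ts, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(ts)
    tbar, ybar = sum(ts) / n, sum(ys) / n
    slope = (sum((t - tbar) * (y - ybar) for t, y in zip(ts, ys))
             / sum((t - tbar) ** 2 for t in ts))
    return slope, ybar - slope * tbar

# Hypothetical passer scores before takeover at t_b = 13 min
pre_t = [5, 7, 9, 11, 13]
pre_log = [math.log(s) for s in [40, 75, 140, 260, 480]]  # near-exponential growth

slope, intercept = fit_line(pre_t, pre_log)

# Projected log(score) at 18 min if the passer's growth had simply continued
projected = slope * 18 + intercept

# Subject's actual log(score) at 18 min after receiving the bubble (hypothetical)
actual = math.log(3000)

# Positive gap = transfer benefit; negative gap = takeover cost
benefit = actual - projected
print(round(benefit, 2))
```

The control condition (slide 33) plays the same role empirically: the control subject's post-takeover curve substitutes for the extrapolated projection.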

  34. Main Effect: Bubble is good

  35. Experts: FF > Canned

  36. Live Remote = Face to Face for experts

  37. Contribution to Resolving CKM Technical Issues • Annual identification of research issues • Mapping the project's scientific focus to the CKM stage model

  38. Relevance to CKM Goals

  39. Relation to stage model • Finding efficient styles of briefing, both live and canned, is mainly relevant to: • Team knowledge base construction • Team shared understanding • Team consensus • Knowledge interoperability

  40. Progress toward a Demonstrable Product • Transferable products • Transferable project concepts

  41. Progress toward a Demonstrable Product • Methodology for experimentally determining quality of shared understanding • Partial progress toward guidelines for using annotation to asynchronously share understanding • type of graphics to use • How to create context to improve presentations

  42. Relevance to Operational Requirements of Program • Concepts improve team collaboration performance • Clear fit with CKM scenarios

  43. Concepts Empirically Shown To Improve Team Performance • Annotation has been empirically shown to improve team performance, but: • Video is not always better than stills • Moving annotation is not always better than static annotation • Experts need less annotation and less transfer time, and want different info than intermediates • One size does not fit all

  44. Fit with CKM goals • Analysts need briefings on real-time info coming from UAVs – in situation rooms, privately, during the information-gathering phase • Commanders must communicate intent to distributed teams: both to other decision makers for ratification and to action teams in the field • In long scenarios, analysts will need to be spelled and pass off their station to another analyst • New action teams coming into the field will have to be briefed on both the commander's intent and the experience of other action teams

  45. The End
