Challenges in Multi-Modal and Context-Aware UI (the 5 minute version)
Ken Fishkin, SoftBook Press, fishkin@softbook.com
Multi-Modal
• I'm using multiple output media (audio, visual, gestural, stance, posture, eye gaze, etc.).
• Challenge: apply this same fluidity, range, and artful redundancy/overlap to computer input techniques.
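The "artful redundancy/overlap" challenge can be made concrete with a toy input-fusion rule. This is a hypothetical sketch, not anything from the talk: the `fuse` function and its channel names (`speech`, `gesture`) are illustrative assumptions.

```python
def fuse(inputs: dict[str, str]) -> str:
    """Toy redundancy-aware fusion of multi-modal input channels:
    if all channels agree, trust the shared interpretation;
    if they conflict (disjoint input), fall back to clarification."""
    votes = list(inputs.values())
    if votes and all(v == votes[0] for v in votes):
        return votes[0]        # redundant channels reinforce each other
    return "clarify"           # disjoint channels: ask the user

print(fuse({"speech": "open", "gesture": "open"}))   # agreement -> "open"
print(fuse({"speech": "open", "gesture": "close"}))  # conflict -> "clarify"
```

Even this toy version shows why overlap is useful: agreement between channels raises confidence, while disagreement is itself a signal.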
Context-Aware
• Where am I?
• What room?
• What noise level?
• What lighting level?
• What elevation?
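The "Where am I?" questions above could be gathered into a single context record that an adaptive UI consults. A hypothetical Python sketch; the names (`AmbientContext`, `choose_output_mode`) and thresholds are illustrative assumptions, not from the talk:

```python
from dataclasses import dataclass

@dataclass
class AmbientContext:
    """Snapshot of the 'Where am I?' questions from the slide."""
    room: str            # what room?
    noise_db: float      # what noise level? (decibels)
    lux: float           # what lighting level? (lux)
    elevation_m: float   # what elevation? (meters)

def choose_output_mode(ctx: AmbientContext) -> str:
    """Toy policy: adapt the output medium to the ambient context."""
    if ctx.noise_db > 70:      # too loud for audio cues
        return "visual"
    if ctx.lux < 10:           # too dark for a visual display
        return "audio"
    return "audio+visual"      # quiet and well lit: use both, redundantly

print(choose_output_mode(AmbientContext("lab", 75.0, 300.0, 12.0)))  # -> visual
```

The point of the sketch is only that each sensed dimension feeds a concrete UI decision.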
Context-Aware (2)
• Who am I?
• Biometrics
• Best tailoring, extreme interfaces
• Who else is nearby?
Context-Aware (3)
• What is around me?
• Other devices (IR/RF communication networks)
• True plug-n-play (Handspring).
Context-Aware (4)
• How do you know what I mean?
• The clutching problem
• How far back does context apply?
• Affects all the other issues (what, when, etc.)
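"How far back does context apply?" can be phrased as a time-windowed context history. A hypothetical sketch, assuming a fixed expiry window (the class name and the 30-second window are illustrative, not from the talk):

```python
from collections import deque

class ContextHistory:
    """Keep recent context events; only events within `window_s` seconds
    of 'now' still apply -- a toy answer to 'how far back does
    context apply?'."""
    def __init__(self, window_s: float = 30.0):
        self.window_s = window_s
        self.events = deque()              # (timestamp, event) pairs

    def observe(self, t: float, event: str) -> None:
        self.events.append((t, event))

    def active(self, now: float) -> list[str]:
        # Drop events older than the window; keep the rest in order.
        while self.events and now - self.events[0][0] > self.window_s:
            self.events.popleft()
        return [e for _, e in self.events]

h = ContextHistory(window_s=30.0)
h.observe(0.0, "pointed at lamp")
h.observe(40.0, "said 'turn it on'")
print(h.active(now=45.0))  # the pointing gesture has expired
```

A fixed window is of course the crudest possible answer; the slide's point is that the right expiry likely depends on what, when, and the other context questions.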
2 modest proposals
• Get 20 Itsys (or similar technology) and "embrace and extend" them.
• Lego Mindstorms
• Study sign language
Hmmm….
• When two fluent speakers of North American Indian Sign Language communicate:
• 60% redundant
• 30% augment
• 10% disjoint!