Chapter 6 Input Technologies and Techniques
Input device choice example: pen vs. touch • + accurate pointing • + drawing, handwriting • + extra functions on the pen (e.g. buttons) • − the pen occupies one hand • − the pen must always be available • − starting the interaction is slow. Choose the input device for your HCI task!
Interaction technique • Input devices sense physical properties of people, places, or things. • An interaction technique provides a way for users to accomplish tasks by combining input with appropriate feedback (e.g. pinch-to-zoom). • A user interface consists of input device(s), interaction technique(s), the user's mental model, etc.
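As a concrete illustration of an interaction technique, the zoom factor in pinch-to-zoom can be derived from how far apart two touch points move. This is a minimal sketch (the function name and coordinates are illustrative, not from the original slides):

```python
import math

def pinch_zoom_factor(p1_start, p2_start, p1_end, p2_end):
    """Scale factor implied by two touch points moving apart or together."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    start = dist(p1_start, p2_start)
    end = dist(p1_end, p2_end)
    if start == 0:
        return 1.0  # degenerate start: no defined scale
    return end / start

# Fingers move from 100 px apart to 200 px apart -> content doubles in size.
factor = pinch_zoom_factor((0, 0), (100, 0), (0, 0), (200, 0))
```

The feedback loop (continuously rescaling the content as the fingers move) is what turns this raw sensor input into an interaction technique.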
Input Device Properties • linear position vs. motion vs. force vs. angle • set of the device's states • number of dimensions • metrics • pointing speed and accuracy, error rates • device acquisition time: time to move one's hand to the device • learning time, cognitive comfort
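Pointing speed and accuracy are commonly quantified with Fitts's law, which predicts movement time from target distance and width. A minimal sketch, assuming the Shannon formulation and hypothetical fitted constants `a` and `b` (the constants below are illustrative, not measured values from the slides):

```python
import math

def fitts_movement_time(a, b, distance, width):
    """Fitts's law: predicted time (s) to acquire a target of the given
    width at the given distance; a and b are empirically fitted constants."""
    index_of_difficulty = math.log2(distance / width + 1)  # bits
    return a + b * index_of_difficulty

# Hypothetical constants a = 0.1 s, b = 0.15 s/bit:
t = fitts_movement_time(0.1, 0.15, distance=512, width=32)
```

Comparing the fitted `b` (seconds per bit of difficulty) across devices is one standard way to compare pointing performance.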
Direct input devices • unified input and display surface • touchscreens or display tablets operated with a pen • Disadvantages: • lack of buttons • occlusion • scale poorly to large screens
Direct input devices • special input devices (pen) • soft-touch vs. hard-touch • multi-touch • force sensing (7 levels) • parallax (<2mm), lag (<100ms)
Direct input devices • gestures, e.g. SecondLight • fingerprint detection • finger identification
Indirect input devices • input and display are on different devices • mouse, keyboard, etc. • Disadvantages: • the input position has to be echoed on the display (e.g. a cursor) • no explicit feedback
Indirect input devices • absolute position (digitizing tablets) vs. relative position (mouse) • touchpad • a touchpad and a touchscreen are different! • trackball • joystick • isometric vs. isotonic
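The absolute vs. relative distinction can be sketched in a few lines: an absolute device maps device coordinates directly to screen coordinates, while a relative device accumulates scaled deltas. A minimal sketch with illustrative function names (the gain parameter stands in for the control-display ratio):

```python
def absolute_to_screen(x, y, tablet_w, tablet_h, screen_w, screen_h):
    """Digitizing tablet: device coordinates map directly onto the screen."""
    return (x / tablet_w * screen_w, y / tablet_h * screen_h)

def relative_move(cursor, dx, dy, gain, screen_w, screen_h):
    """Mouse/touchpad: deltas are scaled by a gain and accumulated,
    clamped to the screen bounds."""
    nx = min(max(cursor[0] + dx * gain, 0), screen_w - 1)
    ny = min(max(cursor[1] + dy * gain, 0), screen_h - 1)
    return (nx, ny)

# Tablet center always lands at screen center; a mouse delta depends on
# where the cursor currently is.
pos_abs = absolute_to_screen(500, 250, 1000, 500, 1920, 1080)
pos_rel = relative_move((100, 100), 10, -5, 2.0, 1920, 1080)
```

This is also why a touchpad (relative) and a touchscreen (absolute) behave differently even though both sense finger contact.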
Text entry • QWERTY (1868): 30–80 wpm • alternative keyboard layouts offer only a ~5% performance gain • touchscreen keyboards • size of keys • "attention blinking" • software enhancement: word prediction, gestures • handwriting (paper and pencil ~15 wpm) • may be useful for very short texts (single words) • speech recognition (speech ~150 wpm)
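Word prediction, one of the software enhancements mentioned above, can be sketched as a frequency-ranked prefix lookup (the vocabulary and counts below are illustrative, not from the slides):

```python
def predict(prefix, vocabulary, k=3):
    """Return up to k vocabulary words starting with the typed prefix,
    most frequent first."""
    matches = [(word, freq) for word, freq in vocabulary.items()
               if word.startswith(prefix)]
    matches.sort(key=lambda wf: -wf[1])
    return [word for word, _ in matches[:k]]

# Hypothetical word-frequency table:
vocab = {"the": 500, "there": 120, "then": 200, "they": 310, "hello": 90}
suggestions = predict("the", vocab)  # -> ["the", "they", "then"]
```

Real predictive keyboards use n-gram or neural language models conditioned on the preceding words, but the ranking-by-likelihood idea is the same.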
Modalities of interaction • both hands • keyboard + mouse • hand-held devices • gestures • a good mental model and association are required • speech • instructions with a limited dictionary • speech-to-text
Modalities of interaction • free-space gestures: semiotic, ergotic, epistemic • whole-body input, e.g. Microsoft Kinect • Sensors…
Trends • new sensors and input devices, e.g. fingerprint instead of password • higher abstraction level of APIs, e.g. Hungarian speech-to-text is available as a module • synthesis of various modalities and input signals • machine learning • "do less, but do it well"
Chapter 7 Sensor- and recognition-based input for interaction
Sensors • Sensors convert a physical signal into an electrical signal that may be manipulated symbolically on a computer. • The cost of sensors has been decreasing • optical mouse • accelerometers (automotive air-bag systems)
Sensor inputs (diagram): Sensor1 … SensorN sense the environment; their signals feed a model of the current state of the user (user model), which serves Application1 … ApplicationN.
Sensors in HCI • Occupancy and Motion • infrared motion detectors (~10 μm) • air pressure sensors (detect a door opening) • pressure mat switches • computer vision, acoustic sensing • Range Sensing • triangulation (LED or ultrasonic) • stereo cameras
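Ultrasonic range sensing is a one-line computation: the sensor emits a pulse and times the echo, and since the pulse travels out and back, the range is half the round-trip distance. A minimal sketch (the speed-of-sound default assumes air at ~20 °C):

```python
def ultrasonic_range_m(echo_time_s, speed_of_sound=343.0):
    """Range (m) from the round-trip echo time (s): the pulse travels
    to the object and back, so divide the path by two."""
    return speed_of_sound * echo_time_s / 2.0

# An echo arriving after 10 ms puts the object about 1.7 m away.
r = ultrasonic_range_m(0.010)
```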
Sensors in HCI • Position • GPS • in-building GPS: triangulation (RF, WiFi …) • multiple cameras (position of body parts) • time-of-flight • Movement and Orientation • wearable sensors • gyroscope (MEMS)
Sensors in HCI • Touch sensors • Gaze and eye tracking • computer vision • which object is the user looking at? • Speech • challenges: background noise, speaker identification • non-verbal information (prosody, intonation, etc.)
Sensors in HCI • Gestures • hand pose, spatial trajectory of the hands, pointing to indicate an object • computer vision or wearables • Identity detection • biometric sensors: fingerprint, retina, shape of the hand, handwriting, speech… • computer vision (e.g. face detection, QR codes) • RFID
Sensors in HCI • Context • temperature, air pressure, light • Sensing affect • boredom, interest, pleasure, stress, or frustration • galvanic skin response, blood volume pulse, etc. • Brain-computer interfaces • invasive vs. non-invasive technologies • EEG: measures electrical activity at local parts of the brain using electrodes placed carefully on the scalp
Sensor inputs (diagram, refined): Sensor1 … SensorN sense the environment; SIGNAL PROCESSING turns the raw signals into the current state of the user (user model), which serves Application1 … ApplicationN.
Signal processing: time-series analysis • Preprocessing, e.g. Kalman filter • Feature extraction, e.g. Fourier transform • Classification/modeling
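The three-stage pipeline above can be sketched end-to-end. This is a deliberately simplified stand-in: a moving average replaces the Kalman filter, summary statistics replace Fourier features, and a single threshold replaces a trained classifier (all names and values are illustrative):

```python
import statistics

def moving_average(signal, window=3):
    """Preprocessing: simple smoothing (a Kalman filter would also model
    the sensor's noise and dynamics)."""
    half = window // 2
    return [statistics.mean(signal[max(0, i - half):i + half + 1])
            for i in range(len(signal))]

def extract_features(signal):
    """Feature extraction: summary statistics per window (a Fourier
    transform would yield frequency-domain features instead)."""
    return {"mean": statistics.mean(signal), "std": statistics.pstdev(signal)}

def classify(features, threshold=1.0):
    """Classification: a trivial rule; real systems use trained models."""
    return "moving" if features["std"] > threshold else "still"

# Hypothetical accelerometer magnitudes: quiet, a burst of motion, quiet.
raw = [0.1, 0.0, 0.2, 3.1, 2.9, 3.2, 0.1, 0.0]
state = classify(extract_features(moving_average(raw)))
```

Each stage can be swapped out independently, which is why the slides present them as separate steps.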
Chapter 61 Social Networks and Social Media
web 1.0 – static web (content creators and readers) • web 2.0 – users actively create content and consume it • web 3.0 – semantic web ???
• 5.6M English Wikipedia articles • 50M Flickr photos/month • 300 hours of video uploaded to YouTube every minute • 500M tweets/day
Social networks • information sharing and finding, gaming, shopping, event organisation, dating, professional forums • Recommender systems • Collaborative filtering • users trust friends' preferences more than those of unknown people
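Collaborative filtering can be sketched as: find users whose liked items overlap with yours, and recommend what they liked that you have not seen. A minimal user-based sketch with hypothetical users and items (overlap count stands in for a proper similarity measure such as cosine or Jaccard):

```python
def recommend(target, ratings, k=2):
    """User-based collaborative filtering sketch: score unseen items by
    the taste overlap of the users who liked them."""
    target_items = ratings[target]
    scores = {}
    for user, items in ratings.items():
        if user == target:
            continue
        overlap = len(target_items & items)  # shared liked items
        for item in items - target_items:    # items the target hasn't seen
            scores[item] = scores.get(item, 0) + overlap
    return sorted(scores, key=lambda i: -scores[i])[:k]

# Hypothetical data: alice and bob share two films, so bob's third film
# is recommended to alice ahead of carol's.
ratings = {
    "alice": {"film_a", "film_b"},
    "bob":   {"film_a", "film_b", "film_c"},
    "carol": {"film_d"},
}
suggestions = recommend("alice", ratings)
```

This also illustrates the slide's point about trust: recommendations are weighted toward users whose tastes demonstrably match yours.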
Filter bubble [video]
Summary • Direct and indirect input devices • Interaction modalities • Sensor types