Cosc 6326/Psych6750X Enabling Technology for Advanced Displays
Virtual reality and other advanced interactive displays • simulate and maintain a model of the world to be created or augmented • present or display the world to the user (displays and effectors) • sense the actions of the user and environmental state to enable the simulation to react (sensors)
A typical VR system has • sensors to collect information about the actions of the user • a processor to collect this information, model the virtual world and generate the output for the display devices. • displays and other sensory stimulators generate the sensory input provided to the user.
The sensation-perception-action cycle of the user is an integral part of a VR system. • Normally, when one acts in the world, feedback from the senses confirms the expected result • Current VR systems have serious limitations that constrain the ability to create high-fidelity, realistic virtual worlds.
In a sense, VR closes the user's sensorimotor loop (a minimal sketch follows) • User acts in the world • Simulation detects the action using sensors • Feedback is provided by the simulation via the displays
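A minimal sketch of this closed loop; the Sensor/Simulation/Display classes are hypothetical placeholders, not any particular VR API:

```python
class Sensor:
    def read(self):
        return {"head_pose": ...}  # tracker data for this frame

class Simulation:
    def step(self, user_input, dt):
        # update virtual world state in response to the user's actions
        return {"scene": ...}

class Display:
    def present(self, world_state):
        pass  # render graphics/audio/haptics back to the user

def run(sensor, simulation, display, dt=1/60):
    while True:
        user_input = sensor.read()                # 1. sense the user's action
        state = simulation.step(user_input, dt)   # 2. simulate the reaction
        display.present(state)                    # 3. feed the result back
```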
Hardware • Need to provide real-time updates to the user • Processor speeds and technology have improved exponentially, although modelled VR worlds are still limited • Recent trend: a move from 'big iron' to clusters of PCs
Software: • input, simulation, rendering • often done in parallel loops (more parallelization possible) • input loop handles interfacing with sensors to get current state
Simulation loop: • for each time interval, simulate the behaviour of objects in the virtual environment • physical behaviour, reaction to user actions, higher-level behaviour (intelligent entities, avatars …), collision detection … • real time: feedback to the user must be timely (e.g. 60 Hz) • distributed, multiprocessor pipelines …
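As an illustration, a minimal fixed-timestep loop that holds the 60 Hz rate mentioned above; the world/object interfaces are hypothetical placeholders:

```python
import time

def simulation_loop(world, dt=1/60):
    """Advance every object by dt, resolve collisions, and keep
    feedback timely (~60 Hz)."""
    next_tick = time.perf_counter()
    while world.running:
        for obj in world.objects:
            obj.update(dt)             # physics, scripted/AI behaviour
        world.resolve_collisions()     # collision detection/response
        next_tick += dt
        sleep = next_tick - time.perf_counter()
        if sleep > 0:
            time.sleep(sleep)          # hold the 60 Hz update rate
        else:
            next_tick = time.perf_counter()  # fell behind; resync
```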
Rendering loop: • generate the displays to present: graphics, haptics, audio • modern raster graphics uses a number of stages to convert the world model to a raster image • transformation, projection • lighting, shading • texture mapping • rasterisation • anti-aliasing • visibility, clipping, culling … • recently, substantial hardware support on fast, low-cost graphics cards – the 'graphics pipeline'
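To make the transformation and projection stages concrete, a small numpy sketch that pushes one world-space vertex through an OpenGL-style model-view-projection transform, perspective divide, and viewport mapping (the matrix convention is standard OpenGL; the function names are our own):

```python
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """Standard OpenGL-style perspective projection matrix."""
    f = 1.0 / np.tan(np.radians(fov_y_deg) / 2)
    return np.array([
        [f / aspect, 0, 0, 0],
        [0, f, 0, 0],
        [0, 0, (far + near) / (near - far), 2 * far * near / (near - far)],
        [0, 0, -1, 0],
    ])

def project_vertex(v_world, view, proj, width, height):
    """World-space vertex -> clip space -> NDC -> pixel coordinates."""
    clip = proj @ view @ np.append(v_world, 1.0)  # transformation, projection
    ndc = clip[:3] / clip[3]                      # perspective divide
    x = (ndc[0] + 1) / 2 * width                  # viewport mapping
    y = (1 - ndc[1]) / 2 * height
    return x, y

# e.g. a point 5 m in front of an identity camera lands at screen centre:
print(project_vertex(np.array([0.0, 0.0, -5.0]),
                     np.eye(4), perspective(60, 4/3, 0.1, 100), 640, 480))
```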
Low-end HMDs • Targeted for personal entertainment (games, DVDs, …) • Sony Glasstron, Olympus Eyetrek • currently NTSC, PAL, VGA resolution. HDTV?
VR HMDs • Sutherland's HMD was boom-supported; VR often needs free head motion • Characterizing HMDs • Configuration: projection versus direct viewing • Optics: simple magnifier vs. compound microscope • Display image source: CRT, DLP, LCD … • Opaque or see-through
VR HMD Projection type • head-mounted optics • external electronics & projection display • CAE FOHMD • images generated by high-resolution data projectors • a coherent fibre-optic bundle and optics direct the image to the eyes
Direct viewing: many modern HMDs use head-mounted miniature displays • CRT: e.g. N-Vision, Kaiser (KEO) • LCD: e.g. Virtual Research, KEO • laser retinal scanning • DMDs • FEDs …
HMD Optics • Simple magnifier • single magnifying lens, short optical path • no exit pupil formed • simple, inexpensive • Compound optics • several lenses: eyepiece, objective • exit pupil formed; must align with the eye's entrance pupil • more complicated, longer optical path, permits focusing
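A standard textbook relation (not from the slides) for the simple-magnifier configuration: with the virtual image at infinity and the conventional 250 mm near point, the angular magnification of a lens of focal length f is

```latex
M = \frac{250\ \mathrm{mm}}{f}
```

So, for example, a 50 mm eyepiece gives roughly 5x magnification; shorter focal lengths magnify more but shorten the optical path margin and worsen aberrations.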
See-through HMD capability • Non-see-through • No need for an optical combiner • Eye sees only the virtual image • Pure virtual reality applications
See-through HMD capability • Optical see-through • images of the real and virtual worlds optically superimposed • need optical combiner (transmission ratio?) • useful for AR, wearables; similar technology for heads-up displays • distortions and time-lags a problem • direct view of real world
See-through HMD capability • Video See-through • non-see-through HMD plus ‘scene’ cameras • the virtual world is superimposed on a video image of the real world • electronic (not optical) combiner • can match time delays and distortions • system has access to user’s view • low resolution image of the real world
Figures of Merit/Design factors • field of view • resolution (tradeoff with FOV) • luminance, contrast • colour • monocular, biocular, binocular • exit pupil size, eye relief, adjustments (inter-pupillary distance, focus) • distortion
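The FOV/resolution tradeoff can be made concrete with a back-of-the-envelope figure of merit, pixels per degree; the numbers below are illustrative, not the specs of any particular HMD:

```python
# Angular resolution as a figure of merit.
def pixels_per_degree(h_pixels, h_fov_deg):
    return h_pixels / h_fov_deg

print(pixels_per_degree(1280, 60))  # ~21 px/deg for a wide-FOV HMD
print(pixels_per_degree(640, 24))   # ~27 px/deg for a narrow-FOV HMD
# Human acuity is roughly 1 arcmin (~60 px/deg), so widening the FOV at a
# fixed pixel count directly costs perceived resolution.
```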
Projection-based displays • Walls • large-screen interactive displays • suggested for collaborative design • curved, flat, wrap-around, or dome screens • e.g. Elumens Vision Dome • Desks • ImmersaDesk (University of Illinois EVL), …
CAVE/CAVE-like • University of Illinois EVL, Fakespace, Trimension (ReaCToR), Mechdyne (SSVR)
Large-format immersive displays • Large-format film, domes, planetariums, ride simulators • SEOS, Trimension, Spitz, Disney Quest, IMAX • immersive but often not very interactive (large groups) • used in simulators; very expensive ($$$) for VR • the Mechdyne V-Dome has been used for VR
Projection technology issues • projectors • cathode ray tube (CRT) • digital light processing (DLP) • D-ILA • liquid crystal display (LCD) • laser
screens • material: glass, fabric, plastic, fog! • reflectivity, gain, polarisation • inter-reflection (black screens) • structure • single vs multiple • tiling, blending • colour and luminance matching/uniformity • support for stereopsis
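Blending tiled projectors is usually done with a smooth luminance ramp across the overlap region. A hypothetical cosine edge-blend sketch (real systems also compensate for projector gamma and black level):

```python
import numpy as np

def blend_weight(x, overlap_start, overlap_end):
    """Cosine edge-blend ramp for two tiled projectors: the weight falls
    smoothly from 1 to 0 across the overlap so that the summed luminance
    of both projectors stays uniform."""
    t = np.clip((x - overlap_start) / (overlap_end - overlap_start), 0, 1)
    return 0.5 * (1 + np.cos(np.pi * t))

# The left projector fades out as the right one (1 - weight) fades in.
for x in (0.0, 0.25, 0.5, 0.75, 1.0):
    w = blend_weight(x, 0.0, 1.0)
    print(f"x={x:.2f}  left={w:.2f}  right={1-w:.2f}  sum={w + (1-w):.2f}")
```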
Audio displays: • stereophonic, surround sound • spatial sound displays • sound modeling and synthesis • haptics, tactile displays … • more on these later …
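As a small preview of spatial sound, one classic localization cue is the interaural time difference (ITD); the sketch below uses Woodworth's spherical-head approximation with an assumed 8.75 cm head radius:

```python
import numpy as np

def itd_seconds(azimuth_deg, head_radius=0.0875, c=343.0):
    """ITD for a source at the given azimuth (0 = straight ahead),
    via Woodworth's spherical-head approximation."""
    theta = np.radians(azimuth_deg)
    return (head_radius / c) * (theta + np.sin(theta))

print(itd_seconds(90) * 1e6, "microseconds")  # ~660 us for a source at the side
```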
Sensor technology is currently particularly rudimentary. • The position of a limited number of joints or limbs is normally sensed, such as the position of the head and hand. • Buttons, joysticks, etc. can also provide input.
Sensors capture only a limited range of the possible motions and have limited resolution. • Lag is a major problem with some sensors.
To generate the displays, we need to know the user's position and orientation • Need to track the user's head (hand, body …) in real time in order to respond to head (hand, body …) motion in real time • Current tracking does not measure all the degrees of freedom possible in human motion
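A minimal sketch of how a tracked head pose feeds rendering: the view matrix is simply the inverse of the head's pose matrix (numpy; the function name is our own):

```python
import numpy as np

def view_from_head_pose(R, p):
    """Build a view matrix from a tracked head pose: R is the 3x3
    head-to-world rotation and p the head position in world
    coordinates. The view matrix is the inverse of the pose."""
    view = np.eye(4)
    view[:3, :3] = R.T        # the inverse of a rotation is its transpose
    view[:3, 3] = -R.T @ p    # translate the world opposite to the head
    return view

# e.g. a head 1.7 m above the origin, looking down the default axis:
print(view_from_head_pose(np.eye(3), np.array([0.0, 1.7, 0.0])))
```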
Tracking technologies • magnetic: pulsed DC, AC, Earth's magnetic field • ultrasound • optical • GPS (outdoors) • mechanical • gyroscopes, accelerometers (www.3rdtech.com)
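Gyroscopes integrate quickly but drift, while accelerometers give a drift-free but noisy gravity reference, so the two are commonly fused. A one-axis complementary-filter sketch (the gain and sample values are illustrative):

```python
def complementary_filter(pitch, gyro_rate, accel_pitch, dt, alpha=0.98):
    """Blend the gyro's fast, low-lag integral with the accelerometer's
    slow, drift-free tilt estimate to cancel gyro drift."""
    return alpha * (pitch + gyro_rate * dt) + (1 - alpha) * accel_pitch

# accel_pitch would come from e.g. atan2(ax, az) on the accelerometer
pitch = 0.0
for gyro_rate, accel_pitch in [(0.1, 0.0), (0.1, 0.01), (0.0, 0.02)]:
    pitch = complementary_filter(pitch, gyro_rate, accel_pitch, dt=1/100)
    print(f"{pitch:.5f} rad")
```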
3D input devices • a number of 6-degree-of-freedom input devices have been proposed for 3D interaction • spaceball, 3D mice, hand/stylus tracking • isometric versus isotonic • mapping to rate versus position control (see the sketch below)
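The isometric/isotonic distinction maps naturally onto rate versus position control; a schematic sketch with hypothetical gains and units:

```python
def position_control(device_displacement, gain=1.0):
    """Isotonic device (free-moving, e.g. a tracked stylus):
    displacement maps directly to cursor/object position."""
    return gain * device_displacement

def rate_control(device_force, cursor_pos, dt, gain=0.05):
    """Isometric device (rigid, force-sensing, e.g. a spaceball):
    applied force maps to a velocity, integrated over time."""
    return cursor_pos + gain * device_force * dt

print(position_control(0.10))           # 10 cm hand motion -> 10 cm movement
print(rate_control(5.0, 0.0, dt=1/60))  # a steady 5 N push -> steady drift
```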
Gloves/Motion capture • one of the early VR input devices was the DataGlove • typically many degrees of freedom • additional tracking for position • animation/gesture recognition • e.g. Immersion CyberGrasp, the Gypsy motion-capture suit
Other input technology • speech recognition • eye gaze tracking • gesture recognition • biopotentials