Scientific Visualization • CPE 478 • Dr. Chris Buckalew, Professor • © 2008 by Chris Buckalew
Class Outline • Intro to visualization • Exploration visualization vs educational visualization • Data set types • Human perception • Data
Types of Data Sets – Integer Domain • Integer domain -> integer range • example: number of students enrolled in each year • Integer domain -> real range • example: average temperature for each month of the year • Integer domain -> 2D real range • example: (latitude, longitude) of a ship logged at each hour • Integer domain -> 3D real range • example: 3D position of an aircraft at each time step • Etc
Types of Data Sets – Real Domain • Real domain -> integer range • example: number of cars that have passed a checkpoint by time t • Real domain -> real range • example: temperature as a continuous function of time • Real domain -> 2D real range • example: 2D position of a projectile as a function of time • Real domain -> 3D real range • example: 3D flight path of an aircraft as a function of time • Etc
Types of Data Sets – 2D Real Domain • 2D real domain -> integer range • example: land-use category at each point on a map • 2D real domain -> real range • example: elevation at each point on a map (a scalar field) • 2D real domain -> 2D real range • example: wind velocity over a map (a 2D vector field) • 2D real domain -> 3D real range • example: RGB color at each point of an image • Etc
Types of Data Sets – 3D Real Domain • 3D real domain -> integer range • example: tissue type at each point in a segmented medical volume • 3D real domain -> real range • example: density at each point of a CT scan (a 3D scalar field) • 3D real domain -> 2D real range • example: horizontal wind components at each point in the atmosphere • 3D real domain -> 3D real range • example: fluid velocity at each point in a flow (a 3D vector field) • Etc
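The categories above map naturally onto data structures: the domain is the index or argument type, the range is the value type. A minimal sketch (the data values and function names are illustrative, not from the slides):

```python
# Sketch of domain -> range data-set types as plain Python data.
# All values here are made up for illustration.
from typing import Dict, Tuple

# Integer domain -> real range: e.g. average temperature per month
temp_by_month: Dict[int, float] = {1: 3.2, 2: 4.1, 3: 8.7}

# 2D real domain -> real range: a scalar field, e.g. elevation(x, y)
def elevation(x: float, y: float) -> float:
    return 100.0 - 0.5 * x - 0.25 * y   # toy linear terrain

# 2D real domain -> 2D real range: a vector field, e.g. wind(x, y)
def wind(x: float, y: float) -> Tuple[float, float]:
    return (-y, x)                      # toy rotational flow

print(temp_by_month[2])       # 4.1
print(elevation(10.0, 20.0))  # 90.0
```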
Sensory and Arbitrary Representation • Sensory representations (like graphs) • understood without training • understood immediately • subject to persistent illusions • cross-cultural validity • Arbitrary representations (like letters) • hard to learn • easy to forget • culture-dependent • formally powerful (compact representation) • rapidly changeable
Perceptual Processing • Stages in perception: • Stage 1: parallel processing to extract low-level properties of the scene, such as color, features, orientation, texture, and movement • Stage 2: pattern perception – contours, similar regions, motion patterns • Stage 3: sequential goal-directed processing – example: finding a route on a map
The Light Spectrum • Electromagnetic spectrum “DC to daylight” (actually well beyond) • Radio, microwave, radar are long wavelength, low frequency • Infrared, visible, ultraviolet bracket human sensitivity • X-ray, gamma, cosmic radiation short wavelength, high frequency • Human visual system sensitive to 400 (violet) to 700 (red) nanometers wavelength.
Simulating Illumination for Visualization • Render surfaces with the illumination equation: I = In·Rd·cos θ + In·ks·(cos φ)^m + Ia·ka • Diffuse component (In·Rd·cos θ) helps to discern the shape and relationships of objects • Ambient component (Ia·ka) fills in shadow voids • Specular component (In·ks·(cos φ)^m) helps to discern fine surface details
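The illumination equation above can be sketched directly in code; the coefficient values in the example call are made up for illustration:

```python
import math

# Sketch of the slide's illumination equation:
#   I = In*Rd*cos(theta) + In*ks*cos(phi)**m + Ia*ka
# Symbols follow the slide; the numbers in the call below are assumptions.
def illumination(In, Ia, Rd, ks, m, ka, theta, phi):
    diffuse  = In * Rd * math.cos(theta)     # shape and relationship cues
    specular = In * ks * math.cos(phi) ** m  # fine surface detail
    ambient  = Ia * ka                       # fills in shadow voids
    return diffuse + specular + ambient

# Light hitting the surface head-on, viewer on the mirror direction:
print(illumination(In=1.0, Ia=0.2, Rd=0.6, ks=0.3, m=20,
                   ka=0.5, theta=0.0, phi=0.0))   # 0.6 + 0.3 + 0.1 = 1.0
```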
Eye Lens System • Lens system: lens and cornea • cornea is fixed-focus, lens is variable-focus • Visual angle: angle = 2*atan(h/(2*d)), h = size of object, d = distance to object • Distance is measured from the optical center of the lens system, about 17 mm from the retina • Focal length f of the eye: 1/f = 1/d + 1/r, r = distance from lens to image (the retina) • For distances in meters, 1/f is the power of the lens in diopters • power of the eye's lens system is about 59 diopters • Power of the lens system is approximately the sum of the powers of lens and cornea • cornea approx 40 diopters • lens variable, with a range of 12 diopters, decreasing by about 2 diopters per decade of age • Depth of field: the range that is in focus when the eye is focused at d • eye has a 1/3 diopter tolerance, assuming a 3 mm pupil width • decreases with a larger pupil
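A sketch of the visual-angle and lens-power formulas above (the 2 cm object size and 60 cm viewing distance in the example are assumed values, not from the slides):

```python
import math

# Visual angle of an object of size h (metres) at distance d (metres).
def visual_angle_deg(h, d):
    return math.degrees(2 * math.atan(h / (2 * d)))

# Lens power in diopters: 1/f = 1/d + 1/r, with r ~ 17 mm to the retina.
def lens_power(d, r=0.017):
    return 1 / d + 1 / r

# A 2 cm object at arm's length (~60 cm):
print(round(visual_angle_deg(0.02, 0.60), 2))   # 1.91 degrees
# Eye focused at infinity: 1/d -> 0, so power ~ 1/0.017 ~ 59 diopters
print(round(lens_power(float('inf')), 1))       # 58.8
```

The second print matches the slide's figure of roughly 59 diopters for the whole lens system.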
Focus Issues with Computer Displays • Augmented Reality – real image is combined with synthetic image • HUDs (Heads-Up Displays) used for aircraft are focused at infinity • eyes focus closer anyway, which causes overestimation of distance to real objects, resulting in crashes • HMDs (Head-Mounted Displays) have problems with head and eye movements to direct the gaze • If HMD is only over one eye then “binocular rivalry” can cause parts of the real image to erase parts of the synthetic and vice versa • Virtual Reality displays don’t normally simulate depth-of-field • must detect the synthetic object that the user is looking at • blur out other objects too near and too far • detect pupil diameter to get proper depth of field
Chromatic Aberration • Some people see the red closer than the blue, but some see the opposite effect
Pupil Aperture and Depth of Field • Approximately 3mm, but variable • Controls amount of light that enters the eye • Larger aperture opening reduces the depth of field • Smaller aperture more closely approximates a “pin-hole” camera, for which depth of field is infinite • Some depth-of-field examples: • focus distance=50 cm; near dist is 43 cm and far dist is 60 cm • focus distance=100 cm; near dist is 75 cm and far dist is 150 cm • focus distance=200 cm; near dist is 120 cm and far dist is 600 cm • focus distance=300 cm; near dist is 150 cm and far dist is infinity
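The depth-of-field figures above follow from the 1/3-diopter accommodation tolerance mentioned earlier: a point stays acceptably sharp while its dioptric distance is within 1/3 D of the focus distance. A sketch that reproduces the table:

```python
# Near/far limits of the eye's depth of field, assuming points within
# 1/3 diopter of the focus distance remain acceptably sharp (3 mm pupil).
def depth_of_field(focus_cm, tolerance=1/3):
    d = focus_cm / 100.0                   # focus distance in metres
    near = 1.0 / (1.0 / d + tolerance)     # nearer limit: add diopters
    far_p = 1.0 / d - tolerance            # farther limit: subtract diopters
    far = 1.0 / far_p if far_p > 1e-12 else float('inf')
    return round(near * 100), round(far * 100) if far != float('inf') else far

for f in (50, 100, 200, 300):
    print(f, depth_of_field(f))
# 50  -> (43, 60)
# 100 -> (75, 150)
# 200 -> (120, 600)
# 300 -> (150, inf)
```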
Receptors • 100 million rods (intensity) and 6 million cones (color) • Rods shut down in normal light; cones shut down in low light • Fovea is the center of the retina and contains only cones • Fovea covers 2 degrees of arc, but the sharpest vision is in the central ½ degree (your thumbnail at arm's length is about one degree of arc) • Cones are 20-30 seconds of arc apart in the fovea (a second is 1/3600 degree) • Cones form an irregular pattern in the retina – natural antialiasing! • Many rods contribute to a single visual signal to the brain; fewer cones do • Pooling rods and cones into a single nerve signal facilitates edge detection • One million axons in the optic nerve
Acuities • Distinguish two points (with a gap between them): 1 minute of arc • Distinguish black and white bars: 1-2 minutes of arc • Reading letters: 5 minutes of arc • Resolving objects at different depths: 10 seconds of arc • Determining that two line segments are collinear: 10 seconds of arc • The last two are superacuities; superacuity can also arise in the time domain
Acuity Distribution and the Ideal Display • Obviously best at center of fovea, falls off away from center • At 10 degrees off-center, down to 20% of max acuity • At 30 degrees off-center, less than 10% of max • At 50 degrees off-center, less than 5% of max • Half of brain area for receiving visual signals is used for central 3% of visual field • Small screen with small pixels=>most pixels in view of fovea, but pixels on edge of screen are “too small” – megapixel display, 10cm wide, at arm’s length is equal to fovea resolution • Large screen with large pixels=>pixels in center of screen are way too big, pixels on edge may be OK • Ideal? Small pixels where we’re looking, bigger pixels elsewhere.
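A rough check of the "megapixel display, 10 cm wide, at arm's length" claim (the 1000-pixel width and 60 cm viewing distance are assumptions, not stated on the slide): each pixel then subtends about half an arc-minute, comparable to foveal cone spacing.

```python
import math

# Visual angle, in arc-minutes, subtended by one pixel of a display
# of the given width (metres) and horizontal resolution, viewed at
# the given distance (metres). All example values are assumptions.
def pixel_arcmin(screen_width_m, pixels_across, view_dist_m):
    pixel = screen_width_m / pixels_across
    return math.degrees(math.atan(pixel / view_dist_m)) * 60

# 10 cm wide, 1000 pixels across, viewed at ~60 cm (arm's length):
print(round(pixel_arcmin(0.10, 1000, 0.60), 2))   # 0.57 arc-minutes
```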
Visualizing Real Domain Data • R->R: graph, etc. • R->2R: a parametric curve in the plane (e.g., a trajectory traced over time)
CIE Chromaticity • Combinations of colors are contained in the convex hull of those colors • The spectrum locus is the curved boundary representing monochromatic light – all visible colors fall within this boundary • The purple boundary connects the two ends of the spectrum • The curve inside the diagram is the black-body curve – the color of a black-body radiator at increasing temperatures; stars are good approximations of black-body radiators • A line between the white point and the spectrum locus shows decreasing saturation • A complementary color lies on the opposite side of the white point • Not perceptually accurate, but better than HSV
Fun Color Facts • In more than 100 languages, words for colors are consistent: • black and white are most common • then red, followed by green or yellow, then blue, then brown • then pink, purple, orange, and gray in no particular order • Humans are extremely accurate at setting a pure yellow • For pure green, 2/3 of people set it at 514 nm and 1/3 at 525 nm • Setting hues is independent of brightness • 1% of females and 10% of males suffer some degree of red-green color blindness
Color Opponent Process Model • Light information is grouped into channels between the cones and the visual cortex, so some color organization is done before the brain starts to work on it • Three channels • Luminance (R+G+B) • Red-Green (R-G) • Yellow-Blue ((R+G) – B) • Many color names are combinations of these channels... • reddish-gray (pink), greenish-blue (cyan), reddish-yellow (orange) • ...but not from same channel • reddish-green? bluish-yellow? whitish-black?
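The three channels can be written down directly; this is the slide's simple weighting, not a calibrated opponent-color model:

```python
# Opponent channels from (R, G, B) in [0, 1], using the slide's
# simple form: luminance = R+G+B, R-G, and (R+G)-B.
def opponent_channels(r, g, b):
    luminance   = r + g + b
    red_green   = r - g
    yellow_blue = (r + g) - b
    return luminance, red_green, yellow_blue

# Pure yellow: strongly positive on the yellow-blue channel
print(opponent_channels(1.0, 1.0, 0.0))   # (2.0, 0.0, 2.0)
# Pure blue: strongly negative on the yellow-blue channel
print(opponent_channels(0.0, 0.0, 1.0))   # (1.0, 0.0, -1.0)
```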
Color is not very good! • R-G and Y-B channels each carry only 1/3 the information of the luminance channel – example next slide • Stereo depth sensitivity is based primarily on luminance information • Motion perception is based primarily on luminance information • Shape perception is based primarily on luminance information • Lesson: important motion, shape, and detail information shouldn't rely entirely on color differences
It is very difficult to read text that is isoluminant with its background color. If clear text material is to be presented it is essential that there be substantial luminance contrast with the background color. Color contrast is not enough. This particular example is especially difficult because the chrominance difference is in the yellow-blue direction. The only exception to the requirement for luminance contrast is when the purpose is artistic effect and not clarity.
Continuous Pseudocolor • Example color scales (figures on the slides): • Greyscale • Saturation • Spectrum • Luminance
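Two of the scales above, greyscale and spectrum, can be sketched as functions from a normalized scalar t in [0, 1] to RGB (the piecewise-linear spectrum ramp below is an illustrative choice, not the slides' exact map):

```python
# Greyscale pseudocolor: map scalar t directly to equal R, G, B.
def greyscale(t):
    return (t, t, t)

# Spectrum pseudocolor: piecewise-linear blue -> cyan -> green ->
# yellow -> red ramp (an illustrative "rainbow" map).
def spectrum(t):
    if t < 0.25:  return (0.0, 4 * t, 1.0)
    if t < 0.50:  return (0.0, 1.0, 1.0 - 4 * (t - 0.25))
    if t < 0.75:  return (4 * (t - 0.50), 1.0, 0.0)
    return (1.0, 1.0 - 4 * (t - 0.75), 0.0)

print(greyscale(0.5))   # (0.5, 0.5, 0.5)
print(spectrum(0.0))    # blue: (0.0, 0.0, 1.0)
print(spectrum(1.0))    # red:  (1.0, 0.0, 0.0)
```

Note the spectrum map varies hue but not luminance monotonically, which is one reason rainbow maps can mislead, per the luminance-channel points above.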
Depicting Intervals • Banded color: • Contours:
Ratios • Difficult to convey quantitative values perceptually • A is twice as big as B? A is as much negative as B is positive? • Easier to convey qualitative differences
Two Color Variables • Hard to quickly understand
Getting the User's Attention • Users easily see things they expect to see; they less easily see unexpected things • Useful field of view (UFOV): the region around fixation from which information can be taken in; it widens or narrows to keep attention on a constant number of targets • Tunnel vision: narrowing of the UFOV due to stress (high cognitive load) • The more stress, the narrower the UFOV • The same principle applies to audio input • Pilot alarms are a good example • Moving targets are more easily detected outside the UFOV
Preattentive Processing • Preattentive processing: a symbol takes < 10 ms per item to detect • If detection takes some "thinking", it requires > 40 ms per item • Example: 08909684209457598173245875440698823457804958 45908502938450595845098545098243509845843059 43909878768764985609456927564095657598745349 05825847875609458609234785982754585525058054 (on the original slide this block appears a second time with the target digits colored, so they pop out preattentively)
Preattentive Differences • Form: • line orientation, line length, line width, line collinearity, size, curvature, spatial grouping, blur, added marks, numerosity • Color: • hue, intensity • Motion: • flicker, direction of motion • Spatial position: • 2D position, stereoscopic depth, convex/concave shape
Combinations • Searching for a conjunction of features is sometimes slower – example: • Faster if the dimensions are separable:
Highlighting • Easy to highlight simple visualizations – use preattentive cues • For a complex visualization, use whichever graphical dimension is otherwise least used • Example: blur
Textures in Visualization • Dimensions: orientation, scale, contrast • Example:
Texture Differences • Orientation difference • Orientation and size • Contrast
Texture Example • Four dimensions:
Another Texture Example • Six dimensions: arrow direction and area encode velocity, the color of the ellipses encodes vorticity, and the shape and orientation of the ellipses represent the rate of deformation