MSc in Computer Science by Research. Project Viva. Colour: an algorithmic approach. Thomas Bangert tb300@eecs.qmul.ac.uk
Understanding how the visual system processes information. Visual system: • about 30% of cortex • most studied part of the brain • best understood part of the brain
Image sensors • Binary sensor array • Luminance sensor array • Multi-Spectral sensor array
Where do we start? We first need a model of what light information means. Any visual system starts with a sensor: What kind of information do these sensors produce? Let’s first look at sensors we have designed!
Sensors we build: a 2-D array of sensor elements indexed by X and Y.
The Pixel. Sensor elements may be: • Binary • Luminance • RGB. The fundamental unit of information!
The Bitmap: 2-D space represented by an integer array.
What information is produced? A 2-D array of pixels: • Black & white pixel: a single luminance value, usually 8-bit • Colour pixel: 3 colour values, usually 8-bit each
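A minimal sketch of the two pixel formats just described, using NumPy arrays; the image size, dtype and sample values are illustrative assumptions, not part of the slides.

```python
import numpy as np

# Black & white bitmap: a 2-D array of 8-bit luminance values (0-255).
grey = np.zeros((480, 640), dtype=np.uint8)   # rows x columns
grey[100, 200] = 255                          # one fully bright pixel

# Colour bitmap: the same 2-D space, but each pixel holds 3 colour values (R, G, B).
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
rgb[100, 200] = (255, 0, 0)                   # one pure red pixel

print(grey[100, 200], rgb[100, 200])
```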
Where we need to start: the fundamentals of the sensor
Human Visual System (HVS) The fundamentals!
The Sensor. 2 systems: day sensors (cones) & night sensors (rods). To simplify, we ignore the night-sensor system. Cone sensors are very similar to the RGB sensors we design for cameras.
BUT: the sensor array is not ordered; the arrangement is random. Note: very few blue sensors, and none in the centre.
First Question: What information is sent from the sensor array to the visual system? There is a very clear division between the sensor & pre-processing (front of brain) and the visual system (back of brain), connected by a very limited communication link.
Receptive Fields All sensors in the retina are organized into receptive fields Two types of receptive field. Why?
What does a receptive field look like? In the central fovea it is simply a pair of sensors. Always 2 types: • plus-centre • minus-centre
What do retinal receptive fields do? They produce an opponent value: simply the difference between 2 sensors. This means it is a relative measure, not an absolute measure, and no difference = no information sent to the brain.
Sensor Input: Luminance Levels. It is usual to code 256 levels of luminance. Linear: Y. Logarithmic: Y'.
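A sketch of the difference between the two codings, assuming 256 levels and luminance scaled to 0..1; the particular logarithmic mapping used here is an illustrative choice, not the transfer curve of the HVS or of any standard.

```python
import numpy as np

LEVELS = 256

def encode_linear(y):
    """Linear code Y: equal steps in physical luminance (y in 0..1)."""
    return int(round(y * (LEVELS - 1)))

def encode_log(y, eps=1e-3):
    """Logarithmic code Y': equal steps in luminance ratios,
    so more code values are spent on the dark end (y in 0..1)."""
    return int(round((np.log(y + eps) - np.log(eps))
                     / (np.log(1.0 + eps) - np.log(eps)) * (LEVELS - 1)))

for y in (0.01, 0.1, 0.5, 1.0):
    print(f"Y = {y:4.2f}   linear: {encode_linear(y):3d}   log: {encode_log(y):3d}")
```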
Receptive Field Function (diagram of plus-centre and minus-centre fields, with min zone, max zone and max-min function). Output is the difference between the average of the centre and the max/min of the surround.
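A minimal sketch of the opponent computation these receptive fields perform, reduced to the foveal case of a single sensor pair; the numeric sensor values are made up for illustration.

```python
def receptive_field_pair(centre, surround):
    """Plus-centre and minus-centre receptive fields over the same pair of sensors.

    Each field, like a neuron, can only signal a non-negative value:
    the plus-centre field fires when the centre is brighter than the surround,
    the minus-centre field fires when it is darker.
    """
    plus_centre = max(centre - surround, 0)
    minus_centre = max(surround - centre, 0)
    return plus_centre, minus_centre

def opponent_value(centre, surround):
    """Together the two fields form a channel carrying one signed contrast value.
    No difference means zero, i.e. no information sent to the brain."""
    plus, minus = receptive_field_pair(centre, surround)
    return plus - minus

print(opponent_value(200, 120))   # +80: centre brighter than surround
print(opponent_value(120, 200))   # -80: centre darker than surround
print(opponent_value(150, 150))   #   0: nothing to report
```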
Dual response to gradients. Why? Often described as a second derivative / zero crossing.
Abstracted: neurons can only produce positive values. A dual +/- pair produces positive & negative values. Together they are called a channel, which produces signed values (a co-ordinate).
HVS Luminance Sensor, idealized: a linear response in relation to wavelength. Under ideal conditions it can be used to measure wavelength.
Spatially Opponent. HVS: luminance is always measured by taking the difference between two sensor values, producing a contrast value. This is done twice, to get a signed contrast value.
Moving from Luminance to Colour • Primitive visual systems were in b&w • Night vision remains b&w • Evolutionary path: • Monochromacy • Dichromacy (most mammals, e.g. the dog) • Trichromacy (birds, apes, some monkeys) • Vital for evolution: backwards compatibility
Electro-Magnetic Spectrum Visible Spectrum Visual system must represent light stimuli within this zone.
Colour Vision: Young-Helmholtz Theory. Argument: sensors are RGB, therefore the brain is RGB. A 3-colour model.
Hering colour opponency model. Fact: we never see reddish green or yellowish blue. Therefore: colours must be arranged in opponent pairs: Red/Green and Blue/Yellow. A 4-colour model.
How to calculate spectral frequency with 2 luminance sensors. Roughly speaking: measure the response of one sensor relative to a reference sensor.
The ideal light stimulus: monochromatic light. It allows frequency to be measured in relation to a reference.
Problem: natural light is not ideal. • The light stimulus might not activate the reference sensor fully. • The light stimulus might not be fully monochromatic, i.e. there might be white mixed in.
Solution: a 3rd sensor is used to measure equiluminance (the white component), which is subtracted. The reference sensor can then be normalized.
Equiluminance & Normalization. Also called Saturation and Lightness. • Must be removed first, before opponent values are calculated. • Then the opponent value = spectral frequency. • The values must be preserved, otherwise information is lost.
A 4-sensor design: 2 opponent pairs • only 1 of each pair can be active • the min sensor gives equiluminance
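A sketch of the correction-then-opponency pipeline described on the last few slides, under illustrative assumptions: four sensor values in 0..1, the minimum reading taken as the equiluminant (white) component, and the remainder normalized before the two opponent values are formed.

```python
def opponent_from_four_sensors(r, y, g, b):
    """Idealized 4-sensor design with two opponent pairs (R/G and B/Y).

    The smallest reading is taken as equiluminance (the white component) and
    subtracted; the remainder is normalized so the opponent values depend on
    wavelength rather than on intensity. Inputs are assumed to lie in 0..1.
    """
    white = min(r, y, g, b)                       # equiluminance (saturation correction)
    r, y, g, b = r - white, y - white, g - white, b - white
    peak = max(r, y, g, b)                        # lightness correction
    if peak > 0:                                  # pure grey: nothing left to normalize
        r, y, g, b = r / peak, y / peak, g / peak, b / peak
    rg = r - g                                    # signed red-green opponent value
    by = b - y                                    # signed blue-yellow opponent value
    return rg, by, white, peak

print(opponent_from_four_sensors(0.9, 0.4, 0.1, 0.0))   # reddish stimulus
print(opponent_from_four_sensors(0.2, 0.2, 0.2, 0.2))   # pure grey: rg = by = 0
```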
What does a colour opponent channel look like? Just like the luminance contrast opponent channel. Each colour opponent channel codes for 2 primary colours, giving a total of 4 primary colours.
What is Colour? Colour is calculated exactly the same way as luminance contrast; the only difference is that the spectral range of the sensors is modified. The colour channels are RG and BY. Uncorrected colour values are contrast values, but with white subtracted and normalized: Colour is Wavelength!
How many sensors? 4 primary colours require 4 sensors!
The human retina only has 3 sensors! What to do? Because of opponency, when R=G the RG colour channel is 0. Why not pair R and G and reuse them as a Yellow sensor? Yellow can be R=G.
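A sketch of that reuse: a fourth (yellow) value is synthesized from the R and G readings. Taking the smaller of the two, so that yellow peaks when R = G and both are high, is an illustrative assumption rather than a claim about the actual retinal wiring; the resulting 4-tuple could then be fed into the 4-sensor sketch shown earlier.

```python
def four_from_three(r, g, b):
    """Derive a fourth (yellow) value from the three cone readings.

    Illustrative choice: yellow responds when R and G respond together,
    so take the smaller of the two readings.
    """
    return r, min(r, g), g, b   # (R, Y, G, B)

print(four_from_three(0.6, 0.6, 0.1))   # strong yellow component
print(four_from_three(0.8, 0.1, 0.1))   # red without yellow
```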
How do we abstract information from the sensor array? Luma (Y'), Red-Green (CR), Blue-Yellow (CB).
Luminance + 2 colour values (Chroma Blue, Chroma Red) + 2 sensor correction values (Lightness, Saturation).
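The decomposition above parallels what video coding does with Y'CbCr. A sketch of the standard BT.601 conversion from gamma-corrected R'G'B' to one luma and two chroma difference values; the constants are the published ones, not something derived in these slides.

```python
def rgb_to_ycbcr(r, g, b):
    """BT.601 conversion: one luma value plus two chroma (colour-difference) values.
    Inputs are gamma-corrected R', G', B' in the range 0..1."""
    y = 0.299 * r + 0.587 * g + 0.114 * b    # luma (Y')
    cb = 0.564 * (b - y)                     # blue-difference chroma (CB)
    cr = 0.713 * (r - y)                     # red-difference chroma (CR)
    return y, cb, cr

print(rgb_to_ycbcr(1.0, 0.0, 0.0))   # pure red: strong positive CR
print(rgb_to_ycbcr(0.0, 0.0, 1.0))   # pure blue: strong positive CB
print(rgb_to_ycbcr(0.5, 0.5, 0.5))   # grey: both chroma values are zero
```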
Testing the colour opponent model: what we should see vs. what we do see. Unfortunately it does not match: there is Red in our Blue.
The strange case of Ultra-Violet. Light with a wavelength of 400 nm is ultra-blue. The red sensor is at the opposite end of the spectrum and is not stimulated. Yet we see ultra-violet, which is Blue + Red... and the further we go into UV, the more red we see.
Colour Matching Data (CIE 1931), an indirect measure of the sensor response: a very odd fact, a virtual red response at the violet end of the spectrum.
Pigment Absorption Data of human cone sensors Red > Green
Therefore: HVS colour representation must be circular! Which is not a new idea, but not currently in fashion. (Colour circle marked at 480 nm, 540 nm and 620 nm.)
Dual Opponency with Circularity an ideal model using 2 sensor pairs
Colour Wheel (Goethe & Munsell): colours are represented by a single value, Hue.
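A sketch of how the two opponent channel values can be collapsed into that single Hue value: treat the RG and BY channels as the two axes of a colour circle and take the angle. The axis assignment and degree convention are illustrative assumptions.

```python
import math

def hue_angle(rg, by):
    """Collapse two signed opponent values into one circular hue.

    RG and BY are treated as the x and y axes of a colour circle;
    the hue is the angle of the resulting point, in degrees (0..360).
    """
    return math.degrees(math.atan2(by, rg)) % 360.0

print(hue_angle( 1.0, 0.0))   # red direction   ->   0 degrees
print(hue_angle( 0.0, 1.0))   # blue direction  ->  90 degrees
print(hue_angle(-1.0, 0.0))   # green direction -> 180 degrees
```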