Unit 4 – Week 1 Chapter 10, Sound localization, pages 275-301
Some TalkStuffs • How did Exam 3 go? • How is morale going into the final? • What hopes & desires do you have for the rest of the semester? • An announcement about finals help: • Final is last day of class 5/8 (Wednesday) • I have 2 finals Tuesday afternoon/evening, and will not be able to respond to emails until after 8:30pm. I’m telling you well in advance so you can plan around it. It is unfortunate that they are scheduled this way, but I can’t change my exam schedule any more than you can change yours.
A Vision For You! • Now that we have scores from all students for the first 3 exams, we can (once graded) show some meaningful statistics and help you form a vision going into the final of where you stand against the curve. When the grades are in (sometime next week), I will post them with the week 2 (Thursday) and week 3 (Tuesday) ppt slides. • RRRRREADY, BRRRREAK!
Topics Covered • Some elaboration on Bandwidth & a brief return to the Volley Principle • ITD (interaural time difference) • ILD/IID (interaural level difference/interaural intensity difference) • Physiology: medial superior olives (for ITDs), lateral superior olives (for ILDs) • “Cone of confusion” • Directional transfer function • Inverse-square law • Attack & Decay
Cochlear Microphonic (Wever) • Small electrical signal that can be measured by an electrode placed near the hair cells of the cochlea • Mimics the form of the sound pressure waves that arrive at the ear.
Cochlear Karaokestuffs! • Silly, imperfect illustration: the inner hair cells are “singing” to the brain, and the cochlear microphonic is the microphone that lets us listen in on the performance, as we sway our cell phones back and forth and admire the bravery it must take to perform in front of your peers and superiors ;) • AXES: mV (y) versus msec (x) • In a typical Fourier spectrum, we see “Amplitude versus Frequency”. • The cochlear microphonic, by contrast, measures MODULATION over time. • When we say modulation, we really mean how much change has occurred in one of the 3 dimensions below • 3 DIMENSIONS OF PERIODIC WAVEFORMS: AMPLITUDE, FREQUENCY, PHASE
Modulation Types: • Watch these videos! • Amplitude modulation: • http://www.youtube.com/watch?v=3ZMPcPR7W3Q • Frequency modulation: • http://www.youtube.com/watch?v=ens-sChK1F0
Low frequency tones result in low frequency modulations of the cochlear microphonic electrical signal. • High frequency tones result in high frequency modulations of the electrical signal. • Combinations (sums) of high plus low frequency tones result in sums of high and low frequency modulations in the cochlear microphonic electrical signal. • This all means that the CM is encoding information exactly like the physical sound source over time.
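The idea that the CM mirrors summed pressure waves can be sketched numerically; a minimal NumPy example (sample rate and tone frequencies are made-up illustration values, not from the chapter):

```python
import numpy as np

fs = 40000                    # sample rate (Hz), illustrative
n = 800                       # 20 ms of signal
t = np.arange(n) / fs

low = np.sin(2 * np.pi * 200 * t)     # low-frequency tone
high = np.sin(2 * np.pi * 4000 * t)   # high-frequency tone
cm = low + high                       # CM-like signal: sum of the components

# The Fourier spectrum of the sum contains exactly the two component
# frequencies, just like the physical stimulus.
spectrum = np.abs(np.fft.rfft(cm))
freqs = np.fft.rfftfreq(n, 1 / fs)
peaks = freqs[spectrum > 0.25 * n]    # bins carrying substantial energy
```

The summed signal's spectrum shows energy only at 200 Hz and 4000 Hz, which is the sense in which the CM encodes the stimulus faithfully over time.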
Volley Principle • The cochlear microphonic supports a temporal code over a place code, because the CM shows these cells producing a signal that temporally matches the auditory stimulus (a shift-invariant linear system) • Reconciles the cochlear microphonic data with the implausibility of a pure Temporal Theory • Wever: 1 neuron cannot fire fast enough to follow a 20,000 Hz tone, but a volley of ~20 neurons taking turns may!
What is phase locking? • Neurons fire in synchrony with the phase of a stimulus (waveform). • Evidence for the volley principle
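The volley idea can be sketched as a toy simulation, assuming a made-up firing-rate ceiling of 1,000 Hz per neuron: five neurons take turns phase-locking to cycles of a 4,000 Hz tone, so the pooled spike train follows the stimulus even though no single neuron can:

```python
import numpy as np

tone_hz = 4000       # stimulus frequency: too fast for any single neuron
max_rate = 1000      # assumed ceiling on a single neuron's firing rate (Hz)
n_neurons = 5
duration = 0.01      # 10 ms of stimulus

# Times of the stimulus cycle peaks.
n_cycles = int(round(duration * tone_hz))        # 40 cycles in 10 ms
cycle_times = np.arange(n_cycles) / tone_hz

# Round-robin volley: neuron k fires on cycles k, k+5, k+10, ...
# Each neuron stays under its ceiling, but together they mark every cycle.
spikes = [cycle_times[k::n_neurons] for k in range(n_neurons)]

per_neuron_rate = len(spikes[0]) / duration             # 800 Hz, under the ceiling
pooled_rate = sum(len(s) for s in spikes) / duration    # matches the 4000 Hz tone
```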
Band-limited Noise • Band-limited noise stimuli have equal energy at all frequencies within some region, and no energy outside of that region. One can describe this stimulus with three values: • Center frequency: frequency (in Hz) marking the center of the region where there is energy. • Bandwidth: width (in Hz) of the region of frequencies where there is energy. • Total energy: summed energy of all pure tone components, which is the area under the Fourier spectrum curve.
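The three values above are enough to synthesize band-limited noise directly; a sketch (center frequency, bandwidth, and sample rate are arbitrary example values) that builds the stimulus in the frequency domain:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 8000                         # sample rate (Hz)
n = 8000                          # 1 s of signal -> 1 Hz spectral resolution
center, bandwidth = 1000, 100     # Hz, arbitrary example values

# Equal-magnitude, random-phase components inside the band; zero outside.
freqs = np.fft.rfftfreq(n, 1 / fs)
in_band = np.abs(freqs - center) <= bandwidth / 2
spectrum = np.zeros(len(freqs), dtype=complex)
spectrum[in_band] = np.exp(1j * rng.uniform(0, 2 * np.pi, in_band.sum()))
noise = np.fft.irfft(spectrum, n)

# Check: the energy (area under the Fourier spectrum) lies inside the band.
mag = np.abs(np.fft.rfft(noise))
out_of_band_energy = np.sum(mag[~in_band] ** 2)
```

Total energy here is just the sum of the squared component magnitudes, which is why spreading a fixed total energy over a wider band lowers the level per component.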
Band-limited Noise on the Basilar Membrane • Testing the number of neurons hypothesis • If we change only the center frequency, we shift the position along the basilar membrane that is excited. If we change only the bandwidth, we change how much of the basilar membrane we are exciting. • Testing the firing rate hypothesis • If we change only the total energy, we change the amount of the displacement at each excited position (amplitude)
Zwicker’s Loudness Matching Experiment • Method – magnitude production • Test stimulus was band-limited noise centered at, let's say, 1000 Hz, with a bandwidth of 20 Hz. • Procedure • Subjects adjusted the intensity of a 1000 Hz pure tone so that it appeared equally loud as the band-limited noise. Zwicker then increased the bandwidth of the noise, while decreasing the intensity of each pure-tone frequency component so that the total energy was unchanged, and repeated the loudness-matching judgment with the new bandwidth.
Critical Bandwidth • Region over which the cochlea adds up the energy that it is receiving. • Within this critical region, sounds of equal total energy have equal loudness. • As soon as the sounds we present are spread out over a larger frequency range, however, the sound with the larger bandwidth sounds louder. • The critical band is the frequency width corresponding to the physical length of basilar membrane over which auditory nerve signals are pooled.
Sound Localization – Variables • Azimuth (left to right) • Elevation (up-down) • Distance • When taken together, azimuth and elevation define a sphere around the head, with azimuth playing the role of longitude and elevation the role of latitude, and any point in space can be described by its location on that sphere.
Interaural Time Difference (ITD) • Medial superior olive (MSO) • Effective sound localization cue for sounds with an abrupt onset • Also effective for low frequency sounds (wavelengths longer than the width of the head [roughly 800 Hz and below])
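A back-of-the-envelope ITD estimate using a simple path-difference model (head width and speed of sound are assumed round numbers for illustration, not measured values):

```python
import math

SPEED_OF_SOUND = 343.0    # m/s in air at ~20 °C
HEAD_WIDTH = 0.18         # m, assumed ear-to-ear distance (illustrative)

def itd_seconds(azimuth_deg: float) -> float:
    """Approximate ITD for a far-off source via a simple path-difference
    model: the extra travel distance to the far ear is width * sin(azimuth)."""
    return HEAD_WIDTH * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND

itd_front = itd_seconds(0)    # source straight ahead: no delay
itd_side = itd_seconds(90)    # source at the side: maximum ITD (~0.5 ms)
```

Note the front (0°) and rear (180°) cases give the same ITD, which is one source of the "cone of confusion" ambiguity discussed below.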
Interaural Level Difference (aka Interaural Intensity Difference) • Lateral superior olive (LSO) • Sound intensity decreases with distance • Interacts with azimuth (angle of head with sound source) • Interacts with sound source’s loudness and frequency • Effective sound localization cue for high frequencies
IID and ITD work best for different types of sound • High frequency sounds give rise to an IID signal • High frequencies are absorbed by our heads (they’re soft!), casting a sound shadow on the far ear • Low frequency sounds and abrupt-onset sounds give rise to an ITD signal • Sound reaches one ear before the other; for low frequencies, the wavelength is long enough that this interaural delay is an unambiguous cue • Conclusion: The auditory system uses both IID and ITD
Directional Transfer Function • Describes how the pinna, ear canal, head, and torso change the intensity of sounds of different frequencies arriving at each ear from different locations in space (azimuth and elevation) • Just as each person has their own CSF (contrast sensitivity function) in vision, we all have our own DTF (directional transfer function) too • Also known as Head-related transfer function (HRTF) • Describes how a sound from a specific point will arrive at the ear • Consider how a sound from an orchestra travels and is received by the ears, versus listening to the same stimulus via headphones (and how interaction with the pinnae changes sound localization and perception of sound)
Sound Localization Review • Cues are naturally ambiguous • We disambiguate them • By rotating the head • By using the Directional Transfer Function (DTF), aka Head-Related Transfer Function (HRTF) • This function describes the IID as a function of frequency in its attenuation characteristics, and the ITD as a function of frequency in its phase delay.
CHALLENGE QUESTIONS! • What would be the implications of having a head 2 times the diameter of your current head? How would this affect your sound localization?
CHALLENGE QUESTIONS! • What would be the implications of having a head 2 times the diameter of your current head? How would this affect your sound localization? • Suggestion: The sound shadow would be bigger and would start at lower frequencies, since the larger head would exceed the wavelength of lower-frequency sounds. The ITD cue would also be larger, because the signal must travel a greater distance to reach the other ear.
ANOTHA! • Suppose your cochlea grew in size by a factor of 2 as well and your brain could process frequencies all the way up to 40,000 Hz instead of 20,000 Hz. What would have to change in cortex to accommodate these changes made in the cochlea?
ANOTHA! • Suppose your cochlea grew in size by a factor of 2 as well and your brain could process frequencies all the way up to 40,000 Hz instead of 20,000 Hz. What would have to change in cortex to accommodate these changes made in the cochlea? • Suggestion: A1 would have to be 2x as big, to up the processing power in cortex.
Inverse-square law • As distance from a source increases, intensity decreases in proportion to the square of the distance • Spectral composition of sounds: Higher frequencies lose more energy than lower frequencies as sound waves travel from source to ear
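The inverse-square relationship in a few lines of code:

```python
def relative_intensity(distance_m: float, reference_m: float = 1.0) -> float:
    """Intensity relative to the intensity at the reference distance,
    per the inverse-square law: I is proportional to 1 / distance**2."""
    return (reference_m / distance_m) ** 2

at_1m = relative_intensity(1.0)   # 1.0  (reference)
at_2m = relative_intensity(2.0)   # 0.25 (double the distance, quarter the intensity)
at_4m = relative_intensity(4.0)   # 0.0625
```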
Harmonics & Virtual Pitch • Lowest frequency of harmonic spectrum = Fundamental frequency (base frequency), what we perceive as pitch • Missing-fundamental effect (aka Virtual pitch) • The pitch corresponds to fundamental frequency, even when it is missing from the sound source
Construct a sound that is made by adding pure tones with frequencies 400, 800, 1200, and so on. The 400 Hz component is called the base or fundamental frequency of the complex tone, and the other frequencies are called the higher harmonics.
Virtual Pitch • What if you switched from 400, 800, 1200… to 500, 900, 1300…? How would this affect pitch?
Virtual Pitch ctd. • Challenges place code • The complex tone in (c) does not contain any energy that would stimulate the auditory nerve at the point where a tone of 400 Hz would stimulate the nerve. If pitch is encoded by position alone, then how can these two yield the same pitch? • Challenges volley theory • There is no energy (or oscillation) in the tone complex at 400 Hz, i.e., there is no 400 Hz component in the cochlear microphonic
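The missing-fundamental point can be checked numerically: a complex of 800, 1200, 1600, and 2000 Hz harmonics (no 400 Hz energy) still repeats every 1/400 s, which an autocorrelation, standing in here for a timing-based pitch mechanism, picks up (sample rate and duration are arbitrary example values):

```python
import numpy as np

fs = 40000                       # sample rate (Hz)
n = 2000                         # 50 ms of signal
t = np.arange(n) / fs

# Harmonics of 400 Hz with the 400 Hz fundamental itself omitted.
tone = sum(np.sin(2 * np.pi * f * t) for f in (800, 1200, 1600, 2000))

# The waveform still repeats every 1/400 s (100 samples at 40 kHz):
# timing information recovers the pitch with no energy at 400 Hz.
ac = np.correlate(tone, tone, mode="full")[n - 1:]   # lags 0..n-1
first_peak_lag = int(np.argmax(ac[50:]) + 50)        # skip the zero-lag peak
virtual_pitch_hz = fs / first_peak_lag
```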
Timbre • Distinguishes complex tones with the same fundamental frequency by the relative strengths of their harmonics and other frequency components
Attack, Decay, Tilt • Listeners use spectral tilt and the frequencies of spectral peaks to identify vowels • Tilt: how much energy is allotted to each frequency in the frequency spectrum. When more energy is allotted to higher or lower frequencies than the other, a tilt is manifested • How quickly the sound energy increases and decreases defines the markers of attack & decay • Attack: Part of a sound during which amplitude increases (onset) • Decay: Part of a sound during which amplitude decreases (offset)
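Attack and decay can be sketched as a piecewise-linear amplitude envelope applied to a tone (durations and tone frequency are arbitrary example values):

```python
import numpy as np

fs = 8000
attack_s, sustain_s, decay_s = 0.01, 0.05, 0.10   # example durations (s)

# Piecewise-linear envelope: ramp up (attack), hold, ramp down (decay).
attack = np.linspace(0, 1, int(attack_s * fs), endpoint=False)
sustain = np.ones(int(sustain_s * fs))
decay = np.linspace(1, 0, int(decay_s * fs))
envelope = np.concatenate([attack, sustain, decay])

t = np.arange(len(envelope)) / fs
tone = envelope * np.sin(2 * np.pi * 440 * t)     # 440 Hz tone, shaped
```

Stretching or shrinking `attack_s` and `decay_s` changes the character of the sound (a plucked string has a fast attack and slow decay, for example) without changing its spectrum much, which is why attack and decay matter for timbre.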
Auditory Scene Analysis • How do we group sounds into same-source (cause-and-effect) categories? • Group by timbre • Group by timing • Spatial separation between sounds • Separation based on sounds’ spectral or temporal qualities • Grouping by onset (comparable to common fate in Gestalt psychology in the visual modality)
Continuity/Restoration Effects • Principle of good continuation: In spite of interruptions, one can still “hear” a sound • Experiments using a signal detection task (e.g., Kluender and Jenison, 1992) suggest that missing sounds are restored and encoded in the brain as if they were actually present!
Intro to chapter 11 • Music & Speech
Phonemes • Phonemes are defined as the smallest unit that, if changed, can potentially change the word's meaning. For example, the "i" sound in "hit" is a phoneme, because if you change it to "a" you get "hat"