
Roy Patterson Centre for the Neural Basis of Hearing


Presentation Transcript


  1. Part II: Lent Term 2014 (4 of 4): Central Auditory Processing. Roy Patterson, Centre for the Neural Basis of Hearing, Department of Physiology, Development and Neuroscience, University of Cambridge. email rdp1@cam.ac.uk. Lecture slides on CamTools: https://camtools.cam.ac.uk/portal.html. Lecture slides, sounds and background papers on http://www.pdn.cam.ac.uk/groups/cnbh/teaching/lectures/

  2. Overview. Act I: Communication sounds and the information in these sounds: message, Ss and Sf. Act II: Behavioural evidence for the role of these different forms of information in the perception of communication sounds. Act III: The processing of communication sounds in the early stages of the auditory system, and hypotheses about the representation of communication sounds in later stages of the auditory pathway. Act IV: Brain imaging evidence concerning the representation of communication sounds in auditory cortex.

  3. We have discussed a model of auditory perception that describes how sounds might be processed and represented at a sequence of stages in the auditory system. All of the stages are mandatory and the order is crucial. One representation is intended to simulate your initial Auditory Image of the incoming sound, and it is central to the model. Sensations like pitch and loudness are summary statistics calculated from the auditory image after it has been constructed. Speech and music perception are thought to be based on the patterns that arise in the auditory image. So, this Auditory Image Model (AIM) predicts that we should find a hierarchy of processing modules in the auditory pathway.

  4. Overview 2. There is a sequence of neural centres in the auditory pathway. It looks like it could be a processing hierarchy. The centres are separated by distances that are large relative to the resolution of functional brain imaging (fMRI). The correspondence between the perceptual model and the anatomy suggests that (1) AIM could be useful when designing brain imaging studies of the auditory system, and (2) the brain imaging data could help us locate the auditory image.

  5. Anatomy of the Auditory Pathway: basilar membrane motion in the cochlea

  6. Neural activity pattern in the cochlear nucleus

  7. Strobed temporal integration in the inferior colliculus?

  8. The initial auditory image in the MGB??

  9. The normalized auditory image in primary auditory cortex???

  10. The Auditory Image Model describes how the auditory system separates pulse-resonance sounds from noise, and how it normalizes and segregates the information about the pulse-rate (Ss) and the resonance scale (Sf) from the message. So the brain imaging research focuses on finding evidence that the neural centres in the auditory pathway are involved in source segregation and normalization, and that the segregation and pulse-rate normalization come before the resonance scale normalization. Moreover, speech-specific analysis and music-specific analysis should occur in neural centres beyond, but not too far from, those associated with segregation of pulse-resonance sounds from noise and their normalization.

  11. Brain Imaging with Simple Contrasts. Find two sounds that differ only in the perceptual property of interest (like pitch). Scan the brain while people listen, first to one sound and then to the other. Compare the brain activity produced by the two sounds, looking for places where one sound produces more activity than the other.
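The subtraction logic of a simple contrast can be sketched in a few lines of Python. Everything here (scan counts, noise levels, effect size, the t threshold) is made up for illustration and is not taken from the studies discussed in these slides.

```python
import numpy as np

# Toy "activity maps": scans x voxels, one matrix per sound condition.
rng = np.random.default_rng(1)
n_scans, n_voxels = 20, 1000
sound_a = rng.normal(0.0, 1.0, (n_scans, n_voxels))
sound_b = rng.normal(0.0, 1.0, (n_scans, n_voxels))
sound_b[:, :50] += 1.5  # pretend the first 50 voxels respond more to sound B

# Voxel-by-voxel subtraction, scaled by the standard error of the difference.
diff = sound_b.mean(0) - sound_a.mean(0)
se = np.sqrt(sound_a.var(0, ddof=1) / n_scans + sound_b.var(0, ddof=1) / n_scans)
t = diff / se

# Keep voxels where sound B reliably produces more activity than sound A.
# A real analysis would correct this threshold for multiple comparisons.
active = np.flatnonzero(t > 4.0)
```

The point of the sketch is only that a "simple contrast" is literally a per-voxel subtraction followed by a reliability test, which is why the two sounds must differ in just the property of interest.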

  12. Brain imaging with Regular Interval Noise. Copy a sample of random noise, delay it by N ms, and add it to the original noise. The process emphasises time intervals of N ms in the sound and we hear a weak tone in the noise. As you repeat the delay-and-add process, the relative strength of the tonal component of the sound increases. RIN makes a good imaging stimulus because the sounds have similar distributions of energy over time and frequency. In the experiment, the RIN had 8 iterations of the delay-and-add process.
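The delay-and-add recipe above is straightforward to sketch. The sample rate, the delay N and the random seed below are illustrative assumptions, not values from the experiment; only the 8 iterations come from the slide.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 20000                     # sample rate in Hz (assumed)
delay_ms = 8.0                 # delay N in ms (assumed); emphasises 1/N = 125 Hz
d = int(fs * delay_ms / 1000)  # delay in samples

def rin(noise, d, iterations):
    """Iterated delay-and-add: each pass adds a copy of the signal
    delayed by d samples, strengthening the tonal component."""
    x = noise.copy()
    for _ in range(iterations):
        delayed = np.concatenate([np.zeros(d), x[:-d]])
        x = x + delayed
    return x / np.max(np.abs(x))   # normalise to avoid clipping

noise = rng.standard_normal(fs)        # 1 second of Gaussian noise
sound = rin(noise, d, iterations=8)    # 8 iterations, as in the experiment
```

After 8 iterations the autocorrelation of the sound at lag N is strongly elevated, which is the "emphasised time interval" the listener hears as a weak tone at 1/N Hz.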

  13. Neural activity patterns of noise and RIN [figure: neural activity pattern and auditory image panels for noise and for RIN]

  14. Initial auditory images of noise and RIN [figure panels]

  15. Continuous Imaging vs Sparse Imaging [figure: haemodynamic response to the test stimulus vs haemodynamic response to scanner noise; difference in sensitivity to the stimulus, positive and negative; original figure by D. Hall, IHR, Nottingham]

  16. Imaging pitch and melody in the brain. On a given scan, the listener is presented with a sound that has a pulsing rhythm. The sound has no pitch (a noise), a fixed pitch (a boring melody) or a changing pitch (a proper melody). Listeners are asked to listen for a pattern in the sound, but no response is required. http://www.pdn.cam.ac.uk/groups/cnbh/teaching/lectures/PUJG02.pdf

  17. Sound minus silence contrast [figure: activation in the cochlear nucleus (CN), inferior colliculus (IC), medial geniculate body (MGB) and auditory cortex (AC); parasagittal view showing CN/IC, axial view at the level of the CN, coronal views showing IC and MGB with the superior temporal lobe; colour scale: T value 0-40]. Griffiths et al., Nature Neuroscience (2001).

  18. Group analysis [figure: sagittal, axial and coronal structural sections (x = -78, -10, 10, 78) showing the noise-silence, fixed-noise, diatonic-fixed and random-fixed contrasts in the left and right hemispheres]. Patterson, Uppenkamp, Johnsrude and Griffiths (2002), Figure 2.

  19. Group analysis [figure: sagittal, axial and coronal structural sections (x = -78, -10, 10, 78) showing the noise-silence, fixed-noise, tonic-fixed and random-fixed contrasts]. Patterson, Uppenkamp, Johnsrude and Griffiths (2002).

  20. Equal-energy click trains [figure: neural activity pattern and auditory image; a regular click train produces a strong pitch, an irregular one no pitch]. Gutschalk, Patterson, Scherg, Uppenkamp and Rupp (2002).

  21. Effects of regularity and intensity in MEG [figure: the effect of regularity appears in the anterior source (HG); the effect of level appears in the posterior source (PT)]. Gutschalk, Patterson, Scherg, Uppenkamp and Rupp (2002).

  22. Proposed functional organisation of auditory cortex [diagram, labelled "conjecture": all sounds and loudness in primary auditory cortex; tonal sounds, fixed pitch and lively pitch in surrounding auditory cortex]

  23. Where does the auditory system segregate the information associated with Ss, Sf and the message? http://www.pdn.cam.ac.uk/groups/cnbh/teaching/lectures/PUJG02.pdf http://www.pdn.cam.ac.uk/groups/cnbh/teaching/lectures/GPRUS02.pdf

  24. A damped sinusoid (12-ms period) [figure: a pulse followed by ringing]
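A pulse-resonance cycle like this damped sinusoid can be synthesized directly: each period is a pulse exciting a resonance that rings and decays. Only the 12-ms period comes from the slide; the sample rate, ringing frequency and decay half-life below are assumed for illustration.

```python
import numpy as np

fs = 20000            # sample rate in Hz (assumed)
period_ms = 12.0      # period from the slide
carrier_hz = 1000.0   # ringing frequency (assumed)
half_life_ms = 2.0    # envelope decay half-life (assumed)

# One period: a sinusoid whose amplitude halves every half_life_ms.
t = np.arange(int(fs * period_ms / 1000)) / fs
cycle = np.sin(2 * np.pi * carrier_hz * t) * 0.5 ** (t * 1000 / half_life_ms)

# Repeating the cycle gives the pulse-resonance sound: the abrupt restart
# of the envelope every 12 ms is the "pulse", the decay is the "ringing".
sound = np.tile(cycle, 8)
```

The repetition rate (here 1/12 ms ≈ 83 Hz) carries the Ss information and the ringing frequency carries the Sf information, which is why this stimulus is useful for separating the two.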

  25. Auditory image of a damped sinusoid [figure: the pulse and the ringing marked; frequency axis labelled at 100, 1000 and 6000 Hz]

  26. Stimuli for the Phonology Study [figure: conditions crossing onset timing (regular vs irregular) with formant frequencies (regular vs irregular)]

  27. Comparison of speech and music regions [figure: contrasts mpmr-silence, nvdvpv-mpmr, mpmr-nvdvpv, noise-silence, fixed-noise and lively-fixed; sections z = 4mm, y = -24mm, y = -17mm, z = -5mm]

  28. Group analysis [figure: sagittal, axial and coronal structural sections (x = -78, -10, 10, 78) showing pitch, VTL, auditory image and phonology regions alongside the noise-silence, fixed-noise, tonic-fixed and random-fixed contrasts, left and right hemispheres]

  29. Proposed functional organisation of auditory cortex [diagram, labelled "conjecture": all sounds and loudness in primary auditory cortex; tonal sounds, fixed pitch, lively pitch and receptive phonology in surrounding auditory cortex]

  30. Done! Act I: the information in communication sounds (animal calls, speech, musical notes) Act II: the perception of communication sounds (the robustness of perception) Act III: the processing of communication sounds in the auditory system (signal processing) Act IV: the processing of communication sounds (anatomy, physiology, brain imaging)

  31. End of Act IV. Thank you. Patterson, R. D., Uppenkamp, S., Johnsrude, I. and Griffiths, T. D. (2002). The processing of temporal pitch and melody information in auditory cortex. Neuron 36, 767-776. http://www.pdn.cam.ac.uk/groups/cnbh/teaching/lectures/PUJG02.pdf Gutschalk, A., Patterson, R. D., Rupp, A., Uppenkamp, S. and Scherg, M. (2002). Sustained magnetic fields reveal separate sites for sound level and temporal regularity in human auditory cortex. NeuroImage 15, 207-216. http://www.pdn.cam.ac.uk/groups/cnbh/teaching/lectures/GPRUS02.pdf von Kriegstein, K., Smith, D. R. R., Patterson, R. D., Kiebel, S. J. and Griffiths, T. D. (2010). How the human brain recognizes speech in the context of changing speakers. J. Neuroscience 30(2), 629-638. http://www.pdn.cam.ac.uk/groups/cnbh/teaching/lectures/KSPKGjn2010.pdf

  32. Cast list fMRI in Cambridge: Ingrid Johnsrude, Dennis Norris, Matt Davis, Alexis Hervais-Adelman, William Marslen-Wilson MRC Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge Roy Patterson, David Smith, Tim Ives, Ralph van Dinther Centre for the Neural Basis of Hearing, Physiology Department, University of Cambridge MEG in Heidelberg: Andre Rupp, Alexander Gutschalk, Stefan Uppenkamp, Michael Scherg MEG in Muenster: Katrin Krumbholz, Annemarie Preisler, Bernd Lutkenhoner

  33. Vowels in voiced and whispered speech [figure: (A) anatomy of the glottal folds and vocal tract; (B) glottal fold parameters: voiced at GPR 120 Hz or 200 Hz, whispered at GPR 0 Hz (GPR = glottal pulse rate); (C) filtering by the vocal tract: amplitude spectra of /a/ and /u/ for long and shorter vocal tracts (VTL = vocal tract length); (D) speech/speaker overlap: speech-related changes (e.g. GPR 120 Hz, VTL 14cm, /a/ vs /u/) vs speaker-related changes (e.g. GPR 120 Hz vs 200 Hz; VTL 14cm vs 10cm)]. von Kriegstein, Smith, Patterson, Kiebel and Griffiths, J. Neuroscience 30(2) (2010).

  34. Activation of voiced vs whispered speech [figure: signal change (%) for voiced > whispered, whispered > voiced, voiced > silence, whispered > silence and GPR varies > VTL varies, in areas TE1.1 and TE1.2; sections x = +51, y = -2, left hemisphere]

  35. Group analysis [figure: sagittal, axial and coronal structural sections (x = -78, -10, 10, 78) showing the noise-silence, fixed-noise, tonic-fixed and random-fixed contrasts, with the conjectured regions marked]. Patterson, Uppenkamp, Johnsrude and Griffiths (2002).

  36. Vocal tract length (red) vs speech recognition (green) [figure: VTL varies > VTL fixed in red; speech task > control task in green; sections x = -58, y = -37, left hemisphere]

  37. Group analysis [figure: sagittal, axial and coronal structural sections (x = -78, -10, 10, 78) showing the noise-silence, fixed-noise, tonic-fixed and random-fixed contrasts, with the conjectured regions marked]. Patterson, Uppenkamp, Johnsrude and Griffiths (2002).

  38. Auditory Neuroscience: Making Sense of Sound, by Jan Schnupp, Eli Nelken, and Andy King, published by MIT Press. www.auditoryneuroscience.com
