
CSD 5400 REHABILITATION PROCEDURES FOR THE HARD OF HEARING


Presentation Transcript


  1. CSD 5400 REHABILITATION PROCEDURES FOR THE HARD OF HEARING: Auditory Perception of Speech and the Consequences of Hearing Loss

  2. Overview • The goal of aural rehabilitation is to remediate the effects of a hearing impairment • Ultimately, this comes down to the effect of the hearing loss on speech recognition and perception • Develop a general understanding of what a hearing loss does to the speech signal

  3. The Auditory System in Review The primary purpose of the auditory system is to take the speech code at the periphery and convert it to a representation used by the CNS to extract meaning

  4. The Auditory System in Review • Speech arrives at the auditory periphery as a series of pressure variations as a function of time • The normal auditory periphery converts these pressure variations into physical movement of the middle ear structures, which in turn causes fluid movement in the cochlea

  5. The Auditory System in Review • Cochlear fluid movement gives rise to the traveling wave along the basilar membrane (spectral code) • Depending on the site of maximum amplitude of displacement of the traveling wave, certain auditory nerve fibers will be activated (neural activity) • Critical band theory • Spectral and temporal code
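The phrase "critical band theory" above refers to the ear analyzing sound in frequency bands whose width grows with center frequency. As a rough illustration only, the sketch below uses the Zwicker & Terhardt approximation for critical bandwidth; the function name and the test frequencies are illustrative choices, not part of the original slides.

```python
# Rough illustration (assumption: Zwicker & Terhardt approximation) of how the
# width of the auditory critical band grows with center frequency.

def critical_bandwidth_hz(f_hz: float) -> float:
    """Approximate critical bandwidth (Hz) around center frequency f_hz."""
    return 25.0 + 75.0 * (1.0 + 1.4 * (f_hz / 1000.0) ** 2) ** 0.69

for f in (250, 500, 1000, 2000, 4000):
    print(f"{f:>5} Hz -> critical band about {critical_bandwidth_hz(f):.0f} Hz wide")
```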

  6. The Auditory System in Review • As the signal moves higher into the central pathways, more complicated processing occurs • Binaural processing • Temporal processing • By the time the signal reaches the cortex, it has been analyzed and re-coded in a number of different ways • The cortex recognizes these various forms of analysis and extracts what is necessary, given the job at hand

  7. When the Auditory System is Impaired • Speech is inaccurately coded at the periphery: distorted, missing, attenuated • Loss of redundancy • When the signal reaches the cortex, the coded representation may be unrecognizable

  8. Who’s Making Use of the Signal? • An important consideration • Adults rely very heavily on the linguistic, contextual, and nonverbal cues available • Children have no extensive language base

  9. Acoustic Cues of Speech • Frequency • Intensity • Temporal Characteristics

  10. Flexer’s Analogy

  11. Illustrating Hearing Loss • Tape examples

  12. Acoustic Cues of Speech • Short Term Characteristics • Long Term Characteristics

  13. Long Term Characteristics of Speech • Average changes over relatively long periods of time • Provides general acoustic characteristics of speech

  14. Long Term Characteristics of Speech • Mean intensity level of conversational speech is 65-70 dB SPL • Individual speech segments fluctuate around this mean by 40 dB
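To make the 65-70 dB SPL figure and the roughly 40 dB spread more concrete, here is a minimal sketch that converts dB SPL to sound pressure. The only assumption beyond the slide is the standard 20 µPa reference pressure for dB SPL.

```python
# Minimal sketch: convert dB SPL to sound pressure in pascals (reference 20 µPa).

P_REF_PA = 20e-6  # standard reference pressure for dB SPL

def spl_to_pascals(level_db_spl: float) -> float:
    """Convert a level in dB SPL to sound pressure in pascals."""
    return P_REF_PA * 10 ** (level_db_spl / 20.0)

mean_level = 65.0                                   # dB SPL, conversational speech
soft, loud = mean_level - 20.0, mean_level + 20.0   # a ~40 dB segment-to-segment spread

print(f"mean:   {spl_to_pascals(mean_level):.4f} Pa")
print(f"spread: {spl_to_pascals(soft):.4f} Pa to {spl_to_pascals(loud):.4f} Pa "
      f"(a {10 ** (40 / 20):.0f}-to-1 pressure ratio)")
```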

  15. Long Term Speech Spectrum • Long-interval acoustic spectrum of male voices taken 17 inches from speaker’s lips • Maximum energy is at approximately 500 Hz • Roll-off rate of 9 dB/octave
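Assuming only the two numbers quoted on this slide (a peak near 500 Hz and a 9 dB/octave roll-off above it), a short sketch can estimate how much long-term speech energy is left at higher frequencies; real long-term spectra of course vary with talker and measurement distance.

```python
# Sketch: relative long-term speech level above the 500 Hz peak, assuming the
# 9 dB/octave roll-off quoted on the slide (a simplification of real spectra).
import math

PEAK_HZ = 500.0
ROLLOFF_DB_PER_OCTAVE = 9.0

def relative_level_db(f_hz: float) -> float:
    """Level re: the 500 Hz peak; treats the spectrum as flat below the peak."""
    if f_hz <= PEAK_HZ:
        return 0.0
    return -ROLLOFF_DB_PER_OCTAVE * math.log2(f_hz / PEAK_HZ)

for f in (500, 1000, 2000, 4000, 8000):
    print(f"{f:>5} Hz: {relative_level_db(f):6.1f} dB re: 500 Hz peak")
```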

  16. Phonemes • Smallest unit of speech to have linguistic meaning • Traditional unit of speech to study short term acoustic characteristics

  17. Phonemes • Classification system • Vowels • Consonants

  18. Differences Between Vowels and Consonants These two classes of sounds differ in the manner they are produced and in the way we perceive them • Vowels are considered more “prime” • Rhyming • Speech Errors • Vocal tract configuration • Voicing

  19. Short Term Acoustic Characteristics of Vowels • Vowels are always voiced • The vocal tract is relatively open • Source-Filter Theory of vowel production

  20. Sound Source of Vowels • The glottal pulse • The lowest component is the fundamental frequency (f0); harmonics are labeled Hx • Maximum energy is at the fundamental frequency of the speaker • Above the fundamental frequency, the spectrum rolls off at 10-12 dB/octave
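A small sketch of the source spectrum this slide describes: harmonics at integer multiples of f0 whose levels fall off with frequency. The 100 Hz fundamental and the choice of a 12 dB/octave roll-off (the slide gives a 10-12 dB/octave range) are illustrative assumptions.

```python
# Toy glottal-source spectrum: harmonics of f0 with levels rolling off above f0.
# The 100 Hz f0 and the 12 dB/octave slope are assumed for illustration.
import math

def glottal_source(f0_hz: float, n_harmonics: int, rolloff_db_oct: float = 12.0):
    """Return a list of (harmonic frequency in Hz, level in dB re: the f0 component)."""
    return [(n * f0_hz, -rolloff_db_oct * math.log2(n))
            for n in range(1, n_harmonics + 1)]

for n, (freq, level) in enumerate(glottal_source(f0_hz=100.0, n_harmonics=8), start=1):
    print(f"H{n}: {freq:6.0f} Hz  {level:6.1f} dB")
```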

  21. Filter of Vowels The vocal tract, which can be thought of as a tube open at one end, closed at the other, and of a specified length
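The tube model on this slide has a convenient closed-form result: a uniform tube closed at one end and open at the other resonates at odd multiples of c/4L. The sketch below plugs in the textbook values of roughly 17.5 cm for an adult male vocal tract and 350 m/s for the speed of sound in warm, moist air; both numbers are assumptions for illustration.

```python
# Quarter-wave resonator sketch: a tube closed at the glottis and open at the
# lips resonates at odd multiples of c / (4L). The length and speed of sound
# below are the usual textbook values, assumed here for illustration.

SPEED_OF_SOUND_M_S = 350.0   # warm, moist air
TRACT_LENGTH_M = 0.175       # ~17.5 cm, adult male neutral vocal tract

def tube_resonances_hz(length_m: float, n: int = 3) -> list:
    """First n resonant frequencies of a tube closed at one end, open at the other."""
    return [(2 * k - 1) * SPEED_OF_SOUND_M_S / (4.0 * length_m) for k in range(1, n + 1)]

print(tube_resonances_hz(TRACT_LENGTH_M))   # ~[500.0, 1500.0, 2500.0] Hz
```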

  22. Putting the Source and Filter Together

  23. Putting the Source and Filter Together • The panel at the left shows the glottal source. The panel at the right shows the spectrum of the source after filtering by a filter representing a neutral vocal tract. The spectral characteristics of the filter are indicated in the middle panel
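One way to see "putting the source and filter together" is that, on a dB scale, the output spectrum at each harmonic is simply the source level plus the filter (vocal tract) gain at that frequency. The sketch below combines the toy source from slide 20 with a deliberately crude triangular resonance shape around neutral-tract formants at 500/1500/2500 Hz; the shapes and numbers are illustrative assumptions, not measurements.

```python
# Sketch of source-filter combination in the log (dB) domain:
#   output level(f) = source level(f) + filter gain(f)
# The triangular resonance shape and the 500/1500/2500 Hz formants are crude
# illustrative assumptions for a neutral vocal tract.
import math

F0_HZ = 100.0
SOURCE_ROLLOFF_DB_OCT = 12.0
FORMANTS_HZ = (500.0, 1500.0, 2500.0)

def source_level_db(f_hz: float) -> float:
    """Glottal source level in dB re: the f0 component."""
    return -SOURCE_ROLLOFF_DB_OCT * math.log2(f_hz / F0_HZ)

def filter_gain_db(f_hz: float, peak_db: float = 20.0, slope_db_per_100hz: float = 6.0) -> float:
    """Crude resonance: full gain at a formant, dropping 6 dB per 100 Hz away from it."""
    return max(peak_db - slope_db_per_100hz * abs(f_hz - F) / 100.0 for F in FORMANTS_HZ)

for n in (1, 5, 10, 15, 20, 25):
    f = n * F0_HZ
    out = source_level_db(f) + filter_gain_db(f)
    print(f"H{n:<2d} {f:6.0f} Hz  source {source_level_db(f):6.1f} dB  output {out:6.1f} dB")
```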

  24. Changing the Effects of the Filter In order to produce these three different vowels, we change the characteristics of the vocal tract. This will alter the resonant frequency characteristics of the tube and change the combined spectrum of the glottal pulse and the vocal tract

  25. Changing the Effects of the Source • This is what happens when the same vowel is produced by a man, a woman and a child

  26. An Important Short Term Acoustic Characteristic of Vowels • Formants are the regions of increased spectral energy • They are only a characteristic of vowels • The frequency regions they occupy, as well as their relative intensities, change as the vocal tract changes with each vowel production • All English vowels have 5-7 formants • Vowels can be distinguished from one another using the lowest two or three formants (in frequency)

  27. Vocal Tract Shapes and Spectra Vocal tract shapes and corresponding spectra (F1 and F2 only) for four back vowels

  28. Vocal Tract Shapes and Spectra • Vocal tract shapes and corresponding spectra (F1 and F2 only) for four front vowels

  29. Peterson & Barney (1954) • Landmark spectrographic study of 76 men, women, and children producing vowels in isolation • Measured and reported the average fundamental frequency and the frequency/intensity of the first three formants of the ten English vowels

  30. A Summary of Peterson & Barney’s Results

  31. Articulation and the Formant Frequencies • F1 corresponds to the degree of tongue constriction in the vocal tract • F2 corresponds to how forward in the mouth the tongue is • F3 is not related in a simple way to articulatory parameters

  32. Vowel Normalization • Vowel quadrilaterals for men, women, and children • What’s thought to be important for vowel perception is the relative spacing between F1 and F2, not their absolute frequencies
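As a toy illustration of how much work the lowest two formants do, the sketch below classifies an (F1, F2) measurement by nearest reference vowel. The reference values are approximate adult-male averages of the kind Peterson & Barney reported; treat the exact numbers, and the simple nearest-neighbour rule (which ignores the talker-normalization issue this slide raises), as assumptions.

```python
# Toy vowel classification from F1/F2 alone. Reference points are approximate
# adult-male averages (assumed for illustration); the nearest-neighbour rule
# uses absolute frequencies and does no talker normalization.
import math

REFERENCE_F1_F2 = {            # vowel (key word): (F1 Hz, F2 Hz), approximate values
    "i (heed)": (270, 2290),
    "u (who'd)": (300, 870),
    "ae (had)": (660, 1720),
    "a (hod)": (730, 1090),
}

def classify_vowel(f1_hz: float, f2_hz: float) -> str:
    """Return the reference vowel whose (F1, F2) point is closest."""
    return min(REFERENCE_F1_F2,
               key=lambda v: math.dist((f1_hz, f2_hz), REFERENCE_F1_F2[v]))

print(classify_vowel(300, 2200))   # -> "i (heed)"
print(classify_vowel(700, 1150))   # -> "a (hod)"
```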

  33. Consequences of Hearing Loss on Vowel Perception • Vowel perception is impaired when a hearing loss erodes the acoustic information in the F2 range (generally 1000 Hz and above) • Vowels are generally robust to the effects of hearing loss

  34. Short Term Acoustic Characteristics of Consonants Differences Between Vowels and Consonants Consonants: • Have a shorter duration • Can’t be isolated • Don’t have just one noise source • Aren’t static • Identification seems to rely primarily on the vowel that precedes or follows • Have a variety of methods of production and places in the vocal tract where they are produced

  35. Spectral Regions of Various Speech Sounds • A common spectral representation of major speech sounds • Related to the threshold of audibility curve

  36. Spectral Regions of Various Speech Sounds • Another example • Lines A, B, and C represent three different configurations and degrees of hearing loss • What predictions can you make?

  37. Spectral Regions of Various Speech Sounds • Intensity and frequency distribution of speech sounds overlaid on an audiogram • Predictions based on characteristics of the hearing loss

  38. Predicting the Degree and Type of Phoneme Errors • These types of charts are often used to help predict the effect a particular degree and configuration of hearing loss might have on speech understanding • This works somewhat, but it treats the hearing loss only as a filter • Sensorineural hearing loss is more complicated than this: attenuation and distortion
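A simplified sketch of the overlay idea from the last few slides: compare assumed average speech levels in each audiometric band with a listener's thresholds and see which bands remain audible. The speech levels used here are illustrative placeholders, and the audible/inaudible rule captures only the attenuation (filter) view that this slide notes is incomplete for sensorineural loss.

```python
# Simplified "speech region over the audiogram" sketch. The speech band levels
# are illustrative placeholders (not measured data), and simple threshold
# comparison models only the attenuation side of a sensorineural loss.

SPEECH_LEVEL_DB_HL = {        # assumed average speech level per audiometric band
    250: 45, 500: 50, 1000: 45, 2000: 35, 4000: 30,
}

def audible_bands(thresholds_db_hl: dict) -> list:
    """Frequencies (Hz) where average speech exceeds the listener's threshold."""
    return [f for f, speech in SPEECH_LEVEL_DB_HL.items()
            if speech > thresholds_db_hl.get(f, 999)]

# Example: a sloping high-frequency loss
sloping_loss = {250: 15, 500: 20, 1000: 35, 2000: 55, 4000: 70}
print(audible_bands(sloping_loss))   # -> [250, 500, 1000]; high-frequency cues are inaudible
```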

  39. Hearing Loss as a Loss of Redundancy • Illustrates the reduction of pattern details (redundancy)

  40. The Consonant Classification System • Every American English consonant can be identified uniquely according to its • Manner of articulation • Place of articulation • Voicing

  41. Consonant Feature Classification System • Classification of the consonants of American English according to the articulatory feature system

  42. Acoustic Properties of Articulatory Features • Voicing • Energy is broadband and extends from 100 to 4000 Hz

  43. Acoustic Properties of Articulatory Features • Place of Articulation • Energy is very high frequency and confined to 1000-8000 Hz

  44. Acoustic Properties of Articulatory Features • Manner of Articulation • Energy is spread through the mid frequencies (250-3500 Hz)
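Tying together the three feature bands just listed (voicing roughly 100-4000 Hz, place roughly 1000-8000 Hz, manner roughly 250-3500 Hz), the sketch below asks how much of each band survives if usable hearing stops at some cutoff frequency. The hard-cutoff model of residual hearing is a deliberate oversimplification used only to show why place cues are typically the first casualties of a high-frequency loss.

```python
# Sketch: overlap between each feature's frequency band (from slides 42-44) and
# a listener's usable hearing range. The hard 125-2000 Hz usable range is an
# assumed, deliberately oversimplified model of a high-frequency loss.

FEATURE_BANDS_HZ = {
    "voicing": (100, 4000),
    "manner":  (250, 3500),
    "place":   (1000, 8000),
}

def fraction_of_band_audible(band, usable_low=125, usable_high=2000):
    """Fraction (by linear Hz) of a feature's band inside the usable hearing range."""
    low, high = band
    overlap = max(0, min(high, usable_high) - max(low, usable_low))
    return overlap / (high - low)

for feature, band in FEATURE_BANDS_HZ.items():
    print(f"{feature:8s} {fraction_of_band_audible(band):.0%} of band audible below 2000 Hz")
```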

  45. Consonants and Vowels Together • Schematic oral tract movements and spectrograms for the phrases “a buy” and “a pie” (top spectrograms) and “a dye” and “a tie” (lower spectrograms)

  46. Formant Transitions • Schematic of a transition and steady-state portion of a formant frequency

  47. F2 Formant Transitions • The second formant transition provides a lot of information about the consonant • Place of articulation is related to the direction of the transition • Manner of articulation is related to the rate of the transition
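A minimal sketch of the two quantities this slide ties to consonant identity: the direction of the F2 transition (related to place) and its rate of change (related to manner). The frequencies and durations in the example calls are made-up illustrative values.

```python
# Sketch: direction and rate of a linear F2 transition from an onset frequency
# to the vowel's steady-state frequency. Example values are illustrative only.

def describe_f2_transition(f2_start_hz: float, f2_steady_hz: float, duration_ms: float):
    """Return (direction, rate in Hz per ms) of a linear F2 transition."""
    change = f2_steady_hz - f2_start_hz
    direction = "rising" if change > 0 else "falling" if change < 0 else "flat"
    return direction, abs(change) / duration_ms

# Fast transition (stop-like) vs. slow transition (glide-like), per the slide's
# point that transition rate relates to manner of articulation.
print(describe_f2_transition(1800, 1200, 40))    # ('falling', 15.0)  Hz/ms
print(describe_f2_transition(1800, 1200, 150))   # ('falling', 4.0)   Hz/ms
```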

  48. Error Patterns with SNHL • Place of articulation and manner of articulation error rates for 38 SNHI listeners • Place of articulation errors are more prevalent, followed by manner of articulation errors

  49. Feature Recognition as a Function of Degree of HL • Auditory identification of temporal patterns, vowels, and consonant features by 121 hearing-impaired children as a function of PTA • Notice how recognition of the place of articulation feature is adversely affected by hearing loss • Voicing and vowel identification are better preserved

  50. Summary of Findings… • General findings of studies of phoneme perception for SNHL using meaningful CVC stimuli • Relatively few errors are made with the vowel • When vowel errors do occur, they occur more often for front vowels (higher F2 frequency) • More errors are made with consonants • The final position is extremely vulnerable • The most common error type is place of articulation, followed by manner of articulation • Voicing errors are rare
