Sound Production and Perception
E.M. Bakker
LML Audio Processing and Indexing 2010
Overview
• The Physics of Sound
• The Production of Speech
• The Perception of Speech and Audio
  • Frequency Masking
  • Noise Masking
  • Temporal Masking
• Phonetics and Phonology
• Analog and Digital Signals
The Physics of Sound
• Speed of Sound
  • Proportional to sqrt(elastic modulus / density)
  • Second-order dependency on the amplitude of the sound => nonlinear propagation effects
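A quick numerical illustration of the proportionality above; a minimal Python sketch (the material constants are textbook values, not from the slides):

```python
import math

def speed_of_sound(elastic_modulus: float, density: float) -> float:
    """c = sqrt(elastic modulus / density)."""
    return math.sqrt(elastic_modulus / density)

# Air at 20 C: adiabatic bulk modulus = gamma * p = 1.4 * 101325 Pa,
# density = 1.204 kg/m^3  ->  c is approximately 343 m/s
print(speed_of_sound(1.4 * 101325, 1.204))

# Water: bulk modulus = 2.2e9 Pa, density = 1000 kg/m^3  ->  ~1480 m/s
print(speed_of_sound(2.2e9, 1000.0))
```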
The Production of Speech
• How is SPEECH produced?
• Characteristics of the acoustic signal
(Diagram: Speech Signal -> Speech Recognition -> Words: "How are you?")
Human Speech Production
• Physiology
  • Schematic and X-ray Sagittal View
  • Vocal Cords at Work
  • Transduction
  • Spectrogram
• Acoustics
  • Acoustic Theory
  • Wave Propagation
(Picture by Y. Mrabet from Wikipedia)
Sagittal Plane View of the Human Vocal Apparatus
Characterization of English Phonemes
English Phonemes
• Examples: bet, debt, get
• "pin" vs. "spin": the p is pronounced differently in each word; the two variants are allophones of the phoneme /p/
The Vowel Space
• We can characterize a vowel sound by the locations of the first and second spectral resonances, known as formant frequencies.
• Some voiced sounds, such as diphthongs, are transitional sounds that move from one vowel location to another.
Phonetics: Formant Frequency Ranges
Vocal Cords
Models for Speech Production
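The figures for this slide are not reproduced in this text version. As a rough companion, here is a minimal sketch of the classical source-filter view of speech production (an impulse-train glottal source driving an all-pole vocal-tract resonator); the pitch, formant frequencies, and bandwidths are illustrative assumptions, not values from the slides:

```python
import numpy as np
from scipy.signal import lfilter

fs = 8000                    # sampling rate (Hz)
f0 = 100                     # pitch: glottal pulse rate (Hz)
n = fs // 2                  # half a second of samples

# Source: impulse train at the pitch period (voiced excitation)
source = np.zeros(n)
source[:: fs // f0] = 1.0

def formant_section(freq_hz: float, bw_hz: float):
    """All-pole denominator 1 - 2r*cos(theta)*z^-1 + r^2*z^-2,
    placing a resonance at freq_hz with bandwidth bw_hz."""
    r = np.exp(-np.pi * bw_hz / fs)
    theta = 2 * np.pi * freq_hz / fs
    return [1.0, -2 * r * np.cos(theta), r * r]

# Filter: two resonances (formants), e.g. 700 Hz and 1200 Hz
speech = source
for freq, bw in [(700, 80), (1200, 100)]:
    speech = lfilter([1.0], formant_section(freq, bw), speech)
```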
Vocal Tract Workshop
Articulatory Speech Synthesis using TractSyn (P. Birkholz, 2004).
Reading material: P. Birkholz, D. Jackel, "A Three-Dimensional Model of the Vocal Tract for Speech Synthesis." In Proceedings of the 15th International Congress of Phonetic Sciences, pp. 2597-2600, Barcelona, Spain, 2003.
See: http://www.vocaltractlab.de/
The Perception of Speech
• How is SPEECH perceived?
(Diagram: Speech Signal -> Speech Recognition -> Words: "How are you?")
The Perception of Speech: Sound Pressure
• The ear is the most sensitive human organ. Vibrations on the order of angstroms are used to transduce sound. It has the largest dynamic range (~140 dB) of any organ in the human body.
• The lower portion of the curve is an audiogram: hearing sensitivity. It can vary up to 20 dB across listeners.
• Levels above 120 dB correspond to a loud pop concert (or standing under a Boeing 747 at take-off).
• Typical ambient office noise is about 55 dB.
• Intensity level in dB: L = 10 log10(I/I0), where I0 is the intensity of a 1 kHz tone that is just audible (threshold of hearing).
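The decibel formula above in a small sketch; the reference intensity I0 = 1e-12 W/m^2 (roughly the threshold of hearing for a 1 kHz tone) is the conventional value, assumed here:

```python
import math

def intensity_level_db(i: float, i0: float = 1e-12) -> float:
    """Sound intensity level: 10 * log10(I / I0) dB.
    I0 = 1e-12 W/m^2 is the usual threshold-of-hearing reference."""
    return 10.0 * math.log10(i / i0)

print(intensity_level_db(1e-12))  # 0 dB: just audible
print(intensity_level_db(1.0))    # 120 dB: pop-concert / jet-at-takeoff territory
```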
The Perception of Speech: The Ear
• Three main sections: outer, middle, and inner ear.
• The outer and middle ears reproduce the analog signal (impedance matching); the inner ear transduces the pressure wave into an electrical signal.
• The outer ear consists of the external visible part and the auditory canal. The tube is about 2.5 cm long.
• The middle ear consists of the eardrum and three bones (malleus, incus, and stapes). It converts the sound pressure wave to displacement of the oval window (the entrance to the inner ear).
The Perception of Speech: The Ear
• The inner ear primarily consists of a fluid-filled tube (the cochlea), which contains the basilar membrane. Fluid movement along the basilar membrane displaces hair cells, which generate electrical signals.
• There is a discrete number of hair cells (~30,000); each hair cell is tuned to a different frequency.
• Place vs. Temporal Theory: firings of hair cells are processed by two types of neurons (onset chopper units for temporal features and transient chopper units for spectral features).
Perception: Psychoacoustics
• Psychoacoustics: the branch of science dealing with hearing and the sensations produced by sounds.
• A basic distinction must be made between the perceptual attributes of a sound and measurable physical quantities:
  • Many physical quantities are perceived on a logarithmic scale (e.g., loudness). Our perception is often a nonlinear function of the absolute value of the physical quantity being measured (e.g., equal loudness).
  • Timbre can be used to describe why musical instruments sound different.
  • What factors contribute to speaker identity?
Perception: Equal Loudness
• Just Noticeable Difference (JND): the acoustic value at which 75% of responses judge stimuli to be different (limen).
• The perceptual loudness of a sound is specified via its relative intensity above the threshold. A sound's loudness is often defined in terms of how intense a reference 1 kHz tone must be to sound equally loud.
Perception, Non-Linear Frequency Warping: Bark and Mel Scale
• Critical Bandwidths: correspond to approximately 1.5 mm spacings along the basilar membrane, suggesting a set of 24 bandpass filters.
• Critical Band: can be related to a bandpass filter whose frequency response corresponds to the tuning curves of auditory neurons; a frequency range over which two sounds sound as if they are fusing into one.
• Bark Scale: Bark(f) = 13 arctan(0.00076 f) + 3.5 arctan((f / 7500)^2)
• Mel Scale: mel(f) = 2595 log10(1 + f / 700)
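The two warpings in a minimal sketch, assuming the common parameterisations given above (a Zwicker-style Bark approximation and the 2595 log10 Mel formula used for MFCC filter banks):

```python
import math

def hz_to_bark(f: float) -> float:
    """Zwicker-style approximation of the Bark scale."""
    return 13.0 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500.0) ** 2)

def hz_to_mel(f: float) -> float:
    """O'Shaughnessy's Mel formula, standard in MFCC front ends."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

for f in (100, 500, 1000, 4000, 8000):
    print(f"{f:5d} Hz -> {hz_to_bark(f):5.2f} Bark, {hz_to_mel(f):7.1f} mel")
```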
Perception: Bark and Mel Scale
• The Bark scale implies a nonlinear frequency mapping
• Filter banks used in ASR follow this nonlinear mapping
Comparison of Bark and Mel Space Scales
Perception: Tone-Masking Noise
• Frequency masking: one sound cannot be perceived if another sound close in frequency has a high enough level. The first sound masks the second.
• Tone-masking noise: noise with energy EN (dB) at Bark frequency g masks a tone at Bark frequency b if the tone's energy is below the threshold:
  TT(b) = EN - 6.025 - 0.275 g + Sm(b - g)   (dB SPL)
  where the spread-of-masking function Sm(b) is given by:
  Sm(b) = 15.81 + 7.5 (b + 0.474) - 17.5 sqrt(1 + (b + 0.474)^2)   (dB)
• Temporal masking: onsets of sounds are masked in the time domain through a similar masking process.
• Thresholds are frequency and energy dependent.
• Thresholds depend on the nature of the sound as well.
Perception: Noise-Masking Tone
• Noise-masking tone: a tone at Bark frequency g with energy ET (dB) masks noise at Bark frequency b if the noise energy is below the threshold:
  TN(b) = ET - 2.025 - 0.17 g + Sm(b - g)   (dB SPL)
• Masking thresholds are commonly referred to as Bark-scale functions of just noticeable differences (JND).
• Thresholds are not symmetric.
• Thresholds depend on the nature of the noise and the sound.
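Both thresholds and the spread-of-masking function translate directly into code. A sketch using the formulas from the two slides above (the example energies and Bark positions are made-up illustrations):

```python
import math

def spread_of_masking(delta_b: float) -> float:
    """Sm(b): spread of masking (dB) at Bark distance delta_b = b - g."""
    return 15.81 + 7.5 * (delta_b + 0.474) \
           - 17.5 * math.sqrt(1.0 + (delta_b + 0.474) ** 2)

def tone_masked_by_noise(en_db: float, g: float, b: float) -> float:
    """TT(b): threshold below which a tone at Bark b is masked
    by noise with energy en_db (dB) at Bark g."""
    return en_db - 6.025 - 0.275 * g + spread_of_masking(b - g)

def noise_masked_by_tone(et_db: float, g: float, b: float) -> float:
    """TN(b): threshold below which noise at Bark b is masked
    by a tone with energy et_db (dB) at Bark g."""
    return et_db - 2.025 - 0.17 * g + spread_of_masking(b - g)

# 60 dB masker at Bark 10, probe one Bark above it:
print(tone_masked_by_noise(60.0, 10.0, 11.0))
print(noise_masked_by_tone(60.0, 10.0, 11.0))
```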
Masking
Perceptual Noise Weighting
• Noise weighting: shaping the spectrum to hide noise introduced by imperfect analysis and modeling techniques (essential in speech coding).
• Humans are sensitive to noise introduced in low-energy areas of the spectrum.
• Humans tolerate more additive noise when it falls under high-energy areas of the spectrum. The amount of noise tolerated is greater if it is spectrally shaped to match perception.
• We can simulate this phenomenon using "bandwidth broadening".
Perceptual Noise Weighting
Simple Z-transform interpretation:
• Can be implemented by evaluating the Z-transform around a contour closer to the origin in the z-plane: Hnw(z) = H(az).
• Used in many speech compression systems (Code-Excited Linear Prediction).
• Analysis is performed on bandwidth-broadened speech; synthesis is performed using normal speech. This effectively shapes the noise to fall under the formants.
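A minimal sketch of bandwidth broadening applied to LPC coefficients: the substitution in Hnw(z) = H(az) scales the k-th predictor coefficient by a^-k (written gamma^k below, with gamma = 1/a < 1), which pulls the poles toward the origin and widens the formant bandwidths. The example predictor and gamma = 0.9 are illustrative assumptions:

```python
import numpy as np

def broaden(lpc_coeffs: np.ndarray, gamma: float = 0.9) -> np.ndarray:
    """Bandwidth broadening A(z) -> A(z/gamma):
    scale the k-th LPC coefficient by gamma**k."""
    k = np.arange(len(lpc_coeffs))
    return lpc_coeffs * gamma ** k

a = np.array([1.0, -1.8, 1.5, -0.9, 0.4])  # illustrative 4th-order predictor
print(broaden(a))  # poles move inward; formant peaks flatten out
```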
Perception: Echo and Delay
• Humans are used to hearing their own voice while they speak: real-time feedback (side tone).
• When we place headphones over our ears, which dampens this feedback, we tend to speak louder.
• Lombard Effect: humans speak louder in the presence of ambient noise.
• When this side tone is delayed, it interrupts our cognitive processes and degrades our speech.
• This effect begins at delays of approximately 250 ms.
• Modern telephony systems have been designed to keep delays below this value (e.g., long-distance phone calls routed over satellites).
• Digital speech processing systems can introduce large amounts of delay due to non-real-time processing.
Perception: Adaptation
• Adaptation refers to changing sensitivity in response to a continued stimulus, and is likely a feature of the mechano-electrical transformation in the cochlea.
• Neurons tuned to a frequency where energy is present do not change their firing rate drastically for the next sound.
• Additive broadband noise does not significantly change the firing rate for a neuron in the region of a formant.
Visual Adaptation
• The McGurk Effect is an audiovisual illusion that results from combining a face pronouncing one syllable with the sound of a different syllable. The illusion is stronger for some combinations than for others. For example, an auditory 'ba' combined with a visual 'ga' is perceived by some percentage of people as 'da'. A larger proportion will perceive an auditory 'ma' with a visual 'ka' as 'na'. Some researchers have measured evoked electrical signals matching the "perceived" sound.
Perception: Timing
• Temporal resolution of the ear is crucial.
• Two clicks are perceived monaurally as one unless they are separated by at least 2 ms.
• 17 ms of separation is required before we can reliably determine the order of the clicks (~58 per second, or ~3530 per minute).
• Sounds with onsets faster than 20 ms are perceived as "plucks" rather than "bows".
• Short sounds near the threshold of hearing must exceed a certain intensity-time product to be perceived.
• Humans do not perceive individual "phonemes" in fluent speech: they are simply too short. We somehow integrate the effect over intervals of approximately 100 ms.
• Humans are very sensitive to long-term periodicity (ultra-low frequency); this has implications for random noise generation.
Analog and Digital Signals
• From Analog to Digital Signal
• Sampling & Aliasing
Analog and Digital Signals
• Analog signal: a continuous function F of a continuous variable t (t can be time, space, etc.): F(t).
• Digital signal: a discrete function Fk of a discrete (sampling) variable tk, with k an integer: Fk = F(tk).
• Uniform (periodic) sampling with sampling frequency fS = 1/tS, e.g., tS = 0.001 s => fS = 1000 Hz.
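In code, uniform sampling is just evaluating F at the instants tk = k tS. A sketch using the slide's fS = 1000 Hz (the 50 Hz test sine is an arbitrary choice):

```python
import numpy as np

def F(t):
    """Continuous-time signal F(t): a 50 Hz sine (arbitrary example)."""
    return np.sin(2 * np.pi * 50 * t)

fs = 1000            # sampling frequency (Hz): ts = 0.001 s, as on the slide
ts = 1.0 / fs
k = np.arange(100)   # integer sample index
tk = k * ts          # sampling instants t_k = k * t_s
Fk = F(tk)           # discrete signal F_k = F(t_k)
```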
Digital System Implementation
Important issues:
• Analysis bandwidth, dynamic range
• Pass/stop bands
• Sampling rate, number of bits, and further parameters
• Digital format
Sampling
How fast must we sample a continuous signal to preserve its information content?
Example: turning wheels in a movie.
• 25 frames per second, i.e., 25 samples/sec.
• Train starts => wheels appear to go clockwise.
• Train accelerates => wheels appear to go counter-clockwise.
• Frequency misidentification due to a low sampling frequency.
Sampling: the independent variable (for example, time) goes from continuous to discrete.
Quantisation: the dependent variable (for example, voltage) goes from continuous to discrete.
Here we'll talk about uniform sampling.
Sampling
s(t) = sin(2π f0 t), sampled at fS, with f0 = 1 Hz and fS = 3 Hz.
The same samples also fit s1(t) = sin(8π f0 t) and s2(t) = sin(14π f0 t).
In general, s(t) sampled at fS represents exactly all sine waves sk(t) defined by:
sk(t) = sin(2π (f0 + k fS) t), k ∈ ℕ
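A quick numerical check of this claim: with f0 = 1 Hz and fS = 3 Hz, every member of the family sk(t) = sin(2π (f0 + k fS) t) produces exactly the same sample sequence:

```python
import numpy as np

f0, fs = 1.0, 3.0            # Hz, as on the slide
tk = np.arange(10) / fs      # sampling instants

for k in range(3):           # s_k(t) = sin(2*pi*(f0 + k*fs)*t)
    samples = np.sin(2 * np.pi * (f0 + k * fs) * tk)
    print(k, np.round(samples, 6))
# all three rows are identical: the sampled sequences are indistinguishable
```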
The Sampling Theorem
Theorem: a signal s(t) with maximum frequency fMAX can be recovered if sampled at frequency fS > 2 fMAX.
(Multiple proposers: Whittaker(s), Nyquist, Shannon, Kotel'nikov.)
Nyquist frequency (rate): fN = 2 fMAX.
Example: a signal with components F1 = 25 Hz, F2 = 150 Hz, F3 = 50 Hz, so fMAX = 150 Hz. Condition on fS? fS > 300 Hz.
Frequency Domain
• Time and frequency are two complementary signal descriptions: a signal can be seen as projected onto the time domain or the frequency domain.
• Bandwidth indicates the rate of change of a signal: high bandwidth => the signal changes fast.
• Passband bandwidth: the range between the lower and upper cutoff frequencies.
Example: the ear + brain act as a frequency analyser: the audio spectrum is split into many narrow bands, so low-power sounds can be detected in a loud background.
Sampling Low-Pass Signals
(a) Band-limited signal: frequencies in [-B, B] (fMAX = B).
(b) Time sampling causes repetition in the frequency domain; fS > 2B => no aliasing.
(c) fS ≤ 2B => aliasing!
Aliasing: signal ambiguity in the frequency domain.
Note: s(t) sampled at fS represents all sine waves sk(t) defined by sk(t) = sin(2π (f0 + k fS) t), k ∈ ℕ.
Antialiasing Filter
(a), (b) Out-of-band noise can alias into the band of interest. Filter it out before sampling!
(c) Antialiasing filter passband: depends on the bandwidth of interest.
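A minimal sketch of an antialiasing chain in SciPy: low-pass filter first, then decimate. The 6th-order Butterworth design and all the rates are arbitrary assumptions; the slide does not specify a filter type:

```python
import numpy as np
from scipy.signal import butter, lfilter

fs_target = 8000                       # desired sampling rate (Hz)
fs_high = 4 * fs_target                # dense rate standing in for "analog"
cutoff = 0.45 * fs_target              # keep a margin below fs_target / 2

# 6th-order Butterworth low-pass as the antialiasing filter
b, a = butter(6, cutoff, btype="low", fs=fs_high)

t = np.arange(0, 1, 1 / fs_high)
x = np.sin(2 * np.pi * 1000 * t) + np.sin(2 * np.pi * 5000 * t)  # 5 kHz would alias
x_filtered = lfilter(b, a, x)          # out-of-band energy attenuated
x_sampled = x_filtered[::4]            # now safe(r) to sample at fs_target
```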
Under-sampling
Using spectral replications to reduce the sampling frequency fS requirements.
An integer m ∈ ℕ is selected so that the spectral replicas do not overlap and fS > 2B.
Advantages:
• Slower ADCs / electronics needed.
• Simpler antialiasing filters.
Example: fC = 20 MHz, B = 5 MHz.
Without under-sampling: fS > 40 MHz.
With under-sampling: fS = 22.5 MHz (m=1); 17.5 MHz (m=2); 11.66 MHz (m=3).
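The admissible rates can be enumerated with the standard band-pass sampling condition 2fH/m ≤ fS ≤ 2fL/(m-1), where fL = fC - B/2 and fH = fC + B/2 (this condition is an assumption here, since the slide's own formula did not survive conversion; note the slide's quoted rates appear as the lowest admissible rates of successive zones):

```python
def undersampling_zones(fc: float, b: float):
    """Yield (m, lowest fs, highest fs) for each valid band-pass
    sampling zone: 2*fh/m <= fs <= 2*fl/(m-1)."""
    fl, fh = fc - b / 2, fc + b / 2
    m = 1
    while m <= fh / b:                  # beyond this the zones are empty
        lo = 2 * fh / m
        hi = float("inf") if m == 1 else 2 * fl / (m - 1)
        if lo <= hi:
            yield m, lo, hi
        m += 1

for m, lo, hi in undersampling_zones(20e6, 5e6):
    top = "inf" if hi == float("inf") else f"{hi / 1e6:.2f}"
    print(f"m={m}: fs in [{lo / 1e6:.2f}, {top}] MHz")
# m=1: [45.00, inf]    m=2: [22.50, 35.00]
# m=3: [15.00, 17.50]  m=4: [11.25, 11.67]
```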
Over-sampling
Over-sampling: sampling at frequencies fS >> 2 fMAX.
Over-sampling & averaging may improve ADC resolution.
With fOS = over-sampling frequency and w = number of additional bits required:
fOS = 4^w · fS
Each additional bit requires over-sampling by a factor of four.
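The fOS = 4^w · fS rule in code (the 44.1 kHz base rate is just an example):

```python
def oversampling_rate(fs: float, extra_bits: int) -> float:
    """f_OS = 4**w * f_S: each extra bit of resolution costs 4x oversampling."""
    return 4 ** extra_bits * fs

fs = 44100.0
for w in (1, 2, 3):
    print(w, oversampling_rate(fs, w))  # 176400.0, 705600.0, 2822400.0 Hz
```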
(Some) ADC Parameters
Different applications have different needs:
• Number of bits N (~resolution)
• Data throughput (~speed)
• Signal-to-noise ratio (SNR)
• Signal-to-noise-and-distortion ratio (SINAD)
• Effective Number of Bits (ENOB)
• Static distortion
• …
(Figure: different application domains, e.g., radar systems, communication, imaging/video, stress different parameters.)
NB: definitions may be slightly manufacturer-dependent!
ADC - Number of Bits N
• The continuous input signal is digitized into 2^N levels.
• Uniform, bipolar transfer function (figure: N = 3).
• Quantisation step: q = Vmax / 2^N.
  Example: Vmax = 1 V, N = 12 => q = 244.1 µV.
• The quantisation error can be expressed as a voltage (= q), a scale factor (= 1 / 2^N), or a percentage (= 100 / 2^N).
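The quantisation-step arithmetic in a sketch, reproducing the slide's example (Vmax = 1 V, N = 12):

```python
def quantisation_step(vmax: float, n_bits: int) -> float:
    """q = Vmax / 2**N."""
    return vmax / 2 ** n_bits

q = quantisation_step(1.0, 12)
print(q)               # 0.000244140625 V, i.e. ~244.1 uV
print(1 / 2 ** 12)     # as a scale factor
print(100 / 2 ** 12)   # as a percentage: ~0.0244 %
```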
ADC - Quantisation Error
• The quantisation error eq lies in [-0.5 q, +0.5 q].
• eq limits the ability to resolve small signals.
• Higher resolution means lower eq.
(Plot: quantisation error for N = 12, VFS = 1 V.)
SNR of Ideal ADC
Assumptions:
• Ideal ADC: only quantisation error eq (p(e) constant, no stuck bits, …).
• eq uncorrelated with the signal.
• ADC performance constant in time.
• Full-scale sine input: input(t) = ½ VFSR sin(ωt), sampled at fS = 2 fMAX.
The quantisation noise then has RMS value eq,RMS = q / sqrt(12).   (1)
(RMS = root mean square.) This SNR is also called SQNR (signal-to-quantisation-noise ratio).
SNR of Ideal ADC
The RMS value of the full-scale sine is VFSR / (2 sqrt(2)). Substituting (1) gives:
SNR = 20 log10( (VFSR / (2 sqrt(2))) / (q / sqrt(12)) ) = 6.02 N + 1.76 dB   (2)
One additional bit => SNR increases by ~6 dB.
Real SNR is lower because:
• Real signals have noise.
• Forcing the input to full scale is unwise.
• Real ADCs have additional noise (aperture jitter, non-linearities, etc.).
Note: (2) needs a correction factor depending on the ratio between the sampling frequency and the Nyquist frequency: processing gain due to oversampling.
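A sketch verifying (2) numerically: quantise a full-scale sine with N bits and compare the measured SNR with 6.02 N + 1.76 dB (the signal frequency and length are arbitrary choices):

```python
import numpy as np

def ideal_snr_db(n_bits: int) -> float:
    """SNR of an ideal ADC with a full-scale sine input: 6.02*N + 1.76 dB."""
    return 6.02 * n_bits + 1.76

n_bits = 12
t = np.arange(1 << 16) / (1 << 16)
x = 0.5 * np.sin(2 * np.pi * 117 * t)     # full-scale range VFSR = 1
q = 1.0 / 2 ** n_bits                     # quantisation step
xq = np.round(x / q) * q                  # ideal uniform quantiser
noise = xq - x
snr = 10 * np.log10(np.mean(x ** 2) / np.mean(noise ** 2))
print(ideal_snr_db(n_bits), round(snr, 2))   # both close to 74 dB
```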