A Novel Approach to Speech Coding After Time Scale Modification Presented by H. Gokhan Ilk, Ph.D.
Something about the presenter B.Sc., Ankara University, Electronics Engineering Dept. M.Sc., Instrument Design & Applications, UMIST (University of Manchester Institute of Science and Technology), UK
Something about the presenter Ph.D., DCT-Based Prototype Interpolation Speech Coding, University of Manchester, UK
Where is the Department?
Contact Details Address: Ankara University, Faculty of Engineering, Electronics Engineering Department, Beşevler 06100 Ankara, Turkey. E-mail: ilk@ieee.org
Speech Production Medical doctors are more interested in this figure. [Figure: the speech production apparatus, from the lungs up through the vocal tract.]
What does it look like? [Figure: a speech waveform, annotated with its short-term and long-term correlation.] This figure is more interesting for a DSP course/seminar.
What does it look like? Speech can broadly be classified as voiced or unvoiced. The voiced part is a quasi-periodic (almost periodic) signal with higher energy and fewer zero crossings. The unvoiced part is a noise-like signal.
What does it look like in the frequency domain? [Figure: power spectra of (a) voiced and (b) unvoiced speech.] PSD: Power Spectral Density.
Now is a good time for maths. Has anyone heard of Wiener filter theory? Optimal filtering, the convolution sum... The Wiener filter turns out to be an FIR filter with N coefficients.
Optimal Filtering The error is the difference between our signal and the optimal estimate.
Prediction as an Optimum Filtering Problem
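The equations for these three slides did not survive the conversion. As a hedged reconstruction of the standard setup they refer to (notation mine, not taken from the slides): an FIR Wiener filter of length N estimates a desired signal d[n] from an input x[n],

$$\hat{d}[n] = \sum_{k=0}^{N-1} w_k\, x[n-k] \qquad \text{(convolution sum)},$$
$$e[n] = d[n] - \hat{d}[n], \qquad J = E\{e^2[n]\} \;\to\; \min_{w_0,\dots,w_{N-1}}.$$

Prediction is then the special case in which the desired signal is the current sample of the same process, d[n] = x[n], estimated from its past values x[n-1], ..., x[n-p].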
LPC Analysis Filter [Block diagram: the input speech and the output of the linear prediction filter meet at a summing node (+/-); their difference is the residual.]
The AR (Auto-Regressive) Model Considering optimum filter theory and regression analysis: since both the independent and the dependent variables belong to the same random process x, x[n] is called an autoregressive (AR) process; that is, the process is regressed upon itself. Thanks to the people from statistics, who called this analysis regression analysis of time series a long, long time ago.
Innovations Representation [Block diagram: LPC analysis filter $H(z) = A(z)$ followed by LPC synthesis filter $H^{-1}(z) = 1/A(z)$; the signal between them is white noise.] From linear system theory, the inverse system has many advantages: in communications the two systems (left and right) are far apart, and the system on the right does not need any input ???
Innovations Representation The innovations representation is basically an inverse system. Why is it called innovations? Assume that x, our discrete random signal, is speech. It can be either voiced, which means it is quasi-periodic, or unvoiced, in which case it is noise. If x is voiced, LPC analysis works very well and e[n] is close to zero. If x is unvoiced, LPC analysis works well again, because e[n] is white noise. In either case we do not need e[n], and thus the filters themselves carry the information. That is why the representation is called an INNOVATIONS representation.
What are these filters? [Block diagram: the linear prediction synthesis filter $A^{-1}(z) = 1/A(z)$ takes the excitation $E(z)$ as input and produces the speech $X(z)$.]
What are these filters? Finally, the LPC analysis and synthesis filters: A(z) is the LPC analysis filter and 1/A(z) is the LPC synthesis filter. The filter in the AR model is therefore an IIR filter, and the AR model is said to be an "all-pole" model (useful information for statisticians).
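The transfer functions themselves were shown as images on the slides; a hedged reconstruction of the usual definitions, with p the prediction order and a_i the LPC coefficients, is:

$$A(z) = 1 - \sum_{i=1}^{p} a_i z^{-i} \quad \text{(LPC analysis filter, FIR)}, \qquad H(z) = \frac{1}{A(z)} \quad \text{(LPC synthesis filter, all-pole IIR)},$$
$$E(z) = A(z)\,X(z), \qquad X(z) = \frac{E(z)}{A(z)}.$$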
What is the deal with these filters??? 1/A(z) is a causal filter (does everybody see that???). For everything to work, A(z) should be minimum phase: causal and stable (???), with a causal, stable inverse. Since A(z) is an FIR filter it is always stable, and we know it is causal. We also know that 1/A(z) is causal. BUT IS IT ALWAYS STABLE??? We will now see that the $a_i$ (LPC coefficients) are found by solving the normal equations with a positive-definite correlation matrix. Since they are found by inverting a positive-definite matrix, the poles always lie within the unit circle...
How do we calculate the LPC coefficients? The problem is to determine the parameters $a_j$, $j = 1, 2, \dots, p$. If $\hat{a}_j$ represents the estimate of $a_j$, then the error (or residual) is given by $e(n) = s(n) - \sum_{j=1}^{p} \hat{a}_j\, s(n-j)$.
Derivatives again? It is now possible to determine the estimates by minimising the mean squared error, i.e. $E\{e^2(n)\}$. Setting the partial derivatives of the error with respect to $\hat{a}_j$ to zero for $j = 1, 2, \dots, p$, we get $E\{e(n)\, s(n-i)\} = 0$, $i = 1, 2, \dots, p$, where $E\{\cdot\}$ is the expectation operator.
Solving the linear equations That is, e(n) is orthogonal to s(n-i) for i = 1, 2, ..., p. The equation can be rearranged to give $\sum_{j=1}^{p} \hat{a}_j\, E\{s(n-j)\, s(n-i)\} = E\{s(n)\, s(n-i)\}$, $i = 1, 2, \dots, p$. Is this an autocorrelation? Or is it not? (The signal is assumed stationary.)
Are we good with linear algebra? That is, e(n) is orthogonal to s(n-i) for i = 1, 2, ..., p; in matrix form this is a linear system A x = b. [Figure: a worked linear-system example, obtained from a University of Chicago web site.]
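Reading the slide's A x = b together with the stationarity assumption, the normal equations take the familiar matrix form (a reconstruction, writing $R(k) = E\{s(n)\, s(n-k)\}$):

$$\begin{pmatrix} R(0) & R(1) & \cdots & R(p-1) \\ R(1) & R(0) & \cdots & R(p-2) \\ \vdots & \vdots & \ddots & \vdots \\ R(p-1) & R(p-2) & \cdots & R(0) \end{pmatrix} \begin{pmatrix} \hat{a}_1 \\ \hat{a}_2 \\ \vdots \\ \hat{a}_p \end{pmatrix} = \begin{pmatrix} R(1) \\ R(2) \\ \vdots \\ R(p) \end{pmatrix}.$$

The p x p matrix is symmetric, Toeplitz and positive definite, which is exactly what the minimum-phase argument two slides back relies on, and what the Levinson-Durbin recursion on the coming slides exploits.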
Method I: the Autocorrelation Method. N is the length of the sample sequence, and $s_n(m) = 0$ outside the interval $0 \le m \le N-1$ (the frame is windowed), so the correlation values become $r(i) = \sum_{m=i}^{N-1} s_n(m)\, s_n(m-i)$, $i = 0, 1, \dots, p$.
Short-time autocorrelation: the resulting matrix is symmetric and Toeplitz, so the system can be solved efficiently with the Levinson-Durbin recursion.
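A minimal sketch of the autocorrelation method with a Levinson-Durbin solver, assuming NumPy; the Hamming window and all names are illustrative choices of mine, not taken from the slides or the authors' code:

```python
import numpy as np

def levinson_durbin(r, p):
    """Solve the symmetric Toeplitz normal equations for a_1..a_p,
    where r[0..p] are short-time autocorrelation values."""
    a = np.zeros(p + 1)        # a[0] is unused; a[1..p] are the predictor coefficients
    err = r[0]                 # prediction error energy
    for i in range(1, p + 1):
        k = (r[i] - np.dot(a[1:i], r[i - 1:0:-1])) / err   # reflection coefficient
        a_prev = a.copy()
        a[i] = k
        for j in range(1, i):
            a[j] = a_prev[j] - k * a_prev[i - j]
        err *= (1.0 - k * k)
    return a[1:], err

def lpc_autocorrelation(frame, p):
    """Autocorrelation method: window the frame (so it is zero outside
    0..N-1), compute r(0)..r(p), then run Levinson-Durbin."""
    s = frame * np.hamming(len(frame))
    r = np.array([np.dot(s[:len(s) - i], s[i:]) for i in range(p + 1)])
    return levinson_durbin(r, p)
```

For a typical 8 kHz speech frame, something like `a_hat, e = lpc_autocorrelation(frame, p=10)` returns the coefficients of A(z) together with the residual energy.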
Method II: the Covariance Method. It requires the use of the samples in the interval $-p \le m \le N-1$.
Covariance Method The covariance matrix is symmetric (but not Toeplitz); the system is solved by Cholesky decomposition.
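And a covariance-method counterpart solved by Cholesky decomposition, again a sketch with NumPy and invented names; the input buffer is assumed to contain the p samples preceding the analysis frame, as the slide's interval $-p \le m \le N-1$ suggests:

```python
import numpy as np

def lpc_covariance(s, p):
    """Covariance method: `s` holds p history samples followed by the
    N-sample analysis frame (no windowing is applied)."""
    N = len(s) - p
    # phi(i, k) = sum over the frame of s(m - i) * s(m - k), for i, k = 0..p
    phi = np.array([[np.dot(s[p - i:p - i + N], s[p - k:p - k + N])
                     for k in range(p + 1)] for i in range(p + 1)])
    C = phi[1:, 1:]             # symmetric (not Toeplitz), in practice positive definite
    b = phi[1:, 0]
    L = np.linalg.cholesky(C)   # Cholesky decomposition C = L L^T
    y = np.linalg.solve(L, b)   # solve L y = b
    a = np.linalg.solve(L.T, y) # then L^T a = y
    return a
```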
What is next??? Now that we have the LPC coefficients $a_i$, we can represent speech compactly. This further requires an efficient representation of the excitation (residual, error) signal. For example, optimum magnitude calculation of regularly spaced pulses for the excitation is at the heart of the GSM (Global System for Mobile Communications) speech codec.
State of the Art Efficient quantization of the LPC parameters (as LSPs or LSFs, line spectral pairs/frequencies), together with an efficient representation and quantization of the excitation, results in today's state-of-the-art voice coding. Examples: GSM, CELP (code-excited linear prediction), MELP (mixed-excitation linear prediction), etc.
Anything novel and interesting? Linear predictive coding and efficient representation of the excitation signal have attracted so much interest that these poor subjects have been beaten to death. Therefore one has to do A LOT in order to gain A LITTLE, or merge two different disciplines in a clever way. It turns out that Prof. Verhelst has already developed one of the most important tools in one of these disciplines.
What is the novelty? Since the speech signal exhibits both short-term and long-term correlation, and LPC analysis removes most of the short-term correlation, we can also remove the long-term correlation, i.e. get rid of long-term redundancy. The key is not to disturb the pitch and formant frequencies. A detailed investigation of these parameters can be found in: W. Verhelst, "Overlap-add methods for time-scaling of speech", Speech Communication 30 (2000) 207-221.
How does it work? If the pitch and formant frequencies are not disturbed by the WSOLA algorithm, then one can compress speech (before coding) with a compression factor beta and expand the decoded speech at the receiver side with an expansion factor of 1/beta. If, for example, beta = 0.5, then one can have a full-duplex channel in a half-duplex bandwidth. Why? Because the same signal is represented at half its duration with minimum distortion.
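A worked reading of this slide (my arithmetic, not the slides'): if the underlying codec transmits at a fixed rate $R_{\text{codec}}$, compressing by $\beta$ before coding means each second of original speech costs only

$$R_{\text{eff}} = \beta \cdot R_{\text{codec}}, \qquad \text{e.g. } \beta = 0.5 \text{ and } R_{\text{codec}} = 2.4\ \text{kb/s} \;\Rightarrow\; R_{\text{eff}} = 1.2\ \text{kb/s},$$

because one second of speech is squeezed into $\beta$ seconds of coded material and stretched back by $1/\beta$ at the receiver.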
Waveform Similarity Overlap-Add (WSOLA)
How does it work? U = N/2: no rate change (WSOLA β = 1). U < N/2: speech slows down, expansion (WSOLA β ≥ 1). U > N/2: speech speeds up, compression (WSOLA β ≤ 1). This is for 50% overlapping frames. A good way to test the algorithm: compress with β = 1 and expand with β = 1.
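Below is a minimal WSOLA-style sketch in Python (NumPy assumed). It follows my reading of this slide, with the synthesis hop fixed at N/2, the analysis hop U ≈ (N/2)/β, and a waveform-similarity search around each nominal analysis position; it is not the authors' implementation, and the function and parameter names are invented for illustration.

```python
import numpy as np

def wsola(x, beta, frame_len=512, tol=128):
    """Minimal WSOLA-style time-scale modification (sketch).
    beta < 1 compresses the speech (shorter output), beta > 1 expands it."""
    syn_hop = frame_len // 2              # output frames overlap by 50% (N/2)
    ana_hop = int(round(syn_hop / beta))  # analysis hop, U in the slides
    win = np.hanning(frame_len)

    n_frames = (len(x) - frame_len - tol) // ana_hop
    assert n_frames >= 1, "input signal too short for this frame length"
    y = np.zeros((n_frames - 1) * syn_hop + frame_len)

    prev_start = 0
    y[:frame_len] += win * x[:frame_len]  # first frame is copied as-is

    for k in range(1, n_frames):
        # "Natural continuation" of the previously copied segment.
        target = x[prev_start + syn_hop : prev_start + syn_hop + frame_len]
        if len(target) < frame_len:
            break
        # Search around the nominal analysis position k*U for the segment
        # most similar (maximum cross-correlation) to that continuation.
        p = k * ana_hop
        best_d, best_score = 0, -np.inf
        for d in range(-tol, tol + 1):
            s = p + d
            if s < 0 or s + frame_len > len(x):
                continue
            score = np.dot(target, x[s : s + frame_len])
            if score > best_score:
                best_score, best_d = score, d
        start = p + best_d
        # Overlap-add the chosen segment at the fixed synthesis position.
        y[k * syn_hop : k * syn_hop + frame_len] += win * x[start : start + frame_len]
        prev_start = start

    return y
```

In this sketch, `wsola(x, 0.5)` roughly halves the duration (compression at the transmitter) and `wsola(y, 2.0)` doubles it again at the receiver; the quality rests on the waveform-similarity search keeping pitch periods aligned, which is the idea behind WSOLA.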
Is that it??? We have tried this approach with many different algorithms operating in the time and frequency domains. Our experiments with the new NATO standard, STANAG 4591 MELP (mixed-excitation linear predictive vocoder), indeed showed that WSOLA produces high-quality output and is computationally efficient. Details can be found in: H.G. Ilk, S. Tugac, "Channel and source considerations of a bit rate reduction technique for a possible wireless communications system's performance enhancement", IEEE Trans. Wireless Commun., vol. 4(1), January 2005, pp. 93-99. But what if we would like to make the most of our bandwidth? Then the system should be adaptive, meaning WSOLA should operate at different time-compression factors. This is an engineer's dream come true: you do not operate at a constant or multi-rate bit rate, but at a flexible bit rate. That is, YOU tell me how much bandwidth you have and I give you the best quality possible, not the other way around!!!
We are more clever than that Up to this point we have only been using Werner's WSOLA algorithm, which was originally developed for the hearing impaired. What if we want to change beta seamlessly? How do we do that? To change beta, you can either change U or N. Restriction: the frame size (N) must not change at the transmitter during compression, because it is determined by your codec and it is standardised.
What is the extension then?? A different beta as we proceed: compression. [Figure:] As you can see from the solid black lines, N is constant; as you can see from the dashed blue lines, U changes for each frame.
During synthesis at the receiver, N has to change for synchronous output speech: half-symmetric windows are used in order to go back to the original time scale (expansion).
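One way to read the last two slides quantitatively (my own formulation, stated as an assumption rather than the papers' exact scheme): with the codec frame length N fixed and 50% overlap at the transmitter, the per-frame factor $\beta_k$ is realised by choosing the analysis hop

$$U_k \approx \frac{N/2}{\beta_k},$$

and the receiver, which knows $\beta_k$ for each frame, adapts its synthesis window lengths (the half-symmetric windows above) so that the expanded output returns to the original time scale.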
What is the originality? This approach is particularly useful in packet-switching network applications such as VoIP (Voice over IP) in dynamic networks, because the load may change abruptly and is not symmetric in each direction. It is equally valuable in congested circuit-switched voice networks, because today's networks either allow a few fixed rates (2.4, 4.8 or 8.0 kb/s) or simply drop your call. This would allow priority phone calls or cheaper tariffs, leading to QoS in a circuit-switched network (that is novel, is it NOT???). Details can be found in: Hakkı Gökhan İlk and Saadettin Güler, "Adaptive time scale modification of speech for graceful degrading voice quality in congested networks for VoIP applications", Signal Processing, vol. 86, pp. 127-139, 2006.
Samples Male: "Steve wore a bright red cashmere sweater"; Female: "Before Thursday's exam, review every formula". [Audio samples of each sentence at 128 kb/s PCM, 2.4 kb/s and 1.0 kb/s.]
Reward! Our algorithm has been selected as one of the two finalists in a competition organised by TURKCELL (a GSM giant in Turkey). We hope to win the competition with our presentation and demo on 28 September.
I would like to thank Honza and FIT for making this exchange possible