Unit Review & Exam Preparation: M513 – Advanced DSP Techniques
M513 – Main Topics Covered in the 2011/2012 Academic Year • Review of DSP Basics • Random Signal Processing • Optimal and Adaptive Filters • Spectrum (PSD) Estimation Techniques (exam questions will mainly come from parts 2 and 3, but good knowledge of part 1 is needed!)
Part 1 – Review of DSP Basics DSP = Digital Signal Processing = Signal Analysis + Signal Processing … performed in discrete-time domain • Fourier Transform Family • More general transform (z-transform) • LTI Systems and Convolution • Guide to LTI Systems
Signal Analysis • To analyse time-domain signals we can use the appropriate member of the Fourier Transform family
Fourier Transforms - Summary • Continuous time, aperiodic → Fourier Transform (continuous, aperiodic spectrum) • Continuous time, periodic → Fourier Series (discrete, aperiodic spectrum) • Discrete time, aperiodic → Discrete-Time Fourier Transform (continuous, periodic spectrum) • Discrete time, periodic → Discrete Fourier Transform (discrete, periodic spectrum)
Fourier Transforms • The following analogies can be seen: Periodic in time ↔ Discrete in frequency Aperiodic in time ↔ Continuous in frequency Continuous in time ↔ Aperiodic in frequency Discrete in time ↔ Periodic in frequency
More general transforms • Two more transforms are introduced in order to generalise the Fourier transforms for both continuous- and discrete-time domain signals • To understand their region of operation it is important to recognise that both the CTFT and the DTFT operate on only one limited part of the whole complex plane (plane of complex values) • The CTFT operates on the frequency axis, i.e. the line σ = 0 of the complex plane s = σ + jΩ (i.e. s = jΩ). • The DTFT operates on the frequency circle, i.e. the curve r = 1 of the complex plane z = re^(jω) (i.e. z = e^(jω)).
From Laplace to Z-Transform Evaluate the Laplace transform of the sampled signal xs(t): Xs(s) = Σn x(nT) e^(−snT), then substitute z = e^(sT) to obtain X(z) = Σn x(n) z^(−n).
From Laplace to Z-Transform Consider again the substitution z = e^(sT) made on the previous slide. Since |z| = e^(σT), the left half of the s-plane (σ < 0) maps into the interior of the unit circle in the z-plane (|z| < 1).
jΩ Axis in the s-to-z Mapping [Figure: the jΩ axis (σ = 0) of the s-plane maps onto the unit circle (|z| = 1) of the z-plane]
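The mapping described above can be checked numerically. A minimal sketch, assuming an arbitrary sampling period T (the value below is illustrative, not from the slides):

```python
import numpy as np

T = 1e-3  # assumed sampling period in seconds (illustrative value)

def s_to_z(s, T=T):
    """Map an s-plane point to the z-plane via z = e^(sT)."""
    return np.exp(s * T)

# sigma < 0 (stable CT pole) should land inside the unit circle,
# sigma = 0 (a point on the jOmega axis) on the unit circle,
# sigma > 0 outside it.
z_stable   = s_to_z(complex(-100, 2 * np.pi * 50))
z_marginal = s_to_z(complex(0,    2 * np.pi * 50))
z_unstable = s_to_z(complex(+100, 2 * np.pi * 50))

print(abs(z_stable) < 1, np.isclose(abs(z_marginal), 1.0), abs(z_unstable) > 1)
```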
Signal Processing • Delay … a signal • Scale … a signal • Add … two or more samples (from the same or different signals) Combining these three operations gives signal filtering, i.e. convolution.
Convolution • Gives the system input – system output relationship for LTI-type systems (both DT and CT): x(t) → System → y(t), x(n) → System → y(n)
Impulse Response of the System • Let h(n) be the response of the system to a δ(n) impulse-type input (i.e. the Impulse Response of the System) • we denote this as: δ(n) → LTI System → h(n)
Time-invariance • For an LTI system, if δ(n) → h(n), then δ(n−k) → h(n−k) (this is the so-called time-invariance property of the system): δ(n−k) → LTI System → h(n−k)
Linearity • Linearity implies the following system behaviour: if x(n) → y(n), then ax(n) → ay(n) (scaling) and x1(n) + x2(n) → y1(n) + y2(n) (additivity)
Linearity and Time-Invariance • We can now combine time-invariance and linearity: Σk δ(n−k) → Σk h(n−k), and more generally Σk x(k)δ(n−k) → Σk x(k)h(n−k)
Convolution Sum • I.e., if x(n) = Σk x(k)δ(n−k), then y(n) = Σk x(k)h(n−k) • i.e. the system output is the sum of many delayed impulse responses (the responses to the individual, scaled impulses that make up the whole DT input signal) • This sum is called the CONVOLUTION SUM • Sometimes we use * to denote the convolution operation, i.e. y(n) = x(n) * h(n)
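The convolution sum can be evaluated directly from its definition and compared against numpy's built-in routine; a minimal sketch with made-up example sequences:

```python
import numpy as np

# Convolution sum: y(n) = sum_k x(k) h(n-k)
x = np.array([1.0, 2.0, 3.0])   # input signal (assumed example values)
h = np.array([0.5, 0.25])       # impulse response (assumed example values)

def conv_sum(x, h):
    """Direct evaluation of the convolution sum."""
    y = np.zeros(len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):
                y[n] += x[k] * h[n - k]
    return y

y_manual = conv_sum(x, h)
y_numpy = np.convolve(x, h)
print(np.allclose(y_manual, y_numpy))  # True
```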
Convolution Sum for CT • Similarly, for continuous-time signals and systems (but a little more complicated): y(t) = ∫ x(τ)h(t−τ)dτ • This follows from describing the analogue (CT) input signal as x(t) = ∫ x(τ)δ(t−τ)dτ, i.e. an integral (sum) of an infinite number of time-shifted and scaled impulse functions.
Important fact about convolution Convolution in t domain ↔ Multiplication in f domain but we also have Multiplication in t domain ↔ Convolution in f domain
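This duality is easy to verify numerically: convolving two sequences in the time domain matches multiplying their (sufficiently zero-padded) DFTs in the frequency domain. A sketch with arbitrary random test signals:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
h = rng.standard_normal(16)

# Time-domain convolution
y_time = np.convolve(x, h)

# Frequency-domain: multiply DFTs, zero-padded to the full output length
N = len(x) + len(h) - 1
y_freq = np.fft.ifft(np.fft.fft(x, N) * np.fft.fft(h, N)).real

print(np.allclose(y_time, y_freq))  # True
```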
Discrete LTI Systems in Two Domains x(n) → h(n) → y(n) = x(n) * h(n); X(z) → H(z) → Y(z) = H(z)X(z) h(n) – impulse response, H(z) – transfer function of the DT LTI system
Summary DT: • H(z) is the z-Transform of the System Impulse Response – the System Transfer Function. • H(ω) is the Discrete-Time Fourier Transform of the System Impulse Response – the System Frequency Response. CT: • H(s) is the Laplace Transform of the System Impulse Response – the System Transfer Function. • H(Ω) is the Fourier Transform of the System Impulse Response – the System Frequency Response.
Guide to Discrete LTI Systems • Impulse Response h(n) ↔ Transfer Function H(z) (ZT / IZT) • Impulse Response h(n) ↔ Frequency Response H(ω) (DTFT / IDTFT) • H(z) → H(ω) by substituting z = e^(jω) • H(z) ↔ Difference Equation (ZT / IZT, including some mathematical manipulations)
Guide to Continuous LTI Systems • Impulse Response h(t) ↔ Transfer Function H(s) (LT / ILT) • Impulse Response h(t) ↔ Frequency Response H(Ω) (FT / IFT) • H(s) → H(Ω) by substituting s = jΩ • H(s) ↔ Differential Equation (LT / ILT, including some mathematical manipulations)
Example(s) - Use the guide to LTI systems to move between the various descriptions of the system. E.g. calculate the transfer function and frequency response for the IIR filter given by the following difference equation.
Example(s) - Use the guide to LTI systems to move between the various descriptions of the system. Having obtained the frequency response, the system response at any frequency F1 can easily be calculated; e.g. for a quarter of the sampling frequency Fs (F1 = Fs/4, i.e. ω = π/2) we have: (note, in general this is a complex number, so both phase and amplitude/gain can be calculated)
Example(s) - Use the guide to LTI systems to move between the various descriptions of the system. The opposite problem can also be solved easily: e.g. for an IIR filter with a given transfer function, find the corresponding difference equation to implement the system.
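As a sketch of moving between these descriptions, the snippet below uses a hypothetical first-order IIR filter y(n) = 0.5y(n−1) + x(n), i.e. H(z) = 1/(1 − 0.5z^(−1)) (not the filter from the slides, whose coefficients are not reproduced here), and evaluates its frequency response on the unit circle, including at ω = π/2 (a quarter of the sampling frequency):

```python
import numpy as np

# Hypothetical example filter: y(n) = 0.5*y(n-1) + x(n)
b = np.array([1.0])         # feedforward (numerator) coefficients
a = np.array([1.0, -0.5])   # feedback (denominator) coefficients

def freq_response(b, a, w):
    """Evaluate H(e^jw) = B(e^jw)/A(e^jw) on the unit circle."""
    zi = np.exp(-1j * w)  # z^(-1) evaluated at z = e^(jw)
    num = sum(bk * zi**k for k, bk in enumerate(b))
    den = sum(ak * zi**k for k, ak in enumerate(a))
    return num / den

# Response at a quarter of the sampling frequency: w = pi/2
H = freq_response(b, a, np.pi / 2)
print(abs(H), np.angle(H))  # gain and phase at Fs/4
```

For this filter, H(e^(jπ/2)) = 1/(1 + 0.5j), so the gain is 1/√1.25 ≈ 0.894.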
Part 2 – Random Signals Random signals – unpredictable signals ( … well, more or less). • Moments of random signals – mx, rxx(m) • Autocorrelation ↔ PSD • Filtering Random Signals – Spectral Factorisation Equations (in three domains) • Deconvolution and Inverse Filtering • Minimum and Non-minimum Phase Systems/Filters
Signal Classification • Deterministic signals • can be characterised by a mathematical equation • Random (nondeterministic, stochastic) signals • cannot be characterised by a mathematical equation • usually characterised by their statistics Random (nondeterministic, stochastic) signals can be further classified as: • Stationary signals • if their statistics do not change with time • Nonstationary signals • if their statistics change with time
Signal Classification • Wide-Sense Stationary (random) signals – random signals with constant signal statistics up to 2nd order • Ergodic (random) signals – random signals whose statistics can be measured by Time Averaging rather than Ensemble Averaging (i.e. the expectation of an ergodic signal is its time average) • For simplicity, we study Wide-Sense Stationary (WSS) and Ergodic Signals
1st Order Signal Statistics • The mean value mx of the signal x(n) is its 1st order statistic: mx = E[x(n)] (E – expectation operator); more generally, for an ergodic signal it can be estimated by the time average mx = lim N→∞ (1/(2N+1)) Σ n=−N..N x(n). If mx is constant over time, we are talking about a stationary signal x(n). Single-waveform averaging assumes the signal is ergodic (no need for ensemble averages).
2nd Order Statistics • The autocovariance of the signal is a 2nd order signal statistic • It is calculated according to: cxx(k,l) = E[(x(k) − mx)(x(l) − mx)*], where * denotes the complex conjugate in the case of complex signals • For equal lags, i.e. k = l, the autocovariance reduces to the variance
2nd Order Statistics • The variance σx² of the signal is: σx² = E[|x(n) − mx|²] • The variance can be considered a measure of the signal's dispersion around its mean.
Analogies to Electrical Signals mx – mean – DC component of the electrical signal mx² – mean squared – DC power E[x²(n)] – mean square – total average power σx² – variance – AC power σx – standard deviation – rms value [Figure: two zero-mean signals with different variances]
Autocorrelation • This is also a 2nd order signal statistic and is very similar (in some cases identical) to the autocovariance • The autocorrelation of the signal is basically the expected product of the signal and its shifted version: rxx(k,l) = E[x(k)x*(l)], where m = l − k is the lag. • Autocorrelation is a measure of signal predictability – a correlated signal is one that has redundancy (it is compressible, e.g. speech, audio or video signals)
Autocorrelation • for m = l − k we sometimes use the notation rxx(m) or even rx(m) instead of rxx(k,l), i.e. rxx(m) = E[x(n+m)x*(n)]
Autocorrelation and Autocovariance • For zero-mean, stationary signals the two quantities are identical: cxx(m) = rxx(m) − |mx|² = rxx(m) when mx = 0 • Also notice that the variance of a zero-mean signal then corresponds to the zero-lag autocorrelation: σx² = rxx(0)
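These relations can be checked with a time-average (ergodic) estimate of the ACF; a sketch using synthetic zero-mean, unit-variance white noise:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(100_000)  # zero-mean, unit-variance white signal

def acf(x, max_lag):
    """Biased time-average estimate of r_xx(m) = E[x(n+m) x(n)]."""
    N = len(x)
    return np.array([np.dot(x[m:], x[:N - m]) / N for m in range(max_lag + 1)])

r = acf(x, 5)
# For zero-mean white noise: r(0) = variance (here 1), r(m) = 0 for m > 0
print(round(r[0], 1), np.max(np.abs(r[1:])) < 0.05)
```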
Autocorrelation • Two important autocorrelation properties: • conjugate symmetry: rxx(−m) = rxx*(m) • rxx(0) ≥ |rxx(m)| for all m: rxx(0) is essentially the signal power, so it must be larger than (or equal to) any other autocorrelation value of that signal (another way of looking at this property is to realise that a sample is best correlated with itself).
Example (Tutorial 1, Problems 2, 3) Random-phase family of sinusoids: x(n) = A sin(nωk + θ), where A and ωk are fixed constants and θ is a uniformly distributed random variable (i.e. equally likely to take any value in the interval −π to π). Prove the stationarity of this process, i.e. a) Find the mean and variance (should be constant) b) Find the autocorrelation function (ACF) (should depend only on the time-lag value m, otherwise constant)
Example (Tutorial 1, Problems 2, 3) Random-phase family of sinusoids: x(n) = A sin(nωk + θ), where A and ωk are fixed constants and θ is a uniformly distributed random variable (equally likely to take any value in the interval −π to π). Discuss two approaches to calculating the ACF for this process. We can use the ensemble average rxx(m) = E[x(n+m)x(n)] (expectation over θ), or we can go via the time average of x(n+m)x(n) over a single realisation (ergodicity).
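The ensemble-average route can be checked by Monte Carlo simulation over many random draws of θ; a sketch with illustrative values of A and ωk (the theoretical results used for comparison are mean 0 and ACF (A²/2)cos(ωk·m)):

```python
import numpy as np

rng = np.random.default_rng(2)
A, w = 2.0, 0.3            # fixed amplitude and frequency (assumed values)
n_trials = 200_000

theta = rng.uniform(-np.pi, np.pi, n_trials)  # one random phase per realisation

# Ensemble averages at a fixed time index n and lag m
n, m = 5, 3
x_n  = A * np.sin(n * w + theta)
x_nm = A * np.sin((n + m) * w + theta)

mean_est = x_n.mean()              # theory: 0, independent of n
acf_est  = (x_n * x_nm).mean()     # theory: (A^2 / 2) * cos(w * m)

print(abs(mean_est) < 0.05,
      abs(acf_est - (A**2 / 2) * np.cos(w * m)) < 0.05)
```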
Power Spectral Density (PSD) • The Power Spectral Density is the Discrete-Time Fourier Transform of the autocorrelation function: Pxx(ω) = Σm rxx(m) e^(−jωm) • The PSD contains power information (i.e. the distribution of signal power across the frequency spectrum) but no phase information (the PSD is “phase blind”). • The PSD is always real and non-negative.
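The definition can be sanity-checked against a known closed form. For the illustrative ACF rxx(m) = a^|m| (an AR(1)-type example, not from the slides), the DTFT sum gives P(ω) = (1 − a²)/(1 − 2a·cos ω + a²), which is indeed real and non-negative:

```python
import numpy as np

a = 0.6
M = 200                       # truncation lag; a^200 is negligible
m = np.arange(-M, M + 1)
r = a ** np.abs(m)            # example ACF: r_xx(m) = a^|m|

w = np.linspace(-np.pi, np.pi, 101)
# PSD as the (truncated) DTFT sum of the ACF
P_dtft = np.array([np.sum(r * np.exp(-1j * wk * m)).real for wk in w])
# Known closed form for this ACF
P_closed = (1 - a**2) / (1 - 2 * a * np.cos(w) + a**2)

print(np.allclose(P_dtft, P_closed, atol=1e-6), np.all(P_dtft > 0))
```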
White Noise • A white noise signal has a perfectly flat power spectrum, equal to the variance of the signal: Pww(ω) = σw². • The autocorrelation of white noise is a unit impulse with amplitude σw², i.e. rww(m) = σw²δ(m) – white noise is a perfectly uncorrelated signal (not realisable in practice; we usually use pseudorandom noise with a PSD that is almost flat over a finite frequency range).
Filtering Random Signals • The filter scales the mean value of the input signal. • In the time domain the scaling value is the sum of the impulse response: my = mx Σn h(n). • In the frequency domain the scaling value is the frequency response of the filter at ω = 0: my = mx H(e^(j0)).
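A quick numerical check of this mean-scaling property, with an assumed example impulse response and an input of mean 4:

```python
import numpy as np

rng = np.random.default_rng(3)
h = np.array([0.5, 0.3, 0.2])            # assumed example impulse response
x = 4.0 + rng.standard_normal(500_000)   # input with mean m_x = 4

# Steady-state output samples; output mean should be m_x * sum(h) = 4 * 1.0
y = np.convolve(x, h, mode="valid")
print(round(y.mean(), 1), 4.0 * h.sum())
```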
Filtering Random Signals • Cross-correlation between the filter input and output signals: ryx(k) = E[y(n+k)x*(n)] = Σm h(m) rxx(k−m), or ryx(k) = h(k) * rxx(k)
Filtering Random Signals • Autocorrelation of the filter output: ryy(n+k, n) = E[y(n+k)y*(n)] = Σm Σl h(m)h*(l) rxx(k−m+l), using y(n) = Σm h(m)x(n−m)
Filtering Random Signals • The autocorrelation of the filter output therefore depends only on k, the difference between the indices n+k and n, i.e.: ryy(k) = Σm Σl h(m)h*(l) rxx(k−m+l) • Combining with ryx(k) = h(k) * rxx(k) we have: ryy(k) = ryx(k) * h*(−k) = h(k) * h*(−k) * rxx(k)
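For a unit-variance white input, rxx(m) = δ(m), so the output ACF reduces to the deterministic autocorrelation of the impulse response, ryy(k) = Σn h(n)h(n+k). A sketch comparing that against a time-average estimate (example real-valued h assumed):

```python
import numpy as np

h = np.array([0.5, 0.3, 0.2])    # assumed example (real) impulse response
# Theoretical output ACF for unit-variance white input: h(k) * h(-k)
r_y = np.convolve(h, h[::-1])

# Time-average estimate from a long filtered white-noise realisation
rng = np.random.default_rng(4)
x = rng.standard_normal(1_000_000)
y = np.convolve(x, h, mode="valid")
N = len(y)
r_est = np.array([np.dot(y[k:], y[:N - k]) / N for k in range(3)])

# r_y is symmetric with lag 0 at its centre; lags 0..2 sit at indices 2..4
print(np.allclose(r_est, r_y[2:5], atol=0.01))  # True up to estimation error
```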
Spectral Factorisation Equations rx(k) → [h(k)] → ryx(k) → [h*(−k)] → ry(k) Taking the DTFT of the above equation: Pyy(ω) = H(ω)H*(ω)Pxx(ω) = |H(ω)|² Pxx(ω) Taking the ZT of the above equation: Pyy(z) = H(z)H*(1/z*)Pxx(z)
Filtering Random Signals • In terms of the z-transform: Pyy(z) = H(z)H*(1/z*)Pxx(z) • If h(n) is real, H*(z*) = H(z), so H*(1/z*) = H(1/z) and: Pyy(z) = H(z)H(1/z)Pxx(z) • This is a special case of spectral factorisation.
Example – Tutorial 2, Problem 2 A zero mean white noise signal x(n) is applied to an FIR filter with impulse response sequence {0.5, 0, 0.75}. Derive an expression for the PSD of the signal at the output of the filter.
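A sketch of this tutorial problem, taking the noise variance as σ² (set to 1 below for the numerical check): H(e^(jω)) = 0.5 + 0.75e^(−j2ω), so Pyy(ω) = σ²|H(e^(jω))|² = σ²(0.8125 + 0.75·cos 2ω) after expanding:

```python
import numpy as np

# FIR impulse response from the problem: {0.5, 0, 0.75}
h = np.array([0.5, 0.0, 0.75])
s2 = 1.0  # white-noise variance sigma^2 (taken as 1 for this check)

w = np.linspace(0, np.pi, 65)
# H(e^jw) = h0 + h1 e^(-jw) + h2 e^(-2jw)
H = h[0] + h[1] * np.exp(-1j * w) + h[2] * np.exp(-2j * w)
P_y = s2 * np.abs(H) ** 2

# Expanded closed form: |H|^2 = 0.25 + 0.5625 + 2*0.5*0.75*cos(2w)
print(np.allclose(P_y, s2 * (0.8125 + 0.75 * np.cos(2 * w))))  # True
```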