ECE 480 Wireless Systems Lecture 8 Statistical Multipath Channel Models 8 Mar 2012
Fading Models for Multipath Channels • Time delay spread • Consider a pulse transmitted over a multipath channel (10-ray trace) • The received signal will be a pulse train, with each pulse delayed by a random amount due to scattering • The result can be a distorted signal • Time-varying nature • Mobility results in a time-varying multipath channel, since the reflections change as the transmitter or receiver moves
Time-Varying Channel Impulse Response • The transmitted signal is s(t) = Re{u(t) e^{j2π f_c t}}, where u(t) is the equivalent lowpass signal for s(t), B_u is its bandwidth, and f_c is the carrier frequency
Neglecting noise, the received signal is a sum of the LOS path (n = 0) and the multipath components • Unknowns: • N(t) = number of resolvable multipath components • For each path (including the LOS path): • Path length r_n(t) • Delay τ_n(t) = r_n(t)/c • Doppler phase shift φ_Dn(t) • Amplitude α_n(t)
The nth resolvable multipath component may result from a single reflector or from multiple reflectors (a reflector cluster) • Single reflector • α_n(t) is a function of the single reflector's path loss and shadowing • φ_n(t) = 2π f_c τ_n(t) is the phase shift • f_Dn(t) = v cos θ_n(t)/λ is the Doppler shift • φ_Dn(t) = ∫ 2π f_Dn(t) dt is the Doppler phase shift
Reflector cluster • Two multipath components with delays τ_1 and τ_2 are "resolvable" if their delay difference considerably exceeds the inverse signal bandwidth: |τ_1 − τ_2| ≫ 1/B_u • If u(t − τ_1) ≈ u(t − τ_2), the two components cannot be separated at the receiver and are "unresolvable" • Unresolvable signals are usually combined into a single term with delay τ ≈ τ_1 ≈ τ_2
The amplitude of unresolvable signals will typically undergo fast variations due to the constructive and destructive combining of the components • Typically, wideband channels have resolvable components while narrowband channels may not • Since α_n(t), τ_n(t), and φ_Dn(t) change with time, they are characterized as random processes • The random processes are assumed stationary and ergodic, so the received signal can be characterized from a single sample path
Let φ_n(t) denote the total phase of the nth component, collecting the delay-induced phase 2π f_c τ_n(t), the Doppler phase φ_Dn(t), and the initial phase offset φ_0 • α_n(t) is a function of path loss and shadowing • φ_n(t) is a function of delay and Doppler • They may be assumed to be independent
The received signal can be obtained by convolving the transmitted lowpass signal u(t) with the equivalent lowpass time-varying channel impulse response c(τ, t) and upconverting the result to the carrier frequency • The time t is when the impulse response is observed at the receiver • The time t − τ is when the impulse is launched into the channel relative to t • If there is no physical reflector producing delay τ at time t, then c(τ, t) = 0
For time-invariant channels, c(τ, t) = c(τ, t + T) for all T • Set T = −t to get c(τ, t) = c(τ, 0) ≜ c(τ) • c(τ) is the standard time-invariant channel impulse response: the response at time τ to an impulse launched at time zero
Comparing the convolution form of r(t) with the multipath sum gives the time-varying impulse response c(τ, t) = Σ_n α_n(t) e^{−jφ_n(t)} δ(τ − τ_n(t)) • Substituting this back into the convolution recovers the original expression for the received signal
Consider the system in the figure, where each multipath component corresponds to a single reflector • At time t_1 there are 3 multipath components • Impulses launched into the channel at times t_1 − τ_i, i = 1, 2, 3, will all be received at time t_1 • Impulses launched at any other time will not be received, since there is no path with the corresponding delay
At time t_2 there are two multipath components • Impulses launched at times t_2 − τ'_i, i = 1, 2, will be received at time t_2 • The time-varying impulse response at these two observation times is c(τ, t_1) = Σ_{n=1}^{3} α_n e^{−jφ_n} δ(τ − τ_n) and c(τ, t_2) = Σ_{n=1}^{2} α'_n e^{−jφ'_n} δ(τ − τ'_n)
If the channel is time-invariant, the time-varying parameters are constant and c(τ, t) = c(τ) • For channels with discrete multipath components, c(τ) = Σ_n α_n e^{−jφ_n} δ(τ − τ_n) • For channels with a continuum of multipath components, c(τ) is a continuous function of the delay τ • For time-invariant channels, the response to an impulse at time t_1 is just a shifted version of its response to an impulse at time t_2 ≠ t_1
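As a rough illustration of the discrete-multipath form of c(τ) above, the following minimal Python sketch builds a time-invariant three-path impulse response and convolves it with a lowpass pulse; the gains, phases, delays, and sample rate are made-up placeholder values, not taken from the lecture.

```python
import numpy as np

# Hypothetical time-invariant 3-path channel: gains, phases (rad), delays (s)
alpha = np.array([1.0, 0.6, 0.3])
phi   = np.array([0.0, 2.1, 4.0])
tau   = np.array([0.0, 50e-9, 120e-9])

fs = 1e9                                   # 1 GHz sampling -> 1 ns delay resolution
h = np.zeros(200, dtype=complex)           # discretized c(tau)
h[np.round(tau * fs).astype(int)] = alpha * np.exp(-1j * phi)  # sum_n alpha_n e^{-j phi_n} delta(tau - tau_n)

u = np.ones(20, dtype=complex)             # 20 ns rectangular lowpass pulse u(t)
r = np.convolve(u, h)                      # received lowpass signal: a delayed, scaled pulse train
print(np.abs(r).max())
```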
Example 3.1 Consider a wireless LAN operating in a factory near a conveyor belt. The transmitter and receiver have a LOS path between them with gain α_0, phase φ_0, and delay τ_0. Every T_0 seconds, a metal item comes down the conveyor belt, creating an additional reflected signal path with gain α_1, phase φ_1, and delay τ_1. Find the time-varying impulse response c(τ, t) of this channel. Solution For t ≠ nT_0 the channel response contains only the LOS path: c(τ, t) = α_0 e^{−jφ_0} δ(τ − τ_0). For t = nT_0 the response includes both the LOS and the reflected path: c(τ, t) = α_0 e^{−jφ_0} δ(τ − τ_0) + α_1 e^{−jφ_1} δ(τ − τ_1)
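A minimal sketch of the Example 3.1 channel in Python; the numeric gains, phases, delays, and the period T_0 below are hypothetical placeholders, chosen only to make the example runnable.

```python
import numpy as np

def channel_paths(t, T0=1.0, alpha0=1.0, phi0=0.0, tau0=10e-9,
                  alpha1=0.4, phi1=1.5, tau1=60e-9, tol=1e-9):
    """Return the (complex gain, delay) pairs that make up c(tau, t)."""
    paths = [(alpha0 * np.exp(-1j * phi0), tau0)]          # LOS path, always present
    if abs(t / T0 - round(t / T0)) < tol:                  # t = n*T0: a metal item is on the belt
        paths.append((alpha1 * np.exp(-1j * phi1), tau1))  # extra reflected path
    return paths

print(channel_paths(0.5))   # LOS path only
print(channel_paths(2.0))   # LOS + reflected path (t = 2*T0)
```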
For typical carrier frequencies, f_c τ_n(t) ≫ 1 • Where this is the case, a small change in the path delay τ_n(t) can result in a large change in the phase φ_n(t) ≈ 2π f_c τ_n(t) • Rapidly changing phases cause the multipath components to add constructively and destructively • This phenomenon, called "fading", causes rapid variation in signal strength with distance
The impact of multipath on the received signal depends on whether the time delay spread is large or small relative to the inverse signal bandwidth • If the delay spread is small, the LOS and multipath components are typically unresolvable • If the delay spread is large, they are typically resolvable into some number of discrete components
Time-invariant channel model • The demodulator may synchronize to the LOS component or to one of the other multipath components • If it synchronizes to the LOS component (smallest delay τ_0), the delay spread is a constant: T_m = max_n (τ_n − τ_0) • If it synchronizes to a multipath component with delay equal to the mean delay τ̄, the delay spread is T_m = max_n |τ_n − τ̄| • In time-varying channels, T_m becomes a random variable
Some components have much lower power than others • If a component's power is below the noise floor, it will not contribute significantly to the delay spread • The delay spread may be characterized by two quantities determined from the power delay profile (see the sketch below) • Average delay spread • RMS delay spread (most common) • Typical ranges of delay spread • Indoors: 10 – 1000 ns • Suburbs: 200 – 2000 ns • Urban: 1 – 30 µs
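A minimal sketch of computing the average and RMS delay spread from a power delay profile; the delays and powers below are invented example values.

```python
import numpy as np

# Hypothetical power delay profile: component delays (s) and linear powers
tau = np.array([0.0, 100e-9, 300e-9, 700e-9])
P   = np.array([1.0, 0.5, 0.2, 0.05])

mean_delay = np.sum(P * tau) / np.sum(P)                               # average delay
rms_spread = np.sqrt(np.sum(P * (tau - mean_delay) ** 2) / np.sum(P))  # RMS delay spread

print(f"mean delay       = {mean_delay * 1e9:.1f} ns")
print(f"RMS delay spread = {rms_spread * 1e9:.1f} ns")
```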
Narrowband Fading Models • Assume the delay spread is small compared to the inverse signal bandwidth, so that u(t − τ_i) ≈ u(t) for the delay τ_i of every multipath component • The received signal is then the original transmitted signal s(t) multiplied by a complex scale factor: r(t) = Re{ u(t) e^{j2π f_c t} [Σ_n α_n(t) e^{−jφ_n(t)}] }
The scale factor in brackets is independent of s(t) and u(t) • To characterize it, let u(t) = 1, so that s(t) = cos(2π f_c t), which is narrowband for any T_m • The received signal can then be written as r(t) = r_I(t) cos(2π f_c t) − r_Q(t) sin(2π f_c t), with in-phase component r_I(t) = Σ_n α_n(t) cos φ_n(t) and quadrature component r_Q(t) = Σ_n α_n(t) sin φ_n(t)
If N(t) is large, the Central Limit Theorem applies • α_n(t) and φ_n(t) are independent • r_I(t) and r_Q(t) can therefore be approximated as Gaussian random processes
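A quick numerical check of this Central Limit Theorem argument (a sketch; the per-path amplitude distribution and the number of components are arbitrary choices): summing many components with independent amplitudes and uniform phases makes r_I and r_Q look zero-mean Gaussian, and the envelope sqrt(r_I² + r_Q²) Rayleigh-distributed.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 50, 100_000                              # 50 unresolvable components per realization

alpha = rng.rayleigh(scale=0.1, size=(trials, N))    # arbitrary per-path amplitudes
phi   = rng.uniform(-np.pi, np.pi, size=(trials, N)) # independent uniform phases

rI = np.sum(alpha * np.cos(phi), axis=1)             # in-phase component
rQ = np.sum(alpha * np.sin(phi), axis=1)             # quadrature component

print("mean/std of rI:", rI.mean().round(3), rI.std().round(3))
print("mean/std of rQ:", rQ.mean().round(3), rQ.std().round(3))
print("median envelope:", np.median(np.hypot(rI, rQ)).round(3))
```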
Correlation • Correlation (the correlation coefficient) indicates the strength and direction of a linear relationship between two random variables • In general statistical usage, correlation refers to the departure of two variables from independence: a measure of the degree to which two variables are related
The correlation ρ_{X,Y} between two random variables X and Y with expected values μ_X and μ_Y and standard deviations σ_X and σ_Y is defined as ρ_{X,Y} = cov(X, Y)/(σ_X σ_Y) = E[(X − μ_X)(Y − μ_Y)]/(σ_X σ_Y), where cov denotes covariance, E denotes the expected (mean) value, μ denotes the mean, and σ denotes the standard deviation
The main result of a correlation is the correlation coefficient ("r"). It ranges from −1.0 to +1.0. The closer r is to +1 or −1, the more closely the two variables are linearly related • If r is close to 0, there is no linear relationship between the variables. If r is positive, as one variable gets larger the other tends to get larger. If r is negative, as one gets larger the other tends to get smaller ("inverse" correlation)
If the variables are independent, the correlation is 0 • The converse is not true, because the correlation coefficient detects only linear dependencies between two variables (see the sketch below) • Example: Suppose the random variable X is uniformly distributed on the interval from −1 to 1, and Y = X². Then Y is completely determined by X, so X and Y are dependent, but their correlation is zero; they are uncorrelated • However, in the special case when X and Y are jointly normal, independence is equivalent to uncorrelatedness
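A quick numerical illustration of the X, Y = X² example (a minimal sketch using numpy): the sample correlation is near zero even though Y is a deterministic function of X.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 1_000_000)   # X uniform on [-1, 1]
y = x ** 2                          # Y completely determined by X

r = np.corrcoef(x, y)[0, 1]
print(r)                            # close to 0: uncorrelated, yet clearly dependent
```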
Pearson Product-Moment Correlation Coefficient • Accounts for sample size • Suppose we have a series of n measurements of X and Y written as x_i and y_i, where i = 1, 2, ..., n, and that X and Y are both normally distributed • The sample correlation coefficient is r = Σ_i (x_i − x̄)(y_i − ȳ) / [(n − 1) s_x s_y], where x̄ = sample mean of x_i, ȳ = sample mean of y_i, s_x = sample standard deviation of x_i, s_y = sample standard deviation of y_i
We can use the same basic formula for the sample as for the entire population • Problem: the single-pass "computational" form of this formula may be numerically unstable • Why? It subtracts sums that may be very close to each other, so most of the significant digits cancel (see the sketch below)
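A small demonstration of this instability (a sketch; the offset and noise level are arbitrary): with a large common offset in the data, the one-pass form loses significant digits, while a routine that centers the data first does not. The exact amount of error depends on the platform.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
x = rng.normal(0.0, 1.0, n) + 1e7           # data with a large common offset
y = x + rng.normal(0.0, 0.01, n)            # nearly perfectly correlated with x (true r ~ 1)

# One-pass form: differences of nearly equal, very large sums (catastrophic cancellation)
num = np.sum(x * y) - n * x.mean() * y.mean()
den = np.sqrt((np.sum(x ** 2) - n * x.mean() ** 2) * (np.sum(y ** 2) - n * y.mean() ** 2))
print("one-pass formula:", num / den)

# Centering the data first (as np.corrcoef does) avoids the cancellation
print("centered formula:", np.corrcoef(x, y)[0, 1])
```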
The square of the sample correlation coefficient, r², is the fraction of the variance in y_i that is accounted for by a linear fit of y_i to x_i: r² = 1 − σ²_{y|x}/σ²_y, where σ²_{y|x} is the mean squared error of the linear fit y = a + bx • Since the sample correlation coefficient is symmetric in x_i and y_i, the same value is obtained for a fit of x_i to y_i
Interpretation of the size of a correlation • Rule-of-thumb criteria for labeling a correlation "small", "medium", or "large" are somewhat arbitrary and depend on the context
Cross-Correlation • In signal processing, the cross-correlation is a measure of the similarity of two signals • Used to find features in an unknown signal by comparing it to a known one • It is a function of the relative time shift (lag) between the signals
For discrete functions f_i and g_i the cross-correlation is defined as (f ⋆ g)_i = Σ_j f*_j g_{j+i} • For continuous functions f(x) and g(x) the cross-correlation is defined as (f ⋆ g)(x) = ∫ f*(t) g(t + x) dt, where f* denotes the complex conjugate of f
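A minimal sketch of using discrete cross-correlation to locate a known feature in a noisy signal; the template, noise level, and offset are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
template = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])   # known feature g
signal = rng.normal(0.0, 0.3, 200)                          # unknown, noisy signal f
signal[120:127] += template                                 # hide the feature at offset 120

# Cross-correlate the zero-mean signal with the zero-mean template
xc = np.correlate(signal - signal.mean(), template - template.mean(), mode="valid")
print("estimated offset:", int(np.argmax(xc)))              # expect about 120
```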
Properties of Cross-Correlation • Similar in nature to the convolution of two functions • They are related by (f ⋆ g)(t) = f*(−t) * g(t), so cross-correlation is equivalent to convolution when f(t) is a real, even function
Autocorrelation • Autocorrelation is the cross-correlation of a signal with itself • Autocorrelation is useful for finding repeating patterns in a signal • Determining the presence of a periodic signal which has been buried under noise • Identifying the fundamental frequency of a signal which doesn't actually contain that frequency component, but implies it with many harmonic frequencies • Different definitions in statistics and signal processing
Statistics • The autocorrelation of a discrete time series or process X_t is simply the correlation of the process against a time-shifted version of itself • If X_t is second-order stationary with mean μ and variance σ², the definition is R(k) = E[(X_t − μ)(X_{t+k} − μ)] / σ² • E is the expected value and k is the time shift being considered (usually referred to as the lag) • This function has the property of being in the range [−1, 1], with 1 indicating perfect correlation (the signals exactly overlap when time-shifted by k) and −1 indicating perfect anti-correlation
Signal processing • Given a signal f(t), the continuous autocorrelation R_f(τ) is the continuous cross-correlation of f(t) with itself at lag τ, defined as R_f(τ) = ∫ f(t + τ) f*(t) dt • Equivalently, autocorrelation is the convolution of the signal with its time-reversed conjugate f*(−t) • Note that, for a real function, f(t) = f*(t)
Formally, the discrete autocorrelation R at lag j for a signal x_n with mean μ is R(j) = Σ_n (x_n − μ)(x_{n+j} − μ) • For zero-centered (zero-mean) signals this reduces to R(j) = Σ_n x_n x_{n+j}
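A minimal sketch of the discrete autocorrelation used to reveal a repeating pattern buried in noise; the period, amplitudes, and noise level are arbitrary example values.

```python
import numpy as np

rng = np.random.default_rng(4)
n = np.arange(1000)
x = np.sin(2 * np.pi * n / 50) + rng.normal(0.0, 1.0, n.size)   # period-50 tone buried in noise

xc = x - x.mean()                                     # zero-center the signal
R = np.correlate(xc, xc, mode="full")[xc.size - 1:]   # R(j) for lags j = 0, 1, 2, ...
R /= R[0]                                             # normalize so R(0) = 1

# The largest peak away from lag 0 sits near the hidden period
print("estimated period:", int(np.argmax(R[25:100])) + 25)      # expect about 50
```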
A fundamental property of the autocorrelation function is symmetry: R(i) = R(−i) • In the continuous case, R_f(τ) is an even function when f(t) is real, R_f(−τ) = R_f(τ), and a Hermitian function when f(t) is complex, R_f(−τ) = R_f*(τ) • The continuous autocorrelation function reaches its peak at the origin, where it takes a real value: |R_f(τ)| ≤ R_f(0) • The same results hold in the discrete case
The autocorrelation of a periodic function is itself periodic, with the very same period • The autocorrelation of the sum of two completely uncorrelated functions (those whose cross-correlation is zero for all τ) is the sum of the autocorrelations of each function separately • Since autocorrelation is a specific type of cross-correlation, it maintains all the properties of cross-correlation • The autocorrelation of a white noise signal has a strong peak at τ = 0 and is close to 0 for all other τ • A sample of a white noise signal is not statistically correlated with a sample of the same signal taken at another time
Autocorrelation, Cross-Correlation, and Power Spectral Density • Assumptions: • No dominant LOS component • Each of the multipath components is associated with a single reflector • α_n(t) ≈ α_n = constant • τ_n(t) ≈ τ_n = constant • f_Dn(t) ≈ f_Dn = constant • φ_Dn(t) ≈ 2π f_Dn t • φ_n(t) ≈ 2π f_c τ_n + 2π f_Dn t − φ_0 • The term 2π f_c τ_n changes more rapidly than the others • φ_n(t) can therefore be assumed uniformly distributed on [−π, π]
Under these assumptions, E[r_I(t)] = E[Σ_n α_n cos φ_n(t)] = Σ_n E[α_n] E[cos φ_n(t)] = 0 • Similarly, E[r_Q(t)] = 0 • Therefore the received signal also has zero mean, and r_I(t) and r_Q(t) are zero-mean Gaussian processes • If there is a dominant LOS component, the assumption of a uniformly distributed random phase no longer holds
Correlation of In-Phase and Quadrature Components • By the same process, E[r_I(t) r_Q(t)] = 0 • Conclusions: • r_I(t) and r_Q(t) are uncorrelated • Since they are jointly Gaussian, they are also independent
Autocorrelation of In-Phase Component • A_{r_I}(t, t+τ) = E[r_I(t) r_I(t+τ)] • We can show that this expression is equal to A_{r_I}(τ) = 0.5 Σ_n E[α_n²] cos(2π f_Dn τ), which depends only on the lag τ, and that A_{r_Q}(τ) = A_{r_I}(τ) • Where this is the case (constant mean and an autocorrelation that depends only on τ), we say that r_I(t) and r_Q(t) are wide-sense stationary (WSS) random processes
Cross-Correlation • A_{r_I,r_Q}(τ) = E[r_I(t) r_Q(t+τ)] = 0.5 Σ_n E[α_n²] sin(2π f_Dn τ) = −A_{r_Q,r_I}(τ) • The received signal is also WSS, with autocorrelation A_r(τ) = E[r(t) r(t+τ)] = A_{r_I}(τ) cos(2π f_c τ) − A_{r_I,r_Q}(τ) sin(2π f_c τ)
Uniform Scattering Environment • Many scatterers densely packed with respect to angle (a dense scattering environment)
Assumptions: • N multipath components, with angles of arrival θ_n = n Δθ, where Δθ = 2π/N • Each component has the same average received power, E[α_n²] = 2 P_r / N • P_r = total received power
Substitute the Doppler shift f_Dn = (v/λ) cos θ_n into A_{r_I}(τ) and take the limit as N → ∞: A_{r_I}(τ) = P_r (1/2π) ∫_0^{2π} cos(2π f_D τ cos θ) dθ = P_r J_0(2π f_D τ), where f_D = v/λ is the maximum Doppler frequency and J_0(x) is the Bessel function of the zeroth order
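A minimal numeric sketch of this result, A_{r_I}(τ) = P_r J_0(2π f_D τ), using scipy's zeroth-order Bessel function; the mobile speed and carrier frequency are assumed example values, not from the lecture.

```python
import numpy as np
from scipy.special import j0

Pr = 1.0                 # total received power (normalized)
v = 30.0                 # assumed mobile speed, m/s
fc = 900e6               # assumed carrier frequency, Hz
fD = v * fc / 3e8        # maximum Doppler frequency f_D = v / lambda

tau = np.linspace(0.0, 0.05, 6)               # lags in seconds
A_rI = Pr * j0(2 * np.pi * fD * tau)          # uniform-scattering autocorrelation
for t, a in zip(tau, A_rI):
    print(f"tau = {t * 1e3:5.1f} ms   A_rI = {a:+.3f}")
```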