Lecture 13
• Fluctuations.
• Fluctuations of macroscopic variables.
• Correlation functions.
• Response and fluctuation.
• Density correlation function.
• Theory of random processes.
• Spectral analysis of fluctuations: the Wiener-Khintchine theorem.
• The Nyquist theorem.
• Applications of the Nyquist theorem.
We have so far considered systems in equilibrium, for which we computed statistical averages of various physical quantities. Nevertheless, deviations from, or fluctuations about, these mean values do occur. Though they are generally small, a study of these fluctuations is of great physical interest for several reasons.
• It enables us to develop a mathematical scheme with whose help the magnitude of the relevant fluctuations, under a variety of physical situations, can be estimated. We find that while in a single-phase system the fluctuations are thermodynamically negligible, they can assume considerable importance in multi-phase systems, especially in the neighborhood of critical points. In the latter case a rather high degree of spatial correlation develops among the molecules of the system, which in turn gives rise to phenomena such as critical opalescence.
• It provides a natural framework for understanding a class of physical phenomena which come under the common heading of "Brownian motion"; these phenomena relate properties such as the mobility of a fluid system, its coefficient of diffusion, etc., with temperature through the so-called Einstein relations. The mechanism of Brownian motion is vital in formulating, and in a certain sense solving, problems of how "a given physical system, which is not in a state of equilibrium, finally approaches a state of equilibrium", while "a physical system, which is already in a state of equilibrium, persists in that state".
• The study of fluctuations as a function of time leads to the concept of correlation functions, which play an important role in relating the dissipation properties of a system, such as the viscous resistance of a fluid or the electrical resistance of a conductor, with the microscopic properties of the system in a state of equilibrium. This relationship (between irreversible processes on the one hand and equilibrium properties on the other) manifests itself in the so-called fluctuation-dissipation theorem.
• At the same time, a study of the "frequency spectrum" of fluctuations, which is related to the time-dependent correlation function through the fundamental theorem of Wiener and Khintchine, is of considerable value in assessing the "noise" met with in electrical circuits as well as in the transmission of electromagnetic signals.

Fluctuations

The deviation Δx of a quantity x from its average value ⟨x⟩ is defined as
Δx = x − ⟨x⟩.   (13.1)
We note that
⟨Δx⟩ = ⟨x − ⟨x⟩⟩ = 0.   (13.2)
We look to the mean square deviation for the first rough measure of the fluctuation:
⟨(Δx)²⟩ = ⟨x² − 2x⟨x⟩ + ⟨x⟩²⟩ = ⟨x²⟩ − ⟨x⟩².   (13.3)
We usually work with the mean square deviation, although it is sometimes necessary to consider also the mean fourth deviation. This occurs, for example, in considering nuclear resonance line shapes in liquids. One refers to ⟨xⁿ⟩ as the n-th moment of the distribution. Consider the distribution g(x)dx which gives the number of systems in dx at x, so that
⟨xⁿ⟩ = ∫ xⁿ g(x) dx / ∫ g(x) dx.   (13.4)
In principle the distribution g(x) can be determined from a knowledge of all the moments, but in practice this connection is not always of help. The theorem is usually proved as follows: we take the Fourier transform of the distribution (normalized to unity),
u(t) = ⟨e^{itx}⟩ = ∫ e^{itx} g(x) dx.   (13.5)
Now it is obvious on differentiating u(t) that
dⁿu/dtⁿ |_{t=0} = iⁿ ⟨xⁿ⟩.   (13.6)
Thus if u(t) is an analytic function, we know from the moments all the information needed to obtain the Taylor series expansion of u(t); the inverse Fourier transform of u(t) then gives g(x) as required. However, the higher moments are really needed to use this theorem, and they are sometimes hard to calculate. The function u(t) is sometimes called the characteristic function of the distribution.

Energy Fluctuations in a Canonical Ensemble

When a system is in thermal equilibrium with a reservoir, the temperature τ of the system is defined to be equal to the temperature τ of the reservoir, and it has strictly no meaning to ask questions about the temperature fluctuation. The energy of the system will, however, fluctuate as energy is exchanged with the reservoir. For a canonical ensemble we have
⟨E⟩ = Σₙ Eₙ e^{βEₙ} / Σₙ e^{βEₙ},
where β = −1/τ. Now
Z = Σₙ e^{βEₙ},   (13.7)
so that
⟨E⟩ = (1/Z) ∂Z/∂β = ∂ ln Z/∂β.   (13.8)
Further
⟨E²⟩ = (1/Z) ∂²Z/∂β²,   (13.9)
and thus
⟨(ΔE)²⟩ = ⟨E²⟩ − ⟨E⟩² = ∂² ln Z/∂β² = ∂⟨E⟩/∂β.   (13.10)
Now the heat capacity at constant values of the external parameters is given by
C_V = (∂⟨E⟩/∂T)_V.   (13.11)
Since β = −1/τ = −1/kT, we have ∂⟨E⟩/∂β = kT² ∂⟨E⟩/∂T,   (13.12)
thus
⟨(ΔE)²⟩ = kT² C_V.   (13.13)
Here C_V refers to the heat capacity at the actual volume of the system. The fractional fluctuation in energy is defined by
F ≡ ⟨(ΔE)²⟩^{1/2} / ⟨E⟩ = (kT² C_V)^{1/2} / ⟨E⟩.   (13.14)
We note then that the act of defining the temperature of a system by bringing it into contact with a heat reservoir leads to an uncertainty in the value of the energy. A system in thermal equilibrium with a heat reservoir does not have a precisely constant energy. Ordinary thermodynamics is useful only so long as the fractional fluctuation in energy is small.
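The size of F is easy to evaluate numerically. A minimal sketch for a monatomic ideal gas, where ⟨E⟩ = (3/2)NkT and C_V = (3/2)Nk, so that F = (kT²C_V)^{1/2}/⟨E⟩ reduces to √(2/3N), independent of temperature (the value of T below is only illustrative):

```python
import math

k = 1.380649e-23   # Boltzmann constant, J/K

# Monatomic ideal gas: <E> = (3/2) N k T, C_V = (3/2) N k.
T, N = 300.0, 1e22
E_mean = 1.5 * N * k * T
C_V = 1.5 * N * k

# Fractional fluctuation F = sqrt(k T^2 C_V) / <E>.
F = math.sqrt(k * T**2 * C_V) / E_mean
F_closed = math.sqrt(2.0 / (3.0 * N))   # same number in closed form, about 8e-12
```

For N = 10²² this gives F of order 10⁻¹¹, confirming that the fluctuation is utterly negligible for a macroscopic sample.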
For a perfect gas, for example, we have
⟨E⟩ = (3/2)NkT,  C_V = (3/2)Nk,   (13.15)
thus
F = (kT² · (3/2)Nk)^{1/2} / ((3/2)NkT) = (2/3N)^{1/2}.   (13.16)
For N = 10²², F ≈ 10⁻¹¹, which is negligibly small. Consider now a solid at low temperatures. According to the Debye law the heat capacity of a dielectric solid for T ≪ θ_D is
C_V = (12π⁴/5) Nk (T/θ_D)³,   (13.17)
and the energy is correspondingly
⟨E⟩ = (3π⁴/5) NkT (T/θ_D)³,   (13.18)
so that
F = (kT² C_V)^{1/2}/⟨E⟩ ≈ [(θ_D/T)³/N]^{1/2}, up to a numerical factor of order unity.   (13.19)
Suppose that T = 10⁻² K, θ_D = 200 K, and N ≈ 10¹⁶ for a particle 0.01 cm on a side. Then
F ≈ 0.03,   (13.20)
which is not inappreciable. At very low temperatures thermodynamics fails for a fine particle, in the sense that we cannot know E and T simultaneously to reasonable accuracy. At 10⁻⁵ K the fractional fluctuation in energy is of the order of unity for a dielectric particle of volume 1 cm³.

Concentration Fluctuations in a Grand Canonical Ensemble

We have the grand partition function
𝒵 = Σ_N Σₙ e^{(Nμ − Eₙ)/τ},   (13.21)
from which we may calculate
⟨N⟩ = τ ∂ ln 𝒵/∂μ   (13.22)
and
⟨(ΔN)²⟩ = ⟨N²⟩ − ⟨N⟩² = τ ∂⟨N⟩/∂μ.   (13.23)

Perfect Classical Gas

From an earlier result, the chemical potential of the perfect classical gas satisfies ⟨N⟩ ∝ e^{μ/τ} at fixed volume and temperature,   (13.24)
thus
∂⟨N⟩/∂μ = ⟨N⟩/τ,   (13.25)
and using (13.23)
⟨(ΔN)²⟩ = ⟨N⟩.   (13.26)
The fractional fluctuation is given by
⟨(ΔN)²⟩^{1/2} / ⟨N⟩ = ⟨N⟩^{−1/2}.   (13.27)

Random Processes

A stochastic or random variable is a quantity with a definite range of values, each one of which, depending on chance, can be attained with a definite probability. A stochastic variable is defined
• if the set of possible values is given, and
• if the probability of attaining each value is also given.
Thus the number of points on a die that is tossed is a stochastic variable with six values, each having the probability 1/6.
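The result ⟨(ΔN)²⟩ = ⟨N⟩ can be checked by a small Monte Carlo experiment: distribute independent particles so that each sits in the watched subvolume with a small probability p. The counts are then binomial, with variance-to-mean ratio 1 − p ≈ 1. A minimal sketch (M, p, the sample count, and the seed are all illustrative choices, not from the lecture):

```python
import random

random.seed(42)
M, p, samples = 2000, 0.02, 1500   # particles, subvolume fraction, repetitions

# Count how many of the M particles fall in the subvolume, many times over.
counts = [sum(1 for _ in range(M) if random.random() < p) for _ in range(samples)]

mean = sum(counts) / samples
var = sum((c - mean) ** 2 for c in counts) / samples
ratio = var / mean   # should be close to 1 (exactly 1 - p for a binomial)
```

The ratio approaches 1 as p → 0 at fixed ⟨N⟩ = Mp, which is the ideal-gas limit assumed in (13.26).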
The sum of a large number of independent stochastic variables is itself a stochastic variable. There exists a very important theorem known as the central limit theorem, which says that under very general conditions the distribution of the sum tends toward a normal (Gaussian) distribution law as the number of terms is increased. The theorem may be stated rigorously as follows.

Let x₁, x₂, …, xₙ be independent stochastic variables with their means equal to 0, possessing absolute moments m_{2+δ}(i) of order 2 + δ, where δ is some number > 0. If, denoting by Bₙ the mean square fluctuation of the sum x₁ + x₂ + … + xₙ, the quotient
rₙ = [Σᵢ m_{2+δ}(i)] / Bₙ^{1+δ/2}   (13.28)
tends to zero as n → ∞, the probability of the inequality
(x₁ + x₂ + … + xₙ)/√Bₙ < t   (13.29)
tends uniformly to the limit
(1/√(2π)) ∫_{−∞}^{t} e^{−u²/2} du.   (13.30)
For a distribution f(xᵢ), the absolute moment of order β is defined as
m_β(i) = ∫_{−∞}^{∞} |x|^β f(x) dx.   (13.31)
Almost all the probability distributions f(x) of stochastic variables x of interest to us in physical problems will satisfy the requirements of the central limit theorem. Let us consider several examples.
Example 13a. The variable x is distributed uniformly between ±1. Then f(x) = 1/2 for −1 ≤ x ≤ 1, and f(x) = 0 otherwise. The absolute moment of order 3 exists:
m₃ = ∫_{−1}^{1} |x|³ (1/2) dx = 1/4.   (13.32)
The mean square fluctuation of a single variable is
b = ⟨x²⟩ − ⟨x⟩² = ∫_{−1}^{1} x² (1/2) dx = 1/3,   (13.33)
but ⟨x⟩ = 0. If there are n independent variables xᵢ, it is easy to see that the mean square fluctuation Bₙ of their sum (under the same distribution) is
Bₙ = n/3.   (13.34)
Thus (for δ = 1) we have for (13.28) the result
rₙ = n(1/4)/(n/3)^{3/2} = (3√3/4) n^{−1/2},   (13.35)
which does tend to zero as n → ∞. Therefore the central limit theorem holds for this example.

Example 13b. The variable x is a normal variable with standard deviation σ; that is, it is distributed according to the Gaussian distribution
f(x) = (1/(√(2π) σ)) e^{−x²/2σ²},   (13.36)
where σ² is the mean square deviation; σ is called the standard deviation. The absolute moment of order 3 exists:
m₃ = 2 ∫₀^{∞} x³ (1/(√(2π) σ)) e^{−x²/2σ²} dx = (8/π)^{1/2} σ³.   (13.37)
The mean square fluctuation of a single variable is
b = σ².   (13.38)
If there are n independent variables xᵢ, then
Bₙ = nσ².   (13.39)
For δ = 1,
rₙ = n (8/π)^{1/2} σ³ / (nσ²)^{3/2} = (8/π)^{1/2} n^{−1/2},   (13.40)
which approaches 0 as n approaches ∞:
lim_{n→∞} rₙ = 0.   (13.41)
Therefore the central limit theorem applies to this example. A Gaussian random process is one for which all the basic distribution functions f(xᵢ) are Gaussian distributions.
Example 13c. The variable x has a Lorentzian distribution:
f(x) ∝ 1/(x² + γ²).   (13.42)
The absolute moment of order β is proportional to
∫₀^{∞} x^β/(x² + γ²) dx.   (13.43)
But this integral does not converge for β ≥ 1, and thus not for β = 2 + δ, δ > 0. We see that the central limit theorem does not apply to a Lorentzian distribution.
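The failure is visible numerically: the mean of n Lorentzian (Cauchy) variables is again Lorentzian with the same width, so averaging does not narrow the distribution at all. A minimal sketch, using quartiles as the measure of spread since the variance is infinite (sample counts and the seed are illustrative):

```python
import math
import random

random.seed(7)

def cauchy():
    # Standard Cauchy variable via the inverse-CDF method.
    return math.tan(math.pi * (random.random() - 0.5))

def iqr(data):
    # Interquartile range: a spread measure that exists even here.
    s = sorted(data)
    return s[3 * len(s) // 4] - s[len(s) // 4]

samples = 4000
single = [cauchy() for _ in range(samples)]
means_100 = [sum(cauchy() for _ in range(100)) / 100 for _ in range(samples)]

# For Gaussian-like variables this ratio would shrink like 1/sqrt(100) = 0.1;
# for Cauchy variables it stays near 1.
ratio = iqr(means_100) / iqr(single)
```

Contrast this with Example 13a, where the spread of the scaled sum shrinks exactly as the central limit theorem predicts.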
Random Process or Stochastic Process

By a random process or stochastic process x(t) we mean a process in which the variable x does not depend in a completely definite way on the independent variable t, which may denote the time. In observations on the different systems of a representative ensemble we find different functions x(t). All we can do is to study certain probability distributions; we cannot obtain the functions x(t) themselves for the members of the ensemble. In Figure 13.1 one can see a sketch of a possible x(t) for one system.

Figure 13.1 Sketch of a random process x(t).
The plot might, for example, be an oscillogram of the thermal noise current x(t) ≡ I(t) obtained from the output of a filter when a thermal noise voltage is applied to the input. We can determine, for example,
p₁(x, t)dx = probability of finding x in the range (x, x + dx) at time t;   (13.44)
p₂(x₁, t₁; x₂, t₂)dx₁dx₂ = probability of finding x in (x₁, x₁ + dx₁) at time t₁, and in the range (x₂, x₂ + dx₂) at time t₂.   (13.45)
If we had an actual oscillogram record covering a long period of time, we might construct an ensemble by cutting the record up into strips of equal length T and mounting them one over the other, as in Figure 13.2.
The probabilities p₁ and p₂ will be found from the ensemble. Proceeding similarly we can form p₃, p₄, …. The whole set of probability distributions pₙ (n = 1, 2, …) may be necessary to describe the random process completely.

Figure 13.2 Recordings of x(t) versus t for three systems of an ensemble, as simulated by taking three intervals of duration T from a single long recording. Time averages are taken in a horizontal direction in such a display; ensemble averages are taken in a vertical direction.
In many important cases p₂ contains all the information we need. When this is true the random process is called a Markoff process. A stationary random process is one for which the joint probability distributions pₙ are invariant under a displacement of the origin of time. We assume in all our further discussion that we are dealing with stationary Markoff processes. It is useful to introduce the conditional probability P₂(x₁, 0 | x₂, t)dx₂ for the probability that, given x₁, one finds x in dx₂ at x₂ a time t later. Then it is obvious that
p₂(x₁, 0; x₂, t) = p₁(x₁) P₂(x₁, 0 | x₂, t).   (13.46)
Wiener-Khintchine Theorem

The Wiener-Khintchine theorem states a relationship between two important characteristics of a random process: the power spectrum of the process and the correlation function of the process. Suppose we develop one of the records in Fig. 13.2 of x(t) for 0 < t < T in a Fourier series:
x(t) = Σₙ (aₙ cos 2πfₙt + bₙ sin 2πfₙt),   (13.47)
where fₙ = n/T. We assume that ⟨x(t)⟩ = 0, where the angular brackets ⟨⟩ denote time average; because the average is assumed zero there is no constant term in the Fourier series. The Fourier coefficients are highly variable from one record of duration T to another. For many types of noise the aₙ, bₙ have Gaussian distributions. When this is true the process (13.47) is said to be a Gaussian random process.
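Returning for a moment to the factorization p₂ = p₁P₂ of (13.46) for a stationary Markoff process, it can be checked on a toy two-state chain. A minimal sketch (the transition probabilities, step count, and seed are illustrative choices, not from the lecture):

```python
import random

random.seed(3)
LEAVE = {0: 0.3, 1: 0.6}   # probability of leaving the current state per step
steps = 200_000

state = 0
joint_01 = 0   # occurrences of (x = 0, then x = 1 one step later)
count_0 = 0
for _ in range(steps):
    count_0 += (state == 0)
    nxt = 1 - state if random.random() < LEAVE[state] else state
    joint_01 += (state == 0 and nxt == 1)
    state = nxt

p1_0 = count_0 / steps    # stationary p1(0); detailed balance gives 2/3 here
p2_01 = joint_01 / steps  # joint probability p2(0, 0; 1, one step)
predicted = p1_0 * LEAVE[0]   # p1(0) * P2(0 -> 1), the factorization p2 = p1 P2
```

The simulated joint probability agrees with p₁(0)·P₂(0→1) to within sampling error, as the Markoff factorization requires.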
Let us now imagine that x(t) is an electric current flowing through unit resistance. The instantaneous power dissipation is x²(t). Each Fourier component will contribute to the total power dissipation. The power in the n-th component is
Pₙ = (aₙ cos 2πfₙt + bₙ sin 2πfₙt)².   (13.48)
We do not consider cross-product terms in the power of the form
(aₙ cos 2πfₙt + bₙ sin 2πfₙt)(aₘ cos 2πfₘt + bₘ sin 2πfₘt), n ≠ m,   (13.49)
because for n ≠ m the time average of such terms will be zero. The time average of Pₙ is
⟨Pₙ⟩ = ½(aₙ² + bₙ²),   (13.50)
because
⟨cos² 2πfₙt⟩ = ⟨sin² 2πfₙt⟩ = ½,  ⟨cos 2πfₙt sin 2πfₙt⟩ = 0.   (13.51)
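The vanishing of the cross terms and the factor ½ can be checked directly on a signal with known Fourier coefficients. A minimal sketch (the amplitudes 3 and 4 and the harmonic numbers are illustrative): for x(t) = 3 cos(2π·2t/T) + 4 sin(2π·5t/T), the time-average power should be ½(3² + 4²) = 12.5.

```python
import math

a2, b5 = 3.0, 4.0   # known Fourier amplitudes at harmonics n = 2 and n = 5
M = 1000            # sample points over one record of length T

# Sample x(t) over exactly one record, so all harmonics average cleanly.
xs = [a2 * math.cos(2 * math.pi * 2 * k / M) + b5 * math.sin(2 * math.pi * 5 * k / M)
      for k in range(M)]

mean_power = sum(x * x for x in xs) / M      # time average of x^2(t)
predicted = 0.5 * (a2**2 + b5**2)            # = 12.5, from the 1/2 rule
```

Because the cross term between the two harmonics averages to zero over the record, only the ½(aₙ² + bₙ²) contributions survive.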
We now turn to ensemble averages, denoted here by ⟨…⟩_ens. As we mentioned above, every record in Fig. 13.2 runs in time from 0 to T. We will consider an ensemble average to be an average over a large set of independent records. For a random process we will have
⟨aₙ⟩_ens = ⟨bₙ⟩_ens = 0   (13.52)
and
⟨aₙ²⟩_ens = ⟨bₙ²⟩_ens = σₙ²,   (13.53)
where for a Gaussian random process σₙ is just the standard deviation, as in Example 13b. Thus
⟨⟨Pₙ⟩⟩_ens = σₙ².   (13.54)
Thus, combining the time and ensemble averages above, the ensemble average of the time-average power dissipation associated with the n-th component of x(t) is
⟨⟨Pₙ⟩⟩_ens = ½(⟨aₙ²⟩_ens + ⟨bₙ²⟩_ens) = σₙ².   (13.55)

Power Spectrum

We define the power spectrum or spectral density G(f) of the random process as the ensemble average of the time average of the power dissipation in unit resistance per unit frequency bandwidth. If Δfₙ = 1/T is the separation between two adjacent frequencies, we have
G(fₙ) Δfₙ = ⟨⟨Pₙ⟩⟩_ens = σₙ².   (13.56)
The integral of the power spectrum over all frequencies gives the ensemble average total power:
∫₀^∞ G(f) df = ⟨⟨x²(t)⟩⟩_ens.   (13.57)
Correlation Function

Let us consider now the correlation function
C(τ) ≡ ⟨x(t) x(t + τ)⟩,   (13.58)
where the average is over the time t. This is the autocorrelation function. Without changing the result we may take an ensemble average of the time average:
C(τ) = ⟨⟨x(t) x(t + τ)⟩⟩_ens.   (13.59)
Substituting the series (13.47) and using (13.51), the cross terms drop out and
C(τ) = Σₙ ½(aₙ² + bₙ²) cos 2πfₙτ → Σₙ σₙ² cos 2πfₙτ,   (13.60)
so that, using (13.56),
C(τ) = Σₙ G(fₙ) Δfₙ cos 2πfₙτ   (13.61)
and in the limit of a long record
C(τ) = ∫₀^∞ G(f) cos 2πfτ df.   (13.62)
Thus the correlation function is the Fourier cosine transform of the power spectrum. Using the inverse Fourier transform we can write
G(f) = 4 ∫₀^∞ C(τ) cos 2πfτ dτ.   (13.63)
This, together with (13.62), is the Wiener-Khintchine theorem. It has an obvious physical content: the correlation function tells us essentially how rapidly the random process is changing.
Example 13d. If
C(τ) = ⟨x²⟩ e^{−|τ|/τ_c},   (13.64)
we may say that τ_c is a measure of the time during which the system persists without changing its state, as measured by x(t), by more than e⁻¹; τ_c in this case has the meaning of a correlation time. We then expect physically that frequencies much higher than 1/τ_c will not be represented in an important way in the power spectrum. Now if C(τ) is given by (13.64), the Wiener-Khintchine theorem tells us that
G(f) = 4⟨x²⟩ ∫₀^∞ e^{−τ/τ_c} cos 2πfτ dτ = 4⟨x²⟩ τ_c / [1 + (2πfτ_c)²].   (13.65)
Thus, as shown in Fig. 13.3, the power spectrum is flat (on a logarithmic frequency scale) out to 2πf ≈ 1/τ_c, and then decreases as 1/f² at high frequencies. Note that the noise spectrum for this correlation function is "white" out to the cutoff f_c ≈ 1/2πτ_c.
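The transform pair of Example 13d can be verified by integrating (13.65) numerically. A minimal sketch (⟨x²⟩ = 1 and τ_c = 10⁻⁴ s are illustrative; the integration step count is a convergence choice):

```python
import math

x2, tau_c = 1.0, 1e-4   # <x^2> and correlation time (illustrative values)

def G_numeric(f, n_steps=100_000):
    # Midpoint-rule evaluation of G(f) = 4 <x^2> ∫ e^(-τ/τc) cos(2πfτ) dτ.
    t_max = 20 * tau_c          # integrand is ~e^-20 beyond this, negligible
    dt = t_max / n_steps
    total = 0.0
    for k in range(n_steps):
        t = (k + 0.5) * dt
        total += math.exp(-t / tau_c) * math.cos(2 * math.pi * f * t) * dt
    return 4 * x2 * total

def G_analytic(f):
    # Closed form: a Lorentzian in frequency, flat out to 2πf ~ 1/τc.
    return 4 * x2 * tau_c / (1 + (2 * math.pi * f * tau_c) ** 2)

g0_num, g0_exact = G_numeric(0.0), G_analytic(0.0)
g1_num, g1_exact = G_numeric(1000.0), G_analytic(1000.0)
```

The numerical integral reproduces the Lorentzian form both at zero frequency, where G(0) = 4⟨x²⟩τ_c, and partway up the rolloff.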
Figure 13.3 Plot of spectral density versus log₁₀ 2πf for an exponential correlation function with τ_c = 10⁻⁴ s.

The Nyquist Theorem

The Nyquist theorem is of great importance in experimental physics and in electronics. The theorem gives a quantitative expression for the thermal noise generated by a system in thermal equilibrium and is therefore needed in any estimate of the limiting signal-to-noise ratio of an experimental set-up. In its original form the Nyquist theorem states that the mean square voltage across a resistor of resistance R in thermal equilibrium at temperature T is given by
⟨V²⟩ = 4RkT Δf,   (13.66)
where Δf is the frequency bandwidth within which the voltage fluctuations are measured; all Fourier components outside the given range are ignored. Recalling the definition of the spectral density G(f), we may write the Nyquist result as
G(f) = 4RkT.   (13.67)
This is not strictly the power density, which would be G(f)/R.

Figure 13.4 The noise generator produces a power spectrum G(f) = 4RkT. If the filter passes unit frequency range, the matched resistance R′ will absorb power kT.

The maximum thermal noise power per unit frequency range delivered by a resistor to a matched load will be G(f)/4R = kT; the factor of 4 enters where it does because the power delivered to the load R′ is
P = I²R′ = V²R′/(R + R′)²,   (13.68)
which at match (R′ = R) is P = V²/4R (Figure 13.4). We will derive the Nyquist theorem in two ways: first, following the original transmission line derivation, and, second, using a microscopic argument.

Transmission line derivation

Consider, as in Figure 13.5, a lossless transmission line of length l and characteristic impedance Z_c = R terminated at each end by a resistance R. The line is therefore matched at each end, in the sense that all energy traveling down the line will be absorbed without reflection in the appropriate resistance.

Figure 13.5 Transmission line of length l and characteristic impedance Z_c = R with matched terminations.
The entire circuit is maintained at temperature T. In analogy to the argument on black-body radiation (Lecture 8), the transmission line has two electromagnetic modes (one propagating in each direction) in the frequency range
δf = c′/l,   (13.69)
where c′ is the propagation velocity on the line. Each mode has energy
ℏω/(e^{ℏω/kT} − 1)   (13.70)
in equilibrium. We are usually concerned here with the classical limit ℏω ≪ kT, so that the thermal energy on the line in the frequency range Δf is
(2l/c′) kT Δf.   (13.71)
The rate at which energy comes off the line in one direction is
kT Δf.   (13.72)
Because the terminal impedance is matched to the line, the power coming off the line at one end is absorbed in the terminal impedance R at that end. The load emits energy at the same rate, so the power input to the load is
P = ⟨I²⟩R = kT Δf.   (13.73)
But V = I(2R), so that
⟨V²⟩ = 4R²⟨I²⟩ = 4RkT Δf,   (13.74)
which is the Nyquist theorem.

Microscopic Derivation

We consider a resistance R with N electrons per unit volume, length l, area A, and carrier relaxation time τ_c. We treat the electrons as Maxwellian, but it can be shown that the noise voltage is independent of such details, involving only the value of the resistance, regardless of the details of the mechanisms contributing to the resistance.
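Before the microscopic derivation, it is worth seeing how large the Nyquist noise actually is. A minimal sketch evaluating V_rms = (4RkT Δf)^{1/2} for illustrative values (a 1 MΩ resistor at room temperature observed over a 10 kHz bandwidth):

```python
import math

k = 1.380649e-23        # Boltzmann constant, J/K
T, R, bandwidth = 300.0, 1e6, 1e4   # temperature, resistance, bandwidth (illustrative)

# Root-mean-square Johnson-Nyquist voltage: V_rms = sqrt(4 k T R Δf).
v_rms = math.sqrt(4 * k * T * R * bandwidth)   # on the order of 10 microvolts
```

A noise floor of roughly ten microvolts for these values shows why the theorem matters for any estimate of the limiting signal-to-noise ratio of a sensitive measurement.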
First note that
V = IR = jAR = NeuAR,   (13.75)
where V is the voltage, I the current, j the current density, and u the average (or drift) velocity component of the electrons down the resistor. Observing that NAl is the total number of electrons in the specimen, and writing u = (1/NAl) Σᵢ uᵢ,
V = (eR/l) Σᵢ uᵢ,   (13.76)
summed over all electrons. Thus
V = Σᵢ Vᵢ,  Vᵢ = (eR/l) uᵢ,   (13.77)
where the uᵢ and Vᵢ are random variables. The spectral density G(f) has the property that in the range Δf
⟨V²⟩ = G(f) Δf.   (13.78)
We suppose that the correlation function of a single electron's velocity may be written as
⟨uᵢ(0) uᵢ(τ)⟩ = ⟨u²⟩ e^{−|τ|/τ_c}.   (13.79)
Then, from the Wiener-Khintchine theorem, we have for one electron
G_{Vᵢ}(f) = 4(eR/l)² ⟨u²⟩ τ_c / [1 + (2πfτ_c)²].   (13.80)
Usually in metals at room temperature τ_c < 10⁻¹³ s, so from dc through the microwave range ωτ_c ≪ 1 and the (2πfτ_c)² term may be neglected. We recall that
½ m⟨u²⟩ = ½ kT   (13.81)
(m is the mass of the electron, u the velocity component of the electron along the resistor), so that
⟨u²⟩ = kT/m.   (13.82)
Since the electrons are statistically independent, the spectral densities add, and in the frequency range Δf
G(f) = NAl · 4(eR/l)² (kT/m) τ_c,   (13.83)
or
G(f) = 4R² (A/l)(Ne²τ_c/m) kT = 4R² (A/l) σ kT = 4RkT.   (13.84)
Here we have used the relation
σ = Ne²τ_c/m   (13.85)
from the theory of conductivity, and also the elementary relation
R = l/σA,   (13.86)
where σ is the electrical conductivity.
The simplest way to establish (13.85) in a plausible way is to solve the drift-velocity equation
m(du/dt + u/τ_c) = eE,   (13.87)
so that in the steady state (or for ωτ_c ≪ 1) we have
u = eEτ_c/m,   (13.88)
giving for the mobility (drift velocity per unit electric field)
μ = u/E = eτ_c/m.   (13.89)
Then we have for the electrical conductivity
σ = Neμ = Ne²τ_c/m.   (13.90)
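The relation σ = Ne²τ_c/m can be inverted to estimate the relaxation time from a measured conductivity, τ_c = σm/Ne². A minimal sketch using handbook values for copper (the material choice and its numbers are illustrative, not from the lecture):

```python
# Estimate the carrier relaxation time from the Drude relation σ = N e^2 τc / m.
sigma = 5.96e7          # conductivity of copper at room temperature, S/m
N = 8.5e28              # conduction electrons per m^3 in copper
e = 1.602176634e-19     # electron charge, C
m = 9.1093837015e-31    # electron mass, kg

tau_c = sigma * m / (N * e ** 2)   # a few times 10^-14 s
```

The result, a few times 10⁻¹⁴ s, is consistent with the statement above that in metals at room temperature τ_c < 10⁻¹³ s, so that ωτ_c ≪ 1 holds from dc through the microwave range.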