Lecture Notes: Statistical Physics

Summary

These lecture notes cover statistical physics: the micro-canonical, canonical and grand canonical ensembles and their corresponding partition functions, together with an introduction to white dwarfs and black holes and some historical context of the subject.

Full Transcript


Lecture 8 Ritwik Mondal

Course Syllabus and Plan

Statistical Physics: Micro-canonical, canonical and grand canonical ensembles; corresponding partition functions and their relations to thermodynamic potentials; MB, BE and FD statistical distribution laws; introduction to white dwarfs and black holes.

Reading: There are many good books on statistical physics. Here is a list.
♣ Fundamentals of Statistical and Thermal Physics by F. Reif
♣ Statistical Physics by Franz Mandl
♣ An Introduction to Statistical Mechanics and Thermodynamics by Robert H. Swendsen
♣ An Introductory Course of Statistical Mechanics by Palash B. Pal
♣ Statistical Physics (University of Cambridge Part II Mathematical Tripos) by David Tong

Disclaimer: This is not a textbook. Rather, these lecture notes were written by Dr. Ritwik Mondal, Department of Physics, IIT (ISM) Dhanbad. The contents are drawn from several textbooks; please consult the books above for reference.

Note: The notes are edited by Dr. Tusharkanti Dey, Department of Physics, IIT (ISM) Dhanbad. If you find any mistakes or typos, please let me know. Office: Room 612, New Academic Complex. Email: [email protected]

Introduction to Statistical Physics

Figure 1: The opening sentences of an excellent textbook, "States of Matter" (1975), by David L. Goodstein.

Ludwig Boltzmann was an Austrian physicist who made important advances in electromagnetism and thermodynamics. His theories connected the properties and behaviour of atoms and molecules with the large-scale properties and behaviour of the substances of which they are the building blocks. In 1871 Boltzmann obtained the Maxwell-Boltzmann distribution, which implies that the average energy of motion of a molecule is the same in each direction. He was also one of the first to recognise the importance of Maxwell's electromagnetic theory.
Boltzmann worked on statistical mechanics, using probability to describe how the properties of atoms determine the properties of matter. In particular, his work relates to the Second Law of Thermodynamics, which he derived from the principles of mechanics in the 1890s. Boltzmann's ideas were not accepted by many scientists. In 1904 Boltzmann visited the World's Fair in St Louis, USA. He lectured on applied mathematics and then went on to visit Berkeley and Stanford. Unfortunately he failed to realize that the new discoveries concerning radiation that he learned about on this visit were about to prove his theories correct. Boltzmann continued to defend his belief in atomic structure and the statistical mechanics he had proposed. However, attacks on his work continued, and he began to feel that his life's work was about to collapse despite his defence of his theories. In 1906, depressed and in bad health, Boltzmann committed suicide just before experiments verified his work.

Paul Ehrenfest was a PhD student of Ludwig Boltzmann; Boltzmann killed himself just two years after Ehrenfest earned his doctoral degree. Ehrenfest continued the work of Boltzmann. But his son Wassik had Down syndrome, and the Nazi party had seized power earlier that year, in 1933. After arriving at the institute, Ehrenfest met his son in the waiting room. There he shot Wassik in the head with a pistol, and then killed himself. It was an unfathomable end, an inconceivable act that betrayed a meaningful life.

It is our turn to understand statistical mechanics! In the last few lectures we covered thermodynamics, which deals with the macroscopic properties of a system such as pressure, temperature, volume and entropy. With thermodynamics we can study the overall system as a single entity through its bulk, or macroscopic, properties. In reality, however, any macroscopic system is composed of microscopic entities such as atomic and subatomic particles.
For example, any liquid, gas or solid contains many particles: molecules, atoms, electrons, etc. To understand the microscopic properties of a system, the study of thermodynamics is not enough. A typical sample of gas contains about 10^23 molecules. If we wanted to describe the microscopic behaviour directly, we would have to write down Newton's equations of motion for each of these gas molecules and solve this huge number of coupled equations, which is practically impossible. We would also need to know the position and momentum of every individual gas molecule, which is equally impractical. Hence, one needs to use a statistical distribution of positions and momenta. Because there is a huge number of gas molecules in the system, the distribution function has small variance; the statistical treatment of a large number of particles is therefore almost an exact theory. The macroscopic (bulk) properties of a system are governed by its microscopic constituents (e.g., electrons), and statistical physics calculates these macroscopic properties from the microscopic ones.

Macrostate

We have so far talked about macrostates when describing thermodynamic processes. A macrostate can be defined by a set of state functions, e.g., pressure, temperature, energy, number of particles, etc. For example, consider an ideal gas in an isolated cylinder where the pressure and temperature are constant. If the gas is in equilibrium (uniform distribution of gas molecules), such an equilibrium can be represented by the set {E, V, N}, where E is the total energy, V is the volume of the gas and N is the number of gas molecules. This constitutes the macrostate of a gas in equilibrium. The equilibrium density of gas molecules is ρ = N/V, which is constant throughout the gas volume. However, if the gas is not in equilibrium (non-uniform distribution of gas molecules), we need to introduce other macroscopic quantities.
In the case of a gas, we must introduce the particle (gas molecule) density at time t and position r. If the nonequilibrium particle density is represented by α (for several densities we introduce α1, α2, ...), then the macrostate can be represented by the set {E, V, N, α}.

Microstate

Macrostates are composed of microstates. In general about 10^23 particles are involved in a system of gas, so a complete description of the microstates is rather complicated. But we are safe here, because equilibrium statistical physics tells us that we do not require detailed knowledge of all the microstates. Rather, we would like to know how many microstates correspond to a given macrostate.

The general idea is rather simple. Let us take a simple example for understanding macrostates and microstates: flipping a coin. If we flip a coin twice, the outcome is one of the following: (H,H), (T,T), (H,T), (T,H). These are the microstates, and they keep track of ordering, so (H,T) and (T,H) are distinct; a macrostate does not depend on such ordering. You can also see that the microstates come with probability factors: the number of microstates carries weights in terms of their probabilities.

Figure 2: Macrostates and microstates of a coin flipped twice.

Let us take another example, with identical particles. Assume that there are 5 boxes and two balls, and you want to find every possible way to put the 2 balls in the 5 boxes. The number of microstates is

    nCr = 5C2 = 5!/(3! 2!) = 10                                   (1)

Therefore, there are 10 possible microstates.

In a cylinder of gas there are N gas molecules, where N is very, very large! The microstate is determined by specifying 6N variables: 3N position coordinates q1, q2, q3, ..., q3N and 3N momentum variables p1, p2, p3, ..., p3N. These 6N numbers constitute a "phase-space".
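The counting examples above are small enough to verify by brute force. Here is a minimal sketch using only the Python standard library:

```python
from itertools import product, combinations
from math import comb

# Microstates of two coin flips: ordered outcomes (H,H), (H,T), (T,H), (T,T)
microstates = list(product("HT", repeat=2))
print(len(microstates))  # 4 microstates

# Macrostate "one head and one tail" is realised by 2 of them: (H,T), (T,H)
one_head = [m for m in microstates if m.count("H") == 1]
print(len(one_head))  # 2

# Two identical balls in five boxes (at most one per box), Eq. (1)
print(comb(5, 2))                             # 10
print(len(list(combinations(range(5), 2))))   # 10, by explicit enumeration
```

The explicit enumeration with `combinations` confirms the binomial-coefficient count of Eq. (1).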
Now, you have studied Hamilton's equations in classical mechanics. If the Hamiltonian describing the system is H(qi, pi), we can extract the positions and momenta by solving

    q̇i = ∂H/∂pi                                                  (2)
    ṗi = − ∂H/∂qi                                                 (3)

Unfortunately, we never have exact details of all the microstates. This is where quantum mechanics can become important, although for now we only argue it. According to quantum mechanics, any microstate at a given time can be described by a wave function of the form

    ψ(q1, q2, q3, ..., p1, p2, p3, ..., t)                        (4)

One can express position and momentum in terms of the angular momentum via li = ri × pi, where × indicates the vector cross product. To this end one can write a wave function as

    ψ(l1, l2, l3, ..., t)                                         (5)

where the li stand for all the quantum numbers necessary to describe the microstate.

Figure 3: An example of phase-space. The phase-space consists of positions qi and momenta pi; a point in it is a microstate.

For a macrostate defined by a set {E, V, N}, the accessible microstates are those which are consistent with the fixed values of E, V, N. We have already talked about "equilibrium" in thermodynamics; let us now understand the meaning of equilibrium in terms of microstates. Suppose that at t = 0 every accessible microstate occurs with equal probability. Now let the laws of mechanics evolve each microstate forward in time. Our claim is that at later times (t > 0) every accessible microstate still occurs with equal probability. If an isolated system is found with equal probability in each accessible microstate, then it is in equilibrium.

Review of probability distributions:

In order to deal with microstates, a review of probability theory is necessary. Let us consider a set of discrete random variables {x | x1, x2, x3, ..., xi, ...}.
The probability P(xi) of obtaining x = xi is called the probability distribution of x. Normalization of the probability expresses the fact that a measurement of x must yield one of the xi:

    Σi P(xi) = 1                                                  (6)

The mean value of x can be calculated using the probability distribution as

    ⟨x⟩ = Σi xi P(xi)                                             (7)

The mean square is calculated similarly:

    ⟨x²⟩ = Σi xi² P(xi)                                           (8)

The variance is obtained as

    σ² = ⟨(x − ⟨x⟩)²⟩ = ⟨x² − 2x⟨x⟩ + ⟨x⟩²⟩ = ⟨x²⟩ − ⟨x⟩²         (9)

The standard deviation is the square root of the variance, σ = √σ².

Binomial Distribution

A particularly important case is that of N independent, identically distributed random numbers that each take the value 1 with probability p and 0 with probability 1 − p. The probability of a specific subset of n of the random variables taking the value 1, while the remaining N − n take the value 0, is easily seen to be

    p^n (1 − p)^(N−n)                                             (10)

The probability distribution also depends on the number of ways of choosing such a subset of n variables out of N. Thus the binomial probability distribution is

    P(n|N) = N!/(n! (N − n)!) p^n (1 − p)^(N−n)                   (11)

Stirling's Approximation:

A difficulty in using the binomial distribution is that N! becomes enormously large when N is even moderately large. For N = 25, N! ≈ 1.6 × 10^25, and we need to consider values of N of 10^23 and higher. The problem is solved by Stirling's approximation, which is valid for large numbers, exactly the case in which we are interested. Consider approximating ln N! in the simplest possible way:

    ln N! = ln(Π_{n=1}^{N} n) = Σ_{n=1}^{N} ln n ≈ ∫₁^N ln x dx = N ln N − N + 1      (12)

Derivation of the Equipartition theorem

Generally, the energies we deal with in physics have a quadratic dependence.
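As a brief aside, Eq. (12) is easy to test numerically. The sketch below uses only the Python standard library; `math.lgamma(N + 1)` gives ln N! to high accuracy:

```python
from math import lgamma, log

def stirling(N):
    # Simplest Stirling form from Eq. (12): ln N! ≈ N ln N - N + 1
    return N * log(N) - N + 1

for N in (10, 100, 10_000):
    exact = lgamma(N + 1)  # ln N!
    approx = stirling(N)
    rel_err = abs(approx - exact) / exact
    print(N, rel_err)
```

The relative error shrinks as N grows, which is why the approximation is safe for N of order 10^23.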
For example, the kinetic energy of a particle with mass m and velocity v is

    E_KE = (1/2) m v²                                             (13)

Similarly, the potential energy of a mass m suspended at one end of a spring and displaced by a small distance x is

    E_PE = (1/2) k x²                                             (14)

where k is the spring constant. Be it kinetic or potential, these energies are quadratic in nature. So let us consider a quadratic energy

    E = α x²                                                      (15)

where x is any variable (position, velocity, etc.). In equilibrium at temperature T, the probability of the system having a particular energy E = αx² is proportional to the Boltzmann factor e^(−βE), where β = 1/(k_B T). The normalized probability distribution is then

    P(x) = e^(−αβx²) / ∫_{−∞}^{∞} e^(−αβx²) dx                    (16)

So the mean energy is

    ⟨E⟩ = ∫_{−∞}^{∞} E P(x) dx
        = ∫_{−∞}^{∞} αx² e^(−αβx²) dx / ∫_{−∞}^{∞} e^(−αβx²) dx
        = ∫₀^{∞} αx² e^(−αβx²) dx / ∫₀^{∞} e^(−αβx²) dx           (17)

To integrate, substitute z = αβx², so that x = √(z/(αβ)) and dz = 2αβx dx. The integrals transform to

    ⟨E⟩ = (1/β) ∫₀^{∞} z^{1/2} e^{−z} dz / ∫₀^{∞} z^{−1/2} e^{−z} dz      (18)

We introduce the Gamma function

    Γ(z) = ∫₀^{∞} x^{z−1} e^{−x} dx                               (19)

The mean energy can thus be expressed in terms of Gamma functions:

    ⟨E⟩ = (1/β) Γ(3/2)/Γ(1/2) = (1/β) (1/2) Γ(1/2)/Γ(1/2) = 1/(2β) = (1/2) k_B T      (20)

Boltzmann's Entropy:

Boltzmann related the entropy to the number of microstates as

    S = k_B ln Ω                                                  (21)

where Ω is the number of microstates. Even though Max Planck was the first to write down this expression, Planck followed Boltzmann's ideas and named it Boltzmann's entropy.

Figure 4: A simple picture of a gas, a liquid and a solid for the calculation of microstates.

From a macroscopic thermodynamic point of view, it is not possible to calculate the entropy of a system of gas, liquid or solid.
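The equipartition result ⟨E⟩ = (1/2) k_B T can also be checked by sampling. The sketch below draws x from the Boltzmann distribution P(x) ∝ exp(−αβx²), which is a Gaussian with variance 1/(2αβ); the values of α, β and the sample size are arbitrary choices for illustration:

```python
import math
import random

# Sample x from P(x) ∝ exp(-α β x²), a Gaussian of variance 1/(2 α β),
# and check that <α x²> = 1/(2β), i.e. kT/2 per quadratic degree of freedom.
alpha, beta = 2.0, 1.5
sigma = math.sqrt(1.0 / (2.0 * alpha * beta))

random.seed(0)
samples = [random.gauss(0.0, sigma) for _ in range(200_000)]
mean_E = sum(alpha * x * x for x in samples) / len(samples)

print(mean_E, 1.0 / (2.0 * beta))  # the two values are close
```

The Monte Carlo estimate agrees with 1/(2β) to within the statistical error of the sample.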
Even though Boltzmann first calculated the entropy for gases, we can extend his theory to liquids and solids as well. According to Boltzmann, all we need is to count the number of microstates. Before doing the actual calculation, can we guess which system should have the higher entropy? From the second law of thermodynamics we know that the entropy of the universe always increases, which suggests that entropy somehow measures randomness or disorder. The randomness of gas molecules is much greater than that of a liquid or a solid, so from this simple observation we expect the entropy of a gas to be the highest.

Let us calculate the positional microstates in each case of Fig. 4, where there are 16 boxes. For the gas there are only three molecules, because a gas is dilute and does not have a definite structure. The number of ways of placing 3 molecules in 16 boxes is

    Ω_Gas = 16!/(13! 3!) = 560                                    (22)

The entropy of this gas is

    S_Gas = k_B ln Ω_Gas = k_B ln 560 ≈ 6.33 k_B                  (23)

Of course we should compare not the entropy of the whole system but the entropy per molecule. For the gas,

    S_Gas/N ≈ 6.33 k_B / 3 = 2.11 k_B                             (24)

Lecture 9 Ritwik Mondal

A liquid has no definite structure either, but it is denser than a gas. Let us assume there are 15 liquid molecules to be placed in the 16 boxes. The number of microstates is similarly

    Ω_Liquid = 16!/(15! 1!) = 16                                  (25)

The entropy of the liquid is

    S_Liquid = k_B ln Ω_Liquid = k_B ln 16 ≈ 2.77 k_B             (26)

and the entropy per molecule is

    S_Liquid/N ≈ 2.77 k_B / 15 ≈ 0.185 k_B                        (27)

A solid, on the other hand, has a definite structure and is denser than liquids and gases. There would be 16 "solid molecules" in the 16 boxes. If you name the molecules according to their positions, there is only one microstate. Thus, the entropy of a solid is zero.
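The three counts in Eqs. (22)-(27) all come from the same binomial coefficient. Here is a quick check, with the box and molecule numbers taken from Fig. 4 (standard library only):

```python
from math import comb, log

# Positional microstates for Fig. 4: k molecules in 16 boxes, Eqs. (22)-(27)
for phase, k in [("gas", 3), ("liquid", 15), ("solid", 16)]:
    omega = comb(16, k)      # number of ways to occupy the boxes
    s_total = log(omega)     # S = kB ln Ω, in units of kB
    print(phase, omega, s_total / k)  # entropy per molecule, in units of kB
```

For the solid, comb(16, 16) = 1 and ln 1 = 0, reproducing the zero entropy found above.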
    S_Solid/N = 0                                                 (28)

Now you can see, as you guessed before, that the entropy of a gas is higher than that of a liquid or a solid. This is also the reason why we usually refer to a system of gases while studying thermodynamics and statistical physics.

Why do we need Ensembles?

Consider a system with a collection of N interacting or non-interacting atoms, molecules or other particles, in which we are interested in an observable quantity, such as the temperature, that results from the motion of these particles. The temperature can be determined by averaging over the time trajectory of each particle and then averaging over all the particles at equilibrium. The time average of the velocity of a particle is defined as

    ⟨v⟩ = lim_{Tt→∞} (1/Tt) ∫₀^{Tt} v(t) dt                       (29)

We also know that, according to the equipartition theorem, the energy is

    E = (1/2) m v² = (3/2) k_B T                                  (30)

Therefore, for N particles (N is huge!) the macroscopic quantity temperature T can be calculated from the microscopic states as

    T = (m / (3 k_B)) (1/N) Σ_{i=1}^{N} lim_{Tt→∞} (1/Tt) ∫₀^{Tt} v_i²(t) dt      (31)

where Tt represents time. Notice that the individual particle velocities are required to calculate the temperature of the system.

Let us see how the temperature of a system of harmonic oscillators would be obtained. The differential equation of a one-dimensional harmonic oscillator is

    d²y/dt² + ω² y = 0                                            (32)

where y(t) is the displacement and ω is the angular frequency of the oscillator. The general solution of this equation is

    y(t) = a sin ωt + b cos ωt                                    (33)

The velocity is easily calculated:

    v(t) = dy/dt = ω (a cos ωt − b sin ωt)                        (34)

To find the values of a and b we must use initial conditions. Let us assume that at t = 0, y(t = 0) = y0 and v(t = 0) = v0. We then find b = y0 and a = v0/ω.
Therefore the velocity can be expressed as

    v(t) = ω ( (v0/ω) cos ωt − y0 sin ωt )                        (35)

We substitute this into the calculation of the temperature:

    T = (m ω² / (3 N k_B)) Σ_{i=1}^{N} lim_{Tt→∞} (1/Tt) ∫₀^{Tt} ( (v_{i0}/ω) cos ωt − y_{i0} sin ωt )² dt
      = (m ω² / (3 N k_B)) Σ_{i=1}^{N} (1/2) ( v_{i0}²/ω² + y_{i0}² )                 (36)

Can we get this temperature by substituting the unknown values? Look at the unknowns v_{i0} and y_{i0}: they are the initial velocity and displacement of each particle, and there are N particles with different amplitudes and velocities. Thus it is practically impossible to generate a long time trajectory of a macroscopic system consisting of a large number of atoms or molecules. Richard Feynman said of this difficulty: "Anyone who wants to analyze the properties of matter in a real problem might want to start by writing down the fundamental equations and then try to solve them mathematically. Although there are people who try to use such an approach, these people are the failures in this field..."

Therefore, we move to the notion of ensembles. In thermodynamics, the universe is always divided into a system and its surroundings. The behavior of the system depends on how it can interact with its surroundings: for example, can it exchange heat or other forms of energy? Can it exchange particles with the surroundings? An ensemble is a collection of systems that have achieved equilibrium in the same state. In other words, an ensemble is a large set of copies of a given macrostate, realised by the different microstates satisfying all the general conditions of that macrostate.

If we know the number of microstates, we can compute the probability of finding a given microstate, and thereby calculate the average of any physical quantity in question. The counting of microstates can be set up in three ways, and therefore three different ensembles are possible.
♣ Micro Canonical Ensemble → the macrostate is defined by {E, V, N}
♣ Canonical Ensemble → the macrostate is defined by {T, V, N}
♣ Grand Canonical Ensemble → the macrostate is defined by {T, V, µ}, where µ is the chemical potential.

Figure 5: A general definition of an ensemble. The ensemble consists of systems in the same macrostate, characterised by different microstates.

Micro Canonical Ensemble

The microstates of a micro canonical ensemble correspond to a macrostate with fixed energy E, fixed volume V and fixed number of particles N. The macrostate is thus defined by the set {E, V, N}. To keep the energy of the macrostate fixed, the system has to be isolated from the surroundings, i.e., from the rest of the universe. Thus an insulation is shown in Fig. 6 for the micro canonical ensemble, such that the system cannot exchange any energy with the surroundings. Conservation of energy then guarantees that the energy of the macrostate is independent of time.

Figure 6: The three different ensembles. The micro canonical ensemble corresponds to a macrostate with constant energy E, volume V and number of particles N. The canonical ensemble corresponds to fixed temperature T, volume V and number of particles N. The grand canonical ensemble corresponds to a macrostate with fixed temperature T, volume V and chemical potential µ.

Let us first analyse the equilibrium properties in the micro canonical ensemble. We will use the fundamental postulate of equilibrium statistical physics: an isolated system is in equilibrium iff all accessible microstates are equally probable. In other words, no particular microstate is preferred; all microstates occur with the same probability. If Ω is the number of microstates representing the macrostate {E, V, N}, the probability of each microstate occurring is

    p(E) = 1 / Ω(E, V, N)                                         (37)

where Ω(E, V, N) is the total number of accessible microstates corresponding to the macrostate {E, V, N}. The total number of microstates Ω(E, V, N) is sometimes called the Partition Function of the micro canonical ensemble, although, strictly speaking, the term partition function applies to the canonical and grand canonical ensembles. Our intention is to calculate several thermodynamic state functions using the microstates.

Figure 7: An isolated system divided into two subsystems, A with {E_A, V_A, N_A} and B with {E_B, V_B, N_B}.

Consider an isolated system with a macrostate defined by fixed {E, V, N}. Now we divide the isolated system into two parts, say A and B. The individual macrostates are characterized by the sets {E_A, V_A, N_A} for subsystem A and {E_B, V_B, N_B} for subsystem B. As the walls are fixed, no work is done in this case. We find the following relations:

    E_A + E_B = E                                                 (38)
    V_A + V_B = V                                                 (39)
    N_A + N_B = N                                                 (40)

Using the fundamental postulate, the probability that subsystem A has energy E_A and subsystem B has energy E_B is

    p(E_A) = (no. of microstates of A with energy E_A and B with energy E_B) / (total no. of microstates without the wall)
           = Ω_A(E_A) Ω_B(E_B) / Ω(E)
           = Ω_A(E_A) Ω_B(E − E_A) / Ω(E)                         (41)

The energies E_A and E_B are related because the subsystems A and B can exchange some heat between them, while the total energy E must be conserved.

Of course, a large number of microstates corresponds to a given macrostate. Now, which of these will appear in equilibrium? The answer cannot be given in terms of the number of microstates Ω alone, but in terms of probability: the equilibrium must correspond to the maximum probability, which here means the largest number of microstates.
So the macrostate {E, V, N} must maximize the probability p(E_A), such that

    ∂p(E_A)/∂E_A = 0                                              (42)

The number of microstates is a huge number in any system, so it is better to work with ln p(E_A): the logarithm is monotonic, so maximizing ln p is equivalent to maximizing p, and it turns products of huge numbers into sums. We find

    ∂ ln p(E_A)/∂E_A = 0
    ⇒ ∂/∂E_A [ ln Ω_A(E_A) + ln Ω_B(E_B) − ln Ω(E) ] = 0
    ⇒ ∂ ln Ω_A(E_A)/∂E_A + ∂ ln Ω_B(E_B)/∂E_A = 0                 (43)

Now, the total energy is constant: E_A + E_B = E ⇒ ∂E_A + ∂E_B = 0 ⇒ ∂E_A = −∂E_B. Using this relation, Eq. (43) becomes

    ∂ ln Ω_A(E_A)/∂E_A = ∂ ln Ω_B(E_B)/∂E_B                       (44)

where the left-hand side is a property of system A only and the right-hand side a property of system B only. Thus, when the total system A + B is in thermal equilibrium, the value of ∂ ln Ω_A(E_A)/∂E_A must equal that of ∂ ln Ω_B(E_B)/∂E_B. From thermodynamics we know that this condition must be related to the temperature T. Moreover, ∂ ln Ω_A(E_A)/∂E_A has the dimension of inverse energy. Therefore, at equilibrium of the total system at temperature T we must have

    ∂ ln Ω_A(E_A)/∂E_A = 1/(k_B T_A) = ∂ ln Ω_B(E_B)/∂E_B = 1/(k_B T_B)      (45)

where k_B is Boltzmann's constant. This relation is also consistent with the zeroth law of thermodynamics. For the total system, having energy E and temperature T, we define

    ∂ ln Ω(E)/∂E = 1/(k_B T)
    ⇒ ∂[k_B ln Ω(E)]/∂E = 1/T                                     (46)

Lecture 10 Ritwik Mondal

In this respect we also introduce the parameter β = 1/(k_B T). Now recall the combined first and second law of thermodynamics,

    dU = T dS − P dV                                              (47)

where U is the internal energy, here identified with E. We know that

    T = (∂U/∂S)_V  ⇒  (∂S/∂U)_V = 1/T                             (48)

Note that Eq. (46) has also been obtained at constant volume V, so that no work is done. Comparing Eqs. (46) and (48) we obtain the famous Boltzmann entropy

    S = k_B ln Ω(E)                                               (49)

Now see this interesting result: it exactly connects the thermodynamic quantities to the statistical ensembles.
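The maximisation in Eqs. (42)-(45) can be illustrated with a toy model. Assume, purely for illustration, subsystems with densities of states Ω(E) = E^ν, so that ∂ ln Ω/∂E = ν/E; the product Ω_A(E_A) Ω_B(E − E_A) should then peak where ν_A/E_A = ν_B/E_B, i.e. where the two "temperatures" are equal:

```python
import math

# Toy densities of states (an assumption for illustration): Ω(E) = E**nu
nu_A, nu_B = 30.0, 60.0
E_total = 90.0

# ln p(E_A) up to a constant, from Eq. (41)
def ln_p(E_A):
    return nu_A * math.log(E_A) + nu_B * math.log(E_total - E_A)

# Locate the maximum on a fine grid of allowed energy splittings
grid = [E_total * k / 100_000 for k in range(1, 100_000)]
E_star = max(grid, key=ln_p)

# Analytic condition (44): nu_A/E_A = nu_B/(E_total - E_A)  =>  E_A = 30
print(E_star)  # ≈ 30.0
```

The numerical peak sits at E_A = 30, exactly where the equal-temperature condition of Eq. (44) predicts for this toy model.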
If we know the number of microstates, we can predict the entropy of a given thermodynamic system. And once we know the entropy of a system via its microstates, we can calculate any thermodynamic quantity: the pressure P, the temperature T, and others. The most difficult part in all this is the counting of the microstates Ω. From Eq. (49) we can also write the number of microstates as

    Ω(E) = e^(S/k_B)                                              (50)

Later we will see that a similar quantity will be called the partition function.

Canonical Ensemble

The micro canonical ensemble studied in the previous section is only valid for an isolated system, where no energy is exchanged between the system and the surroundings. On the contrary, most thermodynamic processes involve an exchange of energy between system and surroundings, and for these we require the canonical ensemble. While energy exchange is allowed, we can still keep the temperature of the macrostate constant. The canonical distribution therefore describes a system kept at a constant temperature T, with a given volume V and number of particles N; the macrostate is determined by the set {T, V, N}.

A canonical ensemble consists of a system and a heat reservoir (a source of heat) that maintains a constant temperature. The composite of the system and the heat reservoir can be taken as an isolated system. Since the composite is isolated, its total energy is fixed.
The total energy of the isolated composite system is

    E_T = E + E_R                                                 (51)

where E is the energy of the system and E_R is the energy of the reservoir. As in the previous section, we want to calculate the probability of finding the composite system such that the system has energy E and the reservoir has energy E_R:

    P(E) = (no. of microstates of the system with energy E and the reservoir with energy E_R) / (total no. of microstates)
         = Ω(E) Ω_R(E_R) / Ω_T(E_T)
         = Ω(E) Ω_R(E_T − E) / Ω_T(E_T)                           (52)

We also note that the reservoir holds a practically infinite amount of energy, so that any small amount of heat taken by the system does not change the temperature of the reservoir. Therefore E_T ≫ E and E_R ≫ E. As the numbers of microstates are huge, we work with the logarithm of the probability:

    ln P(E) = ln Ω(E) + ln Ω_R(E_T − E) − ln Ω_T(E_T)             (53)

We can expand the term ln Ω_R(E_T − E) around E_T, since E_T ≫ E:

    ln P(E) = ln Ω(E) + ln Ω_R(E_T) − E ∂ ln Ω_R(E)/∂E |_{E=E_T} − ln Ω_T(E_T) + O(E²)      (54)

We keep only terms up to first order; the higher-order terms O(E²) are neglected. Since the system and the reservoir must have the same temperature at equilibrium, we have

    T = T_R                                                       (55)

We also know from the micro canonical ensemble, Eq. (45), that

    ∂ ln Ω_R(E)/∂E |_{E=E_T} = 1/(k_B T) = β                      (56)

Because the total energy E_T is constant, the two terms ln Ω_R(E_T) and ln Ω_T(E_T) are also constant at equilibrium. Let

    ln Ω_R(E_T) − ln Ω_T(E_T) = − ln Z                            (57)

where Z is a constant. Inserting Eq. (57) into Eq. (54), we find

    ln P(E) = ln Ω(E) − βE − ln Z
    ⇒ P(E) = Ω(E) e^(−βE) / Z                                     (58)

Notice that Z is simply a normalization constant of the probability distribution. Z is constant in the sense that it does not depend on the energy E; however, it does depend on the other parameters T, V, N, which are kept constant in this calculation.
For discrete energy values we can use the normalization

    Σi P(Ei) = 1
    ⇒ Z = Σi Ω(Ei) e^(−βEi)                                       (59)

where the Ω(Ei) are the statistical weights of the distribution. If, instead, the energy is continuous, as it is in classical physics, the normalization of the probability distribution gives

    Z(T, V, N) = ∫₀^∞ Ω(E) e^(−βE) dE                             (60)

Z is very special and important in statistical physics; it is called the partition function. The universal use of the letter Z comes from the German word Zustandssumme: Zustand means state, so Zustandssumme means "sum over all states". The probability distribution for the canonical ensemble with discrete energy levels Ei is therefore

    P(Ei) = Ω(Ei) e^(−βEi) / Σi Ω(Ei) e^(−βEi)                    (61)

Thermodynamic Relations:

Average Energy: The great importance of having the partition function Z is that we can calculate many thermodynamic potentials from it. Let us first calculate the average energy in the canonical ensemble:

    ⟨E⟩ = Σi Ei P(Ei) = Σi Ei Ω(Ei) e^(−βEi) / Σi Ω(Ei) e^(−βEi)  (62)

Keep this in mind for the moment. From the definition of the partition function,

    Z = Σi Ω(Ei) e^(−βEi)
    ⇒ ∂ ln Z/∂β = (1/Z) ∂Z/∂β = − Σi Ei Ω(Ei) e^(−βEi) / Σi Ω(Ei) e^(−βEi)      (63)

Comparing Eq. (62) with Eq. (63), we find

    ⟨E⟩ = − ∂ ln Z/∂β = U                                         (64)

If the system is not subject to any external field, this is the internal energy U of the system, as we studied in thermodynamics.
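Equation (64) is easy to check for a small discrete spectrum. Below is a sketch for a hypothetical two-level system (the energies, degeneracies and β are arbitrary choices), comparing the direct average of Eq. (62) with a finite-difference derivative of ln Z:

```python
import math

# Hypothetical discrete spectrum: pairs (E_i, Ω_i), arbitrary illustration
levels = [(0.0, 1), (1.0, 2)]
beta = 0.7

def lnZ(b):
    return math.log(sum(om * math.exp(-b * E) for E, om in levels))

# Direct average, Eq. (62)
Z = math.exp(lnZ(beta))
E_avg = sum(E * om * math.exp(-beta * E) for E, om in levels) / Z

# <E> = -d(ln Z)/d(beta), Eq. (64), by central finite difference
h = 1e-6
E_from_Z = -(lnZ(beta + h) - lnZ(beta - h)) / (2 * h)

print(E_avg, E_from_Z)  # the two values agree
```

The agreement holds for any spectrum, since Eq. (63) is an exact identity of the sum defining Z.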
However, it has to be remembered that in the more general case, where there are external field contributions to the energy, we can only write

    ⟨E⟩ = − ∂ ln Z/∂β                                             (65)

Entropy: We continue to work with the partition function; to bring in the number of microstates, we borrow the following formula from the micro canonical ensemble:

    Ω(E) = e^(S(E)/k_B)                                           (66)

The partition function can thus be written as

    Z = Σi e^(S(Ei)/k_B) e^(−βEi) = Σi exp( S(Ei)/k_B − βEi )     (67)

Notice that each term of the partition function is a product of two factors. The entropy S(Ei) is an increasing function of Ei, so e^(S(Ei)/k_B) is increasing in Ei; on the other hand, e^(−βEi) is a decreasing function of Ei. The product of two such factors has a peak at some energy Ei, and for a large number of particles this peak is very, very sharp. Therefore the sum in Eq. (67) can be replaced by its value at the peak. Identifying the peak value of the energy with the internal energy U of the system, we can write

    Z = exp( S/k_B − βU ) = exp( − (U − T S)/(k_B T) )            (68)

This is an important result, because the entropy in the canonical ensemble can now be calculated from it:

    ln Z = − (U − T S)/(k_B T)
    ⇒ S = k_B ln Z + U/T                                          (69)

From thermodynamics we also know that the Helmholtz potential is F = U − T S. Thus we can calculate this potential from the partition function as well:

    ln Z = − F/(k_B T)  ⇒  F = − k_B T ln Z  ⇒  Z = e^(−βF)       (70)

Grand Canonical Ensemble

Even though the canonical ensemble is the most important one in statistical physics, other ensembles are sometimes more convenient. One of them is the grand canonical ensemble. The physical situation described by the grand canonical ensemble is that of a system that can exchange both energy and particles with a reservoir.
As usual, we assume that the reservoir is much larger than the system of interest, so that its properties are not significantly affected by the relatively small changes in its energy or particle number. Note that the reservoir must contain the same type (or types) of particles as the system of interest, which was not a requirement on the reservoir for the canonical ensemble. So far, we have not talked about the number of particles and its connection to the chemical potential. The chemical potential, denoted by \mu, is the energy change of the system when one more particle enters it. In the grand canonical ensemble the macrostate is defined by constant temperature T, volume V, and chemical potential \mu, i.e. by the set \{T, V, \mu\}. Let the energy and number of particles in the system be denoted by E and N, respectively; the energy and number of particles in the reservoir are E_R and N_R. Since the total composite system is isolated, the total energy E_T and the total number of particles N_T must be constant:
    E_T = E + E_R    (71)
    N_T = N + N_R    (72)

The probability distribution of the system having N particles and energy E is

    P(E, N) = \frac{\Omega(E, N)\, \Omega_R(E_R, N_R)}{\Omega_T(E_T, N_T)} = \frac{\Omega(E, N)\, \Omega_R(E_T - E, N_T - N)}{\Omega_T(E_T, N_T)}    (73)

Following the same procedure as in the case of the canonical ensemble, we take the logarithm, which gives

    \ln P(E, N) = \ln \Omega(E, N) + \ln \Omega_R(E_T - E, N_T - N) - \ln \Omega_T(E_T, N_T)    (74)

As usual, we expand the second term in the vicinity of E_T and N_T:

    \ln P(E, N) \approx \ln \Omega(E, N) + \ln \Omega_R(E_T, N_T) - E \left. \frac{\partial \ln \Omega_R(E_T, N_T)}{\partial E} \right|_{E = E_T} - N \left. \frac{\partial \ln \Omega_R(E_T, N_T)}{\partial N} \right|_{N = N_T} + O(E^2, N^2) - \ln \Omega_T(E_T, N_T)    (75)

Once again, we have already seen that

    \left. \frac{\partial \ln \Omega_R(E_T)}{\partial E} \right|_{E = E_T} = \frac{1}{k_B T} = \beta    (76)

We still have to calculate

    \left. \frac{\partial \ln \Omega_R(E_T, N_T)}{\partial N} \right|_{N = N_T}    (77)

In order to do so, let us go back to thermodynamics, now also taking into account the chemical potential:

    dU = T\, dS - P\, dV + \mu\, dN    (78)

When the total energy E (U is normally called E here) is fixed, we find

    \frac{P}{T} = \left( \frac{\partial S}{\partial V} \right)_{E,N} ; \qquad -\frac{\mu}{T} = \left( \frac{\partial S}{\partial N} \right)_{E,V}    (79)

For the micro-canonical ensemble, the entropy is

    S = k_B \ln \Omega    (80)

Thus, using Eq. (79), we have

    -\frac{\mu}{T} = \frac{\partial [k_B \ln \Omega]}{\partial N} \Rightarrow -\frac{\mu}{k_B T} = \frac{\partial \ln \Omega}{\partial N} \Rightarrow -\beta\mu = \frac{\partial \ln \Omega}{\partial N}    (81)

Inserting Eqs. (76) and (81) into the grand canonical probability distribution of Eq. (75):

    \ln P(E, N) \approx \ln \Omega(E, N) + \ln \Omega_R(E_T, N_T) - \beta E + \beta\mu N - \ln \Omega_T(E_T, N_T)    (82)

As usual, we keep the total energy E_T and the total number of particles N_T constant. Thus, we define \ln \Omega_R(E_T, N_T) - \ln \Omega_T(E_T, N_T) = -\ln Z, which is a constant.
The previous equation can then be written as

    \ln P(E, N) \approx \ln \Omega(E, N) - \beta E + \beta\mu N - \ln Z
    \Rightarrow P(E, N) \approx \frac{\Omega(E, N)\, e^{-\beta(E - \mu N)}}{Z}    (83)

For discrete energy values, the normalization demands

    \sum_i \sum_{N_i = 0}^{\infty} P(E_i, N_i) = 1    (84)

The grand partition function can thus be written as

    Z = \sum_i \sum_{N_i = 0}^{\infty} \Omega(E_i, N_i)\, e^{-\beta(E_i - \mu N_i)}    (85)

The probability distribution can therefore be written as

    P(E_i, N_i) = \frac{\Omega(E_i, N_i)\, e^{-\beta(E_i - \mu N_i)}}{Z}    (86)

For continuous energy values, though, the grand canonical partition function becomes

    Z = \sum_{N=0}^{\infty} \int_0^{\infty} dE\, \Omega(E, N)\, e^{-\beta(E - \mu N)}    (87)

We can also recognize the canonical partition function from Eq. (60) inside this expression:

    Z(T, V, \mu) = \sum_{N=0}^{\infty} \int_0^{\infty} dE\, \Omega(E, N)\, e^{-\beta E}\, e^{\beta\mu N} = \sum_{N=0}^{\infty} Z(T, V, N)\, e^{\beta\mu N}    (88)

Using the grand canonical ensemble, we can now calculate the average energy and the average number of particles. The average number of particles is

    \langle N \rangle = \sum_i \sum_{N_i} N_i\, P(E_i, N_i)    (89)

Now, we see that

    \frac{\partial \ln Z}{\partial \mu} = \frac{1}{Z} \frac{\partial Z}{\partial \mu} = \frac{1}{Z} \frac{\partial}{\partial \mu} \left[ \sum_i \sum_{N_i = 0}^{\infty} \Omega(E_i, N_i)\, e^{-\beta(E_i - \mu N_i)} \right] = \beta\, \frac{\sum_i \sum_{N_i} N_i\, \Omega(E_i, N_i)\, e^{-\beta(E_i - \mu N_i)}}{Z} = \beta \sum_i \sum_{N_i} N_i\, P(E_i, N_i) = \beta \langle N \rangle
    \Rightarrow \langle N \rangle = \frac{1}{\beta} \frac{\partial \ln Z}{\partial \mu}    (90)

Following the same procedure, one can find the average energy,

    \langle E \rangle = \sum_i \sum_{N_i} E_i\, P(E_i, N_i)    (91)

We calculate

    \frac{\partial \ln Z}{\partial \beta} = \frac{1}{Z} \frac{\partial Z}{\partial \beta} = -\frac{\sum_i \sum_{N_i} E_i\, \Omega(E_i, N_i)\, e^{-\beta(E_i - \mu N_i)}}{Z} + \mu\, \frac{\sum_i \sum_{N_i} N_i\, \Omega(E_i, N_i)\, e^{-\beta(E_i - \mu N_i)}}{Z} = -\langle E \rangle + \mu \langle N \rangle
    \Rightarrow \langle E \rangle = -\frac{\partial \ln Z}{\partial \beta} + \mu \langle N \rangle    (92)

Entropy: The grand canonical partition function can be written as

    Z = \sum_i \sum_{N_i = 0}^{\infty} \Omega(E_i, N_i)\, e^{-\beta(E_i - \mu N_i)} = \sum_i \sum_{N_i = 0}^{\infty} e^{S(E_i, N_i)/k_B}\, e^{-\beta(E_i - \mu N_i)} = \sum_i \sum_{N_i = 0}^{\infty} \exp\left[ \frac{S(E_i, N_i)}{k_B} - \beta E_i + \beta\mu N_i \right]    (93)

Lecture 11 Ritwik Mondal

As we argued in the case of the canonical ensemble, the sum is simply replaced by the peak values of the energy and the number of particles. These peak values are also the average values.
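The sharp-peak claim invoked here can be illustrated numerically. A sketch, assuming a toy system of N independent two-level units (so \Omega(E = n) is a binomial coefficient, an assumption made purely for illustration): the relative width of the distribution falls like 1/\sqrt{N}.

```python
import math

# For N independent two-level units (energy 0 or 1 each), the number of
# microstates with total energy n is Omega(n) = C(N, n). The canonical
# summand Omega(n) e^{-beta n} then peaks sharply as N grows; this toy
# model is an illustrative assumption, not taken from the notes.

def relative_width(N, beta=1.0):
    # log of the summand, via log-gamma to avoid overflow for large N
    logw = [math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1)
            - beta * n for n in range(N + 1)]
    m = max(logw)
    w = [math.exp(x - m) for x in logw]          # rescaled weights
    Z = sum(w)
    mean = sum(n * wi for n, wi in zip(range(N + 1), w)) / Z
    var = sum((n - mean) ** 2 * wi for n, wi in zip(range(N + 1), w)) / Z
    return math.sqrt(var) / mean                 # std / mean

# Increasing N by a factor of 100 shrinks the relative width tenfold:
print(abs(relative_width(100) / relative_width(10000) - 10.0) < 1e-6)  # True
```

For this model the distribution is exactly binomial, so the relative width is \sqrt{(1-p)/(Np)} with p = e^{-\beta}/(1 + e^{-\beta}), confirming the 1/\sqrt{N} scaling that justifies replacing the sum by its peak term.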
    Z = \exp\left[ \frac{S}{k_B} - \beta \langle E \rangle + \beta\mu \langle N \rangle \right]
    \Rightarrow S = k_B \ln Z + \frac{\langle E \rangle}{T} - \frac{\mu \langle N \rangle}{T} = k_B \ln Z + \frac{U}{T} - \frac{\mu \langle N \rangle}{T}    (94)

Normally, as no external energy is supplied to an isolated system, we have \langle E \rangle = U. We can define a function

    \Phi = U - TS - \mu \langle N \rangle    (95)

Now, replacing the entropy from Eq. (94), we get

    \Phi = U - T \left[ k_B \ln Z + \frac{U}{T} - \frac{\mu \langle N \rangle}{T} \right] - \mu \langle N \rangle = -k_B T \ln Z = -\frac{\ln Z}{\beta}
    \Rightarrow Z = e^{-\beta\Phi}    (96)

When Should We Use Quantum Mechanics?

From black-body radiation, we have studied that the energy can be calculated using the wavelength \lambda as

    E = \frac{hc}{\lambda}    (97)

For photons, whose velocity equals the speed of light, the energy can similarly be represented as

    E = pc    (98)

Equating these two relations yields the de Broglie wavelength

    \lambda = \frac{h}{p}    (99)

This is valid for particles moving at the speed of light c. However, particles can have a velocity v comparable to, but less than, the speed of light, v < c. Such particles are relativistic. In this case, the de Broglie wavelength is

    \lambda = \frac{h}{p}, \quad \text{where} \quad p = \frac{mv}{\sqrt{1 - v^2/c^2}}    (100)

If the particle velocity is much smaller than the speed of light, v \ll c, the momentum reduces to p \approx mv, and the de Broglie wavelength becomes \lambda \approx h/(mv).
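A rough numerical estimate shows when quantum mechanics matters. The sketch below computes the de Broglie wavelength \lambda = h/(mv) for non-relativistic thermal particles, taking the equipartition estimate v \approx \sqrt{3 k_B T / m} (an assumption for illustration) and comparing an electron with a nitrogen molecule at room temperature:

```python
import math

# Rough sketch: de Broglie wavelength lambda = h/(m v) for v << c,
# with the thermal speed estimated from equipartition,
# (1/2) m v^2 ~ (3/2) k_B T  =>  v ~ sqrt(3 k_B T / m).
h_planck = 6.626e-34   # Planck constant, J s
k_B = 1.381e-23        # Boltzmann constant, J/K

def thermal_de_broglie(mass_kg, T_kelvin):
    v = math.sqrt(3.0 * k_B * T_kelvin / mass_kg)   # thermal speed estimate
    return h_planck / (mass_kg * v)                 # lambda = h/(m v)

m_electron = 9.109e-31    # kg
m_N2 = 28 * 1.661e-27     # kg, nitrogen molecule (28 atomic mass units)

lam_e = thermal_de_broglie(m_electron, 300.0)
lam_N2 = thermal_de_broglie(m_N2, 300.0)

# Electron: a few nanometres (quantum effects matter at atomic spacings);
# N2: tens of picometres (classical treatment is fine).
print(lam_e > 1e-9 > lam_N2)  # True
```

At 300 K the electron's wavelength comes out at a few nanometres, comparable to atomic spacings, so a quantum treatment is needed; the much heavier N_2 molecule has a wavelength far below typical intermolecular distances and behaves classically.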
